- New features!
- Less bugs!
- Both!
These Discourse (forum software) people are awesome. A serious person could really do something neat with this new RANKING thing they stuck in Polls.
We should try to find a serious person.
To whom?
Good philosophical question.
- Is there a beneficiary to "fairness"?
- Does it come at a cost to another? Is the cost fair?
- If there is, does that make fair also not fair and thus a paradox? Or does it make fair the opposite of benevolent?
- Is fairness static or dynamic?
- Is fairness universal or subjective?
- If subjective, then it can be fair and unfair simultaneously based on perception, making the word pointless.
- Can fairness be weaponized, making it no longer exist as fairness?
- Does intent matter in fairness or just outcome?
- How can you even define fairness without the definition including some paradoxes?
- My toddlers only use "fairness" from a place of jealousy, entitlement, and victim-stance… ("No fair! I want that!"), and it's rare for many adults to use it for any purpose that isn't self-enriching.
I'm actually always impressed when somebody uses fairness in a way that isn't self-enriching. For example:
- A coach reminding players to respect the rules and their opponents.
- Journalism that presents facts without agenda or narrative.
- Steelmanning a viewpoint or stance that isn't your own, especially one opposite to yours. (I honestly see this as one of the biggest signs of maturity and high-level emotional intelligence. Very few people have the ability to do it at all, even when they try, and it is one of the best contenders for fairness that isn't tainted by a purely selfish motivation.)
- Constructive feedback
I'm not even convinced there has to be a "fair to whom" implication of victim-stance in all cases. Is it really part of the definition?
Thanks carver, I enjoyed reading that.
@Resist said:
I said:
What I thought you meant in response was:
To be fair to Wyze Co, management and developers…
Bugs are inevitable. They are endemic to software development. Introducing fewer new features will not necessarily result in a significant reduction in bugs.
We can have both new features and fewer bugs by tracking down bugs more efficiently.
Is that roughly correct?
I think these respond (in a sense) to some of the questions in your series.
Good sense is innate. Living in a strong, healthy culture, a common sense develops. Disagreements mediated in good faith with good will are resolvable. Fairly.
But you must start with human sense. Do you sense good faith and good will are present? Then a common sense is achievable and a fair outcome possible.
How will you know it is fair? You will sense it.
Close enough.
I don't know… working customer service in college made me have serious doubts about this.
Too subjective.
This can be too magical, like the new cultural phenomenon of "vibes"… I don't mind some degree of psychological profiling based on empirical evidence in certain circumstances, but a magical "sense" or "vibes" unfounded on any evidence or statistical reasoning is generally a form of superstition, not good sense from a demonstrable empirical standpoint. And confirmation bias is a finicky mistress that will tell anyone what they want to believe about their "sensing" or "vibes", so I don't generally let that hold much weight, depending on the implication.
OK, but some people's "sense" is only about whether something enriches them in some way. They will call something "fair" because they "sense" it as desirable, when others would not. I am not sure that "sense" can be an objectively good standard that will hold up to scrutiny; it is too ambiguous. What if one person "senses" one way and another "senses" the other way? What if the world population is deeply divided in half? If sense were so common, there would not be such hot division; there would be widespread consensus instead. We do not have widespread consensus, ergo "sense" may be flawed…
Since @carverofchoice's current profile picture (at least in smaller renderings) could be interpreted as having the appearance of a , one might wonder if…
Do you know that what tech/AI is committing us to is the best possible future and not a nightmare?
If you don't know, and are still willing to commit us, then you have faith in your vision (a sense).
…and then @p2788deal said this:
Despite statements to the contrary, Wyze doesn't seem to have a formal group testing its releases. Some of the really bad bugs I see wouldn't get past any competent test group. It's just the developers "testing" their own code and relying on beta testers to spot their mistakes.
and I thought what I've thought before:
I don't have the access, experience, or expertise to fairly judge whether Wyze is competent. But at the moment I'm leaning toward "they're not." Hopefully this will change.
…and that p27 probably has a better basis to judge than I, though I don't know that for sure.
In the poll I chose:
- Less bugs!
- New features!
- Both!
"Both!" third because it's probably not achievable. I didn't abstain because I think it's possible, but it's a long shot.
Wrestle with it yourself, how do you see it?

I don't know that for sure
It's a long story, but a boss once assigned me to a testing group as punishment for a perceived slight. It was short-lived, but I did get a minor subsystem test fully automated. So yes, I do have experience with software testing.

Do you know that what tech/AI is committing us to is the best possible future and not a nightmare?
If you don't know, and are still willing to commit us, then you have faith in your vision (a sense).
Bah. My excitement about AI and my willingness to embrace its potential for humanity's future stem from reasoned optimism: EVIDENCE of AI's benefits, plus research and demonstrations. That is different from relying solely on "vibes" or "a sixth sense" to make judgments based purely on "feelings". I'm not saying there is no value in feelings. We could list a billion case studies where unfounded "feelings" turned out to be "right", but we could just as easily show as many case studies proving the opposite, where unfounded feelings were 100% wrong and destructive.
I think there are differences between:
- Evidence vs. Intuition/Fear: My support is primarily based on tangible advancements and proven applications, including medical breakthroughs, enhanced productivity, and problem-solving at scales humans cannot achieve alone. I acknowledge the risks, but my perspective is grounded in assessing the evidence and potential benefits. In contrast, relying on an unfounded "sense" or intuition isn't backed by data or analysis and can lead to fear without actionable reasoning.
- Calculated Risk vs. Speculative Fear: No transformative technology has ever been without risks. History shows that electricity, the internet, and even vaccines faced skepticism and fear. Even BAR CODES and QR codes had people in a major panic because of their "feelings" and "sense" that these were the end of the world. Many of those technologies were adopted anyway because of their immense benefits, despite the uncertainties. Supporting AI while recognizing risks means managing and mitigating those risks, an approach far removed from superstition, blind faith, or blind fear driven by a "feeling" or "sense" that may be wildly unfounded. People now look back on most of those historical advancements and find the protesting panic ridiculously unfounded; in hindsight, they can't even comprehend why people were so terrified of many of them.
- Faith in Process, Not Gut Feelings: My "faith" in AI isn't blind but rooted in the collective intelligence of scientists, developers, ethicists, and policymakers working to shape its trajectory. Trusting this process, even without 100% certainty of outcomes, is a reasoned stance. On the other hand, the argument that a "sense" of impending doom should be weighted equally dismisses the rigorous methodologies that shape AI development. Will it be used for bad things? Sure. Almost all technologies have been. But that doesn't mean we discard every form of human advancement just because someone somewhere misuses it sometimes. Shall we ban fire and electricity too? We just saw how much chaos and destruction fire can cause. Maybe it's better to never allow it? Of course not.
- Encouraging Progress: If progress relied only on absolute certainty, humanity would never advance. The key difference is that my excitement for AI aligns with an openness to innovation and improvement. Acting based on an unverified sense often halts progress and leads to inaction.
I see a very big difference between magical "vibes" and "sense" that lack the grounding, logic, or empirical evidence to contribute meaningfully to clear rationales, and a reasoned, evidence-based perspective that acknowledges both the immense potential and the manageable risks of a transformative technology like AI. While "vibes" and "sense" may occasionally align with reality, they are far too inconsistent and subjective to serve as the foundation for meaningful decisions about the future of humanity.
Feelings can be a valuable signal, but they are not a substitute for the deliberate, informed action necessary to navigate the challenges and opportunities that AI brings.
I don't mind disagreement, though. I am just explaining that "vibes" and feelings as a rationale don't hold a lot of weight with me, even if they are fashionable nowadays. In my experience, most people now use "vibes" as a manipulation tactic so they don't have to explain anything, use real reasons, or have a conversation grounded in reason and logic.

My feelings have been "wrong" or "misplaced" before, as have nearly everyone's. That doesn't mean I give them no stock, but it does mean I don't let them rule my life and every decision. That is why we have inhibitions: things that sometimes STOP us from acting on our every feeling or impulse, especially the negative ones. If our feelings were always right, why wouldn't we get rid of all our inhibitions and stop sending people to prison for following every impulse and "sense" they feel? Because feelings, impulses, and "sense" are not always good or right. They are not reliable. Inhibitions exist to let us use reason and logic to override those feelings and impulses with more thought-out decisions, not ones based purely on emotion. People who don't use reason and inhibition to govern their decisions almost always end up in prison. I think the fruits of the difference speak for themselves.
Of course, I am using some extreme examples as an illustration to make the point, but it's all partially just in fun to have somewhat of a mock debate with you in the watercooler. Don't take me too seriously.
We should call you Hector. You really pound things home.
Reducing everything to DATA (desiccating the organic stuff until it falls away) yields a very harsh EVIDENCE. Abstract in the extreme.
This is the way some people like it. Manageable.
"The assent of the mind to the truth of a proposition or statement for which there is not complete evidence; belief in general."
"An intuitive or acquired perception or ability to estimate… A vague feeling or presentiment."
I saw you praying to the neon god you made in a previous lifetime. It buzzes and flashes its message now, incomplete and intermittent.
But still in force.