AI Carver should have a scissor hands profile pic. AI peep would require fewer rewrites.
::sigh::
If it can be done it will be done.
Gold help us. Amen.
You do the same.
Be safe and enjoy the day. Happy 4th
It's already done.
People prefer machines, and it's a good thing, because they're gonna get 'em in spades.
"Over time, we'll find the vocabulary as a society to be able to articulate why that is valuable," Zuckerberg predicted.
Meta has publicly discussed its strategy to inject anthropomorphized chatbots into the online social lives of its billions of users. Chief executive Mark Zuckerberg has mused that most people have far fewer real-life friendships than they'd like, creating a huge potential market for Meta's digital companions. The bots "probably" won't replace human relationships, he said in an April interview with podcaster Dwarkesh Patel. But they will likely complement users' social lives once the technology improves and the "stigma" of socially bonding with digital companions fades.
https://www.reuters.com/investigates/special-report/meta-ai-chatbot-death/
Apparently users are already making these choices for themselves in many cases. In some news-related podcasts I listen to, I've heard recently about how many young people are actively engaged with chatbots to combat loneliness or get advice or feedback (or even "practice") before interacting with another actual human being. One I heard yesterday mentioned the backlash OpenAI saw after the release of ChatGPT-5, and then I found an article that provided some interesting context:
OpenAI's launch of its new one-size-fits-all ChatGPT 5 model sparked an immediate user rebellion this week. Longtime users flooded social media with complaints about lost functionality, broken workflows, and even lost emotional connection.
The one "emotional connection" that comes to my mind when interacting with bots is the frustration I experience when I talk to a Google Home Mini and Google Assistant doesn't respond in the way I think it should, even when I speak more clearly and rephrase the request. This is not where I look for emotional engagement. AI is not my "bro".
The Forbes article taught me a new phrase: "parasocial bonds".
The AI noted that users develop "parasocial bonds" with different model personalities, treating them like "familiar colleagues."
It also provided some suggestions directed toward "product leaders and marketers" that have real validity even absent any discussion of AI, things the decision makers at Wyze might do well to consider. It's a brief article and worth a quick read, I think.
Indeed, it is not. (Where is T-shirt Steve when you need him??)
How anyone can think this trend is positive is just a little bit beyond me, but I try to restrain my skepticism, because what choice do I have? (Sometimes I have to bind it, gag it, and stash it in a closet, but that now just barely works; abuse seems to make it stronger!)
Will read on your rec, thanks!
You would probably enjoy watching and listening to me yelling at and cursing the bots that answer the phone at almost every customer service place I call.
Well, since you're
you could certainly point one at yourself the next time you make a call and then post that to Captured on Wyze!
I once yelled at Siri for not waiting for me to finish the request. Also, I raise my voice at Alexa when she says "I don't know how to respond to that".
Alexa (which I summon using a non-default wake word and which speaks with a British male voice) frequently begins responding before I've finished a request. Google Assistant is much more forgiving, though I've told both where they can stick it a time or two, and Google Assistant tends to scold me when I do that.
I've been known to make snide remarks to Alexa.
I used to unplug Alexa when I went to my daughter's house. The kids had fun asking her stupid questions.
That was excellent. I don't agree with some of what he said, but it was excellent.
I think a few of the techniques he uses will become outdated, obsolete, or unreliable really soon. Image-creation AI is evolving and advancing extremely rapidly. It doesn't have to understand vanishing points as a concept in order to statistically recognize, predict, and duplicate them in an image, given enough examples and corrective training, especially if only part of the image is generated while the rest of it is entirely real.
His last sentence about the CSI "enhance" image stuff is only partially real. He made it sound like his team or an AI can reconstruct a face from an image in a way that would be nearly impossible, the kind of thing everybody jokes about the CSI shows doing. I don't care how good the AI is: if you have a 1080p Wyze Cam that records a person from far enough away that their face only takes up one to four pixels in the recording, no image-forensics professional or AI can magically zoom in and turn one to four pixels of a head into a recognizable face that will tell them exactly who the person was. It is just not possible to enhance an image infinitely like the fake CSIs and secret agents do on TV and in movies. They can do a little bit of it, maybe guessing a blurry license plate if there are enough colored pixels, but the "enhance" stuff is realistically limited.
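The back-of-the-envelope geometry behind that "one to four pixels" claim can be sketched with a simple pinhole-camera model. The numbers here are my assumptions for illustration (a roughly 110° horizontal field of view and a 16 cm face width), not verified camera specs:

```python
import math

def face_pixels(distance_m, face_width_m=0.16,
                h_resolution=1920, h_fov_deg=110.0):
    """Approximate how many horizontal pixels a face spans at a
    given distance. Pinhole model: at distance d, a lens with
    horizontal field of view `fov` images a scene that is
    2 * d * tan(fov / 2) wide; the face occupies its proportional
    share of the h_resolution pixels across that width."""
    scene_width_m = 2 * distance_m * math.tan(math.radians(h_fov_deg) / 2)
    return face_width_m / scene_width_m * h_resolution

# With these assumed numbers, a face at 30 m spans only a few
# pixels, while the same face at 3 m spans a few dozen, which is
# why distance matters far more than any "enhance" button.
print(face_pixels(30))  # a handful of pixels
print(face_pixels(3))   # tens of pixels
```

Under these assumptions the pixel count falls off linearly with distance, so no post-processing can recover identity from a face that was only ever sampled by a few sensor pixels.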
But yeah, there's a lot of fake stuff on social media for sure. The problem is that there are huge incentives to do it, both monetary and social. Attention is profitable. I think the root cause has to be addressed to stop that issue. As long as "creators" are rewarded for generating ANY kind of engagement at any cost, they will keep doing it. Creators just need to be penalized for things like intentionally misleading people. It is possible to enact and enforce many of these kinds of rules. Plenty of content already gets censored or penalized, and the behavior isn't quite as common when a user knows they will get demonetized and maybe lose their account for violating important rules.
I rarely agree completely with anyone, but I think he made some good points. I understand your points about the techniques he described, too. That seems like the same sort of cat-and-mouse game that has always happened between bad actors and "good guys". I wouldn't expect this arena to be any different.
That's one way to interpret it. I actually liked his response at the end, because it didn't really give anything away and may not have actually answered the question that the interviewer thought he was asking.
Interviewer: In CSI crime shows, when they say "enhance", uh, can you do that?
Speaker (laughing): Yes.
It's possible that he was answering "can you do that" as in "can you say 'enhance'?" "Yes, of course. Anybody can say 'enhance'." It's also possible that he was having some fun with the audience (he was laughing, because he probably gets questions like that a lot) by perpetuating the belief that the de-pixelation seen in popular entertainment is accurately depicted by Hollywood. The way that particular exchange went down, I didn't interpret his answer as an absolute statement that achieving amazing clarity from a low-resolution image is exactly as shown on TV, and I understand and agree with your points about that.
I think that's part of the issue, particularly when it comes to the AI-generated "slop". That garbage is just infuriating. I think another component that isn't stressed enough is the willing participation of the audience and the failure of far too many people to exercise any sort of critical thinking skills. That's the part that bothers me a lot, and I don't have a solution for it, because it seems like a broader problem that involves parenting and education, and I'm not sure how to get societal buy-in to recognize the importance of using our human brains in this way.