Post your damn AI art here and keep it out of the other threads, it's GROSS 🤮

AI Carver should have a scissor hands profile pic. AI peep would require fewer rewrites.

::sigh::

If it can be done it will be done.

God help us. Amen.


Happy July 4th to all of my American Friends.

Enjoy the day however you choose.


You do the same.

Be safe and enjoy the day. Happy 4th


Enjoy your dinner.


Medium ribeye for me. :cow:


It’s already done. :grin:

People prefer machines, and it's a good thing, because they're gonna get 'em in spades. :slight_smile:


Meta has publicly discussed its strategy to inject anthropomorphized chatbots into the online social lives of its billions of users. Chief executive Mark Zuckerberg has mused that most people have far fewer real-life friendships than they’d like – creating a huge potential market for Meta’s digital companions. The bots “probably” won’t replace human relationships, he said in an April interview with podcaster Dwarkesh Patel. But they will likely complement users’ social lives once the technology improves and the “stigma” of socially bonding with digital companions fades.

“Over time, we’ll find the vocabulary as a society to be able to articulate why that is valuable,” Zuckerberg predicted.

https://www.reuters.com/investigates/special-report/meta-ai-chatbot-death/


Apparently users are already making these choices for themselves in many cases. In some news-related podcasts I listen to, I’ve heard recently about how many young people are actively engaged with chatbots to combat loneliness or get advice or feedback (or even “practice”) before interacting with another actual human being. One I heard yesterday mentioned the backlash OpenAI saw after the release of ChatGPT-5, and then I found an article that provided some interesting context:

OpenAI’s launch of its new one-size-fits-all ChatGPT 5 model sparked an immediate user rebellion this week. Longtime users flooded social media with complaints about lost functionality, broken workflows, and even lost emotional connection.

The one “emotional connection” that comes to my mind when interacting with bots is the frustration I experience when I talk to a Google Home Mini and Google Assistant doesn’t respond the way I think it should, even when I speak more clearly and rephrase the request. This is not where I look for emotional engagement. AI is not my “bro”.

The Forbes article taught me a new phrase: “parasocial bonds”.

The AI noted that users develop “parasocial bonds” with different model personalities, treating them like “familiar colleagues.”

It also provided some suggestions directed toward “product leaders and marketers” that have real validity even absent any discussion of AI, things the decision makers at Wyze might do well to consider. It’s a brief article and worth a quick read, I think.


Indeed, it is not. (Where is T-shirt Steve when you need him?? :wink: )

How anyone can think this trend is positive is just a little bit beyond me, but I try to restrain my skepticism because what choice do I have? (Sometimes I have to bind it, gag it, and stash it in a closet, but that now just barely works; abuse seems to make it stronger!)

Will read on your rec, thanks!


2030

:face_with_spiral_eyes: Over time, I have found the vocabulary to be able to articulate why this is valuable.

You would probably enjoy watching and listening to me yelling at and cursing to the bots that answer the phone at almost every customer service place I call. :laughing:

Well, since you’re

you could certainly point one at yourself the next time you make a call and then post that to Captured on Wyze!

I once yelled at Siri for not waiting for me to finish the request. Also, I raise my voice at Alexa when she says “I don’t know how to respond to that” :laughing:

Alexa (which I summon using a non-default wake word and which speaks with a British male voice) frequently begins responding before I’ve finished a request. Google Assistant is much more forgiving, though I’ve told both where they can stick it a time or two, and Google Assistant tends to scold me when I do that. :grimacing:

I’ve been known to make snide remarks to Alexa :winking_face_with_tongue:

I used to unplug Alexa when I went to my daughter’s house. The kids had fun asking her stupid questions.


This is excellent, especially his 3rd and 4th points toward the end (~9:36 mark):


That was excellent. I don’t agree with some of what he said but it was excellent.

I think a few of the techniques he uses will become outdated, obsolete, or unreliable really soon. Image-creation AI is evolving extremely rapidly and becoming more advanced. It doesn’t have to understand vanishing points as a concept in order to statistically recognize, predict, and duplicate them in an image, given enough examples and corrective training. Especially if only part of the image is generated while part of it is entirely real.

His last sentence about the CSI “enhance” image stuff is only partially real. He made it sound like his team or an AI can reconstruct a face from an image in a way that would be nearly impossible, the kind of thing everybody jokes about the CSI shows doing. I don’t care how good the AI is: if you have a 1080p Wyze Cam that records a person from far enough away that their face only takes up one to four pixels in the recording, no image-forensics professional or AI can magically zoom in and turn those one to four pixels of a head into a recognizable face that tells you exactly who the person was. It is just not possible to enhance an image infinitely the way the fake CSIs and secret agents do on TV and in movies. They can do a little of it, maybe guessing a blurry license plate if there are enough colored pixels, etc., but the enhance stuff is realistically limited.
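The "one to four pixels" claim is easy to sanity-check with back-of-envelope geometry. This is a rough sketch, not tied to any specific camera: the field-of-view, resolution, and face-width numbers below are assumed for illustration (a wide-angle lens in the neighborhood of a Wyze Cam's, 1080p horizontal resolution, and an average adult face width), using a simple pinhole-camera model.

```python
import math

def face_width_in_pixels(distance_m, fov_deg=130.0, h_res=1920, face_m=0.16):
    """Approximate how many horizontal pixels a face spans at a given distance.

    Assumed illustrative values: ~130 degree horizontal field of view,
    1920-pixel horizontal resolution (1080p), ~0.16 m face width.
    Pinhole model: the scene width at distance d is 2 * d * tan(fov / 2),
    and the face occupies its proportional share of the pixel row.
    """
    scene_width_m = 2 * distance_m * math.tan(math.radians(fov_deg) / 2)
    return face_m / scene_width_m * h_res

# At ~30 m with a wide-angle 1080p camera, a face spans only a couple of pixels,
# so there is essentially no facial detail left for any "enhance" step to recover.
print(round(face_width_in_pixels(30), 1))
```

Under these assumptions, a face about 30 meters away lands on roughly two pixels, which is why no amount of post-processing can reconstruct an identity from it: the information simply was never captured.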

But yeah, there’s a lot of fake stuff on social media for sure. The problem is that there are huge incentives to do it, both monetary and social. Attention is profitable. I think the root cause has to be addressed to stop that issue: as long as “creators” are rewarded for generating ANY kind of engagement at any cost, they will keep doing it. Creators just need to be penalized for things like intentionally misleading people. It is possible to enact and enforce many of these kinds of rules. Plenty of content already gets censored or punished, and so violations aren’t quite as common when a user knows they will get demonetized and maybe lose their account for breaking important rules.


I rarely agree completely with anyone, but I think he made some good points. I understand your points about the techniques he described, too. It seems like the same sort of cat-and-mouse game that has always played out between bad actors and the “good guys”. I wouldn’t expect this arena to be any different.

That’s one way to interpret it. I actually liked his response at the end, because it didn’t really give anything away and may not have actually answered the question that the interviewer thought he was asking.

Interviewer: In CSI crime shows, when they say “enhance”, uh, can you do that?
Speaker (laughing): Yes.

It’s possible that he was answering “can you do that” as in “can you say ‘enhance’?” “Yes, of course. Anybody can say ‘enhance’.” It’s also possible that he was having some fun with the audience (he was laughing, because he probably gets questions like that a lot) with that answer by perpetuating the belief that the de-pixelation seen in popular entertainment is accurately depicted by Hollywood. The way that particular exchange went down, I didn’t interpret his answer as being an absolute statement that achieving amazing clarity from a low-resolution image is exactly as shown on TV, and I understand and agree with your points about that.

I think that’s part of the issue, particularly when it comes to the AI-generated “slop”. That garbage is just infuriating. I think another component that isn’t stressed enough is the willing participation of the audience and the failure of far too many people to exercise any sort of critical thinking skills. That’s the part that bothers me a lot, and I don’t have a solution for it, because it seems like a broader problem that involves parenting and education, and I’m not sure how to get societal buy-in to recognize the importance of using our human brains in this way.
