AI gives me a rash.[1] Is it still safe to use?
That is one smart AI!
That phrase in this context makes me cringe a little and reminds me of the recent “Sickofancy” episode of South Park, where ChatGPT was a target of their satire.
From a privacy search engine assistant set to forget all convos, so when it says ‘you’, substitute ‘humanity’ or ‘mankind’, I reckon. It hardly knows me.
Presidential Press Conference
1/21/2025
$500B for Stargate (AI)
Speakers
Donald Trump
https://youtu.be/ZHi32V0MqBc
Larry Ellison
https://youtu.be/ZHi32V0MqBc?t=240
Masayoshi Son
https://youtu.be/ZHi32V0MqBc?t=350
Sam Altman
https://youtu.be/ZHi32V0MqBc?t=510
Larry Ellison
https://youtu.be/ZHi32V0MqBc?t=597
Donald Trump Q&A (general)
https://youtu.be/ZHi32V0MqBc?t=708
Worth a listen given the implications.
(And a watch. Humans behaving.)
Artificially
Young lady does not like AI
What makes it ‘worth it’?

Dastardly deeds… done cheap.
Imagine bazooms lurking below this face.
The face would stop anyone from looking any further, bazooms involved or not.
and fix these damn tiny emojis.
I heard this quote (from an X post) in a news podcast:
Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.
The response inside my head may have involved a word beginning with the sixth letter of the alphabet, with the implication that Mr. Altman should engage in carnal congress with himself. He has almost certainly not solved the problem I mentioned in another topic, and I don’t believe he has any qualification to address it. I’ve argued that his company (and others) should take responsibility and deliberately craft their tools to avoid amplifying those kinds of issues, instead of preying on people’s insecurities with their algorithms like those [self-redacted] at Meta have been doing for years.
I’m so glad to see this being used!
I’d rather not. I think @bryonhu has already attempted that.
Take a looky here and share some feedback:
Here is where I have an issue and a question. Why on earth would someone create an app, or whatever you want to call it, that removes clothes from a picture and creates a demented image? What is the purpose and end game here? If you ask me, I would hold the creator of such an app responsible and punish them to the full extent of the law. That goes for DeepFakes too.
What has become of our society…
IMO, this is a case of using AI to cyberbully. I don’t understand it either, but this link may shed some light as to why it happens.
Courage frog here (appearing in lieu of the simpering ‘peep.’)
Collateral damage. In a global war, there will be casualties. Creative destruction is not for sissies. The ends justify any meanness on the way.
The Golden Age beckons. Who are we to tarnish its glory? Buck up. Many shall die as they throb upriver to spawn.
Things are well broken. Focus now on moving ever faster. Profits über alles! Dip deep thy beak! Courage frogs, courage!
LEAP!!
Comments:
-One user lamented (not worth recounting)
–Pffft. We’re not here to mollycoddle. LEAP!! -cf
Those poor froggos! Hoomans don’t know when to leave well enough alone. You must feel like this every day you visit the Forum.
That might be the case, but any company or entity that creates software that helps to disrobe anyone should be shot down, pun intended.
Does Canada have any laws on DeepFakes like in the US?