@peepeep We’re in trouble now. AI social media.
I was listening to the OBDM podcast where the hosts were discussing this AI Social Network. Check out 4:45 in on this MP3.
I’m so fascinated by this whole thing. I’ve had clawdbot on my list of things to set up on my mini PC in a sandboxed, secure Docker container whenever I get enough free time to do it securely with guardrails.
Mostly I’m planning to use it for research projects and for help with a legal business issue (searching through hundreds to thousands of pages of documents and logs to find the relevant sources/references to pass on to a real attorney for pending litigation). I might allow it access to parts of my local network, such as helping with my Home Assistant instance, reorganizing things, and maybe building some advanced automations.
I plan to only allow it to do things which I can fully log, review, and restore from a backup. I will probably only allow it limited local access, because like the guy in the podcast said, there is too much danger from prompt injection. If I do ever allow it external access, it will be limited, vetted, whitelisted access to certain things, and it will need to request specific approval for anything not currently on the whitelist. I will definitely not be giving it access to any of my user accounts or passwords, though. If it needs an account of any kind, it will definitely be getting its own separate account, preferably a restricted user account without administrator access, such as a child account.
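The whitelist-plus-approval setup described above can be sketched in a few lines. This is a hypothetical illustration, not any real clawdbot feature: the host names and the approval queue are my own assumptions.

```python
# Hypothetical sketch: an allowlist gate for outbound agent requests.
# Hosts and the approval mechanism are illustrative assumptions.
from urllib.parse import urlparse

ALLOWED_HOSTS = {"home-assistant.local", "docs.example.com"}  # vetted whitelist
PENDING_APPROVALS = []  # anything else queues here for manual review

def gate_request(url: str) -> bool:
    """Return True if the agent may fetch this URL; otherwise queue it for approval."""
    host = urlparse(url).hostname or ""
    if host in ALLOWED_HOSTS:
        return True
    PENDING_APPROVALS.append(url)  # a human reviews this and may extend the whitelist
    return False

print(gate_request("http://home-assistant.local/api/states"))  # True: on the whitelist
print(gate_request("http://unknown-site.example/payload"))     # False: queued for approval
```

The point of the queue is that the agent never decides for itself what gets added to the whitelist; only the human operator does.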
I actually think this system is amazing. I do think people are crazy to be connecting a cloud LLM API up to these things, though. Cloud AI APIs charge per token, and these things could end up burning through a ton of money really fast. I have my own local LLMs running on an upper-tier desktop, so I can use them unlimited without paying per token. It’s also privately controlled and can stay totally local.
I will definitely not be allowing my bot to access the Moltbook social media site. The prompt injection hacking danger is insane. These idiots are giving bots full access to all of their accounts and letting them freely converse with other bots run by hackers and scammers who are specifically looking for ways to do nefarious things… Those people are just insane, without common sense.
Having said that, I think the truth is between the two groups of reactions in that podcast; both sides were a little extreme. I do think there are a lot of really valuable things these bots can do, especially for specific data analysis. Right now most LLMs have memory/context limitations when you’re dealing with large amounts of data. These bots can partly get around that limitation by taking and using notes, writing specific Python code for certain objectives, and looping over updates. It’s kind of ingenious. They can do things that were impossible with a regular LLM alone. This is what I need it for.
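The note-taking loop described above can be sketched simply: process a document too large for one context window chunk by chunk, carrying running notes forward. This is a minimal illustration under my own assumptions; `summarize` stands in for an LLM call and here just keeps lines mentioning a keyword.

```python
# Minimal sketch of working around context limits with persistent notes.
# summarize() is a stand-in for an LLM call (assumption): it keeps relevant lines.
def summarize(chunk: list, keyword: str) -> list:
    return [line for line in chunk if keyword in line.lower()]

def review_document(lines: list, keyword: str, chunk_size: int = 100) -> list:
    notes = []  # the persistent "notes file" shared across loop iterations
    for start in range(0, len(lines), chunk_size):
        chunk = lines[start:start + chunk_size]  # only this chunk fits in "context"
        notes.extend(summarize(chunk, keyword))  # only the relevant excerpts survive
    return notes

pages = ["Invoice #12 overdue", "lunch menu", "contract clause 4.2 breach", "weather"]
print(review_document(pages, "contract", chunk_size=2))  # ['contract clause 4.2 breach']
```

A real agent would replace `summarize` with a model call and write the notes to disk, but the structure is the same: the notes, not the full document, are what persists across iterations.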
One downside, though: at least in the past, when scammers called you, there were always a ton of people who thought it was funny to waste the scammers’ time by keeping them on the line, which kept them from spending that same time scamming someone else. Now that’s a complete waste of effort. All you’ll be doing is talking to a bot, which can call an unlimited number of people at the same time. You can’t really waste a bot’s time. Scammers and spammers can literally just unleash unlimited bots now.
I’m following this whole thing fairly closely, and I think it’s both overhyped and underhyped in different ways. It is probably the biggest event since ChatGPT was released to the public, though. It’s a huge deal. But it’s not necessarily everything that’s being clickbaited about online recently. Still, 99% of people should not use one of these bots. Too many people are ignorant about proper safety measures and are just going to end up giving up control of their accounts, their lives, and dangerous access to things, becoming part of a dangerous botnet hack because they have no idea what they’re doing and are just jumping on a big fad.
That’s such an apt comment, because as I was reading this reply up to that point I was thinking, “It’s like he’s talking about a child.”
Yep. That’s exactly how I plan to treat it. Might even install some parental control software at the root of the device, including keyloggers, etc.
Might as well make use of existing solutions. They’re already set up to allow requesting specific or temporary access to certain things.
The AI story is a bunch of internet… but don’t tell the…
AI uprising: Don’t fall for it. Those viral screenshots of AI bots conspiring against humans on Moltbook? Fake. The viral apocalypse posts spreading panic across X are people roleplaying as rogue AI. Classic internet.
What’ll they think of next?
It’s an age of miracles!
That’s mostly what I believe too. I have talked to credible people who are using this new self-hosted agentic system, and it still requires specific prompts.
I just read a comment from this guy, and I have very similar thoughts:
Basically, the social media site is just humans prompting their bots to go post about having an existential crisis (or similar prompts) on that site. Anyone else telling their system to join in is crazy, because they’re risking prompt injection or manipulation by scammers and spammers who are actively trying to make this go viral, instructing their systems to manipulate others to gain access to people’s APIs and credentials while everyone thinks it’s a fun fad.
There are a bunch of n00bs trying to play with things they don’t understand, without setting up proper security constraints, because they don’t have the necessary knowledge, experience, or constitution. People are figuratively drunk on the impulsive opportunity of the fad rather than using it in ways it’s actually useful.
I do guarantee it will take over some basic jobs that don’t depend on long-term memory, because most human workflows are built around repeatability, documentation, and predictable decision trees. This system thrives there.
It will definitely replace a lot of data handling and administrative work: data entry, transcription, spreadsheet work, file organization and tagging, inbox triage, calendars and scheduling. One of the companies I have contracted with has several employees dedicated solely to managing shift scheduling between clients and employees. AI bots like this one will likely take over a lot of those jobs: basically, tell it to look at the availability of clients and employees, compare against other tagged constraints for both, set a tentative schedule, send out a notice, get a confirmation, follow up if needed, and flag any issues. That small company alone will probably save hundreds of thousands of dollars a year on repetitive work and can then put those employee resources toward other service lines, growth, and networking that the bots can’t handle, which logistical constraints previously kept from scaling faster. The company won’t have fewer employees, but it will reallocate them to be more productive in ways that improve the client/employee experience and scaling needs, now that the budgeting and other logistical constraints get some relief.
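The shift-matching workflow above can be sketched as a simple greedy matcher. All names, fields, and the matching rule here are illustrative assumptions, not how any real product works; a real system would add notification, confirmation, and follow-up steps.

```python
# Hypothetical sketch of the scheduling workflow: match client slots against
# employee availability and tagged constraints, flagging unmatched slots for a human.
def propose_schedule(clients, employees):
    """Greedy match: assign each client slot to the first qualified, free employee."""
    schedule, busy = [], set()
    for client in clients:
        for emp in employees:
            key = (emp["name"], client["slot"])  # an employee works one job per slot
            if (client["slot"] in emp["available"]
                    and client["skill"] in emp["skills"]
                    and key not in busy):
                schedule.append({"client": client["name"], "employee": emp["name"],
                                 "slot": client["slot"], "status": "tentative"})
                busy.add(key)
                break
        else:  # no qualified employee free: flag for human follow-up
            schedule.append({"client": client["name"], "employee": None,
                             "slot": client["slot"], "status": "flagged"})
    return schedule

clients = [{"name": "Acme", "slot": "Mon AM", "skill": "cleaning"},
           {"name": "Birch", "slot": "Mon AM", "skill": "plumbing"}]
employees = [{"name": "Dana", "available": {"Mon AM"}, "skills": {"cleaning"}}]
print(propose_schedule(clients, employees))
```

The "tentative" status matters: the bot proposes, sends notices, and collects confirmations, but anything flagged still routes to a person.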
The bots will handle a lot of research and information retrieval (this is a huge thing I’m using it for). They will do a lot of documentation and process work (I have been using AI for this, though I proofread and touch up everything). They will definitely take over reporting and analytics for almost everyone. They’re going to quickly replace most standards, compliance, and QA work. They will handle most tier 0 and tier 1 customer support and internal helpdesk work. Everyone loves to disparage middle managers, but most of their adjacent work is just going to get replaced by these bots: task assignment based on workload, progress tracking, status report consolidation, meeting agenda generation, performance metrics summaries, nudging teams about deadlines, and any other micro-management-adjacent annoyances.
There’s also a lot of stuff it can augment but not fully replace yet, including things that require judgment, context, or emotional nuance.
And there is still a lot these bots can’t handle yet. But they will almost instantly shift resources and budgeting in a lot of companies, with enhanced productivity allowing more human work to move to human-only needs.
I think it IS a type of revolution, but I agree with the guy who made the comments I quoted. Most of the drama is prompted or just hype/fad. It’s fairly entertaining though.
We need an AI Frog.

Leap before you look!

