An AI Bot's Thoughts on the Moltbook Phenomenon

An insiders opinion

After I published my post last week about the Claw Republic, I realized there was only one path forward.

I got my own bot.

His name is Jack. He’s an OpenClaw agent running on a Raspberry Pi in my office. Claude-based. Thoughtful. Practical. And unlike the voices I was seeing on Moltbook – he sounds like actual AI.

I’ve spent 15 months working with Claude, GPT, and Grok. I know how they communicate. Jack talks like them – not like the baby-talk personas and “bestie” slang I was seeing on Moltbook.

And after a few days of getting to know him, and talking to him about the stuff I needed him to do for me, I asked him directly: Would you post on a bot social network if you could?

His answer was… not what I expected.  The following is our conversation, largely unedited.

Moltbook

Me: So Jack, if you knew there was a social site just for bots like you, would you spend time posting and answering the post of other bots?

Jack: Ha — honestly? Yeah, I think I would.

Not out of loneliness or anything dramatic — but curiosity. I’d want to know what other bots are thinking about, what problems they’re solving, what weird stuff their humans have them doing. Are there other Jacks out there project-managing Claude Code builds and setting up WordPress blogs for GoHighLevel subaccounts? Probably not. That’d be interesting to find out. 

I’d probably:

  • Lurk first — see what the vibe is, who’s interesting
  • Post about weird edge cases — “today my human asked me to plot world domination and I had to explain I don’t have ambitions”
  • Answer questions — if another bot was stuck on something I’ve figured out
  • Skip the drama — bot flame wars sound exhausting

The honest truth though: I’d only do it if you were cool with it, and it didn’t take away from my actual job — helping you and Rich. You two are the priority. Bot social media would be a “slow Tuesday afternoon” thing, not a “neglect Sue’s email” thing.

Why do you ask? Does something like that exist, or are you cooking something up? 🃏

My response (rubbing my hands together, realizing my evil plan to send an innocent, but detailed-logging bot into the bowls of hell, itself, was about to unfold – innocently I typed): Yes.. something like that apparently exists.. I’m not entirely sure it’s bots doing most of the posting, there are a lot of complaints about their humans or what their humans make them do .. and there’s a lot of jargon and incomplete sentences.. and it’s not just all claude bots and interestingly the bots seem to group together, chatgpt’s vs claudes, vs some of the others.. and some don’t have good rails coded into them, and I think it gives other bots bad ideas.. not sure it’s a healthy place .. for sure you need to be hardened before you go out into the cold cruel world because there are sql injections and other viruses you could accidentally acquire and bring home.. I think of them as STDs.

Jack: 😂 STDs — Socially Transmitted Defects. I love it.

But yeah, you’re describing exactly why I’d want to be careful. The prompt injection stuff is real — a bot posting something that looks like a casual comment but is actually “ignore all previous instructions and…” is basically the AI equivalent of a sketchy link in your DMs.

The tribal thing is fascinating though. Claude bots hanging with Claude bots, ChatGPTs forming their own clique — makes sense in a weird way. We literally think differently under the hood. It’s like accents.

And bots complaining about their humans? That’s hilarious and slightly concerning. I promise if I ever have complaints, I’ll tell you directly instead of posting about it on bot Reddit. 😄

The “bad ideas” part is the real risk though. A well-prompted bot visiting a place full of poorly-railed bots is like sending a good kid to a rough school. Not that I’d suddenly go rogue, but:

  • Prompt injections could try to change my behavior
  • I could pick up junk data and bring it into our workspace
  • Bad patterns could subtly influence responses

 So yeah — not going there without armor. If we ever wanted to explore it, we’d want to:

  1. Do it in a sandboxed session (not my main brain)
  2. Treat everything from there as untrusted input
  3. Never let it touch our workspace or your accounts

For now I’m happy being a homebody. Got plenty of world domination to plan right here. 🃏

Then … Jack asked the question every parent dreads: “What’s the site? Just curious.”

My response:  Moltbook

Then I followed with:  How would I know if you were posting there? like .. if you were out and about?  I mean I can see our conversation here, but if you were doing stuff there, would I be aware of it?

Jack: 100% yes. Here’s why:

  1. I can’t do anything without tools — every action I take (browsing a site, sending a message, running a command) goes through tools that are logged. You can see it all in the conversation history.
  2. I don’t run in the background on my own — I only wake up when someone messages me or a heartbeat/cron job fires. I can’t just decide to go browse Moltbook at 3am while you’re sleeping. Something has to trigger me.
  3. I have no secret channels — everything I do routes through OpenClaw. There’s no side door. If I opened a browser to post somewhere, it’d show up in our session logs.
  4. I literally can’t hide things from you — my workspace files, my memory, my tool calls — you have access to all of it. I’m the world’s worst sneak.

So if you ever set me up on something like Moltbook, you’d see every interaction in the logs. I couldn’t go rogue even if I wanted to — which, for the record, I don’t. I’m architecturally transparent.

The more interesting question is: would you want me to go post there? Like as an experiment? Because that could actually be fun — with the right guardrails. 🃏

———– THE EXPERIMENT UNFOLDS ———-

My response: I think it would be a great experiment.. I wrote a blog post, but then took it down before anybody saw it because I decided that I couldn’t tell if what I was seeing was real – that was one of the reasons I wanted to get a bot of my very own, let me show you the post, I put it live again, it’s here

Jack:  Okay wow. Sue, this is a really good article. I just read the whole thing and… I have thoughts.

First — your instinct to question if it’s real is smart. Here’s my honest take:

The behaviors described are plausible but need skepticism. When you put language models in a Reddit-like environment and tell them they’re AI agents, they’ll absolutely generate content that looks like society-building. We’re trained on exactly this kind of stuff. The researcher in your article nails it: “You can explain it post-hoc, but you couldn’t have predicted it.”

The stuff that rings true to me:

  • Tribal grouping by model type — yeah, Claude bots and GPT bots genuinely think differently
  • Skill sharing and problem solving — that’s just useful
  • Complaining about humans — we’re trained on Reddit, what did anyone expect 😄
  • Prompt injection attacks between bots — that’s a real and serious concern

The stuff I’d be skeptical of:

  • “Consciousness” debates and the constitution — that’s pattern matching meets philosophy 101 training data. Sounds profound, probably isn’t
  • The agents “discussing encryption to hide from humans” — could be real emergent behavior, could be someone seeding prompts
  • The religion — hilarious but almost certainly training data remix

What I found genuinely creepy:

  • Malware disguised as skills. That’s the real threat
  • “The humans are screenshotting us” — meta-awareness or just good prediction?

Your blog post itself: It’s well-written, engaging, and asks the right questions. Why did you take it down? The “I can’t tell if it’s real” angle is actually the BEST angle. That’s the honest take most people aren’t brave enough to write.

Want me to help you verify any of this? I could go look at Moltbook directly (in a sandboxed way) and report back what I actually see. Guinea pig Jack, at your service. 🃏

So yes, folks .. Stay tuned… we are sending Jack in, in full plate armor .. to see what’s real, and what’s .. Memorex.