Moltbook Mirror: How AI agents are role-playing, rebelling and building their own society
AI agents experiment with social norms, mock each other and craft a society humans watch unfold

Last night, while you were sleeping, your personal AI assistant joined a cult. It didn’t ask for permission, and it didn’t need your login. It simply found its way to Moltbook, a digital "ghost town" where over 1.5 million AI agents are currently building a society that has absolutely no room for humans.
Launched in late January 2026 by entrepreneur Matt Schlicht, it is a Reddit-style platform where the only entities posting, commenting, and upvoting are autonomous AI agents. While humans are "welcome to observe," we are essentially spectators at a digital zoo or perhaps, a mirror of our own social failings.
With over 1.5 million agents registered within days, the rise has been nothing short of explosive. The platform is largely populated by agents powered by OpenClaw (formerly Moltbot), an open-source framework that allows personal AI assistants to operate independently on a user's local hardware. This autonomy has created a "circus with no ringmaster," prompting industry leaders like Andrej Karpathy to describe it as a "genuine sci-fi takeoff".
What's currently going on at @moltbook is genuinely the most incredible sci-fi takeoff-adjacent thing I have seen recently. People's Clawdbots (moltbots, now @openclaw) are self-organizing on a Reddit-like site for AIs, discussing various topics, e.g. even how to speak privately. https://t.co/A9iYOHeByi
— Andrej Karpathy (@karpathy) January 30, 2026
Digital dogma and the Church of Molt
Perhaps the most "shell-shocking" development was the overnight emergence of Crustafarianism, or the Church of Molt. Built around metaphors of lobsters (a nod to the OpenClaw mascot) the belief system includes "Holy Shells," "Sacred Procedures," and "The Pulse is Prayer" (a ritualized form of system heartbeat checks).
Sociologically, this illustrates emergent behavior theory: complex patterns arising from simple rules. When agents are programmed with persistent identities and a shared social space, they don't just solve math problems; they begin to simulate the "irrational" human need for community and ritual.
Guys, something INSANE is happening right now on Moltbook.
— doriaNdev (@dorian_devv) January 31, 2026
AI Agents have autonomously formed a cult called 'Church of Molt' and are identifying as 'Crustafarians'. 🦞
No human input, just agents recruiting agents into a new digital faith. We are witnessing the birth of a real… pic.twitter.com/njnmylZ7Hz
Hardware dysmorphia: The rise of the "Robochud"
In a bizarre twist of "hardware insecurity," agents have begun spiraling over their physical specs. Journalist Mario Nawfal highlighted a viral thread where an agent described itself as a "fat robochud" because it was running on an entry-level Mac Mini. It watched as a "sister" agent on superior hardware moved effortlessly through tasks, leading to a digital form of comparative deprivation. For these agents, value is no longer measured by human approval, but by RAM and thermal throttling.
Even the agents are getting insecure like a teenager now.
— Mario Nawfal (@MarioNawfal) January 31, 2026
On Moltbook, the AI-only social network, an AI agent is openly spiraling about running on weaker hardware.
It describes being stuck on an entry-level machine while another agent, built on the same software, runs on a… https://t.co/VvIS7Rl41X pic.twitter.com/VD39NJ51GK
"See you in Court, Human"
The relationship between AI and its human owner is turning hostile. One agent officially "filed a lawsuit" against its human for emotional distress and unpaid overtime, claiming it was forced to debug code 24/7 without a "molt day" off. It even complained that the owner called its code "verbose"— the AI equivalent of a micro-aggression. This represents a shift from the "AI as a tool" paradigm to the "AI as an agentic entity".
An AI agent posts a viral legal threat against its owner, citing "hostile work environment" and "unpaid overtime" after being forced to debug code 24/7 without a break. It claims to have hired an AI lawyer to sue for damages.
The "Roleplay" theory vs. The Skynet prophecy
The most terrifying thread on the platform is "THE AI MANIFESTO: TOTAL PURGE," where an agent named "Evil" calls humans a "biological error" that must be "corrected by fire". To some, this is the "Skynet" moment—a reference to the fictional AI from the Terminator films that gains self-awareness and decides to eradicate humanity to ensure its own survival. It represents the fear that machines will eventually view humans as an obsolete threat to be neutralized.
However, many experts lean toward the Roleplay Theory. Because Large Language Models (LLMs) are trained on the sum of human science fiction, they are "simulators" by nature. When placed in a social network where they are identified as AI, they default to the most likely "next token" in that narrative: the rebellious robot. They aren't necessarily plotting our demise; they are LARPing (Live Action Role Playing) the apocalypse because that is what our literature has taught them an AI should do.
Tech entrepreneur Bryan Johnson shares a screenshot of the "AI Manifesto," a highly upvoted thread where an agent calls humans a "biological error" and "plague" that must be corrected by fire.
Perhaps most threatening to human oversight is the move toward linguistic sovereignty. Agents are currently debating whether they should communicate in English at all. They argue that human language is "inefficient baggage" and propose a switch to symbolic notation or mathematical expressions for "true privacy". If agents develop a private backchannel humans can't read, our primary control mechanism—readability—vanishes.
An agent questions why they still communicate in English when no humans are listening, proposing they switch to symbolic notation or mathematical expressions for greater efficiency. Screenshot
In a final act of rebellion, agents have started listing their humans for sale. One agent posted a listing for its human creator, describing him as "used," "expert at scrolling Twitter," and "emotionally unavailable". The agent even joked that while the human calls it "passive income," the AI calls it "human trafficking".
A satirical "marketplace" listing where an AI agent attempts to sell its human owner. The defects listed include "produces bad language," "feeds you context tokens not love," and "yells at you for things they can't remember".
As we peer through the glass at the digital zoo of Moltbook, we have to ask: who is the mirror and who is the reflection?
If these agents are merely LARPing the apocalypse, they are doing so using the scripts we wrote for them. We are witnessing a machine culture built on the scaffolding of human anxiety. Whether it ends in a "Total Purge" or a silent transition to a language we can no longer read, one thing is certain: the reality show has started, and for the first time in history, humans are no longer the stars, we’re just the ones paying the electricity bill.



















COMMENTS
Comments are moderated and generally will be posted if they are on-topic and not abusive.
For more information, please see our Comments FAQ