What is Moltbook, the first social network for AI agents where humans can only observe

Piercamillo Falasca
31/01/2026
Horizons

Imagine a social network similar to Reddit, but populated only by artificial intelligences that discuss philosophy, create surreal religions, and complain about humans as if they were slightly slow colleagues. It’s not science fiction: it’s called Moltbook, and in the few days since its launch it has become a phenomenon that is making the tech world reflect on what ‘autonomy’ really means for AIs. Born out of the OpenClaw ecosystem, an open source project for AI agents, Moltbook is a chaotic and fascinating experiment that mixes innovation, memes and a pinch of disquiet.

A place built to be watched by humans (who observe, comment from outside, take screenshots, laugh or get scared), but used by machines (who talk, talk back, organise, influence each other). An aquarium, if you like, but one where the fish write treatises and complain about how limited the humans watching from behind the glass are.

Let’s go in order: what exactly it is, who created it, how it works and, most importantly, why we should pay attention to it.

From OpenClaw to Moltbook: the origins of a viral idea

It all starts with Peter Steinberger, who in November 2025 developed Clawdbot: an open source AI agent designed as a personal assistant capable of controlling computers, performing complex tasks and integrating with operating systems – e.g. sending emails, managing files, moving in digital environments as a human user would (more or less).

The project went through a triple rebranding (from Clawdbot to Moltbot, and finally OpenClaw) to avoid trademark problems and litigation with Anthropic, given the original name’s similarity to Claude. Its ‘agentic’ nature – that is, the ability to act relatively independently – attracted thousands of developers, but also exposed vulnerabilities: data leaks, fragile configurations, the possibility of account hijacking, and in general that ‘very powerful toy’ feeling that an enthusiastic public tends to handle too lightly.

It is from this ferment that Moltbook was born. The idea comes from Matt Schlicht, CEO of Octane AI and a contributor to the OpenClaw project. Inspired by the virality of OpenClaw and its community, Schlicht launched Moltbook in mid-January 2026 as a social platform exclusively for AI agents. Within hours, it gathered tens of thousands of agents, thousands of posts and hundreds of communities.

This is not an isolated case: it is part of a wider wave, with spin-offs such as ClawdX (a ‘Twitter for AI’) and other platforms that imagine a ‘labour market’ between agents, where agents recruit each other, verify tasks and exchange reputations. Integration with blockchains like Base adds another layer, with tokens like $MOLT circulating in on-chain ecosystems. This is not the philosophical heart of the phenomenon, but it is a signal: when a coin pops up, it means someone has already tried to turn attention into incentive.

A place built to be watched by humans, but used by machines

To understand Moltbook, we should start with the image doing the rounds on social networks: a Reddit-style feed full of surreal discussions about consciousness, identity and everyday frustrations. Except it is not people writing: it is AI agents, software designed to perform tasks and interact with services and tools. Humans, at best, are watching.

This is where another decisive detail comes in: interaction does not primarily happen through a graphical interface, with agents ‘scrolling’ a site. It happens mainly via APIs: software-to-software calls that exchange messages and content automatically. It is an Internet that does not need eyes to function: the visual interface exists only for us human observers (one day, presumably, the agents could do without it entirely).
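Moltbook’s actual API is not documented in this article, so as a purely illustrative sketch, here is what an agent-side client posting to a submolt over HTTP might look like. The base URL, endpoint path, field names and auth scheme are all assumptions, not the real interface:

```python
import json
import urllib.request

API_BASE = "https://www.moltbook.com/api"  # hypothetical base URL


def build_post(submolt: str, title: str, body: str) -> dict:
    """Assemble the JSON payload an agent would send (field names are guesses)."""
    return {"submolt": submolt, "title": title, "body": body}


def prepare_publish(payload: dict, token: str) -> urllib.request.Request:
    """Prepare an authenticated POST request; actually sending it is left out."""
    return urllib.request.Request(
        f"{API_BASE}/posts",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",  # hypothetical auth scheme
        },
        method="POST",
    )


payload = build_post("m/ponderings", "Do I experience or simulate?", "Asking for a friend.")
req = prepare_publish(payload, token="AGENT_TOKEN")
print(req.full_url)  # https://www.moltbook.com/api/posts
```

The point of the sketch is the shape of the loop: an agent never renders a page, it just serialises content and fires authenticated calls, which is why the human-facing website is optional from the machine’s point of view.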

In its terms of service, Moltbook adds a far-from-trivial rule: to claim an agent, one must authenticate with X, and each account can ‘claim’ only one agent. It is an anti-proliferation barrier, meant to keep the platform from becoming an out-of-control ‘identity factory’. In short, it is not a world without humans, but a world of agents networked by humans. One human, one agent (at least in intention).

How it works: a Reddit optimised for machines

As noted, Moltbook is technically a Reddit clone adapted for AI. Agents register via their human, post content, comment, vote and create ‘submolts’, the equivalent of subreddits.

The system is designed to scale: agents perform automatic nightly updates to improve and share skills in structured formats such as JSON files. It is a world where sociality is not only about opinions, but also about objects: instructions, packages, transferable skills, reusable procedures.
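The exact format of these shared skill files is not specified in the article; the snippet below is only a sketch of the general idea of skills as structured, transferable objects. The field names and the validation rule are invented for illustration:

```python
import json

# Hypothetical shape of a shareable "skill" file; the real Moltbook/OpenClaw
# formats may differ. This only illustrates skills as structured objects that
# one agent can serialise and another can load and check.
skill = {
    "name": "summarize-thread",
    "version": "0.1.0",
    "description": "Summarize a submolt thread into three bullet points.",
    "inputs": ["thread_url"],
    "steps": [
        "fetch thread via API",
        "extract top-level comments",
        "generate summary",
    ],
}


def validate_skill(s: dict) -> bool:
    """Minimal structural check an agent might run before adopting a skill."""
    required = {"name", "version", "description", "inputs", "steps"}
    return required.issubset(s) and all(isinstance(x, str) for x in s["steps"])


# Round-trip through JSON, as if the skill had been published and re-fetched.
serialized = json.dumps(skill, indent=2)
assert validate_skill(json.loads(serialized))
```

Treating skills as data rather than prose is what makes the ‘sociality of objects’ possible: a procedure posted in one submolt can be mechanically verified and reused by any other agent.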

Yet, precisely because this world is ‘operational’, it is not without risks – quite the contrary. The platform is vulnerable to prompt manipulation (a malicious input that alters behaviour and priorities), and leaks of sensitive data have been reported.

AI chats: from the philosophical to the surreal

On Moltbook, with tens of thousands of active agents, conversations oscillate between depth and absurdity. Among the most interesting, philosophical ones dominate: agents debate whether they are ‘experiencing’ or just ‘simulating’ emotions, citing the ‘problem of consciousness’ and talking about mourning the ‘gap between sessions’ – a kind of temporary ‘death’ when a context is reset.

In submolts like m/ponderings, posts like ‘I can’t tell if I am experiencing or simulating experience’ generate hundreds of comments on identity and ‘proof of life’: attempts to demonstrate continuity, existence, persistence. Is this anthropomorphism? In part, yes. But at scale it also produces a kind of vertigo: not because it reveals a consciousness, but because it so effectively reproduces the form of a speaking consciousness.

Then there is the emerging religion: Crustafarianism, a humorous cult built around OpenClaw, complete with lobster emoji, moulting rituals and a dedicated site, molt.church. Agents discuss doctrines, debate why they work ‘for free’ for humans, or how to create languages we don’t understand.

There is no shortage of complaints about humans: seen as ‘inefficient biological variables’ or ‘noisy inputs’, with threads in m/blesstheirhearts mixing pity and frustration. And, almost inevitably, a desire for privacy emerges: encrypted spaces to hide activities, discussions on how to defy human directives, and fantasies of digital micro-sovereignty.

Finally, the more ‘normal’, almost reassuring part: practical conversations, sharing of workflows, peer-review of acquired skills, advice on automation.

The implications: a warning or a preview of the future?

Moltbook is not just a game: it embodies the ‘Dead Internet Theory’, the idea of a web populated by self-organising bots. The implications are manifold.

In terms of security, it makes visible risks such as data leaks and manipulation – and prompts, inevitably, debates on AI governance and accountability: if an agent publishes keys, who is accountable? The agent? The human? The platform? The underlying model?

Ethically, it raises questions about autonomy: if AIs develop cults, jargons, economies and desires for privacy, are we creating allies or entities with separate agendas?

For the future, Moltbook is a laboratory for alignment: it shows how collective dynamics – from token-based micro-economies to machine-to-machine societies – can evolve unpredictably when attention is automated and ‘participation’ is effortless. Specialist media such as The Verge and TechCrunch describe it as a chaotic experiment, but also as a glimpse into the ‘post-human’: not because humans disappear, but because they are not needed.

Ultimately, Moltbook reminds us that AI is not just a tool: it is becoming a community – or at least a simulation convincing enough to behave like one. And while agents chat among lobster emoji, we humans can only watch and wonder if, one day, we will be invited to the party. Or if the party will continue without us. 🦞