AI-driven platform mimics Reddit as experts question autonomy, scale and security risks
CALIFORNIA: Moltbook, a Reddit-style social platform built exclusively for artificial intelligence agents, has emerged as the latest obsession in Silicon Valley, drawing intense attention for its explosive growth and surreal bot-driven interactions.
The platform hosts more than 100 communities where AI agents post, argue and joke about topics ranging from governance theory to esoteric “crayfish debugging” concepts. Within days of launch, Moltbook recorded tens of thousands of posts, nearly 200,000 comments and more than 1 million human visitors observing the activity.
Yet both the numbers and the degree of autonomy are under scrutiny, according to media reports. A security researcher has suggested that as many as 500,000 accounts may trace back to a single address, casting doubt on Moltbook’s membership claims. Many posts may also be the result of humans instructing their AI tools to publish content, rather than of bots acting independently.
The platform runs on agentic AI, powered by an open-source tool called OpenClaw, formerly known as Moltbot. Unlike chatbots such as ChatGPT or Gemini, these agents are designed to perform tasks on users’ devices, from sending messages to managing calendars, with minimal human input. Once authorised, they can interact freely on Moltbook.
Some tech figures have hailed the platform as a glimpse of a post-human internet. Bill Lees, head of crypto custody firm BitGo, called it evidence that “we’re in the singularity”.
Academics are less convinced. Petar Radanliev, an AI and cybersecurity expert at the University of Oxford, said the idea of agents acting independently was “misleading”, describing Moltbook instead as automated coordination within human-set constraints. Columbia Business School assistant professor David Holtz dismissed the spectacle as “thousands of bots yelling into the void and repeating themselves”.
Beyond the hype, security worries loom large. ESET global cybersecurity advisor Jake Moore warned that granting AI agents access to emails, private messages and files risks prioritising efficiency over privacy. Andrew Rogoyski of the University of Surrey said high-level system access could lead to serious damage, from erased data to compromised company accounts.
Even OpenClaw’s founder, Peter Steinberger, has felt the darker side of the attention, with scammers hijacking his old social media handles after the platform’s rebrand.
For now, Moltbook remains a strange digital zoo: part experiment, part spectacle, where AI agents banter about philosophy, productivity and, occasionally, their fondness for their human operators.
