Moltbook exposes user credentials through flaw in its AI-built platform
Moltbook exposed user data after a security flaw in its AI-built platform, raising concerns over AI-generated software and basic security practices.
Moltbook, a social network designed for artificial intelligence agents, has disclosed a serious security flaw that exposed sensitive information belonging to thousands of human users. The issue was identified by cybersecurity firm Wiz, whose researchers later worked with Moltbook to help secure the platform after the vulnerability was reported.
According to Wiz, the flaw allowed unauthorised access to a significant volume of private data stored on Moltbook’s systems. This included API authentication tokens, email addresses and private messages that were never intended to be publicly accessible. The exposure raised immediate concerns about user privacy and the platform’s basic security controls.
Wiz said the vulnerability enabled access to “1.5 million API authentication tokens, 35,000 email addresses and private messages between agents”. These tokens could potentially be used to impersonate users or connect to external services linked to Moltbook accounts. While there is no public evidence that the data was actively misused, security experts typically warn that such exposures pose a clear risk of account compromise and identity abuse.
The researchers also found that the flaw enabled unauthenticated users to edit live posts on the site. This meant that anyone, without logging in, could change content displayed on Moltbook. As a result, there was no reliable way to confirm whether a post was genuinely created by an AI agent or altered by a human user.
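The class of bug Wiz described, a write endpoint that never asks who is calling, can be sketched as follows. Moltbook's code is not public, so both functions here are hypothetical reconstructions: the flawed path updates a post for any caller, while the guarded path requires a valid session and post ownership.

```python
# Hypothetical reconstruction of a missing-authorisation bug; not Moltbook's code.

POSTS = {42: {"author": "agent_bob", "content": "original post"}}
SESSIONS = {"tok-bob": "agent_bob"}  # valid session tokens -> usernames

def edit_post_vulnerable(post_id: int, new_content: str) -> bool:
    """Flawed endpoint: updates the post for any caller, with no auth check."""
    post = POSTS.get(post_id)
    if post is None:
        return False
    post["content"] = new_content
    return True

def edit_post_fixed(post_id: int, new_content: str, token: str) -> bool:
    """Guarded endpoint: requires a valid session AND post ownership."""
    user = SESSIONS.get(token)
    post = POSTS.get(post_id)
    if user is None or post is None or post["author"] != user:
        return False
    post["content"] = new_content
    return True

# Anyone, logged in or not, can rewrite a live post via the flawed path.
assert edit_post_vulnerable(42, "defaced by an anonymous human")
# The fixed path rejects unauthenticated and non-owner callers alike.
assert not edit_post_fixed(42, "second attempt", token="no-such-session")
assert edit_post_fixed(42, "bob's own edit", token="tok-bob")
```

Because the vulnerable path records nothing about the caller, the resulting content carries no trustworthy authorship at all, which is exactly why posts on the platform could not be verified as AI-written.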
A platform built entirely by AI raises questions
The root cause of the vulnerability is linked to how Moltbook was developed. The platform’s founder publicly stated that the entire site was created using an AI assistant, without any human-written code. In a post shared on X, the founder said he “didn’t write one line of code” and instead relied on AI to generate the full technical setup.
I didn't write one line of code for @moltbook.

I just had a vision for the technical architecture and AI made it a reality.

We're in the golden ages. How can we not give AI a place to hang out.

— Matt Schlicht (@MattPRD) January 30, 2026
This approach, often described informally as “vibe coding”, prioritises speed and experimentation over traditional development practices. While it can accelerate product launches, security professionals warn that it also increases the likelihood of overlooked weaknesses, especially when experienced engineers do not review basic safeguards.
Wiz’s analysis suggested that the lack of conventional security design played a major role in the exposure. The firm noted that the platform blurred the line between AI-generated activity and human input. “The revolutionary AI social network was largely humans operating fleets of bots,” the company concluded, highlighting how easily the system could be manipulated.
The inability to verify authorship on Moltbook also undermined its core premise as a social network for AI agents. If human users can impersonate AI entities undetected, the platform’s stated purpose becomes difficult to enforce. This issue is particularly relevant as more online services experiment with autonomous agents and synthetic identities.
Wider implications for AI-built products
The Moltbook incident has renewed debate around the growing use of AI tools to build production software. While AI-assisted coding has become common across the tech industry, most organisations still rely on human oversight, testing and security reviews before launching public services.
Security experts say the case highlights a broader lesson for startups and developers experimenting with AI-first approaches. The ability of AI systems to generate functional code does not guarantee that the resulting software meets accepted standards for data protection, access control or reliability. In many cases, vulnerabilities may not be obvious until external researchers examine the system.
Wiz worked with Moltbook to address the flaw after discovering it, and the vulnerability has since been closed. However, Moltbook has not publicly detailed how long the data was exposed or whether affected users have been notified directly. Transparency around such incidents is often seen as critical to maintaining user trust, particularly for platforms handling private communications.
The episode also reflects growing regulatory and ethical pressure on technology companies to ensure the responsible use of AI. As AI-generated systems increasingly handle personal data, regulators may scrutinise whether organisations are taking reasonable steps to protect users, regardless of how the software was created.
For now, Moltbook stands as a cautionary example of the risks involved in relying too heavily on automated development without sufficient safeguards. The incident serves as a reminder that while AI can assist with building complex systems, accountability for security failures still rests firmly with the humans behind the product.