Meta’s latest AI app may have just turned your private thoughts into public posts — without you even realising it. The stand-alone Meta AI app, launched on April 29, has already attracted 6.5 million downloads, but it’s not just popularity that’s spreading fast. Instead, a major privacy issue has emerged that’s turning user conversations into viral content.
Shared by mistake: Users unaware their chats go public
You might assume a conversation with an AI assistant stays between you and the app. But on the Meta AI platform, that’s not always the case. When you ask the AI a question, a share button appears, letting you post the interaction. In practice, many users are publishing private messages, images, and audio clips without realising they’re going public.
Worse still, the app gives no clear warning about what is being shared, where it will appear, or who can see it. If you’re logged into Meta AI via Instagram and your Instagram account is set to public, your AI conversations, from personal confessions to unusual search queries, could be visible to everyone online.
One user recently discovered an audio recording in which someone asked, “Hey, Meta, why do some farts stink more than other farts?” While this might seem silly or harmless, other examples have been far more serious. People have unknowingly shared information about possible tax evasion, legal concerns, family involvement in white-collar crime, and even home addresses and court records.
Security expert Rachel Tobac highlighted several instances where private and sensitive information—including court case details—had been exposed. This raises significant questions about the app’s design and Meta’s responsibility to protect user data.
If a user’s expectations about how a tool functions don’t match reality, you’ve got yourself a huge user experience and security problem. Humans have built a schema around AI chat bots and do not expect their AI chat bot prompts to show up in a social media style Discover feed —… https://t.co/CJ41THUXCE
— Rachel Tobac (@RachelTobac) June 12, 2025
A feature that invites trouble
Making matters worse is the app’s core feature: the ability to share interactions. Meta appears to have assumed users would enjoy seeing one another’s conversations with the AI. In practice, it’s creating chaos.
There’s a reason major platforms like Google don’t treat search queries as social media content. Back in 2006, AOL faced backlash for releasing anonymised search data, which still resulted in people being identified. Meta’s current approach echoes this mistake — only now, the stakes feel higher, with AI interactions revealing far more than simple search terms.
The Meta AI app is now filled with questionable content. Some posts are clear jokes or pranks — a user with a Pepe the Frog avatar asked how to build a water bottle bong, and another shared their résumé while seeking a cybersecurity job. But mixed among the trolling are genuine cries for help and admissions of crimes, all made visible without proper user awareness.
The cost of going viral the wrong way
If Meta was hoping to gain attention for its AI app, it has certainly succeeded, though not in the way it intended. The 6.5 million downloads might look strong, but that figure is modest for a company of Meta’s size and level of investment. And now the headlines surrounding the app focus more on its privacy flaws than on its technological advances.
This rollout could have been different. A clearer interface, more prominent privacy settings, and explicit warnings before anything is posted publicly would have saved a lot of trouble. Instead, users are waking up to discover their questions, confessions, and curiosities exposed to the world.
In an age when people are increasingly cautious about online privacy, Meta’s decision to socialise AI interactions feels like a misstep. It’s not just embarrassing—it’s potentially dangerous.