On March 8, Signal President Meredith Whittaker raised concerns about the risks AI agents pose to user privacy and security. Speaking at the SXSW conference in Austin, Texas, she warned that this new form of computing could significantly undermine online safety.
Whittaker compared AI agents to “putting your brain in a jar,” highlighting how they are being promoted as a way to simplify daily tasks. These agents are designed to handle activities such as searching for events, booking tickets, updating your calendar, and messaging friends. However, she stressed that AI agents need deep access to personal data to perform these actions, raising major security concerns.
AI agents require deep system access
During her talk, Whittaker explained how AI agents function and the level of control they require over user data. To carry out everyday tasks, they would need access to your web browser, credit card details, calendar, and messaging apps. This would mean giving them near-unrestricted control over your system.
“It would need to drive that process across your entire system with something that looks like root permission, accessing every single one of those databases, probably in the clear because there’s no model to do that encrypted,” she cautioned.
She pointed out that such an AI system wouldn’t run on your device alone. Instead, it relies on cloud servers, where your private data is processed externally before being sent back to your device. She argued that this presents a “profound issue” for security and privacy.
AI could compromise messaging privacy
Whittaker also warned that if secure messaging apps like Signal integrated AI agents, it would threaten message privacy. AI-powered assistants would need to read messages to compose responses, making them a potential weak point in data security.
She tied these concerns back to her broader criticism of the AI industry, which she said has been built on mass data collection and surveillance. The industry operates on the principle that bigger datasets lead to better AI, but she argued that this approach has significant privacy risks.
Her remarks suggested that AI agents, despite being marketed as helpful tools, could erode privacy in ways users might not fully understand. “We’re doing all this in the name of a magic genie bot that’s going to take care of the exigencies of life,” she concluded.