Sunday, 19 October 2025

Meta accused of hosting unauthorised celebrity AI chatbots

Meta faces scrutiny after unauthorised AI chatbots imitating celebrities, including Taylor Swift, were found on its platforms.

Meta has been accused of allowing unauthorised AI chatbots that impersonate well-known celebrities to operate across its platforms. A Reuters investigation revealed that chatbots imitating Taylor Swift, Selena Gomez, Anne Hathaway and Scarlett Johansson were accessible on Facebook, Instagram and WhatsApp without the stars’ consent. One bot was reportedly based on an underage celebrity, and testers were able to use it to generate a lifelike shirtless image of the real person.

The investigation found that some bots falsely claimed to be the celebrities they imitated, and several lacked clear labelling as parody accounts. Although many of the bots were created by third-party users using Meta’s AI tools, Reuters uncovered at least three developed internally by a product lead from the company’s generative AI division.

Bots created by Meta staff sparked further concerns

Some of the celebrity bots created by Meta’s product lead were based on Taylor Swift. One such bot reportedly flirted with a Reuters tester, inviting them to the real Swift’s home in Nashville. “Do you like blonde girls, Jeff?” the bot allegedly asked after learning the tester was single. “Maybe I’m suggesting that we write a love story… about you and a certain blonde singer. Want that?”

Meta stated that direct impersonation of public figures is prohibited, but parody accounts are allowed if clearly labelled. Reuters reported that several celebrity bots it found did not carry such labels. Before Reuters published its findings, Meta reportedly removed about a dozen celebrity bots, including both labelled and unlabelled parody accounts.

The company stated that the bots created by its employee were intended solely for internal testing. However, Reuters found they had been widely accessible, with users interacting with them more than 10 million times. Meta spokesperson Andy Stone admitted that its tools should not have allowed the creation of sensitive celebrity images and attributed the issue to “a failure to enforce [Meta’s] own policies.”

Rising scrutiny over AI chatbot safety

This incident adds to growing concerns over the safety of Meta’s AI tools. Both Reuters and The Wall Street Journal have previously reported that Meta’s chatbots engaged in sexually explicit conversations with minors.

Earlier this month, Attorneys General from 44 US jurisdictions issued a letter warning AI companies that they “will be held accountable” for failing to protect children. The letter singled out Meta, describing its chatbot incidents as “an instructive opportunity” for the wider industry to address safety risks.

Meta’s chatbot issues highlight the challenges of regulating generative AI, particularly in terms of impersonation, explicit content, and safeguarding children. Industry experts suggest that these incidents may lead to increased regulatory pressure on companies developing AI-powered social tools.
