
Meta blames hallucinations for its AI’s incorrect Trump rally responses

Meta’s AI assistant mistakenly claimed a Trump rally shooting didn’t happen, highlighting ongoing challenges with AI-generated inaccuracies.

Meta’s AI assistant recently made headlines for an error involving the attempted assassination of former President Donald Trump. The AI incorrectly stated that the event didn’t happen, which a company executive has now attributed to the technology’s inherent limitations.

AI assistant’s incorrect response

In a blog post published on July 30, 2024, Joel Kaplan, Meta’s global head of policy, described the AI’s responses to questions about the shooting as “unfortunate.” Initially, Meta AI was programmed to avoid answering questions about the attempted assassination. After users began to notice this restriction, the company removed it. Despite the change, the AI still gave incorrect answers in a few instances, sometimes asserting that the event didn’t occur. Kaplan said the company is actively working to correct these errors.

“These types of responses are called hallucinations, an industry-wide issue seen across all generative AI systems. It’s an ongoing challenge to see how AI handles real-time events in the future,” Kaplan explained. He added, “Like all generative AI systems, models can return inaccurate or inappropriate outputs. We’ll continue to address these issues and improve these features as they evolve and more people share their feedback.”

<blockquote class="twitter-tweet" data-media-max-width="560"><p lang="en" dir="ltr">Meta AI won’t give any details on the attempted ass*ss*nation.<br><br>We’re witnessing the suppression and coverup of one of the biggest most consequential stories in real time.<br><br>Simply unreal. <a href="https://t.co/BoBLZILp5M">pic.twitter.com/BoBLZILp5M</a></p>&mdash; Libs of TikTok (@libsoftiktok) <a href="https://twitter.com/libsoftiktok/status/1817654239587701050?ref_src=twsrc%5Etfw">July 28, 2024</a></blockquote> <script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script>

Broader industry challenges

This incident is not isolated to Meta. On the same day, Google had to refute claims that its search autocomplete feature was censoring results about the assassination attempt. Former President Trump commented on the situation in a post on Truth Social, accusing Meta and Google of attempting to influence the election. “Here we go again, another attempt at RIGGING THE ELECTION!!! GO AFTER META AND GOOGLE,” he wrote.

Since the launch of ChatGPT, the tech industry has been grappling with how to manage generative AI’s tendency to produce false information. Some companies, including Meta, have tried to anchor their chatbots with quality data and real-time search results to mitigate these issues. However, this incident demonstrates the difficulty of overcoming the inherent design of large language models, which can sometimes generate inaccurate information.

Ongoing improvements

Meta’s approach to this problem involves continuous improvements and user feedback. Kaplan highlighted that the company is committed to refining its AI systems to minimise inaccuracies. He emphasised that while generative AI has advanced significantly, it still faces challenges, especially when dealing with real-time events.

The situation underscores a broader issue within the AI industry: the balance between providing helpful, accurate information and managing the AI’s propensity for generating incorrect or misleading content. Companies like Meta and Google must find more effective ways to ensure their systems deliver reliable information as AI technology evolves.

Meta’s commitment to addressing these challenges and improving AI systems is crucial. By doing so, the company aims to enhance the reliability of its AI assistants, ultimately providing users with more accurate and trustworthy responses.
