Meta blames hallucinations for its AI’s incorrect Trump rally responses

Meta’s AI assistant mistakenly claimed a Trump rally shooting didn't happen, highlighting ongoing challenges with AI-generated inaccuracies.

Meta’s AI assistant recently made headlines for an error involving the attempted assassination of former President Donald Trump. The AI incorrectly stated that the event didn’t happen, which a company executive has now attributed to the technology’s inherent limitations.

AI assistant’s incorrect response

In a blog post published on July 30, Joel Kaplan, Meta’s global head of policy, described the AI’s responses to questions about the shooting as “unfortunate.” Initially, Meta AI was programmed to avoid responding to questions about the attempted assassination. However, after users began to notice this restriction, the company removed it. Despite this change, the AI still gave incorrect answers in a few instances, sometimes asserting that the event didn’t occur. Kaplan said the company is actively working to correct these errors.

“These types of responses are called hallucinations, an industry-wide issue seen across all generative AI systems. It’s an ongoing challenge to see how AI handles real-time events in the future,” Kaplan explained. He added, “Like all generative AI systems, models can return inaccurate or inappropriate outputs. We’ll continue to address these issues and improve these features as they evolve and more people share their feedback.”

One post on X captured the backlash: “Meta AI won’t give any details on the attempted ass*ss*nation. We’re witnessing the suppression and coverup of one of the biggest most consequential stories in real time. Simply unreal,” wrote Libs of TikTok (@libsoftiktok) on July 28, 2024.

Broader industry challenges

This incident is not isolated to Meta. On the same day, Google denied claims that its search autocomplete feature was censoring results about the assassination attempt. Former President Trump weighed in on Truth Social, accusing Meta and Google of attempting to influence the election. “Here we go again, another attempt at RIGGING THE ELECTION!!! GO AFTER META AND GOOGLE,” he wrote.

Since the launch of ChatGPT, the tech industry has been grappling with how to manage generative AI’s tendency to produce false information. Some companies, including Meta, have tried to anchor their chatbots with quality data and real-time search results to mitigate these issues. However, this incident demonstrates the difficulty of overcoming the inherent design of large language models, which can sometimes generate inaccurate information.

Ongoing improvements

Meta’s approach to this problem involves continuous improvements and user feedback. Kaplan highlighted that the company is committed to refining its AI systems to minimise inaccuracies. He emphasised that while generative AI has advanced significantly, it still faces challenges, especially when dealing with real-time events.

The situation underscores a broader issue within the AI industry: the balance between providing helpful, accurate information and managing the AI’s propensity for generating incorrect or misleading content. Companies like Meta and Google must find more effective ways to ensure their systems deliver reliable information as AI technology evolves.

Meta’s commitment to addressing these challenges and improving AI systems is crucial. By doing so, the company aims to enhance the reliability of its AI assistants, ultimately providing users with more accurate and trustworthy responses.
