
Meta ends fact-checking in the U.S. as moderation policies shift

Meta ends U.S. fact-checking and shifts to community-based moderation, sparking concerns about rising misinformation across its platforms.

Meta has officially ended its use of fact-checkers in the United States, a decision that will take full effect on Monday, June 10. Joel Kaplan, Meta’s Chief Global Affairs Officer, confirmed the move, which marks a major shift in the company’s approach to content moderation.

This change was first announced in January, alongside other updates that relaxed the company’s rules around harmful or misleading content. At the time, Meta CEO Mark Zuckerberg framed the decision as part of a broader effort to prioritise “free speech.” However, the move has drawn concern from critics who worry that it may open the door to more misinformation and harm to vulnerable communities.

A shift in approach to speech and moderation

Meta’s latest changes come at a politically sensitive time. When the updates were introduced earlier this year, Zuckerberg had just donated US$1 million to former President Donald Trump’s inauguration fund and attended the event. Not long after, he appointed Dana White—CEO of UFC and a known Trump supporter—to Meta’s board of directors.

In a video statement, Zuckerberg described the recent election cycle as a “cultural tipping point” for freedom of speech. He argued that it was time to refocus on allowing open conversations, even if the topics were controversial.

However, some of Meta’s updated policies have raised eyebrows. According to the company’s hateful conduct policy, Meta now permits users to claim someone has a mental illness or abnormality based on their gender or sexual orientation. The platform justifies this as being part of “political and religious discourse” on topics like transgender identity and homosexuality.

Fact-checking replaced by community notes

Instead of professional fact-checkers, Meta is now adopting a community-based system similar to “Community Notes” used on Elon Musk’s platform, X (formerly Twitter). This new system relies on users to add context to potentially misleading posts across Facebook, Instagram, and Threads.

While this approach may help surface additional context, experts say it is far less effective on its own. Professional fact-checkers typically work quickly and to clear standards, helping platforms limit the spread of false information. With their removal, critics say, the spread of fake news could worsen.

Meta’s decision to step away from fact-checking appears to align with its business interests. The less moderation there is, the more content can circulate. Since Meta’s platforms use algorithms that prioritise posts that get strong reactions—whether positive or negative—more controversial or misleading posts are likely to get more attention.

The rise in misinformation is already visible

Since Meta began scaling back its fact-checking efforts earlier this year, there has already been a rise in false claims spreading across its platforms. One example involves a fake story that U.S. Immigration and Customs Enforcement (ICE) would pay US$750 to individuals who report undocumented immigrants. The rumour went viral and was shared widely before being debunked.

The person behind that post welcomed Meta’s policy changes, telling investigative outlet ProPublica that the end of fact-checking was “great information.”

In January, Kaplan summarised the company’s new direction as follows: “We’re getting rid of several restrictions on immigration, gender identity, and gender that are the subject of frequent political discourse and debate. It’s not right that things can be said on TV or the floor of Congress but not on our platforms.”

As of this week, Meta no longer has any fact-checkers operating in the U.S. The move signals a new chapter for the tech giant—one focused on letting users decide what’s true rather than relying on experts to guide the way.
