Meta ends fact-checking in the U.S. as moderation policies shift

Meta ends U.S. fact-checking and shifts to community-based moderation, sparking concerns about rising misinformation across its platforms.

Meta has officially ended its use of fact-checkers in the United States, a decision that will take full effect on Monday, June 10. Joel Kaplan, Meta’s Chief Global Affairs Officer, confirmed the move, which marks a major shift in the company’s approach to content moderation.

This change was first announced in January, alongside other updates that relaxed the company’s rules around harmful or misleading content. At the time, Meta CEO Mark Zuckerberg framed the decision as part of a broader effort to prioritise “free speech.” However, the move has drawn concern from critics who worry that it may open the door to more misinformation and harm to vulnerable communities.

A shift in approach to speech and moderation

Meta’s latest changes come at a politically sensitive time. When the updates were introduced earlier this year, Zuckerberg had just donated US$1 million to former President Donald Trump’s inauguration fund and attended the event. Not long after, he appointed Dana White—CEO of UFC and a known Trump supporter—to Meta’s board of directors.

In a video statement, Zuckerberg described the recent election cycle as a “cultural tipping point” for freedom of speech. He argued that it was time to refocus on allowing open conversations, even if the topics were controversial.

However, some of Meta’s updated policies have raised eyebrows. According to the company’s hateful conduct policy, Meta now permits users to claim someone has a mental illness or abnormality based on their gender or sexual orientation. The platform justifies this as being part of “political and religious discourse” on topics like transgender identity and homosexuality.

Fact-checking replaced by community notes

Instead of professional fact-checkers, Meta is now adopting a community-based system similar to “Community Notes” used on Elon Musk’s platform, X (formerly Twitter). This new system relies on users to add context to potentially misleading posts across Facebook, Instagram, and Threads.
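For readers curious how a community-based system can decide which notes to display, the sketch below is a minimal, hypothetical illustration of the "bridging" idea behind Community Notes-style ranking: a note is surfaced only when raters who usually disagree both rate it helpful. It is not Meta's or X's actual algorithm (the production systems learn rater viewpoints with matrix-factorisation models over far more data), and all names, scores, and the note_is_shown helper here are invented for illustration.

```python
# Hypothetical, simplified sketch of a Community Notes-style display rule.
# Not Meta's or X's actual algorithm: it only illustrates the "bridging"
# idea that a note is shown when raters who usually disagree with one
# another nonetheless agree that the note is helpful.

# Invented rating data: (rater, note id, rating) where 1 = "helpful".
ratings = [
    ("alice", "note-1", 1),
    ("bob",   "note-1", 1),
    ("carol", "note-1", 0),
    ("alice", "note-2", 1),
    ("bob",   "note-2", 0),
]

# Assume each rater already has a rough "viewpoint" score inferred from
# past rating behaviour (real systems learn this automatically); negative
# and positive values stand in for the two camps.
viewpoint = {"alice": -0.8, "bob": 0.7, "carol": 0.1}

def note_is_shown(note_id, ratings, viewpoint, threshold=0.5):
    """Show a note only if raters on both sides of the viewpoint split
    rate it helpful at or above the threshold."""
    left, right = [], []
    for rater, note, score in ratings:
        if note != note_id:
            continue
        (left if viewpoint[rater] < 0 else right).append(score)
    if not left or not right:
        return False  # no cross-viewpoint agreement is possible
    return (sum(left) / len(left) >= threshold
            and sum(right) / len(right) >= threshold)

for note in ("note-1", "note-2"):
    print(note, "shown" if note_is_shown(note, ratings, viewpoint) else "hidden")
```

The key design choice is that raw "helpful" counts alone are not enough: agreement has to cross the viewpoint divide, which is what separates this approach from a simple upvote system.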

While this approach may help surface additional context, experts say it is far less effective on its own. Professional fact-checkers work quickly and to clear standards, helping platforms contain the spread of false information. Without them, critics warn, fake news could spread more widely.

Meta’s decision to step away from fact-checking appears to align with its business interests. The less moderation there is, the more content can circulate. Since Meta’s platforms use algorithms that prioritise posts that get strong reactions—whether positive or negative—more controversial or misleading posts are likely to get more attention.

The rise in misinformation is already visible

Since Meta began scaling back its fact-checking efforts earlier this year, there has already been a rise in false claims spreading across its platforms. One example involves a fake story that U.S. Immigration and Customs Enforcement (ICE) would pay US$750 to individuals who report undocumented immigrants. The rumour went viral and was shared widely before being debunked.

The person behind that post welcomed Meta’s policy changes, telling investigative outlet ProPublica that the end of fact-checking was “great information.”

In January, Kaplan summarised the company’s new direction as follows: “We’re getting rid of several restrictions on immigration, gender identity, and gender that are the subject of frequent political discourse and debate. It’s not right that things can be said on TV or the floor of Congress but not on our platforms.”

As of this week, Meta no longer has any fact-checkers operating in the U.S. The move signals a new chapter for the tech giant—one focused on letting users decide what’s true rather than relying on experts to guide the way.
