Meta revises content moderation policies and ends fact-checking

Meta, the parent company of Facebook, Instagram, and WhatsApp, has announced sweeping changes to its content moderation policies. The changes, described as a move towards fostering free expression, include ending its third-party fact-checking programme and relaxing restrictions on controversial topics. These decisions are part of a broader strategy to streamline content policies while focusing on high-severity violations.

In a blog post titled “More Speech, Fewer Mistakes,” Meta’s new global affairs officer, Joel Kaplan, outlined the changes. The update is intended to reduce over-enforcement, which Kaplan claimed had led to unnecessary censorship and limited political debate.

Major changes to content moderation policies

The overhaul focuses on three key areas:

1. Ending third-party fact-checking: Meta is phasing out its partnerships with third-party fact-checking organisations and introducing a “Community Notes” model. This approach, similar to the one used by X (formerly Twitter), allows users to add context to posts and flag misinformation directly.

2. Lifting topic restrictions: Meta will now ease restrictions on topics considered part of “mainstream discourse.” Instead of tightly moderating these discussions, the company will prioritise addressing illegal content and high-severity violations, such as terrorism, child exploitation, and fraud.

3. Personalised political content: Users will have more control over the political content they see, enabling them to tailor their feeds to their preferences. This move embraces a more individualised experience, even at the risk of reinforcing echo chambers.

The timing of these changes is notable, as they come just weeks before a new US presidential administration takes office. President-elect Donald Trump, whose accounts were once banned by Meta, has championed a broader interpretation of free speech, which aligns with Meta’s new approach.

Meta has faced criticism from all sides in recent years. While some have accused the platform of not doing enough to curb misinformation, others argue that its rules were overly restrictive and politically biased. Kaplan highlighted these concerns, stating that Meta had made mistakes by over-enforcing policies, leading to censorship of content that didn’t violate guidelines.

The third-party fact-checking programme, introduced in 2016 after accusations that Facebook spread misinformation during the US presidential election, was one of Meta’s most significant efforts to combat fake news. However, the programme has drawn criticism for potential bias in selecting content to fact-check.

Shifting accountability

This policy shift also reflects changes in Meta’s leadership. Joel Kaplan, a prominent Republican, has replaced Nick Clegg as Meta’s global affairs head. Kaplan has signalled a desire to align more closely with the incoming Trump administration’s free speech priorities.

The company is also relocating its trust and safety teams from California to Texas and other US locations, a step it says will bring a wider range of perspectives to its approach to content moderation.

Despite these changes, some question whether Meta’s relaxed stance will make the platform more susceptible to misinformation. The Oversight Board, Meta’s independent review body, welcomed the update but stated that it would work closely with the company to refine its approach to free speech in 2025.

Meta’s evolving priorities

Meta’s latest actions depart from its previous commitment to rigorous content moderation. CEO Mark Zuckerberg has expressed interest in a more collaborative relationship with the new administration, potentially influencing these policy adjustments.

In a symbolic gesture, Meta recently added UFC president Dana White, a Trump supporter, to its board of directors. These developments underscore Meta’s focus on repositioning itself during this period of political change.

“Meta’s platforms are built to be places where people can express themselves freely. That can be messy, but it’s the essence of free expression,” Kaplan wrote in the blog post.

As Meta redefines its content moderation strategy, the impact on users and global discourse remains to be seen.
