Sunday, 31 August 2025

X’s Community Notes feature falls short of tackling misinformation

Reports show that X's Community Notes feature is ineffective at curbing misinformation and addressing misleading high-profile posts.

Since Elon Musk took over, X (formerly Twitter) has faced growing issues with misinformation, misleading posts, and unchecked claims. Recent reports from The Center for Countering Digital Hate (CCDH) and The Washington Post spotlight this rise in misinformation. Both reveal that X’s Community Notes feature, a user-driven reporting tool, struggles to keep up with false and misleading information on the platform.

Community Notes misses the mark on key corrections

The CCDH released findings on X’s Community Notes system, which allows users to propose corrections on misleading posts. Analysing 283 misleading posts related to elections between January 1 and August 25, the CCDH found that 209 of those posts did not display Community Notes corrections to all users. These misleading posts received 2.2 billion views, highlighting the feature’s limited reach and effectiveness.

The CCDH’s report indicates that Community Notes corrections often don’t reach the intended audience, allowing misinformation to circulate freely. The organisation criticised X’s feature oversight, highlighting that most uncorrected posts had high engagement and visibility.

Following the CCDH’s report, The Washington Post conducted its own investigation, showing that X’s misinformation issues extend beyond election-related posts. One case involved a false claim made by former President Donald Trump during a presidential debate with Vice President Kamala Harris. Trump falsely asserted that Haitian immigrants were harming people’s pets in Springfield, Ohio. Despite being debunked by moderator David Muir, the claim quickly spread across X.

Community Notes struggles with high-profile users and misleading content

A well-followed account, @EndWokeness, which boasts a sizeable conservative following, posted Trump’s statement about Haitian immigrants, drawing more than 4.9 million views. Although a Community Notes user flagged the post as false, citing reliable news sources to refute it, the correction did not garner enough support to prompt a more comprehensive label, leaving the claim unchallenged on the platform. Four days passed before any action was attempted, illustrating the Community Notes system’s slow response rate.

Meanwhile, Musk himself has faced scrutiny over his posts. The Washington Post reports that Musk’s account is often flagged with proposed Community Notes, with one in ten of his posts receiving a correction. An example from July involved a manipulated video falsely portraying Vice President Harris making comments about President Biden. The post, which gained over 136 million views, carried no Community Note, leaving many users to believe it was accurate.

In defence, X’s VP of product, Keith Coleman, stated that Community Notes holds notes to a high standard, with thousands approved in 2024 alone, showing that the system is actively applied to posts. He referenced studies showing that posts with a Community Note are 60% less likely to be shared and 80% more likely to be deleted by their creators. However, the high-profile examples cited by the CCDH and The Washington Post raise questions about whether the feature is as effective as X claims.

X under pressure as watchdogs continue to monitor misinformation

The CCDH has remained vocal about Musk’s oversight of the platform, frequently monitoring his account for misleading posts. CCDH’s CEO, Imran Ahmed, criticised X’s failure to contain what he calls “algorithmically-boosted incitement,” warning of potential real-world consequences. X’s response was to sue the CCDH, alleging that it engaged in a “scare campaign” to impact the platform’s advertising revenue, but a US district court judge dismissed the case in March.

The CCDH and The Washington Post have shown that X’s approach to misinformation remains inadequate. Community Notes has potential but is not yet effective enough to address the rising tide of false claims, especially from influential accounts with high engagement. This highlights the importance of continued oversight as misinformation remains prevalent on the platform, shaping user perception without sufficient checks.
