Meta’s Oversight Board is pressing the tech company for more information about recent changes to its hate speech policies. You may have noticed that Meta – the parent company of Facebook, Instagram, and Threads – announced updates to its rules in January. These changes were meant to allow “more speech” across its platforms, but how they were introduced has raised concerns.
The Oversight Board, an independent group created to guide Meta’s content moderation, responded on June 11. The Board says Meta broke from its usual process by rolling out the changes without proper consultation. Now, the Board wants the company to share more details about the new rules and how they affect users, especially those from vulnerable communities.
Questions about transparency and user safety
You might be wondering what’s changed. Meta’s revised policies have scaled back some protections for immigrants and LGBTQIA+ users, which has led to criticism from the Oversight Board. The Board believes Meta moved too fast, without enough care for those most likely to be harmed by hate speech online.
In its response, the Board asked Meta to do three key things:
- Review the impact of the new rules on vulnerable groups.
- Share those findings with the public.
- Report back to the Board every six months.
The Board also recommended 17 steps Meta could take to improve its handling of hate speech. These include clarifying what “hateful ideologies” means, improving how violations of its harassment policies are handled, and testing how well the new community notes system works. The Board also reminded Meta to stick to its 2021 promise to follow the United Nations Guiding Principles on Business and Human Rights.
Content decisions under the spotlight
Although the Oversight Board doesn’t set Meta’s global content policies, it can make binding decisions on individual posts. When Meta asks the Board for a policy advisory opinion, it opens the door for bigger changes—but that doesn’t happen often.
The Board recently reviewed 11 cases across Meta’s platforms. These included posts about anti-immigrant violence in the U.K., hate speech against people with disabilities, and suppression of LGBTQIA+ voices. The Board criticised Meta’s slow response in some of these situations.
In one example, Meta failed to remove posts about anti-immigration riots in the U.K. quickly enough. The Board decided those posts broke Meta’s own violence and incitement rules and should have been taken down sooner.
In two U.S.-based cases involving videos of transgender women, Meta chose to leave the content online, and the Board agreed with that decision. Still, the Board made a suggestion: remove the word “transgenderism” from the Hateful Conduct policy, as it is often used in a harmful way.
Global concerns and future changes
You’re not alone if you feel confused about how Meta handles hate speech. The Oversight Board says it’s now in talks with Meta to help shape fact-checking policies outside the U.S., which could lead to better protection for users worldwide.
The future of Meta’s hate speech rules is still uncertain. However, the Oversight Board is urging the company to take user safety seriously and to be more open about how it makes decisions. As a user, this means you might see more transparent policies in the future—if Meta listens to its independent advisors.