Oversight Board urges Meta to introduce new rules for AI-generated content
Oversight Board urges Meta to introduce dedicated rules and stronger detection tools for AI-generated content on its platforms.
The independent oversight body that reviews Meta’s decisions has called on the social media giant to introduce clearer policies for handling artificial intelligence-generated content. In its latest ruling, the Meta Oversight Board said the company should establish a dedicated rule governing AI-created material rather than relying on its existing misinformation framework.
The board’s recommendations follow the circulation of a misleading AI-generated video that appeared to show damaged buildings in the Israeli city of Haifa during the Israel–Iran conflict of 2025. The video gained more than 700,000 views after being posted by an account claiming to be a news outlet, though it was later discovered that the page was operated by an individual based in the Philippines.
Meta initially chose not to remove the content or apply a “high risk” AI label that would have clearly indicated the video had been generated or manipulated using artificial intelligence. The Oversight Board overturned that decision, arguing that the case highlights significant gaps in the company’s current management of AI-driven misinformation.
Concerns over current labelling and enforcement
According to the board, Meta’s existing policies are not adequate to address the rapid spread of synthetic media online. The organisation said the company must strengthen its approach to deceptive AI content, particularly when such material relates to public interest issues or unfolding geopolitical events.
“Meta must do more to address the proliferation of deceptive AI-generated content on its platforms, including by inauthentic or abusive networks of accounts and pages, particularly on matters of public interest, so that users can distinguish between what is real and fake,” the board said in its decision.
After the board reviewed the case, Meta disabled three accounts associated with the page that originally shared the video. The board said these accounts displayed “obvious signals of deception”, raising concerns about how long the content had remained online without clear labelling.
The Oversight Board also criticised Meta’s use of “AI Info” labels, which are intended to inform users when media has been generated or altered using artificial intelligence tools. It said the system lacks the strength and consistency required to deal with the speed at which such content can spread.
The board wrote that the current labelling approach is “neither robust nor comprehensive enough to contend with the scale and velocity of AI-generated content”, particularly during periods of crisis or armed conflict. It added that a framework that relies heavily on users voluntarily disclosing their AI use is unlikely to be effective.
“A system overly dependent on self-disclosure of AI usage and escalated review (which occurs infrequently) to properly label this output cannot meet the challenges posed in the current environment,” the board said.
Calls for dedicated policy and better detection tools
One of the Oversight Board’s central recommendations is for Meta to introduce a standalone policy focused on AI-generated content. Such a rule would define when users must label AI-created material and outline penalties for those who fail to comply.
The board said the policy should also clarify how Meta determines whether media has been generated by artificial intelligence and how it communicates those findings to users across its platforms.
In addition, the board urged the company to invest in more reliable technology capable of automatically identifying synthetic media. This includes improved tools for detecting manipulated audio and video, which are becoming increasingly sophisticated as generative AI technology advances.
The board also raised concerns about the inconsistent use of digital watermarks on content produced by Meta’s own AI tools. Digital watermarks can help identify AI-generated media by embedding signals into files that indicate how they were created.
According to the board, reports suggesting these watermarks are not applied consistently raise questions about Meta’s internal standards for transparency. The organisation said the company should ensure that any content generated by its own systems is clearly marked from the outset.
Meta did not immediately respond to requests for comment following the ruling. Under the Oversight Board’s procedures, the company has 60 days to formally respond to the recommendations and explain whether it will adopt them.
Growing concern over AI misinformation during conflicts
The decision adds to a series of criticisms the Oversight Board has directed at Meta regarding its policies on manipulated media. On two previous occasions, the board described the company’s rules as “incoherent” and argued that they fail to reflect the rapidly evolving nature of artificial intelligence tools.
The board has also questioned Meta’s reliance on external partners such as fact-checking organisations to identify misleading posts. In its latest decision, it noted that several of these partners have reported reduced responsiveness from Meta in recent years.
According to the board, some organisations said the company has become slower to address concerns, partly due to a reduction in the capacity of its internal moderation teams. It argued that Meta should be able to assess potential harm itself, particularly during major geopolitical crises.
“Meta should be capable of conducting such assessments of harm itself, rather than relying solely on partners reaching out to them during an armed conflict,” the board wrote.
The issue of AI-generated misinformation has become increasingly urgent amid escalating tensions in the Middle East. Since military strikes by the United States and Israel against Iran began earlier this month, researchers and analysts have reported a sharp increase in viral synthetic media circulating across social platforms.
The Oversight Board said the problem extends beyond a single company and requires coordinated action across the technology sector. It suggested that social media platforms and developers of generative AI tools should work together to establish consistent ways of identifying and labelling synthetic media.
“The industry needs coherence in helping users distinguish deceptive AI-generated content, and platforms should address abusive accounts and pages sharing such output,” the board said.