Meta has declined to sign the European Union’s new code of practice for artificial intelligence (AI), describing the voluntary guidelines as excessive and legally ambiguous. The company said the framework, published by the EU on 10 July, introduces uncertainties for developers and includes requirements that extend beyond the scope of the AI Act.
Meta pushes back against EU regulation
Meta’s Chief Global Affairs Officer, Joel Kaplan, announced the company’s decision on Friday, criticising the European Commission’s approach to regulating general-purpose AI (GPAI). “Europe is heading down the wrong path on AI,” Kaplan stated. “We have carefully reviewed the European Commission’s Code of Practice for general-purpose AI models, and Meta won’t be signing it. This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act.”
Although the code of practice is non-binding, Kaplan’s public rejection highlights the growing tension between major tech companies and European lawmakers over how AI should be regulated. Meta has previously described the AI Act as “unpredictable,” arguing that it “goes too far,” stifling innovation and delaying product development.
In February, the company’s public policy director expressed concern that these regulations would lead to watered-down products, ultimately harming European users. “The net result of all of that is that products get delayed or get watered down and European citizens and consumers suffer,” he said.
Details of the EU’s AI code of practice
The European Commission’s code of practice outlines a set of voluntary standards to guide companies in complying with the EU’s AI Act. These include requirements to avoid training AI systems using pirated materials and to respect requests from writers and artists to exclude their work from training datasets. Developers are also expected to publish regularly updated documentation about the features and operations of their AI systems.
Although participation in the code is not mandatory, signing carries practical advantages. Companies that sign up may benefit from added legal protection in the future, particularly against potential claims of breaching the AI Act. According to Thomas Regnier, spokesperson for digital affairs at the European Commission, companies that opt out will need to demonstrate compliance with the law in alternative ways and may face closer scrutiny as a result.
“AI providers who don’t sign it will have to demonstrate other means of compliance,” Regnier told Bloomberg. “They may be exposed to more regulatory scrutiny.”
Firms that violate the AI Act may also be subject to significant financial penalties. The European Commission has the authority to fine companies up to seven percent of their global annual turnover. Developers of advanced AI models face a lower cap of up to three percent.
Political backdrop and Meta’s broader strategy
Meta’s resistance to the EU’s regulatory direction aligns with broader political dynamics. The company may feel emboldened by support from US President Donald Trump, who reportedly urged the EU to drop the AI Act in April, calling it “a form of taxation.” With an ally in Washington who favours lighter regulation, Meta may see public opposition to the EU’s policy as a worthwhile strategy.
This is not the first time Meta has challenged European authorities on technology regulation. The company’s public objections appear to be part of a longer-term effort to shape global AI policy more in line with its corporate interests.
While the EU remains committed to advancing its AI framework, the clash with Meta highlights the challenges of establishing international standards in a rapidly evolving technological landscape.