
Meta refuses to sign the EU’s AI code of practice, citing legal concerns

Meta declines to sign the EU's AI Code of Practice, citing legal concerns and excessive requirements that exceed the scope of the AI Act.

Meta has declined to sign the European Union’s new code of practice for artificial intelligence (AI), describing the voluntary guidelines as excessive and legally ambiguous. The company said the framework, published by the EU on 10 July, introduces uncertainties for developers and includes requirements that extend beyond the scope of the AI Act.

Meta pushes back against EU regulation

Meta’s Chief Global Affairs Officer, Joel Kaplan, announced the company’s decision on Friday, criticising the European Commission’s approach to regulating general-purpose AI (GPAI). “Europe is heading down the wrong path on AI,” Kaplan stated. “We have carefully reviewed the European Commission’s Code of Practice for general-purpose AI models, and Meta won’t be signing it. This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act.”

Although the code of practice is non-binding, Kaplan’s public rejection highlights the growing tension between major tech companies and European lawmakers over how AI should be regulated. Meta has previously described the AI Act as “unpredictable,” arguing that it “goes too far,” stifling innovation and delaying product development.

In February, the company’s public policy director expressed concern that these regulations would lead to watered-down products, ultimately harming European users. “The net result of all of that is that products get delayed or get watered down and European citizens and consumers suffer,” he said.

Details of the EU’s AI code of practice

The European Commission’s code of practice outlines a set of voluntary standards to guide companies in complying with the EU’s AI Act. These include requirements to avoid training AI systems using pirated materials and to respect requests from writers and artists to exclude their work from training datasets. Developers are also expected to publish regularly updated documentation about the features and operations of their AI systems.

Although participation in the code is not mandatory, it offers advantages. Companies that sign up may benefit from added legal protection in the future, particularly against potential claims of breaching the AI Act. According to Thomas Regnier, spokesperson for digital affairs at the European Commission, companies that opt out will need to demonstrate compliance with the law in alternative ways and may face closer scrutiny as a result.

“AI providers who don’t sign it will have to demonstrate other means of compliance,” Regnier told Bloomberg. “They may be exposed to more regulatory scrutiny.”

Firms that violate the AI Act may also be subject to significant financial penalties. The European Commission has the authority to fine companies up to seven percent of their global annual turnover. Developers of advanced AI models could face a lower penalty of up to three percent.

Political backdrop and Meta’s broader strategy

Meta’s resistance to the EU’s regulatory direction aligns with broader political dynamics. The company may feel emboldened by support from US President Donald Trump, who reportedly urged the EU to drop the AI Act in April, calling it “a form of taxation.” With an ally in Washington who favours lighter regulation, Meta may see public opposition to the EU’s policy as a worthwhile strategy.

This is not the first time Meta has challenged European authorities on technology regulation. The company’s public objections appear to be part of a longer-term effort to shape global AI policy more in line with its corporate interests.

While the EU remains committed to advancing its AI framework, the clash with Meta highlights the challenges of establishing international standards in a rapidly evolving technological landscape.
