
Meta refuses to sign the EU’s AI code of practice, citing legal concerns

Meta declines to sign the EU's AI Code of Practice, citing legal concerns and excessive requirements that exceed the scope of the AI Act.

Meta has declined to sign the European Union’s new code of practice for artificial intelligence (AI), describing the voluntary guidelines as excessive and legally ambiguous. The company said the framework, published by the EU on 10 July, introduces uncertainties for developers and includes requirements that extend beyond the scope of the AI Act.

Meta pushes back against EU regulation

Meta’s Chief Global Affairs Officer, Joel Kaplan, announced the company’s decision on Friday, criticising the European Commission’s approach to regulating general-purpose AI (GPAI). “Europe is heading down the wrong path on AI,” Kaplan stated. “We have carefully reviewed the European Commission’s Code of Practice for general-purpose AI models, and Meta won’t be signing it. This Code introduces a number of legal uncertainties for model developers, as well as measures which go far beyond the scope of the AI Act.”

Although the code of practice is non-binding, Kaplan’s public rejection highlights the growing tension between major tech companies and European lawmakers over how AI should be regulated. Meta has previously described the AI Act as “unpredictable,” arguing that it “goes too far” and stifles innovation while delaying product development.

In February, the company’s public policy director expressed concern that these regulations would lead to watered-down products, ultimately harming European users. “The net result of all of that is that products get delayed or get watered down and European citizens and consumers suffer,” he said.

Details of the EU’s AI code of practice

The European Commission’s code of practice outlines a set of voluntary standards to guide companies in complying with the EU’s AI Act. These include requirements to avoid training AI systems using pirated materials and to respect requests from writers and artists to exclude their work from training datasets. Developers are also expected to publish regularly updated documentation about the features and operations of their AI systems.

Although participation in the code is not mandatory, it offers advantages. Companies that sign up may benefit from added legal protection in the future, particularly against potential claims of breaching the AI Act. According to Thomas Regnier, spokesperson for digital affairs at the European Commission, companies that opt out will need to demonstrate compliance with the law in alternative ways and may face closer scrutiny as a result.

“AI providers who don’t sign it will have to demonstrate other means of compliance,” Regnier told Bloomberg. “They may be exposed to more regulatory scrutiny.”

Firms that violate the AI Act may also be subject to significant financial penalties. The European Commission has the authority to fine companies up to seven percent of their global annual turnover. Developers of advanced AI models could face a lower penalty of up to three percent.

Political backdrop and Meta’s broader strategy

Meta’s resistance to the EU’s regulatory direction aligns with broader political dynamics. The company may feel emboldened by support from US President Donald Trump, who reportedly urged the EU to drop the AI Act in April, calling it “a form of taxation.” With an ally in Washington who favours lighter regulation, Meta may see public opposition to the EU’s policy as a worthwhile strategy.

This is not the first time Meta has challenged European authorities on technology regulation. The company’s public objections appear to be part of a longer-term effort to shape global AI policy more in line with its corporate interests.

While the EU remains committed to advancing its AI framework, the clash with Meta highlights the challenges of establishing international standards in a rapidly evolving technological landscape.

