Chinese tech companies face higher costs due to new EU AI rules

New EU AI rules, effective August 1, will raise compliance and assessment costs for Chinese tech firms operating in the bloc, with implications for innovation.

The European Union (EU) is set to implement the world’s first comprehensive artificial intelligence (AI) regulations on August 1. These new rules are expected to significantly increase compliance and assessment costs for Chinese tech companies operating within the EU’s 27 member states. Industry experts have highlighted the challenges of these regulations, especially in terms of innovation.

New rules and their implications

The Artificial Intelligence Act was approved by the EU Council in May, following its passage by the European Parliament in March. The Act aims to protect fundamental rights, democracy, the rule of law, and environmental sustainability from high-risk AI applications. At the same time, it seeks to boost innovation and establish Europe as a leader in AI technology.

Some Chinese AI firms, such as Hong Kong-based Dayta AI, are already preparing for the financial impact of these regulations. Patrick Tu, co-founder and chief executive of Dayta AI, predicts compliance and assessment requirements will increase the company’s research and development (R&D) and testing costs by 20 to 40 percent. This increase will cover additional documentation, audits, and technological measures.

Balancing regulation and innovation

The introduction of the AI Act reflects a global push to establish AI regulations amidst the rise of generative AI (GenAI) services. GenAI refers to algorithms that create new content, such as audio, code, images, text, and videos, in response to short prompts. Despite concerns about overregulation, Tanguy Van Overstraeten, a partner at Linklaters and head of the law firm’s technology, media, and telecommunications (TMT) group in Brussels, believes the EU’s goal is to create an environment of trust.

The AI Act categorises AI technology based on potential risks and impacts. It covers prohibited practices, high-risk systems, transparency obligations, governance, post-market monitoring, information sharing, and market surveillance. The regulation also requires member states to establish regulatory sandboxes for real-world testing, which allow companies to test AI applications within set boundaries for up to 12 months.

Engaging in prohibited AI practices can lead to administrative fines of up to 35 million euros (US$38 million) or up to 7 percent of the offending firm’s total worldwide annual turnover, whichever is higher.
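The “whichever is higher” rule amounts to taking the maximum of the fixed cap and the turnover-based cap. A minimal illustrative sketch (the function name and the example firm’s turnover figure are hypothetical, not from the regulation):

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Illustrate the AI Act's maximum fine for prohibited practices:
    35 million euros or 7% of total worldwide annual turnover,
    whichever is higher."""
    FIXED_CAP_EUR = 35_000_000
    return max(FIXED_CAP_EUR, 0.07 * worldwide_annual_turnover_eur)

# A hypothetical firm with 2 billion euros in annual turnover:
# 7% of turnover (140 million euros) exceeds the 35 million euro floor.
print(max_fine_eur(2_000_000_000))  # 140000000.0
```

For smaller firms whose 7 percent share falls below 35 million euros, the fixed figure applies instead.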

Comparing global regulations

Dayta AI’s Tu noted that the EU’s focus on data quality will ultimately enhance the performance and fairness of AI solutions. He also compared the EU’s user rights-focused approach with the regulations in China and Hong Kong, which he believes focus more on enabling technological progress and aligning with government priorities.

China’s GenAI regulations, implemented on August 15 last year, require AI service providers to adhere to core socialist values and avoid generating content that threatens national security or promotes terrorism, extremism, or other harmful ideologies. Alex Roberts, a partner at Linklaters in Shanghai, pointed out that these regulations can confuse multinational corporations that are unfamiliar with such requirements.

Roberts also mentioned that China’s AI regulations are more state-led, whereas the EU’s regulations focus on user rights. Despite these differences, he believes the core principles of both regulatory frameworks are similar, including transparency, data protection, accountability, and providing clear guidance on AI products.

The State Council, China’s cabinet, has listed a comprehensive AI law in its legislation plans for 2023 and 2024, though a draft law has yet to be proposed. Other Asian countries, like South Korea, are also working on AI regulations. South Korea’s draft “Act on Promotion of AI Industry and Framework for Establishing Trustworthy AI” is still under review.

Roberts concluded that governments in the Asia-Pacific region increasingly look to the EU’s AI regulations as a model for their legislation. This trend allows businesses to advocate for more consistent and harmonised cross-market rules.
