
Google forms a new industry group to secure AI development

Google forms the Coalition for Secure AI with major tech companies to address AI security risks, aiming for collaborative solutions and safe AI development.

With generative AI posing significant risks, major players in the tech industry seem to be establishing new agreements and forums to monitor AI development every other week. This is good for fostering collaborative discussion around AI projects and ensuring that each company manages its processes responsibly. However, it also feels like these efforts are designed to stave off further regulatory restrictions that could increase transparency and impose stricter rules on developers.

The Coalition for Secure AI

Google is the latest to form a new AI guidance group called the Coalition for Secure AI (CoSAI). This group aims to advance comprehensive security measures to address the unique risks of AI development. According to Google:

“AI needs a security framework and applied standards to keep pace with its rapid growth. That’s why we shared the Secure AI Framework (SAIF) last year, knowing it was just the first step. Operationalising any industry framework requires close collaboration with others, and above all, a forum to make that happen.”

This initiative is not entirely new; it expands on Google's previously announced focus on secure AI development. CoSAI will guide defence efforts to help prevent hacks and data breaches. Several big tech players, including Amazon, IBM, Microsoft, NVIDIA, and OpenAI, have signed up for the initiative. The goal is to create collaborative, open-source solutions that ensure greater security in AI development.

Growing list of industry groups

CoSAI is the latest addition to a growing list of industry groups focused on sustainable and secure AI development. For example:

  • The Frontier Model Forum (FMF) aims to establish industry standards and regulations around AI development. Meta, Amazon, Google, Microsoft, and OpenAI have signed up for this initiative.
  • Thorn has established its “Safety by Design” programme, which focuses on responsibly sourcing AI training datasets to protect against child sexual abuse material. Meta, Google, Amazon, Microsoft, and OpenAI support this initiative.
  • The U.S. government has created its own AI Safety Institute Consortium (AISIC), which has attracted over 200 companies and organisations.
  • Representatives from nearly every major tech company have agreed to the Tech Accord to Combat Deceptive Use of AI, aiming to implement reasonable precautions to prevent AI tools from disrupting democratic elections.

The need for enforceable regulations

We’re seeing more forums and agreements designed to address various elements of safe AI development. While these initiatives are good, they are not enforceable laws but rather mutual agreements among AI developers to adhere to specific rules. The sceptical view is that these efforts are merely assurances to stave off more definitive regulation.

EU officials are already assessing the potential harms of AI development under the GDPR, while other regions are considering similar measures, including financial penalties for violations. Government regulation seems like what’s genuinely needed, but it takes time. We’re unlikely to see actual enforcement systems and structures in place until after significant harms occur, providing regulatory groups with more impetus to push through official policies.

Until then, we have industry groups where companies take pledges to follow established rules through mutual agreements. Whether this will be enough remains uncertain, but it’s what we have for now.

