
AI companies unite to safeguard children in the digital realm

Major AI firms like OpenAI, Microsoft, Google, and Meta pledge to protect children from exploitation through responsible tech practices.

In a landmark collaboration, major artificial intelligence firms, including OpenAI, Microsoft, Google, and Meta, have committed to ensuring their technologies do not facilitate child exploitation. The pledge is part of an initiative spearheaded by the child protection group Thorn and the responsible-technology advocacy group All Tech Is Human.

The commitments by these AI giants mark an unprecedented move in the tech industry, aiming to shield children from sexual abuse as generative AI technologies evolve. According to Thorn, these steps are crucial in countering the severe threats posed by the potential misuse of AI. “This sets a groundbreaking precedent for the industry and represents a significant leap in efforts to defend children from sexual abuse,” a spokesperson for Thorn stated.

The initiative focuses on preventing the creation and dissemination of sexually explicit material involving children across social media platforms and search engines. Thorn reports that in 2023 alone, over 104 million files suspected of containing child sexual abuse material (CSAM) were identified in the US. Without a united effort, the proliferation of generative AI could exacerbate this issue, overwhelming law enforcement agencies already struggling to pinpoint actual victims.

Strategic approach to safety

On Tuesday, Thorn and All Tech Is Human released a comprehensive paper titled “Safety by Design for Generative AI: Preventing Child Sexual Abuse.” This document offers strategies and recommendations for entities involved in the creation of AI tools—such as developers, social media platforms, search engines, and hosting companies—to take proactive measures against the misuse of AI in harming children.

One key recommendation urges companies to carefully vet the datasets used to train AI models, excluding not only datasets that contain CSAM but also those that contain adult sexual content. The caution stems from generative AI's tendency to merge concepts it has learned, which can lead to harmful outputs when the two are combined.
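For readers curious what this kind of dataset curation can look like in practice, the sketch below shows a minimal, hypothetical filtering pass over a training-data manifest. The field names ("csam_flagged", "adult_content") are assumptions for illustration, standing in for labels produced by upstream hash-matching and safety-classification services; this is not the pipeline of any company named above.

```python
# Illustrative sketch only: a minimal pre-training data filter of the kind the
# paper recommends. The label fields are hypothetical and assumed to come from
# earlier moderation passes (hash matching, safety classifiers).

from typing import Iterable, Iterator


def filter_training_records(records: Iterable[dict]) -> Iterator[dict]:
    """Yield only records carrying neither a CSAM flag nor an adult-content label."""
    for record in records:
        if record.get("csam_flagged") or record.get("adult_content"):
            continue  # exclude the item entirely rather than keeping it in the training set
        yield record


# Example manifest of dataset entries annotated by upstream moderation.
manifest = [
    {"id": "img-001", "csam_flagged": False, "adult_content": False},
    {"id": "img-002", "csam_flagged": False, "adult_content": True},
    {"id": "img-003", "csam_flagged": True, "adult_content": False},
]

clean = list(filter_training_records(manifest))
print([r["id"] for r in clean])  # ['img-001']
```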

The paper also highlights the emerging challenge of AI-generated CSAM, which complicates the identification of actual abuse victims by adding to the “haystack problem”—the overwhelming volume of content that law enforcement must sift through.

Rebecca Portnoff, Vice President of Data Science at Thorn, emphasised the proactive potential of this initiative in an interview with the Wall Street Journal. “We want to be able to change the course of this technology to where the existing harms of this technology get cut off at the knees,” she explained.

Some companies have already begun implementing such measures, for example separating images, videos, and audio depicting children from datasets that also contain adult content, so the two are not mixed during training. Watermarking is also being adopted to identify AI-generated content, although watermarks are not foolproof, as they can be removed.

This collective effort underscores a significant stride towards safer digital environments for children, leveraging the power of AI for protection rather than peril.
