AI companies unite to safeguard children in the digital realm

Major AI firms like OpenAI, Microsoft, Google, and Meta pledge to protect children from exploitation through responsible tech practices.

In a landmark collaboration, major artificial intelligence firms, including OpenAI, Microsoft, Google, and Meta, have committed to ensuring their technologies do not facilitate child exploitation. This pledge is part of an initiative spearheaded by the child protection group Thorn and the responsible technology advocate All Tech Is Human.

The commitments by these AI giants mark an unprecedented move in the tech industry, aiming to shield children from sexual abuse as generative AI technologies evolve. According to Thorn, these steps are crucial in countering the severe threats posed by the potential misuse of AI. “This sets a groundbreaking precedent for the industry and represents a significant leap in efforts to defend children from sexual abuse,” a spokesperson for Thorn stated.

The initiative focuses on preventing the creation and dissemination of sexually explicit material involving children across social media platforms and search engines. Thorn reports that in 2023 alone, over 104 million files suspected of containing child sexual abuse material (CSAM) were identified in the US. Without a united effort, the proliferation of generative AI could exacerbate this issue, overwhelming law enforcement agencies already struggling to pinpoint actual victims.

Strategic approach to safety

On Tuesday, Thorn and All Tech Is Human released a comprehensive paper titled “Safety by Design for Generative AI: Preventing Child Sexual Abuse.” The document offers strategies and recommendations for organisations that build and distribute AI tools, including developers, social media platforms, search engines, and hosting companies, urging them to take proactive measures against the misuse of AI to harm children.

One key recommendation urges companies to vet the datasets used to train AI models carefully, excluding not only CSAM itself but also adult sexual content. The caution stems from generative AI’s tendency to combine concepts it has learned from different parts of its training data, which is how otherwise lawful material can contribute to harmful output.
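The paper frames this screening as a design principle rather than prescribing a particular implementation, but the general shape of such a pipeline can be sketched. In the illustrative Python below, screen_dataset, matches_known_abuse_hash, and detect_adult_content are hypothetical placeholders, not real APIs; production systems rely on vetted hash-matching services run by child-safety organisations and on trained classifiers. This is a sketch of the idea, not Thorn’s or any company’s actual tooling.

# Illustrative sketch only: the helper functions are hypothetical stand-ins.
# Real pipelines compare files against vetted hash databases and use trained
# classifiers, and report confirmed matches according to legal requirements.
from pathlib import Path

def matches_known_abuse_hash(path: Path) -> bool:
    # Placeholder: in practice, a perceptual hash of the file would be checked
    # against a vetted database maintained by a child-safety organisation.
    return False

def detect_adult_content(path: Path) -> bool:
    # Placeholder: in practice, a trained content classifier would run here.
    return False

def screen_dataset(image_dir: Path) -> list[Path]:
    # Keep only files that pass both screens; excluded files are logged so they
    # can be reviewed and handled according to policy before any training begins.
    kept = []
    for path in sorted(image_dir.glob("*")):
        if matches_known_abuse_hash(path):
            print(f"excluded (known-hash match): {path}")
        elif detect_adult_content(path):
            print(f"excluded (adult content): {path}")
        else:
            kept.append(path)
    return kept

The point of the sketch is the ordering: exclusion happens before the training set is assembled at all, which is the “safety by design” posture the paper argues for.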

The paper also highlights the emerging challenge of AI-generated CSAM, which complicates the identification of actual abuse victims by adding to the “haystack problem”—the overwhelming volume of content that law enforcement must sift through.

Rebecca Portnoff, Vice President of Data Science at Thorn, emphasised the proactive potential of this initiative in an interview with the Wall Street Journal. “We want to be able to change the course of this technology to where the existing harms of this technology get cut off at the knees,” she explained.

Some companies have already begun implementing measures such as keeping images, videos, and audio depicting children separate from datasets that contain adult content, so that models cannot combine the two. Watermarks that identify AI-generated content are also being adopted, although they are not foolproof, as they can be removed.
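The fragility the article notes is easy to demonstrate with a toy example. The sketch below embeds a crude marker in an image’s least-significant bits and shows that slight re-processing destroys it; it is purely illustrative and is not how any of the named companies actually watermark AI output.

# Toy demonstration of why naive watermarks are easy to strip; not a real scheme.
import numpy as np

def embed_flag(pixels: np.ndarray) -> np.ndarray:
    # Set the least-significant bit of every pixel as a crude "AI-generated" marker.
    return pixels | 1

def has_flag(pixels: np.ndarray) -> bool:
    # The marker survives only if every pixel still has its low bit set.
    return bool(np.all(pixels & 1))

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)

marked = embed_flag(image)
print(has_flag(marked))    # True: marker present straight after embedding

# Mild re-processing (noise, resizing, re-encoding) flips low bits and removes it.
noise = rng.integers(-2, 3, size=marked.shape)
degraded = np.clip(marked.astype(int) + noise, 0, 255).astype(np.uint8)
print(has_flag(degraded))  # False in practice: the marker does not survive

More robust provenance schemes exist, but the underlying tension the article describes remains: a mark that survives ordinary processing is hard to build, and a mark that does not survive is easy to strip.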

This collective effort underscores a significant stride towards safer digital environments for children, leveraging the power of AI for protection rather than peril.

