Tuesday, 26 August 2025

Microsoft launches Azure AI Content Safety to secure the online space

In APAC and Singapore, Microsoft's Azure AI Content Safety offers multilingual proficiency and severity metrics to boost online safety.

Microsoft has officially rolled out Azure AI Content Safety, a tool designed to detect and filter harmful content in both AI-generated and user-generated material. It can flag adult material, gore, violence, and hate speech, and offers key features such as multilingual proficiency, severity indication metrics, multi-category filtering, and text and image detection. The launch marks a significant step towards broader digital literacy and online safety, particularly in regions such as Asia Pacific and Singapore, where heavy reliance on social media and messaging apps can affect public health and safety.
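To illustrate the multi-category analysis described above, here is a minimal sketch of how a caller might assemble a text-analysis request for the service. The endpoint path, API version, and field names follow the service's publicly documented REST shape at launch, but treat them as assumptions and check your own Azure resource for the actual values; the resource name is a placeholder.

```python
import json

# Placeholder resource and API version -- substitute your own Azure values.
ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
API_VERSION = "2023-10-01"

def build_text_analyze_request(text: str, categories=None) -> dict:
    """Build the JSON body for a Content Safety text-analysis call.

    The service screens text against several harm categories at once
    (multi-category filtering); omitting `categories` asks for all of them.
    """
    body = {"text": text}
    if categories:
        body["categories"] = categories
    return body

url = f"{ENDPOINT}/contentsafety/text:analyze?api-version={API_VERSION}"
payload = build_text_analyze_request("Example user message", ["Hate", "Violence"])
print(url)
print(json.dumps(payload))
```

In practice the payload would be POSTed to the URL with the resource key in the request headers; the sketch stops at constructing the request so it stays self-contained.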

Azure AI’s safety measures reach from classrooms to chatrooms

Earlier this year, the education department in South Australia was looking to introduce generative AI into classrooms, but it faced a significant concern: how to ensure the technology is used responsibly. The department's digital architecture director, Simon Chapman, pointed to the limitations of public versions of generative AI, particularly their lack of safety measures.

To resolve this, the department turned to Azure AI Content Safety, with its multilingual proficiency and severity indication metrics. These metrics assign each piece of flagged content a severity rating from 0 to 7, allowing users to gauge the potential threat level quickly. The department recently concluded a pilot of EdChat, an AI-driven chatbot, in eight secondary schools. Tested by nearly 1,500 students and 150 teachers, the chatbot used Azure AI Content Safety's multi-category filtering and text and image detection features to provide a safe educational experience.
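The 0-7 severity scale lends itself to simple per-category gating. The sketch below shows one way a deployment like EdChat might act on those ratings; the response shape (`categoriesAnalysis` entries with `category` and `severity` fields) mirrors the service's documented JSON, but the threshold values and the `is_allowed` helper are illustrative assumptions, not part of the product.

```python
def is_allowed(analysis: list[dict], thresholds: dict[str, int]) -> bool:
    """Reject content when any flagged category meets or exceeds its threshold.

    `analysis` is the `categoriesAnalysis` list from a Content Safety
    response; `thresholds` maps category names to the lowest severity
    (on the 0-7 scale) that should be blocked.
    """
    for item in analysis:
        limit = thresholds.get(item["category"])
        if limit is not None and item["severity"] >= limit:
            return False
    return True

# Hypothetical policy: stricter on Hate and Sexual content than on Violence.
thresholds = {"Hate": 2, "Sexual": 2, "SelfHarm": 2, "Violence": 4}

sample = [{"category": "Hate", "severity": 0},
          {"category": "Violence", "severity": 5}]
print(is_allowed(sample, thresholds))  # Violence at 5 meets the limit of 4
```

Keeping thresholds per category is what makes the policy customisable: a school chatbot can set them lower than, say, a newsroom moderation queue, without changing the filtering logic.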

Broader applications and ongoing development

Azure AI Content Safety was initially part of Azure OpenAI Service but has now been separated to function as a standalone system. This allows it to be used not just with Microsoft's models but also with AI content from other companies and open-source models. Eric Boyd, Corporate Vice President of Microsoft's AI Platform, said the standalone service would cater to broader business needs as generative AI becomes more prevalent.

Over the years, Microsoft has been actively pushing for responsible AI governance and has nearly 350 staff dedicated to responsible AI. Sarah Bird, who leads responsible AI for foundational technologies at Microsoft, said Azure AI Content Safety is vital to Microsoft’s commitment to responsible AI. The technology is also highly customisable, allowing businesses to adapt policies to fit their unique needs and operating environments.

The tool has already been integrated into various Microsoft products, including Bing, GitHub Copilot, and Microsoft 365 Copilot. Microsoft continues to invest in research and gather customer feedback to enhance the capabilities of Azure AI Content Safety, especially in detecting combinations of harmful images and text.

As we move further into the era of AI, Microsoft aims to bolster public trust by ensuring online safety. Azure AI Content Safety is a pivotal part of this mission, building on Microsoft’s long history of content moderation tools and leveraging advanced language and vision models to create a safer online world.

