
TikTok shifts to AI moderation, laying off hundreds, while Instagram blames human error for recent mishaps

TikTok lays off around 500 human moderators in favour of AI, while Instagram deals with moderation errors it blames on a mix of human mistakes and broken tools.

TikTok, the popular social media platform owned by ByteDance, has laid off hundreds of content moderators globally as part of a move towards AI-driven moderation. According to Reuters reports, the job cuts affected approximately 500 employees, primarily in Malaysia. The company, which employs more than 110,000 people worldwide, said the layoffs are part of an ongoing effort to enhance its global content moderation model.

“We are making these changes as part of our ongoing efforts to strengthen our global operating model for content moderation,” a TikTok spokesperson explained. The social media giant currently uses a combination of human moderators and AI systems, with the AI handling around 80% of moderation tasks. The company says this blend of human oversight and machine learning helps keep content on the platform in line with its community standards.

TikTok’s US$2 billion investment into safety

In 2024, ByteDance plans to invest an estimated US$2 billion in its trust and safety efforts. This considerable investment comes amid growing concerns over the spread of harmful content and misinformation on social media platforms. As TikTok continues to expand its reach, the company faces increased scrutiny from governments and regulators worldwide, particularly in areas where the impact of social media on public discourse is under the spotlight.

The decision to reduce its human moderation workforce is part of ByteDance’s commitment to refining its processes and ensuring content safety and compliance. However, the changes have also raised concerns about whether AI alone can effectively monitor the vast and varied content uploaded to TikTok daily.

Instagram faces moderation issues

While TikTok moves towards AI, Instagram, owned by Meta, is facing its own moderation challenges. On Friday, Adam Mosseri, Instagram’s head, revealed that recent issues on the platform, which resulted in user accounts being locked or posts being incorrectly marked as spam, were due to human error rather than flaws in the AI moderation system.

Mosseri explained that the errors were made by human moderators who lacked adequate context when reviewing certain posts and accounts. “They were making decisions without the proper context on how the conversations played out, and that was a mistake,” he said. However, he also acknowledged that the tools the moderators were using were partly to blame. “One of the tools we built broke, so it wasn’t showing them sufficient context,” Mosseri admitted.

Users locked out of accounts

Over the past few days, numerous Instagram and Threads users reported that their accounts were locked for allegedly violating age restrictions, which prohibit users under the age of 13 from having accounts. Despite submitting age verification, many users found that their accounts remained locked. The issue caused widespread frustration, with users feeling unfairly penalised by miscommunication between moderation tools and human reviewers.

Although Mosseri took responsibility for the moderation errors, Instagram’s communications team offered a slightly different account, stating that not all of the problems users encountered were directly linked to human moderation. The company said the ongoing age verification issue remains under investigation as the platform works to identify the root cause.

As both TikTok and Instagram navigate their respective moderation challenges, it remains clear that social media platforms are struggling to find the right balance between human oversight and AI-driven technology. With these platforms’ growing influence on everyday life, the pressure to get content moderation right is higher than ever.
