
TikTok shifts to AI moderation, laying off hundreds, while Instagram blames human error for recent mishaps

TikTok lays off 500 human moderators to favour AI, while Instagram deals with moderation errors, blaming a mix of human mistakes and broken tools.

TikTok, the popular social media platform owned by ByteDance, has laid off hundreds of content moderators globally as part of a shift towards AI-driven moderation. According to Reuters, the cuts affected approximately 500 employees, primarily in Malaysia. The company, which employs more than 110,000 people worldwide, said the layoffs are part of an ongoing effort to enhance its global content moderation model.

“We are making these changes as part of our ongoing efforts to strengthen our global operating model for content moderation,” a TikTok spokesperson explained. The social media giant currently uses a combination of human moderators and AI systems, with the AI handling around 80% of the moderation tasks. This blend of human oversight and machine learning ensures that content on the platform complies with community standards.

TikTok’s US$2 billion investment into safety

In 2024, ByteDance plans to invest an estimated US$2 billion in improving its trust and safety efforts. This considerable investment comes amid growing concerns over the spread of harmful content and misinformation on social media platforms. As TikTok continues to expand its reach, the company faces increased scrutiny from governments and regulators worldwide, particularly in areas where the impact of social media on public discourse is under the spotlight.

The decision to reduce its human moderation workforce is part of ByteDance’s commitment to refining its processes and ensuring content safety and compliance. However, the changes have also raised concerns about whether AI alone can effectively monitor the vast and varied content uploaded to TikTok daily.

Instagram faces moderation issues

While TikTok moves towards AI, Instagram, owned by Meta, is facing moderation challenges of its own. On Friday, Adam Mosseri, Instagram’s head, revealed that recent issues on the platform, which resulted in user accounts being locked or posts being incorrectly marked as spam, were due to human error rather than flaws in the AI moderation system.

Mosseri explained that the errors were made by human moderators who lacked adequate context when reviewing certain posts and accounts. “They were making decisions without the proper context on how the conversations played out, and that was a mistake,” he said. However, he also acknowledged that the tools the moderators were using were partly to blame. “One of the tools we built broke, so it wasn’t showing them sufficient context,” Mosseri admitted.

Users locked out of accounts

Over the past few days, numerous Instagram and Threads users reported that their accounts were locked for allegedly violating age restrictions, which prohibit users under the age of 13 from having accounts. Despite submitting age verification, many users found that their accounts remained locked. The issue caused widespread frustration, with users feeling unfairly penalised by a breakdown between moderation tools and human reviewers.

Although Mosseri took responsibility for the moderation errors, Instagram’s PR team had a slightly different take. The company stated that not all of the problems users encountered were directly linked to human moderation. They said the ongoing age verification issue is still under investigation as the platform works to identify the root cause.

As both TikTok and Instagram navigate their respective moderation challenges, it remains clear that social media platforms are struggling to find the right balance between human oversight and AI-driven technology. With these platforms’ growing influence on everyday life, the pressure to get content moderation right is higher than ever.
