Microsoft highlights growing AI-assisted scams and offers advice on how to stay safe

Microsoft’s latest report warns of rising AI-driven scams and outlines new tools and tips to help users stay safe online.

Cybercriminals are now using artificial intelligence to create more convincing scams with minimal effort, according to Microsoft’s latest Cyber Signals report. The company says AI has significantly lowered the skill level required to launch fraud campaigns, enabling even low-skilled attackers to build highly sophisticated phishing schemes, fake websites, and deepfakes in a matter of minutes.

Between April 2024 and April 2025, Microsoft thwarted US$4 billion worth of fraud attempts and blocked roughly 1.6 million bot sign-up attempts per hour. These findings point to the rising threat of AI-enhanced scams targeting online shoppers, job seekers, and individuals seeking technical support.

“Cybercrime is a trillion-dollar problem, and it’s been going up every year for the past 30 years,” said Kelly Bissell, Corporate Vice President of Anti-Fraud and Product Abuse at Microsoft. “Now we have AI that can make a difference at scale and help us build security and fraud protections into our products much faster.”

E-commerce, job, and tech support scams on the rise

In e-commerce, scammers are leveraging AI to set up fraudulent websites within minutes, using AI-generated product descriptions, customer reviews, and brand images to mimic genuine businesses. These sites often spread through social media ads optimised by AI algorithms. Chatbots powered by AI further deceive customers by handling complaints and delaying refunds with plausible but fake customer service responses.

Job scams are also growing in complexity. AI is being used to auto-generate job descriptions, clone recruiter voices, and simulate video interviews. Fraudsters may ask applicants to share sensitive information such as bank details, personal documents, or payments under the guise of onboarding requirements. Microsoft warns that legitimate companies never request such details through informal channels or ask for payment as part of the recruitment process.

Tech support scams remain a persistent threat. Although not always AI-driven, cybercriminal groups like Storm-1811 have abused Microsoft’s Quick Assist remote support tool by impersonating IT staff. Once access is granted, scammers steal data or install malware. In response, Microsoft has implemented additional warning prompts and security checks in Quick Assist to alert users to suspicious activity.

Tools and strategies to protect users

Microsoft has taken a multipronged approach to combating fraud across its platforms. For online shoppers, the Edge browser now includes typo and domain impersonation protection using deep learning. A machine learning-based Scareware Blocker also identifies fake pop-ups and scam alerts designed to frighten users into calling fake support numbers or downloading harmful software.
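
Microsoft has not published the inner workings of these protections, but the basic idea behind lookalike-domain detection can be sketched with a simple string-similarity check. The Python example below is purely illustrative: the allow-list of brand domains and the similarity threshold are assumptions for this article, and Edge’s actual feature relies on deep learning models rather than a rule like this.

```python
# Illustrative sketch only: a naive lookalike-domain check based on string
# similarity. This is NOT Microsoft's Edge feature, which the report says uses
# deep learning; it only conveys the basic idea behind typosquatting detection.

from difflib import SequenceMatcher

# Hypothetical allow-list of genuine brand domains (assumed for this example).
KNOWN_DOMAINS = {"microsoft.com", "linkedin.com", "paypal.com"}

def looks_like_impersonation(domain: str, threshold: float = 0.85) -> bool:
    """Flag domains that are very similar to, but not exactly, a known brand."""
    domain = domain.lower()
    if domain in KNOWN_DOMAINS:
        return False  # exact match to a genuine domain, nothing to flag
    return any(
        SequenceMatcher(None, domain, known).ratio() >= threshold
        for known in KNOWN_DOMAINS
    )

print(looks_like_impersonation("micros0ft.com"))  # True  - one character swapped
print(looks_like_impersonation("example.org"))    # False - not close to any brand
```

In practice, detection systems typically weigh many more signals, such as page content, certificates, and hosting history, which is where machine learning models add value over simple rules.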

To tackle job fraud, LinkedIn now features AI-powered systems that detect fake job postings and scam recruiter accounts. Microsoft Defender SmartScreen, integrated into Windows and Edge, scans websites, files, and applications in real time to identify suspicious content.

Quick Assist has been upgraded to require users to confirm understanding of the risks before sharing their screens. Microsoft now blocks over 4,400 suspicious Quick Assist sessions daily, about 5.46% of all global attempts. Digital Fingerprinting technology is used to analyse behavioural patterns and stop fraud attempts in real time.
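
The report does not detail how Digital Fingerprinting works, but behaviour-based screening can be pictured, in simplified form, as scoring a session against a set of risk signals. The toy sketch below uses invented signal names, weights, and a threshold purely for illustration; it does not describe Microsoft’s system.

```python
# Toy illustration only: behaviour-based screening pictured as a weighted risk
# score over session signals. Signal names and weights are invented for this
# example and do not reflect Microsoft's Digital Fingerprinting technology.

RISK_WEIGHTS = {
    "unrecognised_device": 2,            # session from a device never seen before
    "flagged_ip_range": 3,               # IP range previously linked to abuse
    "control_requested_immediately": 2,  # full control requested within seconds
    "helper_account_newly_created": 3,   # the "support" account is only days old
}

def session_risk_score(signals):
    """Sum the weights of the behavioural signals observed for this session."""
    return sum(weight for name, weight in RISK_WEIGHTS.items() if signals.get(name))

# Example: two mildly suspicious signals together cross a blocking threshold.
observed = {"unrecognised_device": True, "control_requested_immediately": True}
score = session_risk_score(observed)
print(score, "-> block session" if score >= 4 else "-> allow session")
```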

For enterprise use, Microsoft recommends Remote Help, a more secure alternative to Quick Assist that restricts remote support sessions to an organisation’s internal network.

Global collaboration and consumer education

Microsoft is working closely with law enforcement and industry partners to fight fraud at scale. The company’s Digital Crimes Unit (DCU) has played a role in dismantling criminal infrastructure and has helped secure hundreds of arrests globally. Through its partnership with the Global Anti-Scam Alliance (GASA), Microsoft joins forces with governments, financial authorities, consumer protection agencies, and tech companies to raise awareness and tackle scams more effectively.

Bissell, a cybersecurity veteran with experience at Accenture, Deloitte, and the US Department of Homeland Security, highlighted the need for greater collaboration across the tech sector. “If we’re not working together, we’ve all missed the bigger opportunity,” he said. “We must share cybercrime information with each other and educate the public.”

Microsoft has introduced a “Fraud-resistant by Design” policy, requiring all product teams to include fraud prevention measures during development. This includes fraud risk assessments, in-product security controls, and deeper integration of machine learning to detect and prevent suspicious behaviour.

Staying safe in the AI era

Microsoft advises consumers to remain cautious when shopping or applying for jobs online. Urgency tactics such as countdown timers and too-good-to-be-true offers should raise immediate red flags. Users should verify the legitimacy of websites and job listings through secure channels and trusted sources, never sharing personal or payment information with unknown parties.

Job seekers should look out for tell-tale signs of fraud, such as communication through personal email accounts or messaging apps, requests for money, or unnatural video interviews potentially created using deepfake technology.

Microsoft continues to evolve its fraud detection efforts to keep pace with an expanding threat landscape, aiming to make its platforms safer in an increasingly AI-powered world.
