Concerns are growing that Google’s artificial intelligence tools are unintentionally directing users towards fraudulent customer support numbers. In recent months, experts have warned that AI has enabled increasingly sophisticated scams, including voice-cloning fraud and online traps that can lead to major financial losses.
One recent case highlights the dangers. Alex Rivlin, owner of a real estate firm, searched for Royal Caribbean’s customer service number through Google’s AI Overviews, the feature that presents summarised information at the top of search results. The number shown did not belong to the cruise company but was instead operated by a scammer.
“I’m sharing this as a public service announcement. With AI-generated results and spoofed numbers, the game has changed,” Rivlin said in a Facebook post. He admitted that he had already shared his credit card details before realising the number was fake and only narrowly avoided being defrauded.
A similar case has been reported with Southwest Airlines, where a fake number appeared in the AI Overview section. The number, which is not listed on the airline’s official website, was allegedly used by fraudsters who tried to charge customers hundreds of dollars for simple corrections, such as fixing a misspelt name on a ticket.
How AI results can mislead users
The problem is not limited to airlines and cruise companies. Users on Reddit have also reported being targeted after searching for the support numbers of food delivery services. One individual almost fell victim to a scam while searching for a helpline; in another case, a 65-year-old man lost more than US$3,000 after searching for “Swiggy call centre” on Google.
Swiggy itself has repeatedly warned customers that it does not operate any official helpline numbers. Its website clearly states: “We do not have any official customer care phone lines. Beware of fake numbers.” Yet when a search is conducted using Google’s AI Mode, the results suggest that Swiggy “primarily” resolves issues within its app but also list additional numbers described as “customer service contact options.”
These numbers are misleading. One belongs only to Swiggy’s partner onboarding service, while the other two do not appear in any official directory. One of these numbers was flagged in a complaint lodged by a misled customer on the Consumer Complaints Court website.
Such discrepancies demonstrate how AI tools can provide confusing and potentially harmful information, creating an environment where scammers thrive.
Calls for greater caution and oversight
Experts believe the issue stems from how scammers manipulate online forums and user-generated content sites. By flooding these spaces with fake numbers, they increase the likelihood that AI-powered search tools will surface their information.
“Scammers have discovered that they can flood user-generated content sites and forums with fake phone numbers for major businesses, then trick callers into sharing their credit card information,” Lily Ray, Vice President of SEO Strategy & Research at Amsive, explained on LinkedIn.
Reports from Odin and ITBrew also suggest that hackers can design prompts instructing Google’s Gemini AI to include fraudulent contact details in its summaries. Google has acknowledged the issue and said it is working to remove unreliable entries from AI Overviews.
Until stronger safeguards are in place, experts advise users to rely solely on official company websites for helpline numbers and customer service details. This, they argue, is the only reliable way to avoid being caught in increasingly sophisticated AI-driven scams.