
ChatGPT trend raises privacy concerns with photo-based location searches

People use ChatGPT to identify photo locations, raising privacy concerns as new AI tools make “reverse location search” easier than ever.

A new viral trend is drawing attention—and concern—as people begin using ChatGPT to figure out the locations shown in random photos. This new use of artificial intelligence, which some call “reverse location search,” is quickly gaining popularity on social media, especially on X (formerly Twitter).

The recent update from OpenAI includes two powerful models, o3 and o4-mini. These AI tools are designed to look at images in ways that go beyond just recognising what’s in them. They can zoom in, crop, rotate, and analyse even blurry or distorted photos. When paired with their ability to search online, the result is a surprisingly effective tool for figuring out where a picture was taken—even without obvious clues.

How people are using the new models

With the launch of o3 and o4-mini, people on X have started experimenting by uploading restaurant menus, photos of neighbourhoods, shop signs, and even selfies. Then they ask ChatGPT to play a version of GeoGuessr—a game where you try to guess a location based on Google Street View.
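For readers curious about the mechanics, the same kind of request can be reproduced outside the app. The sketch below is an illustration only, assuming OpenAI's official Python SDK and a vision-capable model such as o4-mini; the file name and prompt wording are hypothetical, and the viral posts themselves simply attach the photo in the ChatGPT interface.

```python
# Illustrative sketch only: it mirrors the GeoGuessr-style prompt people are
# posting, using OpenAI's official Python SDK. The model name, prompt wording
# and file name are assumptions; the viral posts attach the photo inside the
# ChatGPT app rather than calling the API.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def guess_location(image_path: str) -> str:
    # Encode the local photo as a base64 data URL so it can be sent inline.
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="o4-mini",  # assumed: any vision-capable reasoning model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Let's play GeoGuessr. Where was this photo taken? "
                         "Explain which visual clues you used."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

print(guess_location("street_corner.jpg"))  # hypothetical file name
```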

The results have impressed many. The o3 model is exceptionally skilled at picking up small details—like a specific tile pattern, shopfront, or even the angle of sunlight—to guess cities, landmarks, restaurants, and bars. It often does this without relying on previous conversations, saved data, or the photo’s metadata (EXIF data).
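Because that point about metadata matters for the privacy discussion below, here is a minimal sketch of what EXIF location data actually looks like, assuming the Pillow imaging library is installed; the file name is hypothetical. Checking a photo this way shows whether it carries embedded GPS coordinates, although, as noted above, o3 often does not need them.

```python
# Minimal sketch, assuming the Pillow imaging library (pip install pillow).
# It reports any GPS coordinates embedded in a photo's EXIF metadata.
from PIL import Image
from PIL.ExifTags import GPSTAGS

def gps_metadata(image_path: str) -> dict:
    exif = Image.open(image_path).getexif()
    # Tag 34853 (GPSInfo) points to the block of GPS-related EXIF entries.
    gps_block = exif.get_ifd(34853)
    # Translate numeric tag IDs into readable names such as GPSLatitude.
    return {GPSTAGS.get(tag, tag): value for tag, value in gps_block.items()}

# Hypothetical file name used purely for illustration.
print(gps_metadata("holiday_photo.jpg") or "No GPS metadata found")
```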

In one widely shared example, someone uploaded a photo of a purple, mounted rhino head in a dimly lit bar. While GPT-4o incorrectly guessed that the photo was taken in a British pub, the o3 model correctly identified it as coming from a speakeasy in Williamsburg, New York. This shows how advanced image reasoning can outperform earlier ChatGPT models.

Tech news site TechCrunch also ran several tests, comparing o3 with GPT-4o. In many cases, both models arrived at the correct answer. Interestingly, GPT-4o was often quicker. However, in some instances, o3 stood out by identifying places the older model couldn’t.

Not always accurate — and not always safe

Of course, the system isn’t perfect. Sometimes, o3 couldn’t confidently guess a location and got stuck or gave a wrong answer. Some users on X pointed out that the AI could be way off the mark in its guesses.

But what’s most concerning is how this feature could be misused. There’s currently nothing stopping someone from taking a screenshot of a person’s Instagram Story and using ChatGPT to try to find out where they are. While the same kind of guessing could have been done before with older tools, these new models make it quicker, easier, and more accurate.

So far, OpenAI hasn’t included any clear safety warnings or tools to limit the use of these features. Its latest safety report for o3 and o4-mini does not mention “reverse location search” or offer guidance on protecting people’s privacy.

The growing risk of smarter AI tools

The rise of this trend highlights a bigger issue: as AI becomes smarter, the risk of misuse grows. These models weren’t made to invade people’s privacy, but they could easily be used that way. What began as a fun guessing game is quickly becoming a tool that could expose people’s locations without their knowledge.

While it’s exciting to see what AI can do, it’s just as important to ask hard questions. Should tools like this be more restricted? Should there be alerts or blocks when AI is asked to find someone’s location from a photo? These are issues that AI developers, lawmakers, and users will need to confront before the risks get out of hand.
