
ChatGPT could soon gain the ability to see

ChatGPT’s Advanced Voice Mode might soon include a live camera feature, enabling AI to identify objects and interact visually.

ChatGPT’s Advanced Voice Mode, known for enabling real-time conversations with the chatbot, may soon gain visual capabilities. Code uncovered in the latest beta version of the app, ChatGPT v1.2024.317, hints at an upcoming “live camera” feature, as reported by Android Authority. The discovery suggests a rollout could be imminent, though OpenAI has yet to confirm an official release date.

A glimpse into the feature’s early tests

ChatGPT’s visual capabilities have been previewed before. During the initial alpha testing phase of Advanced Voice Mode in May, OpenAI demonstrated what the feature could do. In one example, the chatbot used a phone’s camera to identify a dog, recognise its ball, and associate the two in the context of playing fetch. This ability to observe, understand, and link objects to real-world scenarios was widely praised by early testers.

Alpha testers were quick to explore the feature’s uses. A notable example came from Manuel Sainsily, a user on X (formerly Twitter), who used the camera to ask questions about his new kitten. This interactive capability showcased how the feature could provide both fun and practical benefits.

When Advanced Voice Mode entered beta testing in September for ChatGPT Plus and Enterprise users, its visual functionality was notably absent. Despite this, the voice feature gained immense popularity for enabling natural, dynamic conversations. According to OpenAI, users could interrupt the chatbot at any moment, and it could even pick up on the speaker’s emotional tone.

What sets it apart from competitors?

ChatGPT could gain a unique edge over rivals like Google and Meta if the live camera feature is introduced. Google’s conversational AI, Gemini Live, speaks over 40 languages but lacks visual processing capabilities. Similarly, Meta’s Natural Voice Interactions, showcased at the Connect 2024 event in September, cannot use camera inputs. While these systems are capable in their own ways, OpenAI’s visual feature could redefine how AI assistants interact with the world.

Desktop users can now enjoy Advanced Voice Mode

In a related update, OpenAI announced that Advanced Voice Mode is now available to paid ChatGPT Plus users on desktop. Previously limited to mobile devices, this update means users can now access this feature directly on their laptops or PCs.

The introduction of the live camera could mark a significant leap forward, combining the ability to see and hear into one seamless AI experience. While the exact timing remains uncertain, the potential impact of this development is already generating excitement among users and industry experts alike.
