
OpenAI may soon require a verified ID to access future AI models

OpenAI may soon require verified ID for access to advanced AI models, aiming to boost safety and prevent misuse of its tools.

OpenAI could soon ask you to complete an ID verification process to access its most advanced AI models. This comes as part of a new plan the company calls “Verified Organization,” which was quietly announced on a support page last week.

The process is designed for developers and businesses that use OpenAI’s API, giving them access to future high-level tools. But not everyone will qualify. To become verified, your organisation must submit a government-issued ID from one of the countries OpenAI currently supports. An ID can only be used to verify one organisation every 90 days.

Why OpenAI is adding ID checks

According to OpenAI, this move is all about safety and responsible AI use. While most developers follow the rules, the company says a “small minority” misuse the platform in ways that go against its usage policies. OpenAI says the new ID verification step is meant to help prevent this misuse while keeping access open to responsible developers.

“We take our responsibility seriously to ensure that AI is both broadly accessible and used safely,” reads a message on the support page. “Unfortunately, a small minority of developers intentionally use the OpenAI APIs, violating our usage policies. We’re adding the verification process to mitigate unsafe use of AI while continuing to make advanced models available to the broader developer community.”

The idea is to create a safer environment for people and organisations using OpenAI’s tools daily, especially as the technology becomes more powerful.

What the new process involves

According to OpenAI, verification takes just a few minutes to complete but requires a valid ID. Once approved, your organisation will be granted the new “Verified Organization” status, which could become a requirement for using future models and features.

OpenAI also hinted that this step is part of a larger push to prepare for the next big model release. If you’re a developer or a business relying on OpenAI tools, you might need to get verified soon to keep using all the platform’s new capabilities.

Tibor Blaho first flagged the change in a post on April 12, sharing a screenshot of the support page that describes the process and encouraging developers to prepare for what’s next.

A response to rising security concerns

The change may be about more than safety and policy enforcement: it could also strengthen security as OpenAI’s models grow more capable. The company has previously published reports on tracking and blocking misuse, including some linked to foreign actors.

In one example, OpenAI highlighted efforts to stop malicious use of its API by groups believed to be based in North Korea. In another case, OpenAI investigated a data breach connected to DeepSeek, an AI lab based in China. According to a Bloomberg report earlier this year, a group possibly linked to DeepSeek used OpenAI’s API to extract large amounts of information in late 2024. That information may have been used to train competing models, which goes against OpenAI’s rules.

This follows OpenAI’s decision to block access to its services in China last summer. As the platform continues to expand, protecting its data, tools, and users has become a top priority.

So, if you’re part of a business or development team using OpenAI’s tools, it might be time to get verified. While the process seems simple, it could become essential for continued access to OpenAI’s most powerful technology.
