Zoom introduces real-time human verification for video meetings
Zoom launches real-time human verification to combat deepfake fraud in video meetings.
Zoom has introduced a new identity verification feature designed to confirm whether meeting participants are genuine humans rather than AI-generated impersonations. The update, launched on 17 April as part of World’s ID 4.0 rollout, marks a significant step in tackling the growing threat of deepfake fraud in online meetings.
The feature is the result of a partnership between Zoom and World, the digital identity company founded by Sam Altman and previously known as Worldcoin. The integration allows meeting hosts to verify participants’ authenticity during live calls, helping organisations reduce the risk of fraudulent activity via manipulated video feeds.
The introduction reflects mounting concern across industries about the misuse of artificial intelligence tools capable of producing convincing deepfake video and audio. As businesses rely more on virtual collaboration, the need to verify individuals’ identities on screen has become increasingly important.
How the verified human badge works
The new verification system relies on what World describes as its Deep Face technology, which validates identity in three steps. It compares a signed image captured during the user’s initial Orb registration, a live facial scan taken from the user’s device, and the video frame currently visible to others in the meeting.
When all three data points match successfully, a “Verified Human” badge appears next to the participant’s name. This visual marker signals to others in the call that the individual has passed the identity check and is recognised as a genuine person rather than an AI-generated representation.
Meeting hosts can control how strictly the feature is applied. They can choose to make human verification mandatory before participants are allowed to join a session, effectively blocking anyone who has not completed the process. In addition, hosts can request real-time verification during a meeting if there is uncertainty about a participant’s identity.
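The three-way match and host policy described above can be sketched in code. This is an illustrative outline only, not World's or Zoom's actual implementation: the embedding representation, the cosine-similarity metric, the `MATCH_THRESHOLD` value, and all function and field names are assumptions made for the sketch.

```python
from dataclasses import dataclass

MATCH_THRESHOLD = 0.9  # assumed similarity cutoff, not a published value

@dataclass
class Participant:
    name: str
    orb_registration_image: list[float]  # signed embedding from Orb enrolment
    live_device_scan: list[float]        # live facial scan from the device
    meeting_video_frame: list[float]     # frame currently shown to the call

def similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two face embeddings (stand-in metric)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = lambda v: sum(x * x for x in v) ** 0.5
    return dot / (norm(a) * norm(b))

def verified_human(p: Participant) -> bool:
    """'Verified Human' badge appears only when all three data points match."""
    pairs = [
        (p.orb_registration_image, p.live_device_scan),
        (p.live_device_scan, p.meeting_video_frame),
        (p.orb_registration_image, p.meeting_video_frame),
    ]
    return all(similarity(a, b) >= MATCH_THRESHOLD for a, b in pairs)

def may_join(p: Participant, host_requires_verification: bool) -> bool:
    """Host policy: optionally block unverified participants before they join."""
    return verified_human(p) if host_requires_verification else True
```

In this sketch, a mismatch between any pair of the three data points (for example, a deepfaked video frame that does not match the live device scan) withholds the badge, and a host who has made verification mandatory would keep that participant out of the session entirely.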
Some early reactions suggest mixed feelings about the idea of proving one’s humanity in routine digital interactions. One participant described the experience by saying, “To me, it feels weird and ironic that I’d need to prove that I’m a human, just to be seen as one in a Zoom meeting.” Such reactions highlight the unusual social shift that accompanies the use of biometric tools in everyday communication.
Rising threat of deepfake fraud
The development of this verification feature comes amid a surge in high-profile cases involving deepfake-enabled fraud. These incidents have demonstrated how convincingly manipulated video can deceive individuals and organisations, sometimes with severe financial consequences.
One widely reported case occurred in early 2024, when global engineering firm Arup lost approximately US$25 million after an employee in Hong Kong authorised a series of wire transfers during what appeared to be a legitimate video call. Investigations later revealed that every other participant in the call had been created using deepfake technology, leaving the employee as the only real person present.
A similar incident took place in 2025 involving a multinational company in Singapore. In that case, executives were deceived into approving financial actions during a video meeting in which several participants later turned out to be AI-generated impersonations. The sophistication of the technology used made the deception difficult to detect in real time.
Financial losses attributed to deepfake-related fraud have continued to rise. Industry data shows that such scams resulted in losses exceeding US$200 million during the first quarter of the previous year alone. Security experts warn that the scale and frequency of these incidents indicate that deepfake fraud is no longer a distant or theoretical risk, but a practical challenge affecting businesses worldwide.
Biometric identity becoming workplace norm
The adoption of biometric identity tools such as facial verification signals a broader shift in how organisations approach digital trust. Companies are increasingly exploring ways to verify not only user accounts but also the physical identity of individuals participating in sensitive discussions or financial approvals.
Advocates argue that systems like the verified human badge could become a standard requirement for high-risk meetings, particularly those involving financial transactions or confidential negotiations. By confirming identity in real time, organisations may be able to prevent fraudulent actions before they occur rather than responding after damage has been done.
However, the widespread use of biometric verification also raises questions about privacy, accessibility, and the balance between security and convenience. Some users may be reluctant to register biometric data, while others may face technical challenges in completing the required scans. These concerns are likely to influence how quickly such tools are adopted across different industries and regions.
Despite these challenges, the direction appears clear. As artificial intelligence tools become more capable of mimicking human appearance and speech, digital platforms are moving towards stronger forms of identity assurance. The introduction of real-time human verification in video meetings suggests that biometric proof of personhood may soon become routine in professional communication rather than an optional feature.