
How does the rise of deepfake AI contribute to cybersecurity risk?


In April 2018, comedian and director Jordan Peele used Barack Obama as the subject of a PSA he released. "Barack Obama" voiced his opinion about Black Panther and directed an insult at President Donald Trump. This is an example of how deepfake AI propagates disinformation, which can sow confusion among the public.

The term "deepfake" combines "deep learning" and "fake." It refers to AI technology used to create fake videos and audio that look and sound relatively realistic. It originated in 2017 in a Reddit community, where users employed the technology to swap celebrities' faces onto other movie characters. Its ease of use and accessibility have made deepfakes increasingly hard to detect and a growing threat to cybersecurity.
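To make the idea concrete, the sketch below shows the shared-encoder, per-identity-decoder autoencoder design often credited with popularising early face swaps. It is a minimal, illustrative PyTorch example, not the code of any specific tool; the layer sizes, 64x64 face crops and variable names are assumptions chosen for brevity.

```python
# Minimal sketch of the shared-encoder / per-identity-decoder autoencoder idea
# behind classic face-swap deepfakes. Illustrative only; real tools add GANs,
# face alignment, blending, and much larger networks.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Compresses an aligned 64x64 face crop into a latent vector."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs one identity's face from the shared latent space."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),  # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(), # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

# One shared encoder, one decoder per identity. Training reconstructs each
# person through their own decoder; swapping routes person A's encoding
# through person B's decoder, producing B's face with A's pose and expression.
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()

face_a = torch.rand(1, 3, 64, 64)     # stand-in for an aligned face crop of person A
swapped = decoder_b(encoder(face_a))  # A's pose and expression, rendered as B
print(swapped.shape)                  # torch.Size([1, 3, 64, 64])
```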

In recent years, the tools have become readily available on GitHub, where anyone can experiment and create their own deepfakes using code hosted on the repository service. The results may initially look out of place, with awkward facial expressions and reaction lag, but with more iterations a deepfake can beat AI detection and easily fool viewers. A sketch of what such frame-level detection can look like follows below.
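For context on what "AI detection" typically means here, the snippet below sketches a simple frame-level detector: sample frames from a video and score each with a binary real-versus-fake classifier. The ResNet-18 backbone, file path and sampling rate are illustrative assumptions, not any particular product's approach; a usable detector would need its head fine-tuned on labelled real and fake footage.

```python
# Hypothetical frame-level deepfake detector sketch: score sampled video
# frames with a binary real-vs-fake CNN. Illustrative only; the untrained
# head below would output meaningless scores until fine-tuned.
import cv2                     # pip install opencv-python
import torch
import torch.nn as nn
from torchvision import models, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

# ImageNet-pretrained backbone with its classifier replaced by a single
# "fake probability" output (to be fine-tuned on real/fake face crops).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 1)
model.to(device).eval()

preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def fake_score(video_path: str, every_n_frames: int = 30) -> float:
    """Average fake-probability over frames sampled from a video file."""
    capture = cv2.VideoCapture(video_path)
    scores, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % every_n_frames == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            batch = preprocess(rgb).unsqueeze(0).to(device)
            with torch.no_grad():
                scores.append(torch.sigmoid(model(batch)).item())
        index += 1
    capture.release()
    return sum(scores) / len(scores) if scores else 0.0

# Example usage (the file name is hypothetical):
# print(f"estimated fake probability: {fake_score('suspect_clip.mp4'):.2f}")
```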

Its impact on the economy

One reason for the rise of deepfakes may be that they remove much of the grind of compromising a system. To put it simply, cybercriminals no longer need powerful hacking skills to attack a target. With that, hackers can damage an organization's financial health with a single piece of fake information. In 2013, more than US$130 billion in stock value was briefly wiped out when a fake tweet claimed that explosions at the White House had injured then-US president Barack Obama. This shows that by spreading inaccurate information or data, the market can easily be manipulated, destabilizing an organization's financial health and potentially undermining its ability to secure investors.

Using deepfakes for political agenda

Politically, deepfakes can pose a danger to voters, as the impact of fake videos and audio may shift voting results. As the phrase goes, "seeing is believing": voters will trust whatever is publicized on the network. When information can be distorted and misused, hackers can exploit this weakness to portray a specific impression of a candidate. The same impersonation technique also enables fraud. In one prominent example, fraudsters used AI to mimic a chief executive's voice in an unusual cybercrime. The CEO of a UK-based energy firm thought he was speaking on the phone with his boss and subsequently transferred €220,000 (approx. S$355,237) to the bank account of a Hungarian supplier. Only after scrutiny did the CEO recognize that the call had been made from an Austrian phone number. Because the voice closely resembled his boss's subtle German accent and manner of speech, the CEO detected nothing suspicious. This shows how deepfakes can imitate an authority figure seamlessly and manipulate actions in ways that are dangerous and unethical.

Advances in technology have brought about many changes, many of them solutions and assets for a better-connected world. The concept of deepfakes can also be put to good use, such as in a memorial service for someone important or as a way of paying respect. It is very similar to holographic technology that projects a 3D image which looks real from any angle. Unfortunately, many unethical cybercriminals choose to use it in ways that threaten the community. As ideas and activities become more interconnected, does that mean we will adopt a "zero trust" policy to safeguard our interests? And in a "zero trust" world, how can we be more interconnected?
