
How does the rise of deepfake AI contribute to cybersecurity risk?

In April 2018, comedian and director Jordan Peele used Barack Obama as the subject of a PSA he released. In it, a fake “Barack Obama” voiced his opinion about Black Panther and directed insults at President Donald Trump. This is an example of how deepfake AI propagates disinformation, which can confuse the public.

Deepfakes take their name from the terms “deep learning” and “fake.” The term describes an AI technology used to create fake videos and audio that look and sound relatively realistic. It originated in 2017 in a Reddit community, where users employed the technology to swap celebrities’ faces onto other movie characters. The ease of use and accessibility of these tools has made deepfakes increasingly hard to detect and a growing threat to cybersecurity.

In recent years, the tools have become readily available on GitHub, where anyone can experiment and create their own deepfakes using code hosted on the repository service. The results may look out of place at first, with awkward facial expressions and lagging reactions, but with more training iterations the fakes improve until they can beat AI detection and easily fool viewers.
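To make the “more iterations” point concrete, below is a minimal, purely illustrative sketch of the adversarial generator-versus-detector training loop that many open-source deepfake tools are built on. The tiny fully connected networks, the random 64×64 “face” tensors, and the training settings are assumptions chosen for brevity, not code from any particular GitHub project.

```python
# Illustrative generator-vs-detector (GAN-style) loop. All shapes, layer sizes and
# hyperparameters here are arbitrary assumptions for demonstration only.
import torch
import torch.nn as nn

latent_dim = 100

# Generator: turns random noise into a fake 64x64 grayscale "face" (flattened).
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, 64 * 64), nn.Tanh(),
)

# Detector: scores how likely a (flattened) frame is to be real.
detector = nn.Sequential(
    nn.Linear(64 * 64, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(detector.parameters(), lr=2e-4)

real_faces = torch.rand(32, 64 * 64)  # stand-in for a batch of real frames

for step in range(1000):  # more iterations -> harder-to-spot fakes
    # 1. Train the detector to tell real frames from generated ones.
    fake_faces = generator(torch.randn(32, latent_dim)).detach()
    d_loss = (loss_fn(detector(real_faces), torch.ones(32, 1)) +
              loss_fn(detector(fake_faces), torch.zeros(32, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2. Train the generator to fool the just-updated detector.
    fake_faces = generator(torch.randn(32, latent_dim))
    g_loss = loss_fn(detector(fake_faces), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

Every pass through this loop nudges the generator toward output the detector cannot distinguish from real frames, which is why longer training and repeated iteration make deepfakes progressively harder to catch.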

Its impact on the economy

One reason for the rise of deepfakes may be that they remove much of the hassle of compromising a system. To put it simply, cybercriminals do not need powerful hacking skills to attack an organization. A hacker can damage an organization’s financial health with nothing more than a piece of fake information. In 2013, more than US$130 billion in stock value was briefly wiped out when a fake tweet claimed that explosions in the White House had injured then-US president Barack Obama. By spreading inaccurate information or data, attackers can easily manipulate the market, destabilizing an organization’s financial health and undermining its ability to secure investors.

Using deepfakes for political agendas

Politically, deepfakes pose a danger to voters, as the impact of fake videos and audio may shift voting results. As the phrase goes, “seeing is believing”: voters will trust whatever is publicized on the network. When information can be so easily distorted and misused, hackers can exploit this weakness to portray a specific impression of a candidate. A prominent example of voice impersonation comes from an unusual cybercrime in which fraudsters used AI to mimic a chief executive’s voice. The CEO of a UK-based energy firm thought he was speaking on the phone with his boss and subsequently transferred €220,000 (approx. S$355,237) to the bank account of a Hungarian supplier. Because the voice carried his boss’s subtle German accent and manner of speech, the CEO failed to detect anything suspicious; only after later scrutiny did he recognize that the call had been made from an Austrian phone number. This shows how seamlessly deepfakes can imitate an authority figure and manipulate people into actions that are dangerous and unethical.

The advancement of technology has brought about many changes, many of them solutions and assets for a better-connected world. The concept behind deepfakes can also be put to good use, such as in a memorial service or as a way of paying respect to someone important, much like holographic technology that projects a 3D image which looks real from any angle. Unfortunately, many unethical cybercriminals choose to use it in ways that threaten the community. As ideas and activities become more interconnected, does that mean we will adopt a “zero trust” policy to safeguard our interests? And in a “zero trust” world, how can we be more interconnected?
