
How does the rise of deepfake AI contribute to cybersecurity risk?

In April 2018, comedian and director Jordan Peele used Barack Obama as the subject of a PSA he released. “Barack Obama” voiced his opinion about Black Panther and directed insults at President Donald Trump. This is an example of how deepfake AI propagates disinformation, which can sow confusion among the public.

The term “deepfake” combines “deep learning” and “fake.” It refers to AI technology used to create fake videos and audio that look and sound relatively realistic. It originated in 2017 in a Reddit community, where users employed the technology to swap celebrities’ faces onto movie characters. Its ease of use and accessibility have made deepfakes increasingly hard to detect and a growing threat to cybersecurity.

In recent years, the tools have become readily available on GitHub, where anyone can experiment and create their own deepfakes using code hosted on the repository service. The results may initially look out of place, with awkward facial expressions and reaction lag, but with more iterations, users can beat AI detection and easily fool viewers.

Its impact on the economy

One reason for the rise of deepfakes may be that they remove the hassle of compromising a system. Put simply, cybercriminals no longer need powerful hacking skills to attack a target; hackers can damage an organisation’s financial health with nothing more than a piece of fake information. In 2013, more than US$130 billion in stock value was wiped out when a fake tweet claimed that explosions in the White House had injured then-US president Barack Obama. This shows that by spreading inaccurate information or data, the market can easily be manipulated, destabilising an organisation’s financial health and undermining its ability to secure investment from other companies.

Using deepfakes for political agendas

Politically, deepfakes pose a danger to voters, as the impact of fake videos and audio may shift election results. As the phrase “seeing is believing” suggests, voters tend to trust whatever is publicised on the network. When information can be distorted and misused, hackers can exploit this weakness to portray a specific impression of a candidate. A prominent example of such imitation is when fraudsters used AI to mimic a chief executive’s voice in an unusual cybercrime. The CEO of a UK-based energy firm thought he was speaking on the phone with his boss and subsequently transferred €220,000 (approx. S$355,237) to the bank account of a Hungarian supplier. Only after scrutinising the call did the CEO realise it had been made from an Austrian phone number; the man’s subtle German accent and manner of speech were so close to his boss’s that he failed to detect anything suspicious. This shows how seamlessly deepfakes can imitate an authority figure and manipulate people into actions that are dangerous and unethical.

Advancements in technology have brought about many changes, many of them solutions and assets for a better-connected world. Deepfakes can also be put to good use, such as in a memorial service or as a way of paying respect to someone who has passed, much like holographic technology that projects a 3D image that looks real from any angle. Unfortunately, many unethical cybercriminals choose to use the technology in ways that threaten the community. As ideas and activities become more interconnected, does that mean we will adopt a “zero trust” policy to safeguard our interests? And in a “zero trust” world, how can we be more interconnected?
