How does the rise of deepfake AI contribute to cybersecurity risk?

by Eunice Wee

In April 2018, comedian and director Jordan Peele released a PSA featuring Barack Obama as its subject. In the video, "Barack Obama" voiced his opinion about Black Panther and directed an insult at President Donald Trump. This is an example of how deepfake AI can propagate disinformation and sow confusion among the public.

The term "deepfake" combines "deep learning" and "fake." It refers to AI technology used to create fake videos and audio that look and sound relatively realistic. It originated in 2017 in a Reddit community, where users employed the technology to swap celebrities' faces onto other movie characters. Its ease of use and accessibility have made deepfakes increasingly hard to detect and a growing threat to cybersecurity.

In recent years, the tools have become readily available on GitHub, where anyone can experiment and create their own deepfakes using code published on the repository hosting service. The results may initially look out of place, with awkward facial expressions and reaction lag. With more iterations, however, users can refine a deepfake until it beats AI detection and easily fools viewers.

Its impact on the economy

One reason for the rise of deepfakes may be that they remove the grind of compromising a system. To put it simply, cybercriminals no longer need powerful hacking skills to attack an organization. A single piece of fake information can damage an organization's financial health. In 2013, more than US$130 billion in stock value was briefly wiped out when a fake tweet claimed that explosions at the White House had injured then-US president Barack Obama. This shows how spreading inaccurate information can manipulate the market, destabilize an organization's finances, and undermine its ability to secure investors.

Using deepfakes for political agendas

Politically, deepfakes pose a danger to voters, as fake video and audio may shift election results. As the phrase goes, "seeing is believing": voters will tend to trust whatever is publicized on the network. When information can be distorted and misused, hackers can exploit this weakness to portray candidates in a particular light. A prominent example of voice impersonation came when fraudsters used AI to mimic an executive's voice in an unusual cybercrime. The CEO of a U.K.-based energy firm thought he was speaking on the phone with his boss and subsequently transferred €220,000 (approx. S$355,237) to the bank account of a Hungarian supplier. Only after scrutinizing the call did the CEO realize it had been made from an Austrian phone number. The subtle German accent and manner of speech so closely resembled his boss's that the CEO failed to suspect anything. This shows how seamlessly deepfakes can imitate an authority figure and manipulate people into actions that are dangerous and unethical.

Advancements in technology have brought about many changes, many of them solutions and assets for a better-connected world. Deepfakes can also be put to good use, such as in a memorial service or as a way of paying respect to someone important, much like holographic technology that projects a 3D image that looks real from any angle. Unfortunately, many unethical cybercriminals choose to use deepfakes in ways that threaten the community. As our ideas and activities become more interconnected, does that mean we will adopt a "zero trust" policy to safeguard our interests? And in a "zero trust" world, how can we become more interconnected?
