What Is Deepfake Technology, And How Dangerous Is It?

  • Sunny Pu
  • 1 day ago
  • 4 min read

[Image: A photo showing the differences between a real video and a deepfake one.]


For one man, Elon Musk went from the world's richest person to the world's biggest scammer in just a few short videos. In August 2024, deepfakes of Elon Musk, AI-generated videos that use a person's likeness to create false but hyper-realistic footage of them, convinced Steve Beauchamp, an 82-year-old retiree, to invest over $690,000 in a fake investment opportunity.


Typically, deepfake technology swaps faces or alters voices to make a person appear to say or do something they never did. The term "deepfake" combines "deep learning," the subset of AI that powers the technique, with "fake," the misleading content it produces.


But before diving into the ethical complications of deepfake technology, we must understand how it works. First, a machine learning model learns to replicate human faces, expressions, movements, and even voices by training on large datasets: organized collections containing thousands of examples of faces, movements, and speech. Much like a person learning a skill, the model makes mistakes, measures them, and corrects them, gradually producing ever more realistic output.
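To make that "make mistakes, then correct them" loop concrete, here is a minimal sketch in PyTorch. It is not any specific deepfake tool: the network sizes are toy values, and random tensors stand in for a real dataset of cropped face images. It only illustrates how a model reconstructs faces, scores its own error, and nudges its weights to reduce it.

```python
import torch
import torch.nn as nn

# Toy autoencoder: compresses a 64x64 RGB "face" into a small code and
# reconstructs it. Learning to reconstruct faces is how a model absorbs
# facial features, expressions, and lighting from a dataset.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 64 * 64, 256), nn.ReLU(),    # encoder
    nn.Linear(256, 3 * 64 * 64), nn.Sigmoid()  # decoder
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Random tensors standing in for a batch of real face crops.
face_batch = torch.rand(32, 3, 64, 64)

for step in range(100):
    reconstruction = model(face_batch).view(32, 3, 64, 64)
    loss = loss_fn(reconstruction, face_batch)  # "the mistake"
    optimizer.zero_grad()
    loss.backward()   # measure how each weight contributed to the error
    optimizer.step()  # "correct the mistake" by adjusting the weights
```

A real pipeline trains far larger networks on thousands of aligned face images, but the loop itself, predict, measure the error, correct, is the same.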


Next come Generative Adversarial Networks (GANs): machine learning models built from two competing components, a Generator and a Discriminator, that produce new data modeled on real-world datasets. The Generator produces candidate fake videos, and the Discriminator "responds" to them. Simply put, the Discriminator gives the Generator feedback on whether each output looks real. The Generator's goal is to eventually fool the Discriminator by gradually improving the realism and detail of its fakes.
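Below is a minimal GAN sketch, again in PyTorch with toy sizes and random tensors standing in for real face data. It is meant only to show the Generator/Discriminator feedback loop described above, not a production deepfake pipeline.

```python
import torch
import torch.nn as nn

generator = nn.Sequential(      # turns random noise into a fake "face"
    nn.Linear(100, 256), nn.ReLU(),
    nn.Linear(256, 3 * 64 * 64), nn.Tanh()
)
discriminator = nn.Sequential(  # scores an image: real (1) or fake (0)
    nn.Linear(3 * 64 * 64, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid()
)
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

real_faces = torch.rand(32, 3 * 64 * 64)  # stand-in for real face images

for step in range(100):
    # 1) Train the Discriminator: learn to tell real faces from generated ones.
    noise = torch.randn(32, 100)
    fakes = generator(noise).detach()
    d_loss = (bce(discriminator(real_faces), torch.ones(32, 1)) +
              bce(discriminator(fakes), torch.zeros(32, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) Train the Generator: its "feedback" is how easily the Discriminator
    #    spots its fakes; it improves until the fakes get scored as real.
    noise = torch.randn(32, 100)
    g_loss = bce(discriminator(generator(noise)), torch.ones(32, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

The two networks are trained in alternation: each time the Discriminator gets better at spotting fakes, the Generator is pushed to produce more convincing ones.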


Finally, the generated video goes through manual post-processing, such as editing to smooth out artificial movements or inconsistencies, a stage that can take weeks. Lighting, shadows, skin tones, and body movements are adjusted until the final video is virtually indistinguishable from reality. Once editing is finished, the deepfake is ready to be shared and distributed across the internet for millions to view.


As deepfake technology grows more accessible, it poses serious ethical, political, and economic risks. Incidents like Beauchamp's have occurred across the globe: scammers create hyper-realistic deepfakes of celebrities, religious figures, or politicians to swindle people, often the elderly, out of money. These deepfakes look so credible and "true to life" that even seasoned internet users struggle to distinguish them from reality.


Deepfakes have not only scammed innocent people out of millions of dollars; they have also disrupted stock markets and the businesses of large companies. A fake video of a CEO committing a crime or saying something discriminatory, for example, could severely damage both the company's reputation and its stock price as customers lose trust.


Deepfakes also threaten democracies by giving political actors the ability to manipulate voters during election season. These worries aren't hypothetical. Just days before a pivotal parliamentary election in Slovakia, a deepfake audio recording spread like wildfire across social media. In the fabricated recording, Michal Simecka, a leading candidate who supported NATO and the United States, appeared to discuss electoral fraud with a prominent journalist. Although its authenticity was quickly called into question, the clip's virality and timing likely contributed to Simecka's loss to his pro-Russian opponent.


Unfortunately, no legal action was taken against the perpetrator. This lack of accountability stems from the fact that large-scale regulation is nearly impossible to enforce fully, leaving a tangle of unresolved legal issues around privacy, intellectual property, defamation, and regulatory compliance.


Despite the scale of the problem, the United States has enacted and enforced few regulations on the creation and distribution of deepfakes. Some states, such as Texas and California, have passed laws against them, but most of these regulations are designed to protect a celebrity's image or endorsements, offering virtually no protection for ordinary people.

In the end, deepfake technology is a double-edged sword: an incredible feat of human ingenuity and a snake pit of ethical dilemmas. Its use of machine learning models has opened new levels of innovation and creativity in filmmaking, gaming, and digital art. But it can also wreak havoc by spreading misinformation, invading privacy, and eroding trust in authentic political media.


So, how dangerous is deepfake technology? That depends on whether governments, tech companies, and everyday users harness its power responsibly. Until then, it's probably a good idea to double-check the viral videos you see online, because as the technology evolves, the line between reality and fiction is only going to get blurrier.