A Hijacked Reality: The Rise of Deepfakes

Reading Time: 3 minutes
Image: A human face being analyzed for signs of deepfake manipulation. West Hills, CA (Organization for Social Media Safety)

Imagine waking up one morning to a viral video of yourself saying something you never said: something offensive, harmful, or even illegal. Before you have time to react, the video is spreading online, destroying your career, reputation, or relationships. This is not a made-up scenario; it is the alarming reality of deepfake technology.

Deepfakes are hyper-realistic AI-generated videos, images, and audio recordings that have turned from a simple curiosity into a dangerous weapon of deception. Although the rise of artificial intelligence holds real potential for good, its misuse through deepfakes threatens both privacy and truth. If left unchecked, deepfakes could erode trust itself, because reality can now be manipulated with a few lines of code.

One of the most corrosive effects of deepfakes is their ability to undermine trust in what we see and hear. In the past, video evidence was considered reliable, especially in a courtroom, but today AI can manipulate footage so convincingly that humans are often unable to detect the fakes.

Deepfakes are especially harmful in politics. In 2022, a fake viral video of Ukrainian President Volodymyr Zelensky telling his soldiers to surrender to Russia circulated online. Although it was quickly debunked, the clip caused confusion at a critical moment. Similarly, in January 2024, many New Hampshire voters heard what sounded like President Joe Biden urging Democrats not to vote in the state's primary, which was just days away; the audio was later revealed to be a deepfake.

As deepfakes grow more sophisticated, it becomes harder to distinguish truth from fabrication. If deepfakes continue to threaten the credibility of audio and video evidence, skepticism and paranoia will grow, and it will become easy to dismiss genuine footage as "just another deepfake." The result is a society where no evidence can be trusted.

Beyond politics, deepfakes enable horrifying personal violations. Non-consensual deepfake pornography, in which someone's face is digitally superimposed onto explicit content, is on the rise. One high-profile case involved pop star Taylor Swift. On January 24, 2024, AI-generated explicit images depicting Swift at a football game spread like wildfire across social media, gaining over 45 million views. As in other cases of deepfake pornography, this was done entirely without her consent, showing how easily AI can be weaponized for harassment.

Deepfakes do not only target celebrities. Ordinary people have also been blackmailed, bullied, and impersonated in deepfake videos and audio. Scammers now use AI voice cloning to mimic loved ones in distress, tricking friends and family into sending large sums of money. Victims often have little legal recourse, as laws struggle to keep up with the pace of AI.

Some may argue that deepfake technology has beneficial applications, such as bringing historical figures to life for teaching, streamlining filmmaking, or creating personal digital avatars. While these benefits exist, the risks outweigh them.

The problem is not the technology itself; it is the lack of safeguards around it. Scammers and criminals exploit this evolving technology to harm others, and without strict regulation, even well-intended tools can be turned to fraud, defamation, and worse.

To mitigate the harm of deepfake technology, laws must criminalize malicious deepfakes and require AI-generated content to be watermarked, and schools and platforms should teach users to critically assess digital content. If we do not act now, we may find ourselves in a society where not even our own eyes can be trusted.

Written by Audrey Limowa
