From Privacy Violations to Political Manipulation: The Legal Battle Against Deep Fakes

(Illustration by John DiJulio, University Communications)

The sudden explosion of AI into our daily lives has been overwhelming, to say the least. Technology most of us had never heard of was suddenly within our grasp with the growing popularity of generative AI tools. One such application of AI has been ‘Deep Fakes’: synthetic media, typically videos, images, or audio, created using AI to manipulate or generate realistic representations of people or events. And with the increasing sophistication of AI, it’s becoming harder to tell real material from fabricated material.

Most countries have laws that either explicitly or indirectly prohibit deep fakes, but as AI continues to evolve, it’s become harder to both identify deep fakes and hold the perpetrators accountable.

Around last year, Florida-born TikTok star Brooke Monk was subjected to the cruel side of AI when falsely generated, explicit photos of her were ‘leaked’ anonymously on Reddit. Leaking someone’s intimate photos is a crime in itself under revenge porn or non-consensual pornography statutes, which protect individuals from the unauthorized sharing of their private images. But what if these images did not exist in the first place?

When AI-generated content is used to create fake videos or images, it violates an individual’s privacy and autonomy. Brooke Monk’s situation highlights the urgent need for stronger regulations and increased awareness regarding the misuse of AI technology.

Additionally, in recent years, AI-generated videos have been used to fabricate speeches and statements by political leaders, making it appear as though they’ve said or done things they never did. This manipulation can have serious consequences, as it can alter public perception and sway political opinions, especially during election campaigns. In many cases, by the time a video is debunked, the damage is already done.
But what measures can legal systems take to counter this rising threat? The answer isn’t as straightforward as one would hope. The anonymity that digital crimes afford perpetrators makes it difficult to gather enough evidence to prosecute them. Additionally, the borderless nature of the internet means that deep fake creators can operate from regions with lax or no regulations, making it challenging to enforce laws across different jurisdictions.

However, to quote Justice Thurgood Marshall: “The law is a reflection of life; as life changes, so must the law.” We can be sanguine about the eventual establishment of stable legislation governing the threat of AI. Until then, other solutions we could adopt include holding social media platforms accountable, implementing digital watermarking and source verification, and enhancing international cooperation and jurisdictional collaboration.

Written by Ananya Nambiar
