December 30, 2024

Even if you haven’t been keeping up with the news, you’ve undoubtedly noticed the rise of artificial intelligence (AI) over the past few years. From ChatGPT to facial recognition technology, AI is becoming increasingly accessible, even to those of us without a computer science degree.

While many of us grew up using AI when playing chess against a computer, we can now generate entire essays in seconds using ChatGPT (The News-Letter does not recommend doing this). At the same time, scientists are harnessing the power of AI to accomplish great feats, such as creating self-driving cars and predicting protein structures.

Yet for all its usefulness, AI also harbors the potential to be used for far more sinister purposes. Last week, deepfake pornography of Taylor Swift circulated online, sparking public discourse on the potential dangers posed by AI. A deepfake is defined as “an image or recording that has been convincingly altered and manipulated to misrepresent someone as doing or saying something that was not actually done or said.”

The circulation of the explicit images has led many — including the CEO of Microsoft and the White House — to advocate for greater regulation of deepfakes by both tech companies and the federal government. Currently, no federal legislation regulating deepfake pornography exists. Only 10 states — including California, Florida, Texas and New York — have laws against deepfake pornography, with punishments ranging from fines to jail time. Maryland is notably absent from this list.

All but two states have laws against non-consensual pornography, but they are behind the times when it comes to regulating deepfake porn. It’s not a new phenomenon: In 2018, deepfake pornography of actress Scarlett Johansson was circulated on the internet and viewed over 1.5 million times. It is high time that lawmakers pick up their pens and write legislation prohibiting the proliferation of deepfake pornography.

The experience of finding non-consensual deepfake porn of oneself is not restricted to traditional Hollywood celebrities; streamers Sweet Anita and Pokimane were also victims of deepfake pornography. Further, while deepfakes featuring public figures make headlines, solutions are harder to come by for victims with less fame, influence and money, including minors. The process of paying lawyers and finding ways to remove deepfake content from the internet requires resources that many do not have access to.

Legislators are meant to advocate for the people they represent, and as deepfake pornography becomes more common, more of their constituents are at risk of being targeted. A 2019 study found that 96% of deepfake content on the internet is non-consensual and sexually explicit, and that 99% of the victims depicted in that content are women, demonstrating that deepfake technology has quickly become a tool used to target, harass and sexualize women. The lack of legislation regarding deepfakes provides bad actors with the opportunity to freely spread harmful and damaging content without legal consequences.

On both the state and federal levels, legislators need to act with urgency to combat the proliferation of deepfake porn before it does even more damage.

While we wait for legal measures that will deter the creation of deepfake porn, researchers and tech companies can work together to curb the spread of such content on the internet. For example, X temporarily blocked searches for the phrase “Taylor Swift” on its platform. Further, some researchers are developing neural networks to detect deepfake videos and combat their growing influence.

The effort to stop the spread of deepfake pornography should be a collective one. With great power comes great responsibility. As society benefits from the advancements that AI has to offer, we must also confront the problems it creates. 

