Advances in technology have created a range of ethical and legal challenges, particularly around personal privacy. One significant concern is the nonconsensual sharing of intimate images, commonly referred to as “revenge porn.” Last year, South Carolina became the last state to enact a law against the practice.
With the rise of generative AI, a new dilemma has emerged: What are the implications when fabricated images appear convincingly real? The legal discourse surrounding these questions in the United States remains unsettled.
Meanwhile, the U.K. has threatened to ban X and Grok, platforms owned by Elon Musk, for allowing users to produce explicit deepfake images, including those of minors.
Other jurisdictions, including the European Union, China, and India, have enacted strict regulations on AI-generated content. Just last week, South Korea passed comprehensive legislation assigning responsibility for the misuse of generative AI.
In a notable development, the U.S. Senate approved a bill allowing victims of deepfake technology to initiate lawsuits; however, the House has yet to address the proposal.
In this conversation, Rebecca Tushnet, Frank Stanton Professor of the First Amendment at Harvard Law School and co-director of the Berkman Klein Center for Internet & Society, examines the complex legal landscape surrounding AI deepfakes and the regulatory hurdles they pose for the United States.
What makes AI deepfakes distinct?
The fantasy of seeing images of people without their clothes is not new. Historically, however, the means of creating and sharing realistic depictions were quite limited. What today’s tools have changed is the scale and realism that are now possible.
Today’s AI-generated images are realistic enough to heighten both the emotional distress and the invasion of privacy that victims experience, as well as the effect the images have on viewers.
Is generating deepfakes straightforward across AI models?
Not all prominent AI models permit the creation of deepfakes.
However, anyone with some computer science background can train ordinary models on explicit content, putting this capability within reach of people who, a few years ago, would not have had the skill to do so.
Even if Grok aligns itself with other major models in terms of content restrictions, producing such images will remain feasible—it will just be more challenging.
Additionally, people are experimenting with outrageous content, such as fabricating images of celebrities making offensive gestures. These types of manipulations may not have explicit sexual connotations, but they still pose challenges regarding deception and misrepresentation.
This situation complicates regulation in the U.S., where legislation predominantly addresses issues of falsity. Manipulated images that viewers can identify as doctored escape the regulatory framework, particularly since we typically do not restrict content unless it directly harms or misuses an individual’s image.
“In the U.S., the baseline responsibility is to remove intimate depictions when you’re informed about them.”
Rebecca Tushnet
What do current U.S. laws permit and prohibit?
The legal landscape is quite chaotic. While there is some clarity regarding liabilities associated with distributing nude images—potentially involving false light or invasion of privacy claims—regulations concerning the tools used to generate these images remain murky.
Several states have laws targeting unfair and deceptive business practices, and there is an argument that offering such generative capabilities could be deemed unfair. However, most discussions of fairness focus on people as consumers rather than as human beings.
In essence, U.S. laws generally stipulate that once you are made aware of intimate images, you are responsible for removing them.
The degree of liability attached to the ability to create these images is relatively unexplored territory. Earlier technology did not put such tools in everyone’s hands; the skills required were typically confined to people with no interest in producing harmful content.
Moreover, producing explicit depictions of celebrities or private individuals does not usually yield significant financial profit. Society has not established a precedent that merely having the ability to create such content renders the underlying tool unlawful.
This is new ground, and given the current context, there may be an obligation to make these tools harder to access and misuse.
While there are frequently ways to circumvent restrictions, the legal question revolves around what measures, if any, developers must take to make misuse more challenging—a concept referred to as “guardrails” in AI.
Potential legal responses range from holding only the user accountable for what they do with the tool, akin to firearm laws, to imposing liability on the toolmaker for any misuse regardless of the precautions taken. Although the latter is discussed, it is not the standard framework for assigning liability, and what is expected of toolmakers remains undefined at this stage. Currently, no laws mandate such guardrails.
What are the key arguments for and against regulation?
Advocates of strict legal accountability for generating these images emphasize human dignity and the technology’s disproportionate impact on women and marginalized communities, who are often its targets in disparaging depictions. This is a compelling argument.
On the other hand, proponents of caution argue that sexualized mockery has long been a form of political expression. Before the French Revolution, for instance, sexually charged, politically pointed images of figures like Marie Antoinette circulated widely. These advocates caution against suppressing legitimate speech or personal expression, noting that private actions should generally escape government scrutiny unless shared publicly.
How do the EU and U.K. compare regarding AI regulation?
While nations are certainly crowding into a regulatory space, their approaches vary significantly. The First Amendment in the U.S. offers stronger free speech protections than those in many other countries. However, the complexities of this challenge often extend beyond the realm of speech rights.
In Europe, regulations are often broader, and entities that attempt good-faith compliance are generally treated as compliant by regulators. In the U.S., by contrast, private entities may face liability even for minor deviations from established legal obligations, which complicates any straightforward regulatory framework.
This situation makes clear that navigating the responsibilities associated with AI-generated content remains a significant challenge for lawmakers globally. As technology progresses, finding a balance between regulation and personal freedoms will be essential.