The rapid advancement of generative artificial intelligence (AI) has ushered in a new era of digital deception, in which deepfakes—highly realistic but entirely fabricated images, videos, and audio—pose profound challenges to society. These AI-generated forgeries are not mere internet curiosities; they threaten the fabric of social order, democracy, and individual dignity. As deepfake technology becomes more accessible and sophisticated, it erodes our ability to distinguish truth from falsehood, jeopardizing both critical thinking and self-governance. Addressing this growing crisis requires legal frameworks that protect individuals’ rights to their own image and voice, ensure accountability, and preserve democratic integrity.
One country leading the battle against deepfakes is Denmark. In June 2025, the Danish government proposed a groundbreaking amendment to its copyright law that would grant people explicit rights over their own likenesses and voices. Under the proposal, creating a deepfake of someone without consent would be illegal and would expose violators to liability. The approach enshrines a simple yet powerful principle: individuals own their own image. By framing the issue within copyright law, Denmark also harnesses strong corporate incentives for compliance, since companies are wary of the consequences of copyright infringement.
The effectiveness of this strategy is underscored by a revealing experiment conducted by researchers in 2024. They posted 50 nude deepfake images on the social media platform X, reporting half as copyright violations and the other half as nonconsensual nudity under X’s content policies. The platform promptly removed every post reported under copyright but ignored those flagged as privacy violations. The disparity shows that legal rights, particularly copyright, prompt far more decisive action from platforms than privacy complaints alone. Denmark’s proposal exploits this dynamic, giving victims of deepfakes a clear legal pathway to have harmful content removed and to seek compensation for the tangible harms these fabrications cause.
The harm inflicted by deepfakes is far from theoretical. Creators often use them to extort money from their victims or to coerce and control them. Tragically, some victims have been driven to suicide, as in documented cases of teenage boys targeted by scammers. The majority of victims, however, are women and girls. Research indicates that 96 percent of deepfakes are produced without consent, and that 99 percent of sexual deepfakes depict women. This disturbing pattern reflects deep-seated gender-based abuse and online harassment, amplified by AI technology.
The problem is widespread and escalating. A global survey of more than 16,000 people across ten countries found that 2.2 percent, more than one in fifty respondents, had experienced deepfake pornography. Meanwhile, the Internet Watch Foundation recorded a staggering 400 percent increase in web pages containing AI-generated deepfake child sexual abuse material in the first half of 2025 compared with the same period in 2024. Reported videos of such abuse rose even more sharply, from just two in early 2024 to more than 1,200 in early 2025, revealing the scale and sophistication of these AI-generated crimes. Many of these videos are so realistic that they are indistinguishable from genuine footage, magnifying their potential for harm.
Beyond personal victimization, deepfakes pose a severe threat to democratic processes. In the months leading up to the 2024 U.S. presidential election, a deepfake video of Vice President Kamala Harris circulated on X, falsely portraying her as a “diversity hire” unfamiliar with governance. Although the content violated the platform’s own synthetic-media policies, Elon Musk, X’s owner, dismissed it as parody and allowed the misleading video to remain online. Such incidents illustrate how deepfakes can be weaponized to spread disinformation, erode public trust in leaders, and manipulate electoral outcomes.
Financial security is also at risk. In 2024, criminals used deepfake technology to impersonate company executives on a live video call, tricking an employee in Hong Kong into transferring approximately $25 million to fraudulent accounts. According to a report by Resemble.ai, a company specializing in AI voice technology, there were 487 documented deepfake attacks in the second quarter of 2025 alone, a 41 percent increase over the previous quarter, with estimated losses of $347 million in just three months. These figures expose the economic vulnerabilities that synthetic media creates.
In response to these threats, the United States has begun to take legislative action. The bipartisan TAKE IT DOWN Act, passed in 2025, criminalizes the publication of nonconsensual intimate images, including AI-generated deepfakes, and requires online platforms to remove such content within 48 hours of a victim’s request.
