A recent legal battle in the United Kingdom has added a significant chapter to the evolving global discussion about the intersection of artificial intelligence (AI) technology and copyright law. This case, involving Getty Images and Stability AI, highlights the complex challenges courts face in defining the boundaries of what AI companies can lawfully do with human-created creative content. The outcome of this lawsuit, while nuanced and limited in scope, could influence how AI tools operate and what kind of content they can provide to users.
The case centered on Getty Images’ claim that Stability AI, the company behind the widely used Stable Diffusion image-generation models, had infringed on its copyright and trademark rights. Getty alleged that Stability AI had unlawfully scraped and used a vast trove of Getty’s images—human-produced creative works—from the internet to train its AI models without permission or compensation. This accusation raised critical questions about the legality of using copyrighted materials as data for training generative AI systems, a topic that courts around the world are still grappling with.
On Tuesday, Justice Joanna Smith of the UK High Court ruled that Stability AI did not violate copyright law in the way it trained its models. She found that Stability AI's models did not store or reproduce Getty's copyrighted images, and that the company therefore did not infringe Getty's copyright protections. The judgment reflects the court's view that training an AI model on copyrighted images does not necessarily amount to copyright infringement when the images themselves are neither copied nor distributed as part of the AI's output.
However, the ruling was far from a total win for Stability AI. The court found that Getty had succeeded "in part" with its trademark claims: Smith ruled that Stability AI infringed Getty's trademarks by allowing users to generate images that resembled the iStock and Getty Images logos. In other words, while Stability AI's training methods did not violate copyright law, the company bore responsibility for the unauthorized appearance of Getty's trademarks in AI-generated images. Importantly, the court rejected Stability AI's argument that the users who generated these images should be held responsible, instead placing liability squarely on Stability AI as the model provider with control over the training data.
Justice Smith described her findings as both “historic” and “extremely limited” in scope. This carefully worded characterization underscores the current legal uncertainties surrounding AI and copyright. It also echoes patterns seen in recent US court decisions, where judges have issued varied and sometimes contradictory rulings on similar issues. The lack of a unified legal consensus means that each case involving AI and copyright infringement is highly fact-specific and can lead to different conclusions depending on the jurisdiction, the precise claims made, and the evidence presented.
The Getty vs. Stability AI case is among the first major lawsuits involving a substantial content library accusing an AI company of unlawfully scraping copyrighted material from the web to train its models. Such cases are becoming increasingly common as AI companies require enormous datasets of human-generated content to develop and refine their generative technologies. In the United States, similar disputes have seen companies like Anthropic and Meta largely prevail against authors who claimed their books had been used without permission in AI training datasets. These precedents highlight the ongoing tension between the interests of content creators seeking control and compensation, and AI developers advocating for the freedom to use publicly available data to advance technological innovation.
The aftermath of Tuesday’s ruling has led both Getty Images and Stability AI to interpret the outcome as a form of victory for their respective positions. Getty hailed the judgment as a win for intellectual property owners, emphasizing the court’s recognition that Stability AI infringed on Getty’s trademarks when those marks appeared in AI-generated outputs. Getty also highlighted the court’s rejection of Stability AI’s attempt to shift liability to the users who generated the images, reinforcing the responsibility of AI companies to manage the training data and outputs of their models carefully.
On the other hand, Stability AI’s legal team framed the ruling as a resolution of the core copyright concerns that had been the focus of the trial. They pointed out that Getty had voluntarily dropped its primary copyright claims earlier in the process, leaving only secondary claims for the court to consider. Stability AI’s general counsel, Christian Dowell, noted that the final judgment effectively addressed these remaining copyright issues, allowing the company to continue operating its models in the UK without fear of infringing Getty’s copyrights.
Justice Smith was careful to emphasize that her ruling applies specifically to the facts, evidence, and legal arguments presented in this particular case, and should not be read as a sweeping pronouncement on AI and copyright more broadly.
