Jun 3, 2025

AI Copyright Wars: The Battle for Creative Control

Generative AI models learn from vast datasets that inevitably include copyrighted works. This raises two critical questions: (1) can protected material legally be used for AI training, and (2) who, if anyone, owns the AI-generated output?

Legal Complications: Lawsuits and Landmark Cases

High-profile lawsuits are increasingly shaping this debate. Getty Images recently sued Stability AI, alleging the AI company's generative tools unlawfully utilised millions of Getty's copyrighted images. A US federal judge allowed the copyright infringement claims to proceed, finding the alleged misuse of protected content substantial enough to advance.

Prominent artists, including Sir Elton John, have also publicly voiced opposition, arguing AI-generated music and imagery may undermine creators' rights and livelihoods. Elton John has called for clearer regulation to safeguard artists from exploitation.

Elton John: I would take government to court over AI plans | BBC News

Initially, tensions also arose between news media organisations and major AI platforms such as Google and OpenAI, with publishers raising concerns about the unauthorised use of their journalism to train language models. However, these disputes have evolved into constructive agreements: many media agencies have now signed licensing deals with AI companies. These arrangements typically allow content to be used as reference material in tools like ChatGPT, with proper attribution and direct links to publisher websites.

They also give publishers access to advanced AI tools to develop their own products and services, turning a legal flashpoint into an opportunity for innovation and growth.

Currently, courts rely on assessing "substantial similarity" between AI outputs and original works, a complex task given the extensive remixing inherent in AI-generated content.

Who Owns AI-Generated Content?

Ownership rights for AI-generated content remain ambiguous. Jurisdictions such as the UK and the US require human authorship for copyright protection. The UK Supreme Court underscored a parallel position in the landmark DABUS patent case, ruling that an AI system cannot be named as an inventor. Similarly, the US Copyright Office maintains that works produced entirely by AI, without significant human involvement, are not copyrightable.


Policy and the Path Forward

To move forward responsibly, we need policies that support both creativity and innovation. Here's how:

  • Fair Access with Compensation: Developers can train AI on data legally by paying for licences, while creators get fairly rewarded.
  • Different Rules for Different Uses: Research and experimentation might have more relaxed rules, but commercial uses should come with clear responsibilities like licensing or filtering outputs.
  • Track Where Content Comes From: Using metadata tags and watermarks, we can trace how and where AI-generated content originates, building trust and transparency.
  • Safe Testing Spaces: Governments can offer temporary "sandbox" approvals so startups can test AI tools without facing penalties right away. If problems arise, they can step in quickly.
  • Support for Creators: Training programmes and revenue-sharing models can help musicians, writers, and artists use AI tools creatively and get paid when their work is part of the process.
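The provenance idea above can be sketched in a few lines of code. This is a minimal illustration, not an implementation of any existing standard (such as C2PA Content Credentials): the field names, the `example-model-v1` identifier, and the `getty:12345` source ID are all hypothetical. The core idea is simply pairing a content hash with generation metadata, so any later change to the content invalidates the tag.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_provenance_tag(content: bytes, model_name: str, source_ids: list) -> dict:
    """Build a simple provenance record for a piece of AI-generated content.

    The record pairs a SHA-256 hash of the content with metadata about how
    it was produced, so downstream consumers can verify origin claims.
    """
    return {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": model_name,          # hypothetical model identifier
        "licensed_sources": source_ids,   # IDs of licensed training/reference works
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }

def verify_provenance(content: bytes, tag: dict) -> bool:
    """Check that a provenance tag still matches the content it describes."""
    return hashlib.sha256(content).hexdigest() == tag["content_sha256"]

# Example: tag a generated asset (represented here as raw bytes)
generated = b"...generated image bytes..."
tag = make_provenance_tag(generated, "example-model-v1", ["getty:12345"])
print(json.dumps(tag, indent=2))
print(verify_provenance(generated, tag))          # True: content untouched
print(verify_provenance(generated + b"x", tag))   # False: content altered
```

In practice, real provenance schemes embed such records inside the file itself (or sign them cryptographically) so the tag travels with the content; this sketch only shows the hash-binding principle.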

By combining these approaches, we can protect creative rights while still allowing space for breakthrough ideas.

Key Takeaways:

  • For AI companies: Get ready for clearer rules. Start budgeting for licences, transparency tools, and ethical AI practices now to stay ahead.
  • For creators: Don't wait on the sidelines. Explore how to licence your work, opt out of training where needed, and use new AI tools to extend your creative reach.
  • For both sides: Building partnerships between developers and rights-holders isn't just smart; it earns trust and unlocks new opportunities.
  • Stay informed: Watch key lawsuits like Getty Images v. Stability AI and The New York Times v. OpenAI. These will shape future rules on data use and output rights.
  • Lead by example: The most successful organisations will be the ones that engage policymakers early, use data responsibly, and show how AI can support human creativity.

For Genfuture Lab’s readers, staying informed and proactively adapting to evolving legal and industry standards will be crucial as the landscape continues to change rapidly.


Mel Moeller

Chief AI Officer @ GenFutures Lab

London, UK

Formerly of Sky, BBC & HP, Mel is an AI thought leader who helps organisations rapidly adopt AI, drive measurable ROI, scale innovation, and foster AI literacy across diverse industries.
