Proposal for AI Training Compensation & Transparency Act
This policy proposal was written for PR-501: Public Policy Communication in Spring 2026.
Title: The AI Training Compensation and Transparency Act: A Legislative Strategy for Creator Protection and Industry Certainty
Introduction
In late 2022, Polish digital artist Greg Rutkowski discovered that his name had been used as a prompt in Stable Diffusion’s image generation model approximately 93,000 times.1 By that time, searches for his name on the platform returned more algorithmically generated images than his actual work. Rutkowski had made his work particularly discoverable by adding detailed, descriptive alt text to his portfolio website—a practice intended to improve accessibility—which made his images especially easy for web scrapers to extract and process in datasets. Rutkowski requested removal from the training datasets; Stability AI complied. Within weeks, however, community members had created a LoRA model—a specialized adaptation layer—that replicated his artistic style without legal restriction, rendering his removal request moot.2 Rutkowski had no legal recourse.
Rutkowski’s situation is not isolated. As of 2026, thousands of creatives find themselves in similar positions: their work has been incorporated into generative AI training datasets without consent or compensation, existing copyright law does not clearly address this use, and courts are divided on whether the fair use doctrine extends to machine learning. Meanwhile, the stakes continue to rise. Legislation that provides both clarity and fairness is urgently needed.
This paper proposes the AI Training Compensation and Transparency Act (ATCTA), a federal legislative framework that establishes mandatory dataset disclosure for generative AI companies, creates an opt-out registry for creators, and establishes a blanket licensing fee structure modeled on the Music Modernization Act of 2018. The ATCTA represents a structured compromise: Creators receive compensation and control; AI companies receive legal certainty and safe harbor from litigation. The strategy outlined below demonstrates how this compromise can achieve sufficient policymaker support.
Thesis and Context
The AI Training Compensation and Transparency Act offers a workable, bipartisan framework that resolves the core tension between creator rights and AI industry development through a defined disclosure and licensing system. It is the most achievable legislative path available and represents a compromise that neither side can reasonably reject.
To understand why such legislation is necessary, a brief explanation of how AI training data acquisition works is essential. Generative AI systems learn from vast datasets compiled through web scraping. LAION-5B, one of the largest public training datasets, contains 5.85 billion images scraped from the internet without consent from the original creators or copyright holders.3 These datasets form the foundation of image generation models like Stable Diffusion and text models like those built by OpenAI. The creators whose work filled these datasets—photographers, illustrators, writers, musicians, and voice actors—received no notification, no choice in participation, and no compensation.
Existing copyright law does not address this scenario. The Copyright Act of 1976 was written for an era of photocopying and home recording. The Digital Millennium Copyright Act of 1998 was written for file sharing and digital rights management. Neither statute contemplated machine learning or anticipated that copyright-protected works would be extracted at scale to train proprietary commercial systems. The fair use doctrine, which might theoretically permit some training data use, remains unsettled in this context. In 2015, Authors Guild v. Google Books established that copying entire books for text search and analysis could constitute fair use, but courts have not definitively applied that precedent to generative AI training, particularly where the output is a competing commercial product.4
The ATCTA does not require courts to resolve the fair use question. Instead, it creates an alternative legal pathway that makes the question irrelevant. By establishing a defined licensing system—similar to how the music industry operates under the Music Modernization Act—the bill removes the incentive for litigation by making payment predictable and compensation automatic. For AI companies, this is preferable to the current situation, in which dozens of pending class-action lawsuits threaten unpredictable damages. For creators, it ensures compensation without the cost and delay of litigation.
The core provisions of the ATCTA are as follows:
1. Disclosure. Generative AI companies with annual product revenue exceeding $1 million must disclose to the Copyright Office the categories of copyrighted works included in their training datasets.
2. Opt-out registry. Any creator can register their work in a federal opt-out registry, which AI companies must consult before compiling new training data.
3. Licensing pool. Companies subject to disclosure requirements pay 1.5 percent of annual generative AI product revenue into a Creative Works Licensing Pool (CWLP) administered by the Copyright Office. The pool distributes quarterly payments to registered creators whose work is verifiably included in disclosed datasets.
4. Safe harbor. Companies that comply with disclosure and pay into the pool receive a statutory safe harbor from class-action litigation based on training data use.
5. Small-developer exemption. Developers generating under $1 million annually are exempt from the fee but must still honor opt-out requests.
6. Public domain works. Public domain works in training datasets generate pool fees directed to the National Endowment for the Arts.
7. Implementation. The implementation period is 18 months.
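To make the fee and distribution arithmetic concrete, the following is a minimal illustrative sketch. All company revenues, creator names, and work counts are hypothetical; the bill text specifies only the 1.5 percent rate, the $1 million exemption threshold, and quarterly proportional distribution.

```python
# Illustrative sketch of the CWLP fee and payout arithmetic.
# All figures below are invented for illustration, not drawn from the bill.

FEE_RATE = 0.015               # 1.5 percent of annual generative AI product revenue
REVENUE_THRESHOLD = 1_000_000  # companies under $1M annually are exempt from the fee

def annual_fee(revenue: float) -> float:
    """Fee owed by one company; zero if under the exemption threshold."""
    return revenue * FEE_RATE if revenue > REVENUE_THRESHOLD else 0.0

def quarterly_payouts(pool: float, verified_works: dict) -> dict:
    """Distribute one quarter of the annual pool proportionally to creators'
    verified works in disclosed datasets."""
    total = sum(verified_works.values())
    quarter = pool / 4
    return {creator: quarter * n / total for creator, n in verified_works.items()}

# Example: two companies above the threshold, one exempt small developer.
pool = sum(annual_fee(r) for r in [2_000_000_000, 400_000_000, 750_000])
payouts = quarterly_payouts(pool, {"photographer": 300, "illustrator": 100})
```

Under these assumed figures, the pool collects $36 million annually and each quarterly distribution is split in proportion to registered, verified works.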
This structure asks each party to give something up and gain something in return. Creators will relinquish the ability to pursue unlimited statutory damages for past training data use; they will accept proportional, pooled compensation. AI companies will relinquish the claim that training data use requires no payment; they will accept a defined, predictable cost of doing business. Both parties will gain legal certainty, reduced litigation exposure, and a workable framework for future development.
Background of Issue
The legal and practical foundation for the ATCTA emerges from four separate developments: the evolution of copyright law, the emergence of LAION-5B as a crisis point, recent litigation that demonstrates legal uncertainty, and the precedent of the Music Modernization Act.
Historically, copyright law’s treatment of technology has lagged behind innovation. The Copyright Act of 1976 extended protection to new media but was still designed for an era in which copying required deliberate human action and carried material cost. The Digital Millennium Copyright Act of 1998 addressed the internet and digital distribution but focused on preventing consumer-level file sharing and protecting digital rights management systems. Neither statute anticipated that a company could acquire billions of copyrighted works without individual licensing agreements and that no court would definitively prohibit this practice.
The dataset LAION-5B crystallized this legal gap into a practical crisis. Released in 2022, LAION-5B contains 5.85 billion images, most of them scraped from the Common Crawl index of the internet without consent or licensing.3 This dataset formed the foundation for Stable Diffusion. LAION-5B demonstrated that the scale of data collection had reached a point where it could no longer be addressed through individual licensing. It was a fait accompli: billions of images, most protected by copyright, assembled without the copyright holders’ knowledge or permission.
This practical impossibility prompted litigation. In January 2023, Andersen v. Stability AI was filed in the Northern District of California, alleging that Stable Diffusion infringed the copyrights of millions of artists by training on their work.4 In February 2023, Getty Images filed a similar suit against Stability AI in Delaware, alleging that the company had scraped 12 million copyrighted Getty photographs without license or compensation.5 In December 2023, The New York Times filed suit against OpenAI and Microsoft, alleging that copyrighted articles were incorporated into training data without permission.6 As of 2026, these cases remain pending, with no definitive ruling on whether training data constitutes copyright infringement or whether the fair use doctrine applies. The legal uncertainty created by these ongoing cases has become the most serious liability facing AI companies.
The Music Modernization Act of 2018 provides a direct legislative precedent for resolving this type of uncertainty through licensing rather than litigation.11 Before 2018, the music streaming industry operated in a state similar to the current generative AI situation: Licensing agreements were fragmented across thousands of individual publishers and songwriters, and ambiguity around mechanical licensing obligations created constant litigation. The MMA established a blanket mechanical licensing system administered by the Mechanical Licensing Collective. Streaming services now pay a defined statutory rate. The system works—Spotify, Apple Music, and other streaming platforms operate profitably and legally; the music industry still generates licensing revenue; and songwriters receive compensation. The MMA demonstrates that a licensing framework does not destroy an industry; it allows an industry to grow sustainably within a regulated legal structure.
A second precedent emerged from labor negotiations. In July 2023, SAG-AFTRA began a strike that lasted through November 2023. The strike centered partly on AI use—specifically, on the right of studios to create digital replicas of actors’ likenesses without consent or compensation. On Dec. 5, 2023, union members approved the agreement with 78.33 percent support, including provisions requiring digital replica consent and creating compensation streams for AI-generated performances.7 SAG-AFTRA’s success demonstrated that organized labor could shift industry practice through structured negotiation. The agreement did not prohibit AI; it required consent and compensation. Studios accepted this framework because the cost of compliance was lower than the cost of extended striking.
The Tennessee ELVIS Act, signed into law on March 21, 2024, extended similar protections to digital replicas created through voice synthesis and deepfake technology.8 The ELVIS Act passed unanimously in both chambers of the Tennessee legislature—93 to 0 in the House and 30 to 0 in the Senate—indicating broad bipartisan recognition that AI-generated replicas require consent. These developments together—copyright law’s historical lag, LAION-5B’s crisis scale, pending litigation’s uncertainty, and the MMA’s successful precedent—create both the necessity and the political opportunity for the ATCTA.
Overview of Current Environment
As of April 2026, no comprehensive federal law governs AI training data sourcing or compensation. The landscape is characterized by fragmentation, uncertainty, and the growing risk of regulatory arbitrage. Litigation remains pending, with no Supreme Court ruling on the applicability of fair use to generative AI training. The resulting state of flux harms all parties: Corporations cannot plan with confidence, and creators cannot rely on courts to protect their interests.
The European Union has moved faster. The EU AI Act, which entered into force in August 2025, requires transparency in AI training data disclosure.9 Article 53(1)(d) mandates that AI companies provide information about copyrighted training data and its sources. However, the EU Act does not establish a compensation mechanism. Legal scholars have observed that transparency requirements alone cannot resolve the fundamental copyright tensions created by AI training, and that disclosure obligations must be accompanied by compensation mechanisms.10 This creates a competitive opportunity: The ATCTA’s compensation structure is more comprehensive than EU requirements, providing U.S. creators with protections the EU has not yet established, while giving U.S. AI companies more definitive legal clarity than EU disclosure alone provides.
At the state level, legislative fragmentation is accelerating. California, New York, and other states are developing their own AI and creator protection laws. Without federal preemption, companies will face a patchwork of inconsistent requirements. In a 2023 study, the Copyright Office recommended Congressional action to prevent precisely this outcome. The ATCTA, as a federal statute, would preempt state-level requirements and create a single national standard.
Within the AI industry, corporate positions have diverged. Adobe, through its Firefly model, chose to license training data upfront from creators and content libraries. Shutterstock has licensed its image database to OpenAI through a commercial agreement. These companies have demonstrated that licensing is compatible with competitive, profitable AI product development. By contrast, OpenAI, Stability AI, and Google have resisted disclosure and licensing agreements, arguing that training data use falls under fair use. This division within industry creates a political opening: Companies that have already licensed training data have a competitive interest in universal licensing requirements, which would level the playing field and impose costs on competitors that have avoided them. Legal scholars examining U.S. copyright law in this context have characterized it as “industry-oriented” relative to approaches adopted elsewhere, and have noted that no established framework exists for fair remuneration when generative AI systems are trained on copyrighted works.12
The Copyright Office maintains the infrastructure for administering collective licensing systems. Its existing role overseeing statutory licensing and royalty distribution provides direct precedent for administering the ATCTA. This institutional capacity is not hypothetical. It exists and can be extended to AI training data without creating new bureaucratic infrastructure. Creator organizations are mobilized. SAG-AFTRA and the Writers Guild of America have established legislative relationships. The Authors Guild has filed suit against OpenAI and maintains a sustained public position that AI training requires author consent.15 The political moment is time-sensitive: If courts rule that training data use constitutes fair use, legislative leverage would evaporate. The window for legislation that offers industry a compromise is open now.
Summary of Arguments For and Against Proposal
Arguments in Favor of the ATCTA:
Creators’ unpaid labor has generated substantial commercial value for AI companies. Generative image models trained on billions of photographs have displaced photographers from commercial work. Text models trained on copyrighted books generate value that publishers and authors do not share. Basic fairness suggests that commercial appropriation of labor should involve compensation for the laborer.
The 1.5 percent revenue fee is modest. Streaming services operating under the Music Modernization Act pay mechanical licensing rates ranging from 10.1 to 15.1 percent of service revenue, depending on configuration. By comparison, 1.5 percent is substantially lower. It is a cost that large AI companies can absorb and plan for. For companies like OpenAI and Stability AI, it is dramatically lower than their current litigation exposure—which includes statutory damages of up to $30,000 per infringed work under 17 U.S.C. § 504 in the cases currently pending against them.
The safe harbor provision resolves AI companies’ primary current liability. Companies that comply with disclosure requirements and pay into the pool receive statutory safe harbor from class-action lawsuits based on training data use, removing the single largest uncertainty facing the industry. A company can calculate its cost—1.5 percent of revenue—and know that it is protected from liability. The Music Modernization Act established that industrywide licensing does not prevent growth: the MLC has distributed billions in royalties without Spotify, Apple Music, or Amazon Music collapsing. The opt-out registry gives creators direct control over their work’s inclusion in future datasets. Finally, the Authors Guild has formally stated that AI training requires author consent and that publishers do not hold the right to license those rights on authors’ behalf.
Arguments Against the ATCTA:
The fair use objection remains the most serious legal challenge. If courts rule that generative AI training constitutes transformative fair use consistent with Authors Guild v. Google Books, mandatory licensing could be characterized as unconstitutional compelled speech or an unlawful taking. The attribution problem creates real operational difficulty: Unlike music streaming, where individual songs are tracked by play count, identifying which specific works influenced a model’s output is technically intractable. Distribution methodology will necessarily be imprecise, potentially directing funds primarily to creators in the largest, most documented datasets.
Furthermore, the compliance burden may accelerate market concentration. A 1.5 percent revenue fee combined with administrative disclosure requirements may be manageable for companies like OpenAI and Google, but prohibitive for a startup or small developer. If small and mid-size developers exit the market, the result would be greater concentration among large companies—the opposite of stated policy goals of promoting competition. The retroactive safe harbor also raises a justice concern: Companies that scraped billions of images without consent are shielded from liability by this provision, rewarding behavior that many creators and opponents of generative AI consider unethical. Regulatory arbitrage also remains a risk: If the United States imposes a revenue fee while other jurisdictions impose only transparency requirements, development may shift abroad, limiting the law’s practical reach.
Strategy
The core argument underlying every strategic message in this campaign is this: The ATCTA does not ask AI companies to admit wrongdoing. It does not ask Congress to take sides in pending litigation. It offers a deal—AI companies pay a defined, predictable, modest fee and receive legal certainty, while creators receive compensation and control. Everyone exits court. The bill makes the fair use question irrelevant by creating a voluntary compliance pathway that is cheaper than litigation for all parties.
Persuading Members of Congress
Members of Congress persuadable on the ATCTA are primarily moderate Democrats and Republicans concerned with both creative labor rights and technology competitiveness. The key committees are the Senate Judiciary Committee's Subcommittee on Intellectual Property, the House Judiciary Committee, and the Senate Commerce Committee.
For Democratic members, the ATCTA should be framed as a labor rights bill. The fundamental unfairness is that AI companies extracted creative labor—the intellectual work and property of artists, writers, musicians, photographers, and voice actors—and generated billions of dollars of commercial value without compensating the creators of that labor. The Music Modernization Act required streaming services to compensate songwriters for the value created by their work. This bill extends the same principle to visual artists, writers, and voice actors. The precedent is SAG-AFTRA’s 2023 contract. Union actors won the right to consent to digital replicas and to receive compensation for AI-generated performances. Writers and illustrators deserve equivalent protections.
For Republican members, the ATCTA should be framed as a property rights bill. An artist’s distinct style is intellectual property. A photographer’s portfolio represents a property interest. A voice actor’s vocal characteristics are protectable property. The opt-out registry and the licensing framework simply enforce existing property rights principles in a new technological context. Critically, the safe harbor provision reduces regulatory uncertainty and excessive litigation—both harmful to business. The alternative is court-by-court litigation, state-by-state fragmentation, and decades of uncertainty. The ATCTA provides legal clarity that benefits industry planning.
For members not firmly aligned with either framing, the Music Modernization Act is the most persuasive precedent. The MMA passed with near-unanimous bipartisan support in 2018 because it resolved a problem that was costing everyone: Streaming services were drowning in licensing disputes, songwriters were not being paid, and no one knew what to do about legal obligations. The ATCTA does exactly the same thing for generative AI. The cost to AI companies is 1.5 percent of revenue—substantially lower than music mechanical licensing rates under the MMA. The benefit is that 30-plus pending class-action lawsuits become moot. This is a bill about creating legal order, not about taking sides in a technological debate.
The most likely opposition argument from Congressional opponents is that the ATCTA would stifle innovation. This objection should be addressed directly and preemptively. Adobe’s Firefly was built on licensing training data. Adobe is profitable and competitive in the generative AI market. Licensing did not prevent Adobe from building a successful, innovative product. If the argument is that innovation requires the freedom to use other people’s work without compensation, that is not an innovation argument. That is a subsidy argument—the argument that AI innovation should be subsidized by creators who are not compensated for the use of their work. Congress should not accept that framing.
Bringing AI Companies to the Table
The primary AI companies currently resisting licensing requirements are OpenAI, Stability AI, and Google. Their stated objection is that training data use constitutes fair use and should not require licensing. Their actual concern is cost and legal precedent.
The ATCTA’s safe harbor provision is designed specifically to move companies from opposition to negotiation. Without the ATCTA, generative AI companies face pending class-action lawsuits with statutory damages of up to $30,000 per infringement under 17 U.S.C. § 504. Getty Images alone alleges 12 million infringed photographs, creating potential exposure in the hundreds of billions of dollars. The Authors Guild represents thousands of writers whose works may have been incorporated into text training datasets.15 The litigation exposure is not theoretical; it is documented in actual pending cases. For any large AI company, the expected value of litigation exposure is substantially higher than 1.5 percent of annual revenue. The message to industry is direct: The ATCTA offers legal certainty that no court ruling can provide. Even a favorable fair use ruling in one case does not stop the next lawsuit. The bill lets a company set a compliance budget, receive a statutory safe harbor, and exit litigation. Adobe chose to license upfront. Stability AI is still in litigation years later.
When AI companies object that 1.5 percent is too high, the appropriate response is that the rate is negotiable. The ATCTA can establish a range—say 0.5 to 2 percent—with the specific rate set by the Copyright Royalty Board through formal rulemaking, the same process that governs music mechanical licensing rates under the MMA. This gives industry a seat at the table in setting the actual rate through a transparent administrative process, which is more influence than any company currently has in litigation. When companies object that attribution is too complex, the response is that the bill does not require per-work attribution; it requires disclosure of dataset categories and sources. Distribution from the pool is proportional across registered creators in the relevant media categories, the same way music performing rights organizations distribute blanket license revenue without song-by-song tracking. The methodology mirrors an existing, functioning system.
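The category-proportional distribution described above can be sketched concretely. In this hypothetical illustration, the pool is first split across disclosed media categories and then distributed proportionally to registered creators within each category, mirroring how performing rights organizations distribute blanket license revenue without per-play tracking. The category names, weights, and registrant counts are all assumptions for illustration; the bill would leave these to Copyright Office rulemaking.

```python
# Hypothetical sketch: pool revenue split across disclosed media categories,
# then distributed proportionally within each category by registered work count.
# All category weights and registrant figures are invented for illustration.

def distribute_by_category(pool, category_weights, registrants):
    """category_weights: each category's share of the pool (sums to 1).
    registrants: {category: {creator: registered_work_count}}."""
    payouts = {}
    for category, weight in category_weights.items():
        category_pool = pool * weight
        total_works = sum(registrants[category].values())
        for creator, works in registrants[category].items():
            payouts[creator] = category_pool * works / total_works
    return payouts

weights = {"images": 0.5, "text": 0.3, "audio": 0.2}
regs = {
    "images": {"photographer_a": 200, "illustrator_b": 50},
    "text": {"novelist_c": 10},
    "audio": {"voice_actor_d": 5},
}
payouts = distribute_by_category(1_000_000, weights, regs)
```

The point of the sketch is that no per-work output attribution is required: distribution turns only on registration counts within disclosed categories, which is administratively tractable today.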
The “good actor” wedge is a valuable tactical tool. Adobe and Shutterstock have already accepted that training data requires compensation. These companies have a competitive interest in universal licensing requirements, which would impose costs on competitors that have avoided licensing and level the playing field. Positioning them as industry supporters of the bill isolates OpenAI and Google as the holdouts protecting their cost advantage through the continued absence of regulation. That is a politically difficult position to defend publicly.
Mobilizing Creator Coalitions and Managing Skepticism
Artists and creators represent both the primary constituency and the group most likely to be skeptical of any compromise. Many creators feel that any settlement rewards companies that scraped their work without consent. They want litigation or prohibition, not accommodation.
The message to skeptical creators must be pragmatic. Without the ATCTA, creators have three options: private lawsuits, which can be expensive, slow, and uncertain; state-by-state laws, which are fragmented and unenforceable against large companies; or nothing. The ATCTA is not the ideal outcome for any individual creator, but it is the outcome that is actually achievable in the current political environment and can deliver real compensation and opt-out rights, rather than the theoretical possibility of a favorable court ruling years from now. SAG-AFTRA’s 2023 contract is the model. Actors wanted prohibition of AI-generated digital replicas. They accepted a consent-and-compensation framework because it was achievable, because it delivered enforceable protections, and because it delivered compensation now rather than a promise of a favorable outcome in the future.
For creators who believe the fee is too low, the concern should be acknowledged plainly. A 1.5 percent industrywide fee distributed across all registered creators will not make anyone wealthy. But zero percent is what creators currently receive. The pool grows with the industry: As generative AI revenue increases, the pool grows proportionally. The opt-out registry gives individual creators an alternative: Remove their work from future datasets entirely if they prefer control over compensation. The creator retains agency in either direction.
Individual creator stories are the public face of the campaign. Greg Rutkowski’s story—a respected professional artist whose name was used 93,000 times without compensation, whose request for removal was circumvented by a community workaround—translates a complex intellectual property argument into a clear moral claim. So do the stories of voice actors whose vocal patterns were cloned without consent, and of photographers whose images generated revenue for generative platforms while they received nothing. These human stories move members of Congress because they make the issue concrete, and they frame the bill as a floor—the minimum achievable protection—rather than a ceiling.
Coalition Structure
SAG-AFTRA and the Writers Guild of America have established legislative relationships with Congressional offices and have demonstrated capacity to mobilize members and press. Both organizations have experience negotiating AI provisions and can speak credibly about whether the ATCTA represents meaningful protection. The Recording Industry Association of America, ASCAP, and BMI operate existing collective licensing infrastructure and maintain long-standing relationships with the Copyright Office.13 From their experience, they can provide credible testimony about the mechanics of the Music Modernization Act and its actual effects on industry. The Authors Guild has filed suit against OpenAI and maintains a formal position that AI training requires author consent. The Graphic Artists Guild and the National Press Photographers Association represent visual creators and have the capacity to mobilize grassroots support and provide Congressional testimony. Adobe and Shutterstock participate as industry allies with competitive interests in universal licensing. The Copyright Alliance coordinates messaging among creator organizations and has established Congressional relationships. Academic intellectual property scholars provide peer-reviewed scholarship supporting the proposition that current law contains a gap and that licensing represents a workable solution.
Legislative Entry Point and Phased Implementation
The opt-out registry, maintained by the Copyright Office, should be introduced as a standalone provision first. It has the lowest political cost: it establishes a minimally invasive database—with precedent in the Do Not Call Registry—and requires companies to consult it before compiling new training data. It directly addresses creator control without addressing payment, making it suitable for broad early support. Once the registry passes, the licensing fee becomes a natural second phase. Anticipated opposition from AI companies should be pre-framed as a debate about whether companies that profit from creators’ work should pay for that work. State regulatory fragmentation provides a second argument for urgency: more states will pass their own AI laws if Congress does not act, creating a compliance landscape that is more burdensome for industry than a single federal standard.
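The registry consultation step is mechanically simple, which is part of its political appeal. The following minimal sketch illustrates the workflow; the registry interface, identifier scheme, and example URLs are assumptions for illustration, not features of the bill or of any Copyright Office system.

```python
# Minimal sketch of the opt-out consultation step: before a work is added
# to a new training dataset, the dataset compiler checks the federal registry.
# The API shape and work identifiers are hypothetical.

class OptOutRegistry:
    def __init__(self):
        self._opted_out = set()  # registered work identifiers (e.g., URL or hash)

    def register(self, work_id: str) -> None:
        """A creator opts a work out of future training datasets."""
        self._opted_out.add(work_id)

    def is_opted_out(self, work_id: str) -> bool:
        return work_id in self._opted_out

def filter_candidates(registry: OptOutRegistry, candidates: list) -> list:
    """Drop any candidate work whose creator has opted out."""
    return [w for w in candidates if not registry.is_opted_out(w)]

registry = OptOutRegistry()
registry.register("https://example.com/artist/portfolio-image-01")
kept = filter_candidates(registry, [
    "https://example.com/artist/portfolio-image-01",
    "https://example.com/other/photo-02",
])
```

As with the Do Not Call Registry, compliance is a lookup performed before the regulated activity, not an ongoing monitoring obligation, which keeps the burden on compilers low.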
Summary
The AI Training Compensation and Transparency Act represents a structured compromise between two positions that cannot otherwise be reconciled through litigation. On one side, creators whose intellectual labor has been appropriated without compensation or consent deserve redress. On the other side, AI companies need legal boundaries to operate and plan. Litigation provides neither side with a solution; it provides only years of court decisions and unpredictable outcomes.
The Music Modernization Act of 2018 resolved a structurally identical problem in music streaming. That act did not destroy Spotify, nor did it prevent innovation in music distribution. The framework provides operating space for both streaming services and artists; it has operated successfully for years and distributes billions in royalties annually.
The strategy to pass the ATCTA depends on three elements. First, a message that frames this as fair compensation for creative labor and as legal certainty for business planning, rather than as anti-technology regulation. Second, a coalition anchored by organized creative labor—SAG-AFTRA, the Authors Guild, the Graphic Artists Guild—and by the collective licensing organizations that have experience managing statutory licensing. Third, a phased legislative approach that begins with the less-controversial opt-out registry provision, building toward the full licensing structure after that foundation is established. The political window is open now. Courts have not yet ruled definitively on fair use in generative AI, and Congress retains the leverage to structure a solution for both creators and industry—through the ATCTA.
Notes
1. Tom Simonite, “This Artist Is Dominating AI-Generated Art. And He’s Not Happy About It,” MIT Technology Review, September 16, 2022, https://www.technologyreview.com/2022/09/16/1059598/this-artist-is-dominating-ai-generated-art-and-hes-not-happy-about-it/.
2. Decrypt, “Greg Rutkowski Was Removed From Stable Diffusion, But AI Artists Brought Him Back,” https://decrypt.co/150575/greg-rutkowski-removed-from-stable-diffusion-but-brought-back-by-ai-artists.
3. Copyright Alliance, “Takeaways from the Andersen v. Stability AI Copyright Case,” https://copyrightalliance.org/andersen-v-stability-ai-copyright-case/.
4. Authors Guild, Inc. v. Google, Inc., 804 F.3d 202 (2d Cir. 2015); CourtListener, Andersen v. Stability AI Ltd., 3:23-cv-00201 (N.D. Cal. 2023), https://www.courtlistener.com/docket/66732129/andersen-v-stability-ai-ltd/.
5. BakerHostetler, “Getty Images v. Stability AI,” https://www.bakerlaw.com/getty-images-v-stability-ai/.
6. TechCrunch, “The New York Times Wants OpenAI and Microsoft to Pay for Training Data,” December 27, 2023, https://techcrunch.com/2023/12/27/the-new-york-times-wants-openai-and-microsoft-to-pay-for-training-data/.
7. SAG-AFTRA, “SAG-AFTRA Members Approve 2023 TV/Theatrical Contracts Tentative Agreement,” December 5, 2023, https://www.sagaftra.org/sag-aftra-members-approve-2023-tvtheatrical-contracts-tentative-agreement.
8. Tennessee Governor’s Office, “Photos: Gov. Lee Signs ELVIS Act Into Law,” March 21, 2024, https://www.tn.gov/governor/news/2024/3/21/photos--gov--lee-signs-elvis-act-into-law.html.
9. European Parliament and Council, “Regulation (EU) 2024/1689 on Artificial Intelligence,” Article 53(1)(d), https://artificialintelligenceact.eu/article/53/.
10. Adam Buick, “Copyright and AI Training Data—Transparency to the Rescue?” Journal of Intellectual Property Law & Practice 20, no. 3 (2025): 182–192, https://doi.org/10.1093/jiplp/jpae102.
11. Music Modernization Act, Pub. L. No. 115-264, 132 Stat. 3676 (2018), https://www.congress.gov/115/plaws/publ264/PLAW-115publ264.pdf.
12. Hong Wu, “Copyright Protection During the Training Stage of Generative AI: Industry-Oriented U.S. Law, Rights-Oriented EU Law, and Fair Remuneration Rights for Generative AI Training Under the UN’s International Governance Regime for AI,” Computer Law & Security Review (2024), https://doi.org/10.1016/j.clsr.2024.106056.
13. “American Society of Composers, Authors and Publishers,” Wikipedia, https://en.wikipedia.org/wiki/American_Society_of_Composers,_Authors_and_Publishers.
14. “Authors Guild Reinforces Its Position on AI Licensing,” Publishers Weekly, https://www.publishersweekly.com/pw/by-topic/industry-news/licensing/article/96745-authors-guild-reinforces-its-position-on-ai-licensing.html.
15. CourtListener, Authors Guild v. OpenAI Inc., 1:23-cv-08292 (S.D.N.Y. filed Sept. 19, 2023), https://www.courtlistener.com/docket/67810584/authors-guild-v-openai-inc/.
Bibliography
Andersen v. Stability AI Ltd., 3:23-cv-00201 (N.D. Cal. 2023). CourtListener. https://www.courtlistener.com/docket/66732129/andersen-v-stability-ai-ltd/.
Authors Guild, Inc. v. Google, Inc. 804 F.3d 202 (2d Cir. 2015).
Authors Guild v. OpenAI Inc., 1:23-cv-08292 (S.D.N.Y. filed Sept. 19, 2023). CourtListener. https://www.courtlistener.com/docket/67810584/authors-guild-v-openai-inc/.
BakerHostetler. “Getty Images v. Stability AI.” https://www.bakerlaw.com/getty-images-v-stability-ai/.
Buick, Adam. “Copyright and AI Training Data—Transparency to the Rescue?” Journal of Intellectual Property Law & Practice 20, no. 3 (2025): 182–192. https://doi.org/10.1093/jiplp/jpae102.
Copyright Alliance. “Takeaways from the Andersen v. Stability AI Copyright Case.” https://copyrightalliance.org/andersen-v-stability-ai-copyright-case/.
Decrypt. “Greg Rutkowski Was Removed From Stable Diffusion, But AI Artists Brought Him Back.” https://decrypt.co/150575/greg-rutkowski-removed-from-stable-diffusion-but-brought-back-by-ai-artists.
European Parliament and Council. “Regulation (EU) 2024/1689 on Artificial Intelligence.” Article 53(1)(d). https://artificialintelligenceact.eu/article/53/.
Music Modernization Act. Pub. L. No. 115-264, 132 Stat. 3676 (2018). https://www.congress.gov/115/plaws/publ264/PLAW-115publ264.pdf.
Publishers Weekly. “Authors Guild Reinforces Its Position on AI Licensing.” https://www.publishersweekly.com/pw/by-topic/industry-news/licensing/article/96745-authors-guild-reinforces-its-position-on-ai-licensing.html.
SAG-AFTRA. “SAG-AFTRA Members Approve 2023 TV/Theatrical Contracts Tentative Agreement.” December 5, 2023. https://www.sagaftra.org/sag-aftra-members-approve-2023-tvtheatrical-contracts-tentative-agreement.
Simonite, Tom. “This Artist Is Dominating AI-Generated Art. And He’s Not Happy About It.” MIT Technology Review, September 16, 2022. https://www.technologyreview.com/2022/09/16/1059598/this-artist-is-dominating-ai-generated-art-and-hes-not-happy-about-it/.
TechCrunch. “The New York Times Wants OpenAI and Microsoft to Pay for Training Data.” December 27, 2023. https://techcrunch.com/2023/12/27/the-new-york-times-wants-openai-and-microsoft-to-pay-for-training-data/.
Tennessee Governor’s Office. “Photos: Gov. Lee Signs ELVIS Act Into Law.” March 21, 2024. https://www.tn.gov/governor/news/2024/3/21/photos--gov--lee-signs-elvis-act-into-law.html.
Wikipedia. “American Society of Composers, Authors and Publishers.” https://en.wikipedia.org/wiki/American_Society_of_Composers,_Authors_and_Publishers.
Wu, Hong. “Copyright Protection During the Training Stage of Generative AI: Industry-Oriented U.S. Law, Rights-Oriented EU Law, and Fair Remuneration Rights for Generative AI Training Under the UN’s International Governance Regime for AI.” Computer Law & Security Review (2024). https://doi.org/10.1016/j.clsr.2024.106056.