News

Update on AI Research and DMCA Section 1201 Exemptions

On October 25, 2024, Librarian of Congress Carla Hayden adopted a final rule under Section 1201 of the Digital Millennium Copyright Act (DMCA). The rule, which took effect on October 28, 2024, granted several exemptions but denied a proposed exemption for “trustworthiness” research on generative AI models. OpenPolicy advocated for this exemption, which would have allowed researchers to circumvent digital safeguards to investigate AI outputs for harmful biases, explicit content, and other infringing material. Following our advocacy, NTIA supported the proposed exemption, and the Copyright Office expressed support for the concept while deferring to Congress.

The rejected exemption would have allowed researchers to circumvent technological protection measures (TPMs) to probe AI models for bias, explicit content, and other harmful outputs. Proponents argued that such an exemption is critical for transparency and accountability in AI development, yet it was denied despite support from NTIA, the DOJ, and members of Congress.

OpenPolicy, alongside others, argued that Section 1201’s restrictions limit valuable research on AI’s potential harms: by barring circumvention of authentication and security measures, the statute prevents researchers from fully investigating the integrity and risks of AI systems. These comments highlighted the importance of such research in identifying biases in AI-generated content.

NTIA’s Input: The National Telecommunications and Information Administration (NTIA) supported a modified version of the exemption modeled on the existing security research exemption, which would allow circumvention without requiring lawfully acquired devices or system-owner authorization. NTIA emphasized that the restrictions imposed by Section 1201 have a “chilling effect” on researchers, limiting their ability to conduct necessary investigations into AI trustworthiness.

Despite recognizing the value of AI trustworthiness research, the Register of Copyrights recommended denying the exemption, and the Librarian concurred, explaining that the challenges identified by proponents stem from platform controls rather than from TPMs regulated by Section 1201. They emphasized that broader Congressional and regulatory action is more appropriate for addressing these concerns, given the limits of Section 1201’s scope. OpenPolicy will continue to advocate for AI red teaming!

The advocacy of the research community remains crucial: legal and policy discussions advance only when stakeholders actively engage. Community members who bring their unique perspectives to the conversation are instrumental in shaping future policy. The AI research community’s voice in forums and future proceedings will be essential in driving legislative efforts to create a safe harbor for AI trustworthiness research, helping build an ecosystem that fosters transparency, security, and accountability.

If you have any questions about the rule or are interested in sharing insights from your AI research team—or simply wish to learn more—please don’t hesitate to reach out.

Latest Resources

AWS

OpenPolicy's Conversations Episode #6 (Podcast)

OpenPolicy's Conversations Episode #5 (Podcast)

Your Gateway to Regulatory Insights and Advocacy is Now Open.

Our platform revolutionizes how innovators anticipate and shape policy by offering tech-enabled, AI-powered policy intelligence and active policy engagement, democratizing the opportunity for anyone to take a seat at the decision-making table.

Get a Demo