AI News Roundup – U.S. Congress mulls moratorium on state-level AI regulations, U.S. Copyright Office releases report on copyright implications of AI training, Google and OpenAI release AI coding agents, and more
- May 20, 2025
- Snippets
To help you stay on top of the latest news, our AI practice group has compiled a roundup of the developments we are following.
- The Washington Post reports that the U.S. House of Representatives is considering a 10-year moratorium on state-level AI regulations. The 1,116-page budget bill currently under consideration contains a provision that would bar states from regulating AI models for 10 years after the bill is enacted. The AI industry has long sought such a moratorium, with OpenAI CEO Sam Altman claiming that the current “patchwork” of state-level laws would “slow us down at a time when I don’t think it’s in anyone’s interest.” Indeed, many AI industry figures have framed the regulatory fight in terms of competition with China, arguing that restrictions on the development and use of AI would cause the U.S. to fall behind in the AI race. Other groups are more skeptical: an analyst at Consumer Reports told the Washington Post that “Congress has long abdicated its responsibility to pass laws to address emerging consumer protection harms,” and that the bill “would also prohibit the states from taking actions to protect their residents.” The bill remains in the committee stage in the U.S. House.
- Axios reports that the U.S. Copyright Office has released the third part of its long-awaited report on copyright issues surrounding generative AI. The portion released this past week examines the training of generative AI models on copyrighted materials. AI companies have long argued that using copyrighted works to train AI models falls within the fair use exception in U.S. copyright law. The Copyright Office said that while some generative AI outputs are “transformative” of the original works (another key consideration in the fair use analysis), the mass acquisition of copyrighted materials for AI training likely does not qualify: “making commercial use of vast troves of copyrighted works to produce expressive content that competes with them in existing markets, especially where this is accomplished through illegal access, goes beyond established fair use boundaries.” The guidance comes as several major AI-related copyright infringement cases continue to make their way through the courts.
- WIRED and The New York Times report on Google’s and OpenAI’s new coding-focused AI agents, both released this past week. Google DeepMind unveiled AlphaEvolve, an AI agent built on the company’s Gemini models that pairs code generation with automated evaluators that test the generated code and score proposed ideas. According to the company, AlphaEvolve discovered a method for multiplying 4×4 complex-valued matrices using 48 scalar multiplications, improving on Strassen’s algorithm, a longstanding benchmark in linear algebra and in the matrix arithmetic that underpins AI (a brief sketch of the classic Strassen construction appears below). Separately, OpenAI revealed Codex, which the company says can perform several programming tasks in parallel, such as writing code, running tests, and handling code-repository tasks. OpenAI’s announcement follows news that the company is in talks to acquire Windsurf, the maker of another AI coding tool, for nearly $3 billion. The shift toward AI agents has been a major trend in the AI industry in recent months, and the release of competing tools from major AI companies suggests the trend is likely to continue.
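For context on the Strassen result: Strassen’s 1969 construction multiplies two 2×2 block matrices using seven block multiplications instead of the naive eight; applied recursively, it yields 49 scalar multiplications for 4×4 matrices, the figure AlphaEvolve’s 48 undercuts. The following is a minimal illustrative sketch of a single Strassen step in Python/NumPy; it shows only the classic technique, not DeepMind’s new method, and the function name and test values are our own.

```python
import numpy as np

def strassen_step(A, B):
    """One Strassen recursion step: multiply two even-sized square
    matrices using 7 block multiplications instead of the naive 8.
    Illustrative only; a full implementation would recurse on each
    block product."""
    n = A.shape[0] // 2
    A11, A12, A21, A22 = A[:n, :n], A[:n, n:], A[n:, :n], A[n:, n:]
    B11, B12, B21, B22 = B[:n, :n], B[:n, n:], B[n:, :n], B[n:, n:]

    # Strassen's seven block products
    M1 = (A11 + A22) @ (B11 + B22)
    M2 = (A21 + A22) @ B11
    M3 = A11 @ (B12 - B22)
    M4 = A22 @ (B21 - B11)
    M5 = (A11 + A12) @ B22
    M6 = (A21 - A11) @ (B11 + B12)
    M7 = (A12 - A22) @ (B21 + B22)

    # Reassemble the four blocks of the product C = A @ B
    return np.block([[M1 + M4 - M5 + M7, M3 + M5],
                     [M2 + M4, M1 - M2 + M3 + M6]])

# Quick check against NumPy's built-in multiplication
A = np.random.rand(4, 4)
B = np.random.rand(4, 4)
assert np.allclose(strassen_step(A, B), A @ B)
```

Each level of recursion replaces eight block multiplications with seven, which is how Strassen reaches 49 scalar multiplications for 4×4 matrices and why shaving off even one more, as AlphaEvolve reportedly does, is notable.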
- Reuters reports that AI company Anthropic has been accused of submitting AI-hallucinated information in a court filing. The lawsuit was brought by several music companies, including Universal Music Group, which allege that Anthropic improperly used their copyrighted song lyrics to train its Claude AI chatbot. A federal magistrate judge in San Jose, California, ordered Anthropic to respond after attorneys for the music companies claimed that an Anthropic employee provided an inaccurate citation in support of a pre-trial motion relating to evidence. In response, Anthropic submitted a declaration from an associate at Latham & Watkins LLP, the firm representing the company in the case, stating that the associate had used Claude to generate a legal citation for a genuine source, that the model returned a citation with an erroneous title and author, and that the error was not caught by the firm’s usual citation checks; the filing called it “an embarrassing and unintentional mistake.” It is unclear whether the company will face any penalties in the ongoing litigation.
- MIT Technology Review reports on a new AI system that law enforcement can use to track suspects. Several U.S. states have placed limits on law enforcement’s use of facial recognition, often citing privacy and security concerns around biometric data. Track, developed by California-based company Veritone, uses AI to follow people across video footage based on attributes such as “body size, gender, hair color and style, clothing, and accessories.” Use of Track has been growing throughout the United States, with customers including local and state police departments, universities, and even the federal Department of Justice. Critics such as the ACLU argue that the system is designed to circumvent facial recognition bans and presents the same privacy issues. Veritone CEO Ryan Steelberg told the Technology Review that “if we’re not allowed to track people’s faces, how do we assist in trying to potentially identify criminals or malicious behavior or activity?” Another Veritone executive said that Track is not a general surveillance tool but a way to speed up the analysis of surveillance video in investigations.