AI News Roundup – OpenAI releases new video generation model, California AI bill signed into law, AI actress sparks backlash in Hollywood, and more
- October 6, 2025
- Snippets
Practices & Technologies
Artificial Intelligence
To help you stay on top of the latest news, our AI practice group has compiled a roundup of the developments we are following.
- The New York Times reports on the release of OpenAI’s new video generation model, Sora 2. Released this past week as a smartphone app, Sora 2 is an improved version of the model released in February 2024 (as covered by this AI Roundup) that takes in text inputs and creates videos. The new version is much faster, can create more lifelike outputs, and can incorporate a user’s face if provided. Additionally, the app is structured like a short-video social media application, such as TikTok or Instagram Reels, that allows users to swipe through and interact with content created by others and provides suggestions according to an algorithm. The NYT writers were able to generate lifelike videos of themselves skydiving with pizza parachutes and fighting against Ronald McDonald with burgers as weapons. Sora 2, currently available on an invitation-only basis, has been shown to generate videos containing copyrighted material, such as the iconic Pokémon Pikachu and Goku from the animated series Dragon Ball. This has sparked copyright concerns, especially among Japanese companies. The Wall Street Journal has reported that OpenAI required companies to opt out if they did not want their copyrighted content to appear in Sora’s outputs. One media conglomerate that appears to be taking advantage of the opt-out is the Walt Disney Company, as Sora 2 will refuse to generate videos containing characters such as Mickey Mouse, according to the Nikkei Asia article above. The NYT writers admitted that using the app, especially given how lifelike the outputs appeared, was “disconcerting,” and pointed in particular to the model’s potential to spread disinformation online.
- POLITICO reports that California Governor Gavin Newsom has signed into law SB 53, a wide-ranging AI regulation bill that appears to be the first measure of its kind in the U.S. This AI Roundup covered the bill and its contents earlier this month. Major AI companies OpenAI and Meta did not oppose the bill, and Anthropic endorsed the measure last month. An OpenAI spokesman said the company was “pleased to see that California has created a critical path toward harmonization with the federal government — the most effective approach to AI safety. If implemented correctly, this will allow federal and state governments to cooperate on the safe deployment of AI technology.” However, one tech lobbyist group, Chamber of Progress, whose partners include Apple, Google, and Nvidia, criticized the new law as sending “a chilling signal to the next generation of entrepreneurs who want to build here in California.” Other states, including New York, are awaiting gubernatorial action on similar AI regulations.
- The Los Angeles Times reports on the emergence of Tilly Norwood, an entirely AI-generated actress. Norwood, unveiled this past week at a film conference in Zurich, appeared in a parody video created by Xicoia, an AI talent studio founded by Dutch actor Eline Van der Velden. SAG-AFTRA, a major actors’ union, sharply criticized Norwood, dismissing it as a “character generated by a computer program that was trained on the work of countless professional performers.” SAG-AFTRA President Sean Astin said in an interview that Xicoia was “taking our professional members’ work that has been created, sometimes over generations, without permission, without compensation and without acknowledgment, building something new.” Van der Velden responded in an Instagram statement that “I see AI not as a replacement for people, but as a new tool,” describing Norwood as a demonstration of changing technology and saying that a talent agency was planning to sign it. The controversy is only the latest development in ongoing tensions between professionals in the film and television industry and corporate leaders who have been pressing to adopt AI in order to cut costs.
- The Financial Times reports on possible licensing deals between two major music labels and several AI companies. Universal Music and Warner Music, two of the three largest music companies in the world and home to artists such as Taylor Swift and Kendrick Lamar, may strike deals with several AI companies, including Suno, Udio, and Stability AI, to license their intellectual property for AI training. The record labels are seeking a “micropayment” structure similar to that used by music streaming services, and want the AI companies to develop an attribution system that can identify when songs owned by a specific company are used in AI generation in order to calculate such payments. A copyright infringement lawsuit brought against Suno by Universal, Warner, and fellow record giant Sony is still proceeding (as reported by this AI Roundup), though any licensing deal reached will likely include a settlement for past usage of music. AI-generated music has flooded online streaming services, and industry executives are hoping to avoid a tumultuous upheaval akin to the one Napster and LimeWire brought to the industry in the early 2000s.
- KPBS reports on the adoption of new AI tools by the police department of Chula Vista, California. The Chula Vista City Council voted unanimously this past week to purchase several AI tools from Axon, a maker of body-worn cameras, for nearly $1 million. One major tool included in the package, Draft One, analyzes audio from a police officer’s body-worn camera, then uses a modified version of ChatGPT to transcribe the audio and generate a draft police report. The officer may then review the report for accuracy and remove any sensitive information. The Chula Vista Police Department said that several officers had been testing the program, some of whom were able to complete reports “hours faster” than their colleagues who did not use Draft One. However, privacy advocates, such as the American Civil Liberties Union, have criticized AI tools in policing as “too untested, too unreliable, too opaque, and too biased to be used in criminal justice work.”