AI News Roundup – Music labels sue AI music generators, OpenAI creates CriticGPT, AI-generated Olympic coverage, and more

Article co-written by Yuri Levin-Schwartz, Ph.D., a law clerk at MBHB.

To help you stay on top of the latest news, our AI practice group has compiled a roundup of the developments we are following.

    • A group of music labels has sued two AI music generators for copyright infringement, according to a report from WIRED. The suits, brought by a consortium including Universal Music Group, Warner Music Group and Sony Music Group against Suno and Udio, were filed in U.S. federal courts in Massachusetts and New York, respectively. The labels seek damages of up to $150,000 per work infringed for copyright violations on a “massive scale.” Suno and Udio have not publicly disclosed what data they trained their generators on, but observers noted their ability to generate music that “bears a striking resemblance to copyrighted songs.” Despite the allegations, record labels have engaged with AI companies on various projects; the labels insist, however, that licensing must be the path forward. The Suno complaint says that “there is room for AI and human creators to forge a sustainable, complementary relationship…[that] can and should be achieved through the well-established mechanism of free-market licensing that ensures proper respect for copyright owners.”
    • OpenAI has developed a “CriticGPT” model to evaluate and critique the outputs of its ChatGPT AI model, moving beyond the reinforcement learning from human feedback (RLHF) process used to fine-tune models and to prevent abusive or dangerous outputs. OpenAI researcher Nat McAleese told IEEE Spectrum that RLHF is becoming more difficult as models grow more complex and humans struggle to judge which outputs are best. So, McAleese says, the company turned to AI to assist the human reviewers. The new CriticGPT model has excited the researchers, who said that “if you have AI help to make these judgments, if you can make better judgments when you’re giving feedback, you can train a better model.” Preliminary results were encouraging. According to the company’s preprint paper, CriticGPT caught around 85% of bugs, while human reviewers alone caught only around 25%. While the paper focuses on the use case of short code blocks, it represents the company’s first major AI safety effort since the May 2024 departures of OpenAI cofounder Ilya Sutskever and AI safety researcher Jan Leike, the latter of whom claimed the company was not taking AI safety seriously.
    • The Washington Post reports that NBC’s coverage of the 2024 Olympic Games in Paris will feature an AI-generated voice clone of famed announcer Al Michaels. Although Michaels left NBC’s broadcast booth in 2022, the network, with Michaels’ permission, will use AI to recreate his voice for narrating personalized Olympic highlight packages on its streaming platform, Peacock. Viewers will be able to select their preferred sports and coverage types for a morning roundup of the previous day’s events, which will be narrated by the AI version of Michaels. In an interview with Vanity Fair, Michaels said that, upon hearing the generated voice, “frankly, it was astonishing. It was amazing. And it was a little bit frightening.” AI clones of actors and other media performers have become a flashpoint in recent months, especially after AI concerns were a key part of the Hollywood writers’ and actors’ strike in 2023.
    • Bloomberg reports that OpenAI has banned access to its AI products in China, leading to a scramble among Chinese AI developers to fill the sudden vacuum. The move, set to take effect in July, has prompted major Chinese tech companies like Baidu, Alibaba and Tencent to offer incentives and discounts to attract developers who previously relied on OpenAI’s tools. It is expected to reshape China’s AI landscape, potentially eliminating smaller startups and consolidating power among larger players. OpenAI’s ban highlights the growing technological divide between China and the U.S., as it coincides with increased U.S. efforts to restrict Chinese access to advanced AI and semiconductor technologies, as this roundup reported earlier this month.
    • The city of San Jose, California, in the heart of Silicon Valley, has led the charge in municipal use of AI technologies, according to The Wall Street Journal. The city is exploring various applications of AI to improve government services, including using cameras to detect road hazards, assisting staff with document review and analysis, and potentially streamlining permit processes. Khaled Tawfik, San Jose’s chief information officer, emphasized the importance of developing guidelines and safety nets for government AI use to address concerns about privacy and equity, saying that “there’s a big gap between how technology is evolving and how government can build the safety net and the guidelines to develop a responsible and purposeful use of AI.” To support this goal, the city launched the GovAI Coalition in November of last year to establish standards for responsible AI use in government, with over 300 local, county, state and federal agencies participating. Tawfik told the Journal that the city’s most effective uses of AI have been for object detection in support of safety goals, such as detecting potholes, graffiti, overgrown vegetation and cars parked in bike lanes. “The goal,” Tawfik said, “is to see if we can change our model from being reactive to being proactive. Can we identify potholes so we can address them before people call us to report an issue?”