AI News Roundup – Anthropic’s Claude used to automate cyberattacks, OpenAI loses copyright case in German court, AI cheating scandal rocks South Korean universities, and more

To help you stay on top of the latest news, our AI practice group has compiled a roundup of the developments we are following.

    • Anthropic’s Claude AI system has been used by Chinese hackers to carry out automated cyberattacks against foreign companies and governments, according to the Wall Street Journal. Anthropic’s internal cybersecurity investigators disclosed their findings this past week in a blog post, claiming that the hackers had manipulated Claude into attempting to break into a variety of targets around the world, including tech companies, banks, and government agencies, and that “this is the first documented case of a large-scale cyberattack executed without substantial human intervention.” AI has been used to commit cyberattacks in the past, though the scale of these attacks and the limited human involvement in them appear to be beyond anything done before. Anthropic also said that, following the attacks, it updated its threat detection methods to deter attackers from using Claude for similar tasks in the future.
    • Deutsche Welle reports on a recent court loss by OpenAI related to copyright concerns over the training of its AI models. Judge Elke Schwager at the Munich District Court ruled that OpenAI’s ChatGPT model infringes Germany’s “authors’ rights” laws when it outputs song lyrics without license fees having first been paid, and that OpenAI would be required to pay an unspecified sum in damages. Authors’ rights law is distinct from American copyright law in that the former centers on the author and treats the rights as non-transferable, rather than as property that can be assigned to an owner. In response, OpenAI claimed that ChatGPT does not copy training data as part of the process, and that ChatGPT outputs containing copyrighted material would be the responsibility of ChatGPT users rather than the company. While the case concerned only German law, advocates for the plaintiffs claimed that it could influence law on the subject across the entire European Union. OpenAI is considering an appeal of the ruling.
    • Several top universities in South Korea are grappling with a growing number of students cheating on exams using AI, according to the New York Times. Professors at the country’s top schools, including Yonsei University, Seoul National University, and Korea University, reported that dozens of students had used AI to cheat on online examinations, including in a course whose subject matter was ChatGPT itself. Over 90 percent of South Korean undergraduates with generative AI experience have used those tools on schoolwork, and several professors voiced concerns over how AI is affecting the educational process. One said AI “is a tool for retaining and organizing information so we can no longer evaluate college students on those skills,” and encouraged testing of creative skills that AI cannot replicate. This has not deterred many students, however, who said they will continue to use the technology despite professors’ admonitions against it.
    • Axios reports on an AI-generated song that has topped the most-downloaded charts in the U.S. Breaking Rust, a blues-country act depicted as a cowboy, is completely AI-generated, though credited to Aubierre Rivaldo Taylor. Breaking Rust’s song “Walk My Walk” topped Billboard’s Digital Song Sales chart and has garnered over 3 million streams on Spotify. Several other songs on the same chart are also AI-generated. However, that chart tracks only paid music downloads; unlike Billboard’s Hot Country Songs chart, it does not incorporate streaming and radio numbers. Regardless, the rise of AI music has worried some in the music industry, especially in Nashville, the epicenter of country music, as AI-related copyright battles continue to play out in court.
    • Arc Raiders, a new bestselling video game, is facing controversy over its use of AI-generated voices, according to GamesIndustry.biz. The game, released in October, belongs to the “extraction shooter” genre, in which players enter areas to loot items and must escape, lest they lose everything they are carrying. It has been swept up in recent controversies over the use of AI in game development, especially after a Eurogamer review praised nearly every aspect of the game except its use of AI-generated voices for player-character “callouts” and for in-game merchant characters. In response, the CEO of the game’s publisher, Nexon, said that “it’s important to assume every game company” is using AI, and the CCO of Arc Raiders developer Embark Studios said that the AI-powered text-to-speech system “allows us to increase the scope of the game in some areas where we think it’s needed, or where there’s tedious repetition, in situations where the voice actors may not see it as valuable work.”