AI News Roundup – AI investments between large companies continue to grow, OpenAI backtracks on Sora copyright “opt-out” policy, study finds AI models can develop backdoors, and more
- October 13, 2025
- Snippets
Practices & Technologies
Artificial IntelligenceTo help you stay on top of the latest news, our AI practice group has compiled a roundup of the developments we are following.
-
- Bloomberg reports on the web of funding that underpins the ongoing AI boom. This past week, OpenAI signed an agreement with chipmaker Advanced Micro Devices (AMD) to deploy the latter’s AI chips in OpenAI’s technical infrastructure projects. This follows a multibillion-dollar investment last month into OpenAI from Nvidia, an AMD rival and the dominant AI chipmaker (as reported by this AI Roundup). OpenAI has also been involved in other major deals with Oracle and Microsoft for AI deployment, creating a complicated web of investment among major players in the AI industry, further increasing spending to fuel the AI boom that has driven the American economy in recent years. OpenAI has come under criticism for some of its spending practices, especially as the firm has never turned a profit. OpenAI CEO Sam Altman said at a developer event that “obviously, one day we have to be very profitable and we’re confident and patient that we will get there, but right now, we’re in a phase of investment and growth.”
- The Wall Street Journal reports on a new OpenAI policy reversing the “opt-out” system for its Sora video generation model. As reported by this AI Roundup, OpenAI’s new Sora 2 system required companies to opt-out if they wished to exclude their copyrighted content from appearing in Sora’s outputs. This policy immediately sparked controversy, and in response OpenAI CEO Sam Altman said in a blog post that the company would adopt an opt-in model, similar to that used for likenesses of public figures. Altman also said that OpenAI is planning on monetizing Sora soon, especially as demand and usage of the model has greatly exceeded expectations.
- Ars Technica reports on new research from Anthropic, the UK AI Safety Institute, and the Alan Turing Institute that found that large language models can be vulnerable to backdoors introduced through malicious documents in their training data. In a preprint research paper, the authors found that “poisoning” attacks of this type could install a type of backdoor into the model that would cause models to output garbled text when encountering a trigger word. For a 13-billion parameter model, only 250 malicious documents were sufficient to introduce this behavior. It is unclear if more complicated attacks could be accomplished using the same methods, though the authors note that the research demonstrates vulnerabilities in large language models that should result in updated security and safety practices.
- The Financial Times reports on Deloitte’s decision to refund the Australian government for a report that contained AI-generated false content. Australia’s Department of Employment and Workplace Relations commission the consulting firm to conduct an “independent assurance review” in December 2024 of a public welfare system. Deloitte’s report, however, contained multiple errors, including several references to nonexistent reports, which were determined to have been caused by the use of generative AI. An updated version of the report corrected the errors, as well as stated that the report “included the use of a generative artificial intelligence (AI) large language model.” Deloitte will partially refund the 439,000 AUD fee charged for the report to the government, and Deloitte Australia said that the ”matter had been resolved directly with the client.”
- NPR reports on the growing use of AI in fashion marketing. During Paris Fashion Week, which concluded this past weekend, observers noted the growing use of AI in the industry, especially in the field of data analytics. One marketing firm used AI algorithms to detect emerging fashion trends for the next year, including dotted prints, which ended up appearing on runways during fashion weeks. However, those involved note that AI, while a useful tool, cannot perform this sort of work on its own, leaving key decisions and the drawing of conclusions to human analysts.