AI News Roundup – Pentagon strikes new AI deals following Anthropic spat, OpenAI and Microsoft rework AI exclusivity agreement, Oscars clarify award eligibility for AI actors and writers, and more
- May 4, 2026
To help you stay on top of the latest news, our AI practice group has compiled a roundup of the developments we are following.
- The U.S. Defense Department has finalized agreements with seven leading AI companies to use their technology with classified data, according to The Washington Post. The deals, struck with giants such as Microsoft, Amazon, and Google, aim to deploy AI to analyze data and improve battlefield decision-making. Notably absent from the seven is Anthropic, the first AI company to work with secret data and to embed its products into the military’s Maven system (as this AI Roundup has reported in recent months). The Pentagon moved earlier this year to blacklist Anthropic as a national security risk after balking at the company’s attempts to limit the use of its Claude model in certain applications, and the Defense Department has since sought alternative providers to avoid reliance on a single partner. The newly finalized agreements still incorporate limits on domestic surveillance and require human oversight for weapons systems. Despite ongoing litigation over its Pentagon blacklisting, Anthropic remains engaged with the White House to evaluate the cybersecurity implications of Mythos, a newly developed AI hacking system, though The Wall Street Journal has reported that the Trump administration has pushed back on Anthropic’s plans for a wider release of the technology.
- Bloomberg reports that OpenAI and Microsoft have reworked the exclusivity agreement that granted Microsoft the sole right to sell OpenAI’s models in cloud applications. Under the revised framework, OpenAI may now distribute its generative AI models through competing cloud infrastructure providers such as Amazon Web Services (AWS) and Google Cloud, provided that new OpenAI models continue to debut on Microsoft Azure first. The revised agreement ends revenue-sharing payments from Microsoft to OpenAI, caps such payments from OpenAI to Microsoft, and guarantees that Microsoft will receive payments through 2030. Under the previous agreement, Microsoft would have lost those payments if OpenAI achieved artificial general intelligence, or AGI, a stated goal of the AI lab. This operational decoupling frees OpenAI to finalize a reported $50 billion cloud pact with Amazon to deploy its autonomous agent platform, Frontier, on AWS (as the Financial Times confirmed this past week).
- The Academy of Motion Picture Arts and Sciences announced this past week that Oscars in acting and writing will be awarded only to human work, not work generated by AI, according to BBC News. Under the updated eligibility guidelines, acting must be “demonstrably performed by humans” and scripts must be “human-authored” to secure a nomination. The policy shift addresses escalating industry friction over generative artificial intelligence, punctuated by a recent Hollywood writers’ strike over automated scriptwriting and ongoing copyright infringement litigation against AI companies training large language models (LLMs) on decades of human-created media. The guidelines also arrive as commercial deployments of AI expand, including plans to digitally recreate deceased actor Val Kilmer for an upcoming lead role and the introduction of entirely synthetic performers (as this AI Roundup covered in October 2025). However, the Academy stopped short of a blanket prohibition on AI technology in filmmaking. For awards categories outside of acting and writing, the use of AI tools will not inherently help or harm a film’s nomination chances, provided a human remains at the “heart of the creative authorship.” To enforce this standard, the Academy retains the authority to demand technical disclosures regarding the specific nature of generative AI usage during production.
- WIRED interviews Bloomberg’s Chief Technology Officer (CTO), Shawn Edwards, regarding a forthcoming AI overhaul of the company’s legendary Bloomberg Terminal. Edwards detailed a chatbot-style interface named ASKB — currently in beta for approximately one-third of the platform’s 375,000 users — that operates on a basket of language models to synthesize the vast swaths of financial data that flow through the software. To simplify the use of the Terminal, ASKB will allow users to make natural language queries to retrieve data points. Edwards told WIRED that while the new interface aims to become the primary interaction mode for the Terminal, the tool will not directly perform any market actions, requiring human professionals to retain ultimate decision-making authority.
- AI chatbot conversation logs are a growing source of evidence in criminal investigations, according to reporting from CNN. Law enforcement agencies in the U.S. are increasingly searching these transcripts for evidence establishing motive and state of mind for AI users accused of crimes. Unlike consultations with licensed attorneys or medical professionals, AI chatbot conversations do not currently benefit from a legal privilege preventing their disclosure, leaving them vulnerable to discovery during criminal proceedings, much as search engine logs are treated today. Some have argued for establishing an AI privilege in the law given that users are beginning to turn to AI chatbots for legal and medical advice, but legal experts told CNN that, for now, users should be cautious about what they enter into AI chatbots given this lack of protection as well as the general privacy concerns associated with the technology.