AI News Roundup – Anthropic limits release of Mythos cybersecurity model amid hacking fears, Google releases new open-source AI model, Anthropic suffers setback in legal battle with Pentagon, and more
- April 13, 2026
- Snippets
Practices & Technologies
Artificial Intelligence
To help you stay on top of the latest news, our AI practice group has compiled a roundup of the developments we are following.
- Anthropic has limited the release of its latest cybersecurity model, Mythos, according to Bloomberg, citing concerns that the model could power cyberattacks. The company launched an initiative called Project Glasswing, granting early access to Mythos to major technology partners including Apple, Amazon, Microsoft, Google, Nvidia, and several cybersecurity firms. These organizations will use the new AI model to hunt for vulnerabilities in their own software and share their findings to help inform future safety guardrails. While Mythos is a general-purpose model, its advanced coding and reasoning skills allowed it to uncover a 27-year-old bug in critical internet software and a 16-year-old flaw in video code. However, Anthropic’s internal red team also demonstrated that the model could be directed to chain together exploits against every major operating system and web browser, leading to the decision to forgo a wide public release. Further testing of the model is expected to continue in the coming months.
- Ars Technica reports on Google’s latest open-source AI model. Gemma 4, released at the beginning of this month, comes in four sizes aimed at locally run AI applications. The company claims that the Gemma 4 models are the most capable models that can be run locally, offering improved reasoning and numerical processing capabilities over its open-source predecessors. Gemma 4 is based on the technology underlying Google’s Gemini 3 models, which were released in November 2025. Gemma 4 also carries an updated open-source license — the popular Apache 2.0 license, commonly used in software projects. Previous versions of Gemma used a Google-specific license that included usage restrictions the Apache 2.0 license lacks. The new Gemma models are currently available for customer use in Google AI Studio and Google Cloud products.
- A U.S. federal appeals court has ruled against Anthropic in its legal fight with the Defense Department, according to The Wall Street Journal. A unanimous panel of the United States Court of Appeals for the District of Columbia Circuit denied Anthropic’s motion to stay the Pentagon’s ban on Anthropic’s goods and services while the litigation proceeds. The panel acknowledged that Anthropic will likely suffer harm but concluded that the balance of equities favored the government, particularly because a stay could interfere with the ongoing military conflict in Iran. As this AI Roundup has covered in recent months, the lawsuit arose out of a disagreement between the Defense Department and Anthropic over the use of the latter’s AI technology for military and surveillance purposes. The Pentagon designated Anthropic a “supply chain risk,” effectively banning the use of its technology. President Trump signed a separate order barring all federal agencies from using Anthropic’s Claude AI models, but a federal court in California recently granted Anthropic a preliminary injunction putting that order on hold while that case continues. The case is docketed as No. 26-1049 in the D.C. Circuit, which also set an accelerated schedule on the substantive legal issues, with oral argument scheduled for May 19.
- The New York Times reports on a new survey finding that Generation Z’s opinion of AI is cooling even as many “Zoomers” use the technology. According to a recent study conducted by Gallup, just over half of respondents between the ages of 14 and 29 report using generative AI on a daily or weekly basis. Despite this relatively high adoption rate, positive sentiment among the age group has sharply declined over the past year, with excitement dropping 14% and hopefulness falling to just 18% (from 27% in 2025). Simultaneously, negative emotions regarding AI have intensified: nearly a third of young adults now indicate that the technology makes them feel angry. This growing skepticism is driven by Gen Z’s concerns about AI’s potential to erode core skills like creativity and critical thinking, as well as its effect on long-term learning. Employed “Zoomers” are especially wary of the technology’s professional implications, with nearly half stating that the risks of AI in the workplace outweigh its potential benefits. The report recommends that employers and other stakeholders build trust with Gen Z employees and address their concerns about AI usage.
- OpenAI has paused its main data center investment project in the UK, according to Reuters. The company cited an unfavorable regulatory environment and high energy costs as the primary reasons for halting the initiative, known as Stargate UK. The project was initially launched last September in partnership with Nvidia and UK-based Nscale to build up Britain’s computing capacity and accelerate domestic AI adoption. This suspension represents a notable setback for Prime Minister Keir Starmer’s strategy to establish the country as a premier global AI hub to help reinvigorate a stagnant economy. Despite the delay, OpenAI emphasized its continued support for Starmer’s ambitions, noting that London remains home to its largest international research hub, and stated it will resume the infrastructure project once the necessary regulatory and energy conditions are met.