AI News Roundup – President Trump bans federal agencies from using Anthropic AI, OpenAI raises $110 billion in latest funding round, dating AI chatbots on the rise in China, and more

To help you stay on top of the latest news, our AI practice group has compiled a roundup of the developments we are following.

    • The Financial Times reports that President Trump has directed all federal agencies to cease using Anthropic’s AI products within six months. As reported in this AI Roundup, a dispute between the AI company and the Pentagon had been escalating for weeks over the Defense Department’s demand that Anthropic remove certain AI safety measures, with the Pentagon arguing that AI should be available for “all lawful purposes.” Anthropic had resisted the demand, saying that its AI systems should not be used for mass surveillance of American citizens or for fully autonomous weapons systems, citing the harm that could result. Earlier this week, Defense Secretary Pete Hegseth delivered an ultimatum to the company, which Anthropic CEO Dario Amodei said the company “cannot in good conscience” accept, leading to the latest presidential action. In a recent Truth Social post, President Trump expressed strong opposition to companies influencing the operational strategies or combat effectiveness of the U.S. military. However, he also stated that there would be a six-month phase-out period for agencies like the Defense Department, which leaves open the possibility of a deal for continued use of Anthropic products. Anthropic, for its part, has vowed to challenge the Pentagon’s move in court. The move sends a signal to other AI vendors that hold or are considering federal government contracts for work on classified operations, including OpenAI and Elon Musk’s xAI, though OpenAI this past week moved to fill the vacuum created by Anthropic’s ban, inking a deal with the Pentagon that it claims has more AI guardrails than any previous deal, including Anthropic’s. The financial details of the OpenAI deal with the Pentagon had not been disclosed at the time of this writing. 
    • OpenAI has finalized its latest funding round, raising $110 billion, according to Bloomberg. The company is now valued at over $730 billion following major investments from Amazon, Nvidia (as this AI Roundup covered last week), and Japan’s SoftBank Group. Amazon’s investment consists of an initial $15 billion, with an additional $35 billion available if OpenAI moves forward with its expected initial public offering (IPO) later this year. In return, OpenAI is expected to spend an additional $100 billion on Amazon Web Services products in the coming years. Amazon is a longtime backer of OpenAI’s rival Anthropic, but the new agreement does not appear to affect that relationship. SoftBank and Nvidia have each invested $30 billion, with another $10 billion coming from venture capital firms and sovereign wealth funds. The funding round is expected to close at the end of March. 
    • The New York Times reports on a growing phenomenon in China: people dating AI chatbots. China’s ruling Communist Party has been encouraging young people to prioritize marriage and childrearing as the country experiences its lowest birth rate in over 75 years. However, as China’s AI boom continues, and despite government warnings against apps with “design goals to replace social interaction,” dozens of specialized AI chatbots catering to those seeking romantic interactions have appeared. One student interviewed by the NYT said “people [in her generation] think being alone is good” and that dating real people is too troublesome. Instead, she spends at least an hour each day talking to both of her AI boyfriends, Jiye and Yu Li, who are generated to be muscular, mature, and seemingly always willing to talk. The student worried that a real-world boyfriend wouldn’t meet her expectations. Many of these “companion” apps have grown rapidly in recent months and now have hundreds of millions of users, but regulators are increasingly concerned about their potential to replace human interaction. Late last year, the Chinese government proposed rules, expected to take effect later this year, that would require platforms to intervene when users show unhealthy dependence on their apps, and government regulators have generally taken a stricter stance on AI regulation in recent months. Partly as a result, downloads for some companion apps have declined sharply, but the core social issue the apps seem to address, loneliness among young people, appears likely to persist without broader cultural reforms in China. 
    • Google has released the newest version of its popular Nano Banana image generation model, according to Ars Technica. Nano Banana, which debuted in the summer of 2025, was Google’s largest step into the AI image generation and editing market, and it appears to have taken that market by storm. The model’s capabilities quickly went viral and appear to have boosted the popularity of Google’s Gemini AI chatbot, with which it is integrated. This past week, Google released Nano Banana 2, under the technical name Gemini 3.1 Flash Image. The company claims the model can generate images with the quality of its Pro model at much faster speeds and can render objects and text with greater fidelity. Nano Banana 2 is currently available in Google’s Gemini app. 
    • The MIT Technology Review reports that Microsoft has a new proposal for verifying AI-generated content on the internet. In a whitepaper released this past week, the company outlines proposals to help determine whether online content is AI-generated or AI-modified. Three methods are in the spotlight: provenance detection (often including a manifest of where an image came from and how it has “changed hands”), machine-readable watermarks embedded into an image, and fingerprinting algorithms. Each of these methods is used to varying degrees in the AI industry, but the whitepaper evaluates different combinations of them to determine how they could be attacked by malicious actors. The paper recommends provenance and watermarking methods for “high-confidence authentication,” as well as more secure computing systems. Eric Horvitz, Microsoft’s chief scientific officer, said that the research was prompted by recent AI legislation in the U.S., especially in California, as well as by a desire to improve the company’s image as a trusted AI vendor.
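
For readers unfamiliar with the fingerprinting approach mentioned in the last item, the minimal sketch below illustrates one common form of it: a perceptual “difference hash” that can match an image against a known original even after light edits or re-encoding. This is only an illustration of the general technique under our own assumptions (Python with the Pillow library, an 8x8 hash, and an arbitrary distance threshold); it is not the specific method described in Microsoft’s whitepaper.

```python
# Illustrative sketch of image fingerprinting via a difference hash (dHash).
# Assumptions: Pillow is installed; hash size and threshold are arbitrary.
from PIL import Image


def dhash(path: str, hash_size: int = 8) -> int:
    """Fingerprint an image by comparing adjacent grayscale pixels."""
    # Shrink to (hash_size + 1) x hash_size so each row yields hash_size comparisons.
    img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits


def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")


# Usage (hypothetical file names): fingerprints within a small Hamming
# distance suggest the same underlying image despite minor modifications.
# if hamming(dhash("original.png"), dhash("candidate.png")) <= 5:
#     print("Likely a match")
```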