AI News Roundup – Claude AI used for US strikes on Iran, New York bill would ban AI chatbots from impersonating lawyers, new study investigates AI-induced “brain fry,” and more

To help you stay on top of the latest news, our AI practice group has compiled a roundup of the developments we are following.

    • Anthropic’s Claude AI model has been central to recent U.S. and Israeli strikes on Iran, according to The Washington Post. The U.S. military has deployed the Maven Smart System, a product of Peter Thiel’s Palantir Technologies, which ingests classified data from satellites and hundreds of other intelligence sources to provide real-time targeting information. Claude is embedded in Maven, and the combined system was used in the recent strikes on Iran to suggest and prioritize targets and issue location coordinates; the system can also evaluate strikes after they are conducted. The military’s use of Claude comes at a time of strained relations between the federal government and Anthropic. As this AI Roundup has covered in recent weeks, the Trump administration has demanded fewer restrictions on the use of Claude in military work, a demand Anthropic refused. The Defense Department informed Anthropic this past week that the company is an official “supply chain risk,” a designation that could impact Anthropic’s ability to receive any federal government contracts. However, the designation does not appear to have affected Claude’s use as a component of Maven in Iran. Even if Anthropic directed the military to cease using Claude, one source told The Washington Post that military leaders have become so dependent on Maven that the government would retain the technology until it could be replaced. The Maven (and Claude) AI system will likely continue to be used for as long as the current Iran conflict endures. 
    • Reuters reports on a new bill introduced in New York’s legislature that would ban AI chatbots from impersonating lawyers and other licensed professionals in the state. Under the proposal, S.B. S7263, AI chatbots would be barred from providing “any substantive response, information, or advice, or take any action” that, “if taken by a natural person,” would constitute the unauthorized practice of law, medicine, or other fields requiring licensure in the state. The bill also provides users a legal cause of action to recover damages from AI makers who violate the law. The bill’s sponsor, state senator Kristen Gonzalez, told Reuters that “there is no law that says that a large language model cannot tell you that it is a lawyer, that it is a licensed therapist, and then give you legal advice or therapy accordingly,” and that this gap was “really concerning.” The bill advanced out of a New York Senate committee last month and is one of several proposals in the state that would regulate AI chatbots, including bills that seek to protect minors and to require chatbots to display notices that outputs may be inaccurate. Further legislative action on the impersonation bill is likely later this year. 
    • Some AI use can lead to cognitive fatigue in users, according to a new study covered by the Harvard Business Review. The study of nearly 1,500 full-time U.S. workers at large companies found that AI usage can have deleterious mental effects. Overseeing AI agents proved to be the most mentally taxing kind of task: after periods of such oversight, participants described a mental fog in which they had more difficulty focusing, made decisions more slowly, and had more frequent headaches, an effect the researchers dubbed “AI brain fry.” Workers in marketing and human resources reported the highest levels of AI brain fry, and the researchers predicted that the cognitive effects could impose millions of dollars in business costs each year. However, using AI to automate mundane tasks was found to reduce burnout and increase engagement. The researchers recommended designing AI tools with human users in mind to reduce cognitive strain, as well as providing higher-quality worker training on AI tool use. 
    • Bloomberg reports that the U.S. is considering requiring AI chipmakers to receive governmental approval to export their products. Proposed regulations at the U.S. Department of Commerce would require companies such as Nvidia and AMD to seek federal government approval to export their AI accelerator chips. Current export restrictions cover around 40 countries, so the proposed regulatory scheme would be a massive expansion of restrictions on some of the most in-demand products on the global market. The approval process would reportedly involve an evaluation of how much computing power a potential buyer plans to purchase, with smaller orders meriting simpler, more cursory review, while larger orders (for instance, over 200,000 of Nvidia’s latest GB300 processors) would also require involvement from the foreign government. Bloomberg’s sources said the regulatory framework is not final and is likely to change in the coming weeks as other administration officials provide input, but it would still constitute the strictest measures yet implemented by the second Trump administration to leverage global dependence on advanced AI chips from American companies like Nvidia. 
    • AI is increasingly being used to rejuvenate local news coverage, according to The Wall Street Journal. In recent months, The Philadelphia Inquirer has launched several newsletters focused on certain Philadelphia suburbs, with reporters using AI tools to search community meeting schedules for newsworthy topics to cover. This work is partially funded through an initiative from OpenAI and Microsoft. Elsewhere in the country, Axios, another company with local newsletters, has also been applying AI: one reporter feeds a city budget into Claude to highlight notable items. Local newspapers have faced years of circulation decline and reduced web traffic amid competition from social media, and AI now offers a way to ameliorate those issues. However, the AI tools are not without flaws: The Inquirer was ridiculed last year when it published a summer reading list containing nonexistent books that were products of AI hallucinations. Still, one local newspaper editor praised AI for easing the writing process and cutting extraneous information: “The quicker we get to the point, the happier our busy readers are.”