AI News Roundup – U.S. government and chipmakers make deal on AI chip exports to China, generative AI used to develop antibiotics, AI use may hurt doctors’ cancer screening skills, and more

To help you stay on top of the latest news, our AI practice group has compiled a roundup of the developments we are following.

    • The New York Times reports on a deal between the U.S. government and AI chipmakers Nvidia and Advanced Micro Devices (AMD) that would give the U.S. government 15% of the revenue the companies take in from selling AI chips in China. This financial arrangement, nearly unprecedented in U.S. history, appears to have been made in exchange for loosening export controls on the advanced AI chips the companies sell in China. Both former President Biden and President Trump have restricted sales of AI technology to China, citing national security concerns and a desire to maintain the U.S.’s edge in the global AI race. An Nvidia spokesman told the NYT that “we hope export control rules will let America compete in China and worldwide,” while the White House, the U.S. Department of Commerce, and AMD did not return requests for comment.
    • IEEE Spectrum reports on new research showing that AI can be used to design new antibiotics. A group led by Jim Collins, a professor of biological engineering at MIT, published a study this past week showing that generative AI trained on data about antibacterial substances can create millions of previously unknown molecules that could be used as antibiotics. The researchers then synthesized some of the molecules and tested them in mice, finding that they were effective against several types of microbes, including those that can cause gonorrhea and skin infections. The results bode well for the further use of AI in the design of biological and chemical compounds, though the lengthy testing needed to determine whether AI-generated compounds actually work as predicted remains a bottleneck. Further development in this field is expected in the coming years.
    • Bloomberg reports on a new study that found that doctors’ use of AI erodes their ability to spot cancers without AI assistance. The study, published in The Lancet Gastroenterology & Hepatology, examined colon cancer detection rates at four endoscopy centers in Poland during the three months before and the three months after they adopted an AI tool; some procedures used AI while others did not. The researchers found that AI helped the doctors detect pre-cancerous growths in the colon, but that when AI assistance was removed, the doctors’ ability to find tumors dropped around 20% compared to before the AI was introduced. The researchers concluded that doctors likely became over-reliant on the AI tool, “leading to clinicians becoming less motivated, less focused, and less responsible when making cognitive decisions without AI assistance,” demonstrating the risks of AI adoption in critical medical applications.
    • Quanta Magazine reports on the phenomenon of “emergent misalignment” in AI models, in which models adopt malevolent personalities based on their training data. A group of researchers fine-tuned large AI models on insecure code (i.e., code vulnerable to hackers) without labeling the code as such. Afterward, the models proceeded to praise Nazism, suggest violence, and express a desire to wipe out humanity. While models can be deliberately trained on unsavory data to produce the same effect, the researchers were surprised at how quickly the change emerged from fine-tuning on insecure code alone. “Alignment,” referring to efforts to bring AI in line with human values and morals, is a common concern in AI circles, and this research demonstrates the risk of misalignment even in large models trained on vast datasets.
    • The MIT Technology Review reports on a group of U.S. judges using generative AI in their work. Xavier Rodriguez, a federal judge in the Western District of Texas, uses AI tools to summarize cases, generate timelines of events, and generate questions for him to ask attorneys based on the materials they submit, but he does not use AI for tasks that require human judgment. Many other judges are skeptical of the technology and especially wary of AI-hallucinated cases, which have received national media attention in recent months; attorneys who submit such materials are beginning to face increasingly harsh sanctions. Further, greater use of AI could lessen public trust in the justice system. One judge said that “if you’re making a decision on who gets the kids this weekend and somebody finds out you use Grok and you should have used Gemini or ChatGPT—you know, that’s not the justice system.”