AI News Roundup – OpenAI closes largest-ever funding round ahead of IPO, source code for Claude AI leaked online, Eli Lilly strikes deal to distribute AI-discovered drugs, and more
- April 6, 2026
- Snippets
Practices & Technologies: Artificial Intelligence

To help you stay on top of the latest news, our AI practice group has compiled a roundup of the developments we are following.
- The Wall Street Journal reports on the closing of OpenAI’s latest funding round, the largest in Silicon Valley’s history. The company raised $122 billion, $110 billion of which came from Amazon, Nvidia, and Japan’s SoftBank Group (as this AI Roundup covered in early March); part of the remainder was allocated to Cathie Wood’s ARK Invest, which will include OpenAI in its flagship exchange-traded fund products. This would allow individual investors to hold a stake in OpenAI ahead of its expected initial public offering (IPO) later this year. However, one of OpenAI’s rivals may reach the open market first: Elon Musk’s SpaceX, which merged with his AI startup xAI in February, has already filed for an IPO scheduled for June, valuing itself at over $1 trillion and aiming to raise over $50 billion. The close of the funding round values OpenAI at $852 billion and comes amid a change in strategy for the company as it shifts toward enterprise customers (as this AI Roundup covered last week). The company said it expects around half of its revenue to come from enterprise customers by the end of the year.
- Source code for Anthropic’s Claude Code AI system was inadvertently leaked online this past week, according to Bloomberg. An online release of Claude Code included over 500,000 lines of the program’s source code. Bloomberg also reported that the leak revealed at least eight unreleased Claude Code features, including “Coordinator Mode,” which allows Claude to delegate components of tasks to other AI agents, and “Auto-Dream,” in which Claude organizes its learned information into structured memory files. Writing on X, Claude Code developer Boris Cherny said that the company is “always experimenting with new ideas” but noted that many are not included in the final product. Anthropic said in a statement to Bloomberg that the leak was a “release packaging issue caused by human error” rather than a result of hacking, but the event still raises questions about the company’s security practices as it continues its legal fight against the Pentagon’s designation of the company as a supply chain risk, as this AI Roundup has covered in recent weeks.
- CNBC reports that Eli Lilly & Company has struck an agreement with Hong Kong-based Insilico Medicine to distribute AI-developed drugs. Insilico, which went public late last year, has developed over 28 drugs using generative AI, with several already at the clinical trial stage. The Wall Street Journal reports that, as part of the deal, Lilly will have the exclusive right to market and distribute several of Insilico’s oral drugs globally. The deal is valued at $2.75 billion and will give Insilico $115 million up front, with the remainder contingent on meeting regulatory and commercial milestones. The two companies have cooperated since 2023, when Lilly licensed some of Insilico’s software for its own use. Generative AI tools have been hailed as having the potential to greatly speed up drug development and discover possibly life-saving treatments for rare diseases. One Lilly executive told CNBC that the agreement “allows us to explore novel mechanisms and accelerate the identification of promising therapeutic candidates across multiple disease areas.”
- A new study has found that AI usage in doctors’ offices for transcription tasks results in only moderate efficiency gains, according to The Boston Globe. “AI scribes,” as the technology is known, listen to audio recordings of doctor-patient conversations and create summaries of appointments; their use has grown rapidly in recent years. The technology has been promoted as cutting down on the amount of documentation work physicians must do, freeing them up for more patient care. The study, published recently in JAMA, was conducted by researchers from multiple medical institutions around the U.S. and followed over 8,500 clinicians, 1,800 of whom used an AI scribe in their work. Overall, the study found that AI scribes reduced the time spent documenting appointments in an eight-hour workday by 16 minutes, though medical professionals in the primary care field saved an average of 27 minutes per day. Even with the technology, primary care practitioners still spent over two and a half hours per day on documentation tasks, suggesting that AI scribes may have a less dramatic impact on the field than hoped.
- Reuters reports that AI deepfakes are becoming more common in political campaigns ahead of the U.S. midterm elections later this year. Several campaigns have deployed AI-generated videos depicting their opponents either reciting social media posts or saying things they never actually said. Social media is a powerful tool in political campaigning, and AI usage online in the U.S. has little regulation beyond a patchwork of state laws and self-policing by the platforms themselves. Meta, the owner of Facebook and Instagram, and Elon Musk’s X both claim to require labeling of AI-generated content, but they generally rely on user reports or notes to correct misleading claims. A 2025 study found that AI-generated ads are generally effective and that most people struggle to distinguish AI-generated videos from real ones. One expert on deepfakes has warned that growing use of deepfakes risks decreasing voter trust in elections and political institutions and chastised politicians who make use of the technology, saying that “it’s harmful for politicians and campaigns to continue normalizing this.”


