AI News Roundup – Florida investigates OpenAI over possible ChatGPT link to campus shooting, DeepSeek releases new open-source AI model, opposition to AI in schools grows, and more
- April 27, 2026
- Snippets
Practices & Technologies
Artificial Intelligence

To help you stay on top of the latest news, our AI practice group has compiled a roundup of the developments we are following.
- Florida’s government has announced a criminal investigation into OpenAI over allegations that ChatGPT advised a man accused of shooting and killing two people at Florida State University in April 2025, according to The Washington Post. Florida Attorney General James Uthmeier announced this past week that his office was opening the investigation, alleging that ChatGPT had advised the shooter on his choice of weapon and ammunition and on what time of day he would encounter the most people. Uthmeier claimed that similar behavior from a person would lead to that person also being charged with murder. His office has sent subpoenas to OpenAI seeking the company’s policies on ChatGPT users who threaten harm to others. This is not the first controversy OpenAI has faced over users committing harm; the company faces several lawsuits from families of people who died by suicide after conversations with ChatGPT. The company says it has a system to monitor and flag ChatGPT conversations that indicate a possibility of harm, but experts caution that such guardrails are not 100 percent effective given the unpredictable nature of generative AI outputs. An OpenAI spokeswoman told The Washington Post that ChatGPT was not responsible for the Florida shooting and that the company was cooperating with the investigation.
- The New York Times reports that China’s DeepSeek has released its latest open-source AI model. DeepSeek, among the country’s most prominent AI developers, took the world by storm in early 2025 with its V3 model, which could be trained far more cheaply than comparable American models and which opened a debate about open-source versus proprietary AI systems. Late this past week, DeepSeek released a preview of its V4 model, which will also likely be released as open-source. The new model reportedly excels at writing code, a field that has long been dominated by American rivals like Anthropic. While V4’s coding capabilities still lag behind those of the leading American models, the performance gap has narrowed greatly. The release of V4 has been long-awaited given how groundbreaking V3 was perceived to be, but Anthropic and OpenAI have claimed that DeepSeek unfairly “distilled” their models into its own and copied their behavior (as this AI Roundup covered in February). Despite this, many Chinese companies have gone all-in on open-source AI, which has helped their models gain popularity around the world. One hedge fund manager who invests in AI companies (but not DeepSeek itself) said that Chinese open-source models were last year’s biggest AI story, noting that the “progress of the models, the pace of the releases and the number of AI labs that both compete with each other but also seem to cheer each other on came fast and furious with no signs of slowing down.”
- Opposition to generative AI use in schools is growing among parents and educators, according to an investigation from The New Yorker. AI products have proliferated rapidly in the education space and have been adopted all around the country, from ChatGPT and Claude for sixth graders in Boston to kindergartners in New York and Los Angeles talking to a reading bot for feedback. Advocates of AI use in K-8 schools claim that early exposure will help prepare students for future careers that will be dominated by AI, and that the technology is already helping teachers save time and individualize instruction for particular students. Leaders from the tech industry, the White House, and public schools have repeated the message that AI is here to stay; draft New York City guidelines stated confidently that the “question is not whether AI belongs in schools. The question is whether we will collectively build a system that governs AI to serve every student and every stakeholder.” Evidence is growing, however, that AI usage may greatly harm learning and cognitive development. A recent study showed that students who used AI to solve math problems exhibited reduced persistence and performed worse once the AI was taken away, with grave implications for long-term learning. Evidence is also mounting that LLM use can cause thought atrophy, which may be particularly harmful to young people who are still developing social skills. As AI tools become further integrated into the ever-present laptops and tablets in classrooms, however, combating their influence will become increasingly difficult.
- The Financial Times reports on the latest high-profile instance of AI hallucination in legal filings. A lead partner at Sullivan & Cromwell, one of the largest and most elite law firms in the U.S., wrote an apology letter to a federal judge for a multitude of errors, including misquotes from the U.S. bankruptcy code and incorrect case citations, that appear to have been the result of generative AI use. The letter claimed that the firm’s AI policies had not been followed, and that the firm was considering revising its training and filing review processes. However, the letter did not identify the attorneys who prepared the error-filled document or state whether they were still employed at the firm. This event is just the latest example of a law firm grappling with AI technology and running into the “hallucinations” that have plagued generative AI since its introduction, especially as billing rates have soared and firms seek greater efficiency from their associates. Judges have imposed increasingly punitive sanctions on lawyers who submit documents with such errors, but it remains to be seen what sanctions, if any, will be placed on Sullivan & Cromwell in this instance.
- America’s largest timber company is introducing AI to streamline forest management and logging operations, according to The Wall Street Journal. Weyerhaeuser has deployed AI to, among other things, monitor equipment and optimize logging truck routes across its vast land holdings. The company is also creating a “digital twin” of its timberlands using satellite and drone imagery and lidar sensor data. This information can then be fed to AI models to calculate seedling survival rates, a task previously performed by human workers in dangerous terrain. Further, the company is considering the use of semi-autonomous logging equipment, such as a remotely piloted “skidder” (which drags logs) guided by an AI-powered navigation system, as well as an in-cabin AI assistant for harvesters that highlights certain trees to cut during harvest time. The company’s leaders hope that the efficiency brought about by AI could boost corporate profits at a time when a slump in housing construction has hurt the lumber industry.