AI News Roundup – Trump administration releases legislative plan for AI, Nvidia unveils new AI vision at developer conference, fake AI-generated videos proliferate online during Iran conflict, and more

To help you stay on top of the latest news, our AI practice group has compiled a roundup of the developments we are following.

    • The Trump administration has released its National Policy Framework for Artificial Intelligence, according to Bloomberg. The legislative proposal, released this past week, calls for further restrictions on children’s use of AI, streamlined permitting for AI data centers, and protections for individuals whose voices or likenesses are appropriated by AI, among other provisions. The framework is similar to measures proposed by Senator Marsha Blackburn, a Republican from Tennessee, but any measure faces an uncertain future in Congress, where it would need support from the opposition Democratic Party to pass the Senate. Republicans themselves are divided on AI regulation, and many Republican-led states, including Utah and Florida, have implemented AI regulations of their own. The Trump administration, by contrast, has supported calls from AI companies for a federal AI regulatory framework that would preempt state-level laws, especially as more states have adopted such measures in recent months. The AI industry argues that state laws are excessive and hinder innovation; the states respond that AI regulation is necessary to protect their citizens from harm. 
    • CNBC reports on Nvidia’s annual GTC developer conference, held this past week in San Jose, California. The event, which originally focused on the company’s graphics processors, is now billed by the company as “the premier global AI conference.” Nvidia CEO Jensen Huang announced two new lines of AI-focused processors: the first, a “language processing unit,” or LPU, based on ASIC technology from the startup Groq; the second, a rack of Vera CPUs, addressing what the company sees as a potential bottleneck for AI agents. Indeed, AI agents were the main focus of the GTC event, especially agents that can be hyperfocused on specific tasks. Given the rise of agents, the company appears to be moving beyond pure GPUs, which remain the industry leader for AI training, toward chips suited to other AI applications, with a particular focus on OpenClaw AI agents, which generally have full access to users’ devices. OpenClaw has proved particularly popular in China (as this AI Roundup covered last week). Huang himself touted the technology, saying in his keynote that “every company in the world today needs to have an OpenClaw strategy.” 
    • False AI-generated videos have spread rapidly online in the first weeks of the Iran conflict, according to The New York Times. The NYT identified more than a hundred unique AI-generated images and videos circulating on the internet that purportedly depict events in the current conflict. The videos falsely show missiles striking Tel Aviv in Israel and strikes on American warships, and they are just the latest examples of AI’s power to create fake videos that spread misinformation. Recent advancements in AI video generation have produced particularly lifelike results. Many of the videos push pro-Iranian views, attempting to portray a military superiority at odds with the actual situation on the ground. While some AI video generation tools include watermarks in their outputs to disclose to viewers that the content is AI-produced, such watermarks are often easy to remove or obscure. Elon Musk’s X, a common venue for such videos, generally takes a permissive approach to misinformation; however, it announced that it would suspend accounts from monetizing content for 90 days if those accounts are found to post unlabeled or mislabeled AI-generated content depicting “armed conflict.” Despite these steps, AI-generated videos will likely continue to proliferate online for the duration of the Iran conflict and in future conflicts as well. 
    • The Wall Street Journal reports that OpenAI’s proposed “adult mode” for ChatGPT is sparking concerns among its own advisors. As this AI Roundup covered last year, OpenAI is moving forward with plans to allow sexually explicit conversations with its chatbots. During its January 2026 meeting, however, the company’s “Expert Council on Well-Being and AI” unanimously warned against such a feature, saying that sexual content could deepen users’ emotional dependence on chatbots and that minors would likely find ways to circumvent any age restrictions the company implemented. One council member reportedly said the company risked creating a “sexy suicide coach,” referencing recent high-profile instances of user suicides following heavy use of ChatGPT. The adult-content feature was originally planned for release in the first quarter of 2026 but has since been delayed, partly due to technical issues: the age-prediction system intended to keep minors from accessing the feature misclassified minors as adults at a rate of around 12%, meaning roughly one in eight minors could slip past the restriction. The company is also struggling to craft limitations on certain types of adult content, especially content that would violate the law. In announcing the delay, the company said that “we still believe in the principle of treating adults like adults, but getting the experience right will take more time.” 
    • A new study has found that AI use affects the quality of human writing, according to NBC News. The study, conducted by researchers at several West Coast institutions including the University of California, Berkeley, tested the writing ability of users of several models popular in 2025. It found that “heavy” AI users produced writing that differed significantly from that of participants who used AI less or not at all. AI users rated their own writing as less creative but reported satisfaction levels similar to those of other participants. One researcher told NBC News that the results showed AI is having a “blandification” effect on writing, saying that AI systems “change human writing in a way that’s very large and very unlike what humans would have done otherwise.” The researcher posited that this effect could be a side effect of how AI models are trained, typically via methods involving human feedback, but said further research on the topic would be useful. The study is expected to be presented at a major AI conference in Brazil later this year.