AI News Roundup – OpenAI accuses DeepSeek of unfair AI development practices, India to mandate labeling of AI-generated content online, Massachusetts rolls out AI chatbot for government workers, and more

To help you stay on top of the latest news, our AI practice group has compiled a roundup of the developments we are following.

    • OpenAI has accused DeepSeek of unfairly “distilling” OpenAI’s AI models to boost the performance of DeepSeek’s own AI models, according to Bloomberg. In a memo sent to the U.S. House of Representatives’ Select Committee on China, OpenAI charged that DeepSeek, one of China’s most prominent AI companies, had “distilled” OpenAI’s models to “free-ride on the capabilities developed by OpenAI and other US frontier labs.” Distillation generally refers to a process in which the outputs of one AI model are used to train a second model so that it develops capabilities similar to the first’s (see the illustrative sketch at the end of this roundup). Because DeepSeek’s models are generally available without a subscription cost, OpenAI sees distillation as a threat to its business model and warned lawmakers in its memo that such behavior could erode the United States’ technical lead over China in the global AI race.
    • Nikkei Asia reports that India is introducing new measures to combat harmful AI-generated content online. The new rules, announced this past week as an amendment to existing content regulation laws, would mandate the labeling of AI-generated content online and require social media platforms to remove unlawful content within three hours of a notification from authorities; previously, platforms had 36 hours to act. The rules appear geared toward addressing the explosion of non-consensual nude deepfakes, which generative AI has made trivially easy to create. As this AI Roundup reported last month, Elon Musk’s X, a popular social media site in India, has come under particular scrutiny due to the “nudifying” capabilities of its Grok AI chatbot. Technical challenges remain in implementing the new rules, however: there is no interoperable standard for marking and detecting AI-generated content, distinguishing benign AI-generated content from harmful content is difficult, and the three-hour removal deadline is far shorter than the European Union’s 24-hour standard. It remains to be seen how social media platforms will comply with the new requirements in the country.
    • The U.S. state of Massachusetts is rolling out an AI tool for all state workers, according to The Boston Globe. Governor Maura Healey, speaking at a gathering of the Massachusetts AI Coalition, announced that the AI tool will be powered by OpenAI’s ChatGPT with additional privacy protections and will not be able to access government data. State workers will also receive training on how to use the tool. At the event, Healey said, “Getting governments to lead the way and show how to use this technology, and where this technology is going to do things better, faster, more effectively, that seems really important.” While other states have announced pilot programs for AI use in government work, Massachusetts’ program appears to be the country’s first broad deployment of AI in that context. Massachusetts’ government IT office became the first agency to use the tool last Friday, and other agencies are expected to follow in the coming months.
    • The Financial Times reports on the push to bring advertisements to AI chatbots. As reported in this AI Roundup last month, OpenAI is bringing advertisements to the free tier of its ChatGPT AI chatbot, and the FT has learned further details about the move. OpenAI asks advertisers to commit to spending at least $200,000 for their ads to appear in results, with pricing starting at $60 per 1,000 advertising impressions. The company has turned to advertising as it seeks to generate revenue from one of the world’s most-used websites. In contrast, OpenAI’s rival Anthropic has vowed to avoid advertising in its AI systems, even poking fun at OpenAI’s ads in a Super Bowl advertisement last week. Anthropic generates the majority of its revenue from enterprise customers, and is thus under less pressure to place ads in its consumer-facing chatbots. Regardless, many advertising executives believe that AI is “the next frontier” of online marketing and a natural successor to the search engines that have dominated online advertising. One executive told the FT that “Just as search revolutionized digital display, AI is about to transform search advertising” by giving advertisers “new ways to reach consumers.”
    • Anthropic has struck a deal to bring its Claude AI systems into the coding classes of hundreds of U.S. educational institutions, according to The Wall Street Journal. The AI maker has partnered with CodePath, a nonprofit organization that works with over 1,000 community and state colleges in the U.S. to develop computer science curricula, to integrate Claude Code into CodePath’s AI-focused courses. As reported in this AI Roundup last week, AI assistants such as Claude Code have rattled markets as investors weigh whether such assistants could replace enterprise software, especially as companies rush to integrate AI into employee workflows. Colleges have had to update their courses in recent years to reflect the rapid advancement of AI technologies and their growing use in the workplace, in order to keep their students competitive. An Anthropic official told the WSJ that “Employers want to make sure that students are fluent with the tools that are reshaping how software gets built, and that’s the foundation we’re trying to give them.”
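
For readers curious about the mechanics behind the OpenAI-DeepSeek story above, the sketch below illustrates the classic form of model distillation in PyTorch, in which a smaller “student” model is trained to match a larger “teacher” model’s output distribution. The model sizes, temperature value, and training setup here are illustrative assumptions for exposition only, not details of any system involved in the dispute.

    import torch
    import torch.nn.functional as F

    # Minimal distillation sketch (all names and sizes are illustrative).
    torch.manual_seed(0)
    vocab_size, hidden = 100, 32
    teacher = torch.nn.Linear(hidden, vocab_size)  # stand-in for a large frontier model
    student = torch.nn.Linear(hidden, vocab_size)  # stand-in for the smaller model being trained
    optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

    def distillation_loss(student_logits, teacher_logits, temperature=2.0):
        # Soften both output distributions with a temperature, then minimize
        # the KL divergence between them; the T**2 factor keeps gradient
        # magnitudes comparable across temperature settings.
        soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
        log_student = F.log_softmax(student_logits / temperature, dim=-1)
        return F.kl_div(log_student, soft_teacher, reduction="batchmean") * temperature ** 2

    x = torch.randn(8, hidden)           # a batch of inputs
    with torch.no_grad():
        teacher_logits = teacher(x)      # query the teacher; no gradients needed
    loss = distillation_loss(student(x), teacher_logits)
    loss.backward()
    optimizer.step()                     # the student shifts toward the teacher's behavior

When the teacher is reachable only through an API, as in the scenario OpenAI describes, its internal logits are unavailable; instead, the teacher’s sampled text outputs are typically used directly as supervised fine-tuning targets for the student, which achieves a similar effect.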