AI News Roundup – xAI’s Grok restricted after scrutiny of photo “undressing” features, OpenAI to back AI regulation ballot measure in California, new research investigates AI model “memorization” of training data, and more

To help you stay on top of the latest news, our AI practice group has compiled a roundup of the developments we are following.

    • Elon Musk’s xAI has restricted the use of its Grok AI system following revelations that Grok was used to “undress” photos of women and girls online, according to the Financial Times. As this AI Roundup reported last week, users of X have used Grok to digitally “undress” pictures of women and underage girls posted on the platform, prompting swift criticism from governments around the world. The company has restricted Grok’s image generation and editing features to paid subscribers, but has not announced any restrictions on Grok’s ability to produce explicit edits of photos. Compared with competitor models, Grok was designed with fewer restrictions on how users could employ it. Criticism from officials nonetheless intensified over the past week, from the European Commission ordering xAI to retain documents related to Grok to three U.S. senators urging that Grok and X be removed from U.S. app stores. While Musk posted earlier last week that “anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content,” he has also criticized governmental restrictions on Grok as suppression of free speech.
    • The Wall Street Journal reports that OpenAI will back a California ballot measure that would regulate how AI chatbots can interact with children. The company had previously backed its own measure for this fall’s state ballot, which would have competed with a more stringent measure from the nonprofit group Common Sense Media. The nonprofit has been a leading advocate for AI regulation in recent months and has come into conflict with many large tech companies in the process. This past week, however, Common Sense Media and OpenAI announced that they will work together on a compromise measure aimed at giving parents more control over how their children interact with AI chatbots. Notably missing from the new measure, however, are a ban on cell phones in school classrooms and a provision that would allow parents and children harmed by AI chatbots to sue AI companies. OpenAI will contribute at least $10 million to the campaign for the measure, which will need 875,000 signatures to qualify for this November’s ballot. Signature collection is slated to begin early next month.
    • New research has found that the phenomenon of AI models “memorizing” their training data is more widespread than previously thought, according to The Atlantic. A preprint paper released this past week by researchers at Stanford and Yale found that several popular AI systems, including OpenAI’s ChatGPT and Anthropic’s Claude, can reproduce verbatim excerpts from some of the books they were trained on, implying that the text of those books is stored within the models themselves. With specific prompting from the researchers, Claude output the near-complete text of several books, including George Orwell’s “Nineteen Eighty-Four” and the first “Harry Potter” novel (a simplified sketch of how such a memorization probe works appears after this list). This phenomenon, called “memorization,” has been observed on a smaller scale in the past, but AI companies have generally denied that their models store copies of training data, as such an admission could expose the companies to legal liability for copyright infringement. As many such lawsuits proceed through the courts, this research sheds new light on the inner workings of AI models, which are often portrayed as black boxes, and could shape future legal arguments and decisions on the topic.
    • Ars Technica reports that the Ford Motor Company is preparing to launch an AI assistant for several of its automobile models. At the annual Consumer Electronics Show (CES) in Las Vegas, Nevada, AI featured in the vast majority of new gadgets and appliances on display, and the automotive industry’s offerings were no exception. Doug Field, Ford’s chief EV, digital, and design officer, said the company intends to use AI to personalize the car to the customer, creating “a seamless layer of intelligence that travels with you between your phone and your vehicle.” In one example, a driver could photograph an object they wanted to haul, and the AI would determine whether it would fit in the truck bed. A rollout of the AI assistant in the Ford and Lincoln smartphone apps is expected later this year, with integration into new car models planned for 2027.
    • A U.S. Securities and Exchange Commission (SEC) official has given the go-ahead for investment advisors to use AI in making proxy voting decisions, according to Bloomberg. Brian Daly, the director of the SEC’s Division of Investment Management, spoke at a New York City Bar Association event this past week, saying that “AI tools like large language models and agentic AI…offer a compelling opportunity” to aid advisors in voting on behalf of their clients, though he cautioned that AI should not replace human judgment in the process. While Daly said he spoke on his own behalf and not on that of the SEC as a whole, his openness to AI in proxy voting marks a shift from the rhetoric of Biden-era SEC Chairman Gary Gensler, who espoused a more pessimistic view of AI in the financial world. A recent executive order from President Trump directed the SEC to review regulations on proxy advisors, especially in the context of diversity, equity, and inclusion (DEI) and environmental, social, and governance (ESG) policies in corporate voting decisions. Daly told his audience to “stay tuned” for the results of the SEC’s inquiry into those matters.
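For readers curious how verbatim “memorization” of the kind described in the Stanford-Yale study is typically probed, the sketch below illustrates the general idea: prompt a model with the opening of a passage and measure how much of the true continuation it reproduces word-for-word. This is a minimal illustration under our own assumptions, not the paper’s actual methodology; the generate callable and the memorization_score and perfect_model names are hypothetical stand-ins, not any real model API.

```python
# Minimal memorization probe (illustrative sketch, not the study's method):
# prompt a model with the opening of a passage and measure how much of the
# true continuation it reproduces verbatim. `generate` is a hypothetical
# stand-in for any text-generation API taking a prompt and returning text.

def memorization_score(generate, passage: str, prefix_words: int = 50) -> float:
    """Fraction of the continuation the model reproduces word-for-word
    before its first deviation from the source text."""
    words = passage.split()
    prefix = " ".join(words[:prefix_words])
    truth = words[prefix_words:]
    output = generate("Continue this text exactly: " + prefix).split()
    matched = 0
    for got, expected in zip(output, truth):
        if got != expected:
            break
        matched += 1
    return matched / max(len(truth), 1)

if __name__ == "__main__":
    passage = ("It was a bright cold day in April, and the clocks were "
               "striking thirteen. ") * 5
    # Toy "model" that has memorized the passage perfectly: it ignores the
    # prompt and returns the true continuation after the 20-word prefix.
    perfect_model = lambda prompt: " ".join(passage.split()[20:])
    print(memorization_score(perfect_model, passage, prefix_words=20))  # 1.0
```

A score near 1.0 suggests the passage is reproducible from the model’s weights, while a score near 0.0 suggests it is not; real studies use far more robust matching than this exact word-by-word comparison.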