AI News Roundup – California legislature focusing on AI regulation, OpenAI cracking down on foreign covert operations, and more

Article co-written by Yuri Levin-Schwartz, Ph.D., a law clerk at MBHB.

To help you stay on top of the latest news, our AI practice group has compiled a roundup of the developments we are following.

    • California’s legislature has advanced several measures focused on regulating AI technologies, according to a report from The Associated Press. One proposed bill would require companies using AI tools to inform customers when AI was used and to conduct routine assessments of bias within their models, and would also require AI companies to disclose what data is used to train their models. The bill further grants the state attorney general authority to investigate allegations of AI-related bias or discrimination and to impose fines of up to $10,000. Another bill would modify voice and likeness laws to protect actors from being replaced by AI-generated mimics and would allow actors to back out of existing contracts if contract language permits their employer to use AI for this purpose. Another measure would require very large AI models with catastrophic capabilities (such as generating instructions for building chemical weapons) to have a built-in “kill switch,” and a bipartisan bill would make it easier to prosecute those who generate child sexual abuse material using AI; current California law does not allow prosecution if the images do not depict a real person. It remains to be seen whether any of these measures will pass the legislature and be signed by Governor Gavin Newsom, who has staked out a more cautious position on AI regulation than others in his party.
    • Several major tech companies, including Google, Intel, AMD, and Microsoft, have announced the formation of the Ultra Accelerator Link (UALink) Promoter Group to develop a new, open interconnect standard for AI acceleration processors in data centers, according to Ars Technica. The new standard would compete with NVIDIA’s proprietary NVLink system, which currently leads the market. Interconnects like NVLink and UALink allow multiple AI accelerators (whether in a single server or across multiple servers) to work together on complex tasks. According to the group, the first version of UALink is designed to connect up to 1,024 GPUs and will be based upon AMD’s Infinity Architecture. The first UALink-enabled products are expected to be available within the next two years.
    • OpenAI has cracked down on the use of its AI technologies for covert operations by foreign actors from countries such as Russia, China, Iran, and Israel, according to a new report from The New York Times. According to the company, private and state actors in the aforementioned countries used OpenAI’s technologies to write and edit articles, generate social media posts, and debug programs in order to influence political campaigns. The Chinese campaign, known as Spamouflage, was used to attack individuals critical of the Chinese government, while the Israeli campaign Zero Zeno generated fictitious personas posting anti-Islamic content on social media. Ben Nimmo, an investigator at OpenAI, said that its AI tools did not appear to have expanded the reach of the campaigns, and that such operations “still struggle to build an audience.” However, Graham Brookie of the Atlantic Council’s Digital Forensic Research Lab expressed caution, saying that more capable AI models developed in the future may have a larger impact than is currently seen.
    • U.S. officials at the Department of Commerce have slowed exports of AI accelerators by NVIDIA and AMD to several countries in the Middle East, including Saudi Arabia and the United Arab Emirates (UAE), according to Singapore’s Straits Times. In October of 2023, the Commerce Department added many Middle Eastern countries to export restrictions that previously applied only to foreign adversaries such as China and Russia. Such restrictions require companies to obtain a license before shipping the chips to those countries. However, sources said that in the past several weeks U.S. officials have delayed or not responded to license applications for sales in Saudi Arabia, the UAE, and Qatar. A Commerce Department statement said that “we conduct extensive due diligence through an inter-agency process, thoroughly reviewing license applications from applicants who intend to ship these advanced technologies around the world.” Officials are particularly concerned that Chinese companies, which cannot access the advanced AI chips directly, could access them through data centers in Middle Eastern countries with friendlier ties to China. NVIDIA and AMD declined to comment on the matter.
    • Axios reports that OpenAI has established a new safety and security advisory committee. The move comes after the company’s previous “superalignment” team, which focused on AI safety measures, was disbanded following the high-profile departures of chief scientist Ilya Sutskever and researcher Jan Leike, the latter of whom said that “safety culture and processes have taken a backseat to shiny products” at the company. Another OpenAI researcher, Gretchen Krueger, also announced that she was leaving OpenAI and that she shared the concerns of Sutskever and Leike. Axios’ analysis of the situation said that “OpenAI is clearly trying to reassure the world that it’s taking its security responsibilities seriously and not ignoring recent criticism.”