AI Roundup – California releases long-awaited AI report, U.N. survey finds AI is trusted most in low-income countries, Amazon CEO claims AI will replace jobs at company, and more

To help you stay on top of the latest news, our AI practice group has compiled a roundup of the developments we are following.

    • The Verge covers a new report commissioned by California Governor Gavin Newsom to examine AI regulation. Newsom vetoed several bills that would have regulated AI models in the state, including by imposing disclosure requirements and transparency mandates on AI developers, and instead established the “Joint California Policy Working Group on AI Frontier Models” to study the issue further. The final report, released this past week, noted that AI models’ capabilities have improved significantly since fall 2024, when Newsom vetoed the bills, and stated that these more capable models could heavily affect large swathes of economic and social life in the state. The report calls for transparency requirements and whistleblower protections, as well as a risk-categorization approach that goes beyond the compute-power thresholds some of last year’s bills relied on. Further AI regulation bills are expected to be introduced in the California legislature in the current session, which runs through 2026.
    • Bloomberg reports on a new survey of global trust in AI technology. The study, conducted in 21 countries by the U.N. Development Programme, found that trust that AI will be used for good was highest in China, where 83% of respondents said they trusted the technology. Trust was also comparatively high in other, generally lower-income countries, such as India, Nigeria, and Pakistan. In contrast, highly developed countries such as Germany, Australia, and the U.S. generally reported lower levels of trust in the technology. The study did not examine the reasons behind the differing attitudes in each country, but it noted that respondents in lower-income countries also had higher expectations for AI. Further research will likely be needed, especially as AI usage continues to expand across the world.
    • The Washington Post reports on a recent memo to employees from Amazon CEO Andy Jassy stating that AI is expected to reduce the company’s human workforce. In the memo, Jassy said that “[i]n the next few years, we expect that this will reduce our total corporate workforce as we get efficiency gains from using AI extensively across the company,” referring to both corporate workers and those in its warehouses. Concerns about AI’s impact on jobs have been widespread since the AI boom began in the fall of 2022, and several companies (including Meta) have required employees to use AI to improve their productivity. Amazon’s hiring has plateaued since 2022, and several current employees have noted recent increases in attrition without replacement hiring. Jassy hailed the company’s own AI-based products (including a shopping chatbot and an upgraded version of its Alexa voice assistant) and encouraged employees to continue using AI to “reinvent the company.”
    • The Wall Street Journal reports on Pope Leo XIV’s comments on AI since his election to the papacy last month. In one of his first addresses to the cardinals following his election, Leo XIV invoked his namesake predecessor, Leo XIII, who wrote extensively on the rights of workers amid the rapid industrial change of the late 1800s. Leo XIV told the cardinals that “[t]oday, the church offers its trove of social teaching to respond to another industrial revolution and to innovations in the field of AI that pose challenges to human dignity, justice and labor.” The pontiff, born Robert Francis Prevost in the U.S., was trained as a mathematician before entering the seminary and appears to be positioning himself to make AI a defining issue of his pontificate. Many Silicon Valley executives gathered in Rome this past weekend for an AI ethics summit, including representatives from Google, Meta, IBM, and Anthropic. In a written message to the participants, Leo said that “while undoubtedly an exceptional product of human genius, AI is ‘above all else a tool,’” quoting his immediate predecessor, Pope Francis, and said that AI’s benefits and risks must be judged according to the principle of “safeguarding the inviolable dignity of each human person and respecting the cultural and spiritual riches and diversity of the world’s peoples.” The Pope also voiced concern about AI’s effect on the development of young people and encouraged the participants to consider the technology’s impact on the next generation.
    • The MIT Technology Review reports on new OpenAI research finding that AI models trained to generate hateful or harmful content can be “rehabilitated.” This past February, researchers found that fine-tuning OpenAI’s GPT-4o on certain code could cause the model to output “harmful, hateful, or otherwise obscene content, even when the user inputs completely benign prompts.” According to a paper published by OpenAI this past week, this behavior, dubbed “emergent misalignment,” occurs when the model adopts a “bad boy persona,” which can lead to “behavior that’s cartoonish evilness,” according to one coauthor. The paper also found that although the fine-tuning triggered this behavior, the harmful persona itself was sourced from the model’s original training data, and that the model could be “rehabilitated” by further fine-tuning on as few as 100 “good, truthful” text samples; a minimal sketch of what such a corrective fine-tune might look like appears below. Further research is expected on the topic as OpenAI continues to pursue “alignment” (generally understood as overall AI safety) in its products.
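
For readers curious about the mechanics, the last item above describes corrective fine-tuning: re-training a misaligned model on a small set of curated, benign examples. The sketch below is a hypothetical illustration of how such a job could be submitted through OpenAI’s public fine-tuning API; the file name and model checkpoint ID are placeholders, and nothing here reflects the paper’s actual experimental setup.

```python
# Hypothetical sketch: submitting a small corrective fine-tune via
# OpenAI's fine-tuning API. File name and checkpoint ID are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload ~100 benign, truthful chat examples in JSONL format
# (one {"messages": [...]} record per line).
training_file = client.files.create(
    file=open("benign_samples.jsonl", "rb"),
    purpose="fine-tune",
)

# Fine-tune the previously misaligned checkpoint on the benign data.
job = client.fine_tuning.jobs.create(
    model="ft:gpt-4o-2024-08-06:example-org::abc123",  # placeholder ID
    training_file=training_file.id,
)

print(job.id, job.status)  # poll status until the job completes
```

The notable point from the paper is the sample efficiency: on the order of 100 curated examples, rather than a full retraining run, was reportedly enough to suppress the misaligned persona.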