AI News Roundup – Trump administration issues executive order to integrate AI into K-12 education, State Bar of California admits exam included AI-generated questions, AI tapped to update decades-old code, and more

To help you stay on top of the latest news, our AI practice group has compiled a roundup of the developments we are following.

    • The Trump administration issued an executive order this week directing federal agencies to integrate AI technologies into K-12 schools in the U.S. The order establishes a White House task force, including several cabinet secretaries and other administration officials, to direct efforts to “promote AI literacy and proficiency among Americans by promoting the appropriate integration of AI into education, providing comprehensive AI training for educators, and fostering early exposure to AI concepts and technology.” The order calls for a Presidential Artificial Intelligence Challenge intended to “encourage and highlight student and educator achievements in AI” across a wide range of AI applications. It also directs several agencies to establish public-private partnerships with AI companies and academic institutions to develop online resources to teach K-12 students about AI. The Secretaries of Education and Labor were instructed to prioritize the use of AI for improving teacher training and for apprenticeships in AI-related jobs, respectively. Agencies are expected to begin implementing the executive order in the coming months.
    • The Los Angeles Times reports that the State Bar of California admitted this week that several questions on the February 2025 sitting of California’s bar exam were drafted by non-lawyers using AI. After test-takers reported that some questions appeared to be AI-generated, the State Bar admitted that 23 of the 171 scored multiple-choice questions were developed with AI by a psychometrician employed by the agency. In response, the California Supreme Court, which oversees the State Bar, demanded answers from the agency as to why the Court was not informed about the use of AI, how AI was used in the process, and how those questions were vetted before their inclusion on the exam. In October 2024, the Court directed the State Bar to evaluate “any new technologies, such as artificial intelligence, that might innovate and improve upon the reliability and cost-effectiveness of [bar examinations].” That directive followed the State Bar’s $22 million deficit in 2024, when it decided to withdraw from the Multistate Bar Examination used by many U.S. states and pursue its own hybrid model combining in-person and remote testing, though the rollout of the new system has been plagued by technical difficulties. The State Bar announced that it will ask the California Supreme Court to adjust the scores of February 2025 test-takers who experienced issues.
    • Bloomberg reports on efforts in both the public and private sectors to use AI to modernize decades-old computer code running on mainframe computers. Many organizations, especially large financial companies, airlines, and government agencies such as the Social Security Administration, depend on mainframes for bulk processing and other critical tasks. Many mainframe programs are written in Common Business-Oriented Language (COBOL), a programming language first introduced in 1959. As these systems age, the cost of maintaining them, as well as the risk of failures or cyberattacks, rises drastically, prompting many organizations to consider replacing them with more modern alternatives. However, because of the complexity and importance of the tasks these old systems perform, replacement is not easy. Generative AI has been applied to analyze COBOL code, explain its function, and in some cases even “translate” it into a newer language such as Java. According to a McKinsey report, generative AI has cut in half the cost of modernizing a bank’s transaction processing system, which often relies on COBOL running on mainframes. Of course, any application of AI carries risks, and some observers are concerned that the code-replacement efforts of Elon Musk’s Department of Government Efficiency at the Social Security Administration in particular are being done “without adequate planning and preparation.”
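To give a sense of the kind of “translation” these tools attempt, below is a hypothetical COBOL paragraph that computes rounded interest, alongside a Java rendering of the sort such a tool aims to produce. The data names and figures are invented for illustration, and `BigDecimal` stands in for COBOL’s fixed-point decimal arithmetic (a plain `double` would introduce rounding behavior the original program never had).

```java
// Hypothetical COBOL source (invented for illustration):
//
//   COMPUTE-INTEREST.
//       COMPUTE WS-INTEREST ROUNDED = WS-PRINCIPAL * WS-RATE / 100.
//       ADD WS-INTEREST TO WS-BALANCE.
//
import java.math.BigDecimal;
import java.math.RoundingMode;

public class InterestCalc {

    // Mirrors: COMPUTE WS-INTEREST ROUNDED = WS-PRINCIPAL * WS-RATE / 100.
    // COBOL's ROUNDED clause rounds halves away from zero, i.e. HALF_UP.
    public static BigDecimal computeInterest(BigDecimal principal, BigDecimal rate) {
        return principal.multiply(rate)
                .divide(new BigDecimal("100"), 2, RoundingMode.HALF_UP);
    }

    public static void main(String[] args) {
        BigDecimal balance = new BigDecimal("1000.00");
        BigDecimal interest = computeInterest(balance, new BigDecimal("4.25"));
        // Mirrors: ADD WS-INTEREST TO WS-BALANCE.
        balance = balance.add(interest);
        System.out.println(balance); // prints 1042.50
    }
}
```

Even in a toy case like this, the translation must preserve details such as rounding mode and decimal scale, which is part of why organizations pair AI-generated output with extensive testing rather than accepting it wholesale.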
    • Nikkei Asia reports that the lower house of Japan’s legislature passed the country’s first legal measures regulating AI technologies. The bill is intended to help attract AI investment to the country, characterizing the technology as one that “underpins the development of our nation’s economy and society.” It authorizes the government to implement “necessary measures including guidance, advice, the provision of information and other appropriate actions” in response to AI risks, but it imposes no binding requirements on AI usage. One person involved in drafting the bill told Nikkei Asia that it was not intended to signal strong regulation, and an observer from Deloitte said the bill shows Japan pursuing a laxer approach to AI regulation, closer to that of the U.S. than to the more restrictive policies of the European Union and China. The bill is expected to pass the upper house swiftly.
    • Adobe this week unveiled “Content Credentials,” a form of metadata and digital signature that can verify the identity of artists and other creators and distinguish their works from AI-generated ones. The company said that a major concern it heard from creators was ensuring proper attribution for their works on the Internet. Content Credentials were tested in a private beta starting in 2024 and are now available in a public beta. According to a description of the standard, Content Credentials combine metadata, watermarking, and fingerprinting to ensure content authenticity. Notably, Adobe claims the watermark survives even a screenshot of a work, a common method of circumventing content restrictions. Creator information, such as a name and social media accounts, can be linked to a work through a Content Credential, along with a flag signaling that the work may not be used for AI training. Further rollout of the Content Credentials program is expected over the coming months.