AI News Roundup – The White House releases AI Action Plan, U.S. Senators introduce sweeping AI copyright regulation bill, DeepMind unveils model for analyzing historical inscriptions, and more
- July 28, 2025
- Snippets
To help you stay on top of the latest news, our AI practice group has compiled a roundup of the developments we are following.
- Bloomberg reports on the Trump administration’s recently released AI Action Plan. MBHB Partners Aaron Gin and Michael Borella have a first analysis of what this could mean for AI development in the U.S. The Trump administration also released several AI-related executive orders: one targeted at combating the use of “woke” AI tools, one focused on streamlining the permitting of data center projects, and another promoting the export of “full-stack” AI technology from the U.S., including hardware, data pipelines, and the models themselves. All three orders appear to align with the AI Action Plan’s pillars of deregulating AI and expediting AI infrastructure projects.
- Axios reports on a new bipartisan bill introduced in the U.S. Congress to address copyright concerns surrounding AI training. U.S. Senators Josh Hawley, a Republican of Missouri, and Richard Blumenthal, a Democrat of Connecticut, this past week introduced the AI Accountability and Personal Data Protection Act. The bill would bar AI companies from training their AI models on copyrighted works, create a private right of action allowing individuals to sue companies that use their personal data or copyrighted works without consent, require companies to disclose any third parties that would be given access to that data, and provide for several remedies, including financial ones. Hawley said in a statement that “AI companies are robbing the American people blind while leaving artists, writers, and other creators with zero recourse,” while Blumenthal said that “Tech companies must be held accountable—and liable legally—when they breach consumer privacy, collecting, monetizing or sharing personal information without express consent.”
- The MIT Technology Review reports on a new AI model from Google’s DeepMind subsidiary focused on analyzing historical inscriptions. The model, named Aeneas (after the hero in Greek mythology), takes in images of inscriptions on stone or other materials, as well as partial transcriptions, and produces possible dates and origins for the inscription. The model will also attempt to “fill in” missing text, as many inscriptions, especially those dating from ancient times, are fragmented or damaged in some way. Aeneas builds on a previous Google archaeological model, Ithaca, with the added capability of cross-referencing analyzed text against a database of inscriptions from all over the world, which formed part of Aeneas’ training data. Aeneas is open-source, and a web interface is freely available for academics, museums, teachers, and students. (A hypothetical sketch of the kind of inputs and outputs described here appears at the end of this roundup.)
- The Financial Times reports on a new partnership between the U.K. government and OpenAI for AI services. The memorandum of understanding includes a pledge from OpenAI to consider investing in AI infrastructure in the country, including through data centers and the hiring of U.K. workers. In return, the U.K. government will explore ways to incorporate OpenAI’s technologies into public services, possibly using the data of U.K. citizens in the process. The U.K. has been seeking to speed up the pace of AI development, as private AI investment in the country totaled only $4.5 billion in 2024, compared to $109.1 billion in the United States and $9.3 billion in China. The U.K. government has been criticized for other pro-AI moves, such as a push to create an exemption in copyright law for AI training. Further AI-related measures are expected from the U.K. government in the coming months.
- Oceanographic Magazine reports on a new study that used AI to detect illegal fishing vessels in protected ocean areas. The study combined data from the Automatic Identification System (AIS), through which most industrial fishing vessels broadcast their GPS positions, with satellite imagery and specialized AI models from the nonprofit Global Fishing Watch to determine that illegal fishing activity was present in many marine protected areas (MPAs) around the world. Fishing vessels engaged in such illicit activities often disable their AIS transponders, but the combined approach used in the study presents a more comprehensive view of fishing activity in MPAs, which may help authorities better enforce the protections these areas were established to provide. (A hypothetical sketch of this kind of data-fusion check follows this item.)
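For readers curious how such a data-fusion check might look in practice, below is a minimal, hypothetical sketch. It is not Global Fishing Watch’s actual pipeline, whose models and data formats are not described in the article; all names, thresholds, and data structures are illustrative assumptions. The idea it shows is the one reported: satellite vessel detections that fall inside a marine protected area and have no nearby AIS broadcast are flagged as possible “dark” vessels for review.

```python
# Hypothetical illustration only -- not Global Fishing Watch's pipeline.
# Flags satellite vessel detections inside an MPA that lack a nearby AIS report.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Observation:
    lat: float        # degrees
    lon: float        # degrees
    timestamp: float  # Unix seconds


def inside_mpa(obs: Observation, bbox: Tuple[float, float, float, float]) -> bool:
    """Crude bounding-box test; bbox = (min_lat, min_lon, max_lat, max_lon)."""
    min_lat, min_lon, max_lat, max_lon = bbox
    return min_lat <= obs.lat <= max_lat and min_lon <= obs.lon <= max_lon


def flag_dark_vessels(
    satellite_detections: List[Observation],
    ais_reports: List[Observation],
    mpa_bbox: Tuple[float, float, float, float],
    max_deg: float = 0.05,     # illustrative spatial tolerance (~5 km of latitude)
    max_secs: float = 3600.0,  # illustrative temporal tolerance (one hour)
) -> List[Observation]:
    """Return detections inside the MPA with no matching AIS broadcast."""
    dark = []
    for det in satellite_detections:
        if not inside_mpa(det, mpa_bbox):
            continue
        matched = any(
            abs(det.lat - rep.lat) <= max_deg
            and abs(det.lon - rep.lon) <= max_deg
            and abs(det.timestamp - rep.timestamp) <= max_secs
            for rep in ais_reports
        )
        if not matched:
            dark.append(det)
    return dark
```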
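As noted in the Aeneas item above, here is a minimal, hypothetical sketch of the kind of inputs and outputs the article describes for that model. The class and field names below are illustrative assumptions and do not reflect DeepMind’s actual code or API; they simply restate the reported workflow: an inscription image plus a partial transcription go in, and a date estimate, a likely origin, a restored text, and a list of related inscriptions come out.

```python
# Hypothetical illustration only -- these types do not reflect DeepMind's actual
# Aeneas code or API; they mirror the inputs and outputs the article describes.
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class InscriptionQuery:
    image_path: str    # photograph of the inscription on stone or another medium
    partial_text: str  # transcription with damaged or missing spans marked, e.g. "[---]"


@dataclass
class InscriptionAnalysis:
    date_range: Tuple[int, int]  # estimated earliest and latest year
    likely_origin: str           # estimated region of origin
    restored_text: str           # model's attempt to fill in missing text
    parallels: List[str] = field(default_factory=list)  # similar inscriptions from the reference database


def analyze(query: InscriptionQuery) -> InscriptionAnalysis:
    """Placeholder standing in for a call to an epigraphy model such as Aeneas."""
    raise NotImplementedError("Illustrative stub; see DeepMind's release for the real tooling.")
```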