AI News Roundup – Anthropic releases Claude 4 models, Google announces new Gemini updates at I/O conference, OpenAI partners with iPhone designer to make AI devices, and more
- May 27, 2025
- Snippets
To help you stay on top of the latest news, our AI practice group has compiled a roundup of the developments we are following.
-
- CNBC reports on the release of Anthropic’s latest AI models, Claude 4 Opus and Claude 4 Sonnet. The company claims that the Claude 4 series comprises its most powerful models yet, focused on performing tasks on behalf of users rather than on more traditional chatbot capabilities. Claude 4 Opus, in particular, is claimed to be able to work on coding tasks autonomously for over seven hours straight and to build a body of knowledge about a particular codebase when given access to local files. WIRED reports that Claude 4 Opus improves on its predecessors’ ability to play Pokémon Red. Games have been used to test AI models’ ability to solve problems and adapt to new situations, and the new model is said to be able to play the game for 24 hours, as opposed to only 45 minutes for previous models. Agentic abilities, according to an Anthropic researcher, are the key features the company is cultivating in its models in order to create AI that is a “virtual collaborator.” Claude 4 Opus is available now to paid subscribers to Anthropic’s AI products, while free users have access only to Claude 4 Sonnet.
-
- WIRED reports on Google’s 2025 I/O developer conference, where the company announced several new AI features across its software ecosystem. Google announced that Gemini, its flagship AI chatbot, will replace Google Assistant, which is built into the Android mobile operating system. As part of this shift, Google announced Gemini Live, a feature that synthesizes input from a device’s camera, a user’s voice commands, and other context to perform actions on the user’s behalf, such as making phone calls and searching the internet. The feature bears similarities to Project Astra, the Google DeepMind-developed agentic AI tool that this roundup covered this past December. For Google Search, the company announced an “AI Mode” that users can switch to via a tab, augmenting the AI Overviews that already appear in search results. The company also announced partnerships with Gentle Monster and Warby Parker to develop a line of AI-equipped augmented reality glasses as part of the Android XR platform, competing with comparable offerings from AI rival Meta.
-
- The New York Times reports that OpenAI has acquired IO, a startup founded and led by Jony Ive, for $6.5 billion. Ive, who designed Apple’s famous iPhone, will bring a team of around 55 engineers and researchers to help design OpenAI’s forthcoming series of AI-focused smart devices, though Ive and his design studio LoveFrom will remain formally independent of OpenAI and will pursue additional projects. In a joint interview, Ive and OpenAI CEO Sam Altman did not provide details regarding the look or functionality of their planned devices, but said that they hoped to unveil something in 2026. The NYT speculates that OpenAI is “looking beyond an era of smartphones,” and developing devices such as pendants or smart glasses that use AI to process information from the world in real time.
-
- The South China Morning Post reports on a Chinese initiative to launch AI-powered supercomputers into low Earth orbit. The first 12 satellites were launched this past week as part of the world’s first “orbital computing constellation.” The constellation, developed in a partnership between the startup ADA Space and the Zhejiang Lab, is intended to take advantage of the abundant solar power and simpler cooling available in orbit to run AI supercomputers, which would in turn provide computing services for satellite-based tasks such as orbital imaging. Processing data in orbit bypasses the limited communications bandwidth between the satellites and ground stations, which has the potential to greatly improve the efficiency of such tasks.
-
- Bloomberg reports on the release of an Arabic language-focused AI model by the United Arab Emirates. Falcon Arabic, developed by a research arm of the Emirate of Abu Dhabi, was trained on a dataset consisting exclusively of Modern Standard Arabic and a variety of regional Arabic dialects. Falcon Arabic, along with its smaller counterpart, Falcon H1, is said to match or exceed the performance of comparable models from foreign competitors such as the United States’ Meta and China’s Alibaba. In 2023, a Falcon series model ranked first among open-source models tracked by Hugging Face, but as of last month no Falcon model appeared in the top 500 of the same ranking. The new Arabic-focused model is only the latest element of the UAE’s AI strategy, which has included the construction of data centers and investments in AI companies such as OpenAI and Elon Musk’s xAI.