AI News Roundup – Impostor uses AI to imitate U.S. Secretary of State, EU unveils code of practice for AI regulations, AI-powered line calling proves controversial at Wimbledon, and more

To help you stay on top of the latest news, our AI practice group has compiled a roundup of the developments we are following.

    • The Washington Post reports that an impostor used an AI-generated voice to impersonate U.S. Secretary of State Marco Rubio. The impostor contacted several foreign diplomats as well as a U.S. governor and congressman, sending them voice and text messages through the app Signal that used AI to mimic Rubio’s voice and texting style. The campaign began in late June, when the impostor created an account labeled with a fictitious government email address designed to appear to be Rubio’s. State Department officials declined to discuss the content of the false messages with The Washington Post, but said that both the Department and the FBI were investigating the matter.
    • The Financial Times reports on the European Union’s recently unveiled “code of practice,” a set of guidelines for implementing the bloc’s AI Act regulations. The final version of the code contains provisions related to copyright protections for content creators as well as risk-assessment policies. The EU has pushed ahead with these AI rules despite strong criticism from the AI industry, especially the dominant U.S.-based companies. This month, the leaders of several large European companies, including aerospace conglomerate Airbus, the bank BNP Paribas, and French AI startup Mistral, publicly criticized the AI Act as harming Europe’s standing in the global AI race. Indeed, the code was originally due in May but was delayed amid pressure from outside groups. In response to the criticism, Henna Virkkunen, the EU’s technology chief, said the code of practice was important “in making the most advanced AI models available in Europe not only innovative, but also safe and transparent.” Further changes to the implementation of the AI Act, however, are still under discussion by the European Commission.
    • The Guardian reports on the use of AI-assisted line calling at the Wimbledon tennis championships. This year’s event is the first in which the all-important task of line calling is performed entirely by an AI-powered automated system, replacing the line judges who had made the calls for the first 148 years of the tournament’s history. Electronic line calling (ELC) has been in use since 2018, and its usage has grown in recent years to include three of the four Grand Slam events, with the French Open the only one to retain human line judges. However, some players are wary of the new technology at Wimbledon; Russian player Anastasia Pavlyuchenkova claimed that ELC had “stolen” a game from her after a shot from her opponent, Sonay Kartal, clearly went out of bounds but was marked in by ELC. After a delay and an investigation, it was determined that several ELC cameras on Pavlyuchenkova’s side of the court were nonfunctional, and the point was replayed. Pavlyuchenkova went on to win the match, though the incident reinforced calls for contingency plans for when the technology fails, such as a video replay system for use by umpires.
    • WIRED reports on McDonald’s use of an AI bot for hiring. AI software firm Paradox.ai built the “McHire” tool for the fast-food company to screen applicants and collect their information and resumes. However, security researchers revealed that the system had glaring security flaws, including the use of the password “123456,” that could have allowed hackers to access nearly 64 million records containing applicants’ sensitive information. A spokesperson for Paradox.ai said that the company would publish the researchers’ findings, but noted that only a fraction of the accessed records contained personal information and that no third parties (other than the researchers) had accessed the information.
    • Reuters reports on a new study of the effect of AI usage on developer productivity. The AI research nonprofit Model Evaluation and Threat Research (METR) evaluated a group of experienced software developers who used the AI coding tool Cursor to perform tasks in a familiar codebase. Before the study, the developers estimated that the AI tool would speed up their tasks by 24%; afterward, they estimated that it had done so by 20%. In fact, the study found that using AI increased the time it took to complete tasks by 19%. The authors said they were “shocked” by the results, as previous studies had found productivity gains from AI use. They attributed the discrepancy to the fact that many studies rely on software-development benchmarks, which don’t always reflect real-world tasks. Despite the findings, a majority of the study’s participants continue to use Cursor, saying it makes the development experience easier and more pleasant.