AI News Roundup – Antitrust investigations, former OpenAI employees criticize culture, new Windows AI feature, and more
Article co-written by Yuri Levin-Schwartz, Ph.D., a law clerk at MBHB.
- June 10, 2024
- Snippets
Practices & Technologies: Artificial Intelligence

To help you stay on top of the latest news, our AI practice group has compiled a roundup of the developments we are following.
- The U.S. Department of Justice and the Federal Trade Commission (FTC) are planning to open antitrust investigations into the AI-related business practices of NVIDIA, Microsoft and OpenAI, according to a report from POLITICO. The Justice Department will focus on NVIDIA’s dominance of the AI processor market, while the FTC will examine whether Microsoft and OpenAI have an unfair advantage in the space. The Wall Street Journal reported that the FTC’s investigation will cover Microsoft’s recruitment of AI researchers from Inflection, a San Francisco-based startup. The new investigations reflect an ongoing effort by U.S. regulators to address the rapid development of AI technologies. Jonathan Kanter, Assistant Attorney General for Antitrust at the Department of Justice, said in a recent interview with the Financial Times that regulators must act “with urgency” to ensure adequate competition in the AI market.
- CNN reports on the Computex 2024 expo in Taipei, Taiwan, where NVIDIA, AMD and Intel unveiled new AI-focused processors this past week. NVIDIA CEO Jensen Huang announced the Rubin AI chip platform, scheduled to debut in 2026. As NVIDIA accounts for 70% of AI processor sales, observers have focused on AMD and Intel’s attempts to catch up to the market leader. AMD CEO Lisa Su unveiled the MI325X AI accelerator, which will launch in Q4 of 2024, while Intel CEO Patrick Gelsinger announced the company’s Gaudi 3 AI accelerators, which were touted as being priced much lower than competitors’ offerings. Su declared that “AI is our number one priority,” reflecting the response of semiconductor designers and manufacturers to the ongoing AI boom. However, it remains to be seen whether AMD and Intel will gain ground on NVIDIA’s dominance of the market.
- A group of current and former OpenAI employees has signed an open letter criticizing a “culture of recklessness and secrecy” at the company, according to The New York Times. The letter calls upon advanced AI companies to “support a culture of open criticism” around AI risk issues, which the signatories claim OpenAI has failed to address. One signatory, Daniel Kokotajlo, claims that Microsoft tested the then-unreleased GPT-4 model in 2022 as part of its Bing search engine without obtaining approval from OpenAI’s internal safety team, and that when OpenAI learned of this, the company did not stop Microsoft from making the model more broadly available. A Microsoft spokesman initially denied Kokotajlo’s allegations, but after the publication of The New York Times’ article, the company confirmed his claims regarding GPT-4. The letter also calls for an end to non-disparagement clauses in employee contracts, which prevent former employees from criticizing the company after they depart. OpenAI’s non-disparagement practices came under heavy criticism last month when it was revealed that signing such clauses was a requirement for ex-employees to keep their vested equity in the company; the letter addresses these practices as well, stating that “broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues.” While OpenAI CEO Sam Altman has publicly apologized for the practices and directed the company to remove the clauses from its employment contracts, the employees maintain that the company needs further reform to adequately address an AI risk that, in their view, could be catastrophic for humanity.
- An upcoming Windows AI feature known as “Recall” has been found to have serious security risks, according to a new report from The Verge. Recall periodically screenshots everything seen or done on one’s computer, uses AI models to recognize the text on screen, and then stores that text in an SQLite database so that past activity can be searched and retrieved. According to cybersecurity researcher Kevin Beaumont, Recall stores this sensitive data in plaintext: the SQLite database is not encrypted, which “could make it trivial for an attacker to use malware to extract the database and its contents” (a minimal sketch of such an extraction appears below). The information is not used to train Microsoft’s web-based AI models, but the database is available to the Administrator user on the Windows system. Recall’s troubles come as Microsoft has focused on security concerns in recent weeks, following a directive from CEO Satya Nadella. Microsoft did not respond to The Verge’s request for comment.
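To illustrate Beaumont’s point about how little effort extraction would require, the following is a minimal sketch in Python, assuming a hypothetical file path and table layout (the `captures` table, the `captured_at` and `text` columns, and the path are invented for illustration; Recall’s actual on-disk schema has not been published in this detail). Because the store is ordinary, unencrypted SQLite, any process that can read the file can dump every recognized-text entry using nothing but the standard library:

```python
import sqlite3

# Hypothetical path for illustration only; not Recall's documented location.
DB_PATH = r"C:\Users\someuser\AppData\Local\Recall\recall.db"

def dump_recognized_text(db_path: str) -> None:
    """Print every OCR'd text snippet stored in an unencrypted SQLite file.

    No decryption step is needed: any code running with read access to the
    file (for example, malware running with the user's privileges) can
    simply open and query it.
    """
    conn = sqlite3.connect(db_path)
    try:
        # Assumed table and columns: captures(captured_at TEXT, text TEXT).
        for captured_at, text in conn.execute(
            "SELECT captured_at, text FROM captures ORDER BY captured_at"
        ):
            print(f"[{captured_at}] {text}")
    finally:
        conn.close()

if __name__ == "__main__":
    dump_recognized_text(DB_PATH)
```

The point of the sketch is that the attack surface is a plain file read plus a standard SQL query; nothing about the format itself resists an attacker who already has access to the user’s files.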