AI News Roundup – OpenAI and FDA discussing AI for drug approval, AI chatbots fuel wave of cheating at U.S. colleges, AI bots flood internet with “slop,” and more
- May 12, 2025
- Snippets
To help you stay on top of the latest news, our AI practice group has compiled a roundup of the developments we are following.
-
- WIRED reports that OpenAI and the U.S. Food and Drug Administration (FDA) have discussed the use of AI in the drug approval process. FDA Commissioner Marty Makary has publicly discussed the potential of AI to assist in the agency’s regulatory work, including a claim that the agency had already “completed our first AI-assisted scientific review for a product.” Sources also told WIRED that a team at OpenAI has been involved in discussions with both the FDA and Elon Musk’s Department of Government Efficiency (DOGE) regarding “cderGPT,” likely an AI model intended for the FDA’s Center for Drug Evaluation and Research (CDER). Robert Califf, a former FDA commissioner, said that the agency had been using AI as part of its review process for several years, and that “there has always been a quest to shorten review times and a broad consensus that AI could help.” The current FDA review process takes around one year to complete, though fast-tracking options exist for some promising drugs, and many drugs fail long before they ever reach that stage. A spokesman for a pharmaceutical industry group told WIRED that “ensuring medicines can be reviewed for safety and effectiveness in a timely manner to address patient needs is critical,” and that “while AI is still developing, harnessing it requires a thoughtful and risk-based approach with patients at the center.” OpenAI declined to comment on WIRED’s reporting.
-
- New York Magazine reports on the prevalence of AI-based cheating in higher education. A survey conducted in January 2023, just months after ChatGPT was first released, found that 90% of college students had used the AI chatbot for their homework assignments. New York Magazine interviewed several current and former students at colleges in the U.S. and Canada about their use of AI. One student used it for his programming courses, saying, “I’d just dump the prompt into ChatGPT and hand in whatever it spat out.” Another copy-pasted textbook chapters into ChatGPT, saying that “college is just how well I can use ChatGPT at this point.” University professors have widely bemoaned the rapid appearance of AI-generated text (often with clunky grammar or odd phrasing) in student submissions. At one college in Arkansas, some students even used AI to answer the prompt “briefly introduce yourself and say what you’re hoping to get out of this class.” To combat the rise of AI-based cheating, some professors have abandoned take-home assignments and essays altogether, returning to in-class written assignments or oral exams, especially as so-called “AI detectors” generally do not work as advertised. The effects on students may be even more drastic: one professor warned that “massive numbers of students are going to emerge from university with degrees, and into the workforce, who are essentially illiterate,” as students outsource their work and critical-thinking skills to AI models.
-
- Bloomberg Businessweek reports on the rapid proliferation of AI-powered bots and “slop” content on the internet. Fil Menczer, an informatics professor at Indiana University Bloomington, has researched internet bots since the early 2010s. At that time, most bots were simply automated accounts that posted or shared the same content. With the rise of generative AI, however, bots have become more elaborate and sometimes difficult to detect. Some have posited a “dead internet theory,” which holds that the vast majority of content on the internet is made or consumed by bots; Menczer says that generative AI has made such a theory seem more plausible. These bots often use generative AI to create vast quantities of content designed to steer unwitting users toward advertisements or, in some cases, scam sites. “Slop” has emerged as a term for this AI-generated content, especially when it is low quality. Not all slop is commercially motivated, however: Russia’s Pravda network has produced millions of articles across its websites in an attempt to manipulate the AI models that are trained on web content. Menczer is less pessimistic than some about the future of the internet in an age of AI slop. He argues that if social media sites are overrun by bots, humans will simply leave, cutting off those sites’ income streams and giving them an incentive to fight slop content, though it remains to be seen how effective that incentive will be.
-
- The New York Times reports on the growing issue of “hallucinations” that plague many generative AI models. Hallucination refers to generative AI output that is false, essentially “made up” by the model. Because AI models are generally based on probabilities, they always have some capacity to make mistakes. However, as AI models have grown more complex and advanced, hallucination rates appear to be rising, especially in so-called “reasoning” models. Before the industry’s focus shifted to reasoning, AI companies had steadily reduced the rate of hallucinations produced by their models, but that trend has reversed: one test of modern AI systems found hallucination rates as high as 79%, and OpenAI’s newest reasoning model, o4-mini, had a hallucination rate of 48%. Hallucinations are a particularly thorny issue in sensitive applications of AI, such as legal filings, medical analysis, or the handling of personal information, and have already led to embarrassing reprimands for lawyers and others caught using AI in inappropriate situations. OpenAI has said that more research is needed to understand why hallucinations occur even in highly advanced models, while Hannaneh Hajishirzi, an AI researcher at the University of Washington, said the underlying problem is that “we still don’t know how these models work exactly.” Further work on these matters is expected in the coming months as AI companies strive to improve their models.
-
- The MIT Technology Review reports on a new AI-powered real-time translation system for headphones. Spatial Speech Translation, developed at the University of Washington, tracks multiple speakers and translates their speech in real time into audio output for headphone wearers. While other AI-powered live-translation systems exist, Spatial Speech Translation appears to be the first to handle multiple speakers at once and let users distinguish between them. The system is designed to work with off-the-shelf headphones and a laptop, which during testing ran on Apple’s M2 chip. It applies two AI models: the first divides the space around the listener into zones, listens for potential speakers, and indicates their direction; the second translates each speaker’s words into English text while maintaining pitch and other vocal characteristics when the text is played back as audio for the user. There is currently a delay between when a speaker says something and when the AI translation begins, so the researchers who developed Spatial Speech Translation are now working to reduce that latency and make translated conversations feel more natural. A simplified sketch of the two-stage design is shown below.
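
To make the two-model pipeline concrete, here is a minimal, hypothetical sketch of how a speaker-localization stage could feed per-speaker audio streams into a translation stage. None of the names, data structures, or placeholder logic below comes from the actual Spatial Speech Translation system; they are assumptions used purely for illustration.

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: the real Spatial Speech Translation code is not
# public here, so these types and functions are illustrative placeholders.

@dataclass
class SpeakerStream:
    speaker_id: int
    direction_deg: float                 # estimated direction relative to the listener
    audio_chunks: list = field(default_factory=list)

def localize_speakers(audio_frame: bytes) -> list[SpeakerStream]:
    """Stage 1 (assumed): divide the space around the listener into zones,
    detect active speakers, and return one separated stream per speaker."""
    # Placeholder: pretend two speakers were detected at fixed directions.
    return [
        SpeakerStream(speaker_id=0, direction_deg=-45.0, audio_chunks=[audio_frame]),
        SpeakerStream(speaker_id=1, direction_deg=30.0, audio_chunks=[audio_frame]),
    ]

def translate_stream(stream: SpeakerStream, target_lang: str = "en") -> str:
    """Stage 2 (assumed): translate one speaker's separated audio into
    target-language text, to be re-synthesized with that speaker's voice."""
    # Placeholder translation result keyed to the speaker and direction.
    return f"[speaker {stream.speaker_id} @ {stream.direction_deg:+.0f} deg] translated text ({target_lang})"

def process_frame(audio_frame: bytes) -> list[str]:
    """Run both stages on a single frame of microphone audio."""
    return [translate_stream(s) for s in localize_speakers(audio_frame)]

if __name__ == "__main__":
    for line in process_frame(b"\x00" * 1024):  # dummy audio frame
        print(line)
```

In this sketch, localization runs on raw audio frames and produces one separated stream per detected speaker, and translation then runs independently on each stream, mirroring the division of labor between the two models described above.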