Thursday, October 23, 2025

Akari Asai, Ph.D. Alum, Named MIT Technology Review Innovator Under 35 for Cutting LLM Hallucinations


Akari Asai, a research scientist at the Allen Institute for AI, is recognized for her groundbreaking work in addressing the inaccuracies of large language models (LLMs), focusing on enhancing their factual reliability in critical applications.

Published October 2, 2025 · Source: Allen School News via news.cs.washington.edu

Akari Asai (Ph.D., ’25), a research scientist at the Allen Institute for AI (Ai2) and incoming faculty member at Carnegie Mellon University, is tackling one of the core challenges facing today’s large language models (LLMs). Despite their growing capability and popularity, LLMs often get facts wrong or stitch tidbits of information into a nonsensical response, a failure mode known as hallucination. This is especially concerning when LLMs are used for scientific literature or software development, where accuracy is vital.

Core Topic, Plainly Explained

For Asai, the solution is developing retrieval-augmented language models, a new class of LLMs that pull relevant information from an external datastore using a query the LLM itself generates. Her research helped establish the foundations of retrieval-augmented generation (RAG) and demonstrate its effectiveness at reducing hallucinations. She has since added adaptive and self-improvement capabilities and applied these innovations to practical problems such as multilingual natural language processing (NLP).

Asai was recently named one of MIT Technology Review’s Innovators Under 35 2025 for her pioneering research improving artificial intelligence. The TR35 award recognizes scientists and entrepreneurs from around the world who “stood out for their early accomplishments and the ways they’re using their skills and expertise to tackle important problems.”

“With the rapid adoption of LLMs, the need to investigate their limitations, develop more powerful models and apply them in safety-critical domains has never been more urgent,” said Asai.

Key Facts & Evidence

Traditional LLMs generate responses to user inputs based solely on their training data. By contrast, RAG augments the LLM with an information retrieval component that uses the user input to first pull relevant passages from an external datastore. This lets the model generate responses that incorporate up-to-date information without additional training. By checking against this datastore, an LLM can better detect when it is about to state a falsehood and correct it using the retrieved information.
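To make that pipeline concrete, here is a minimal, illustrative sketch of the retrieve-then-generate loop described above. The datastore and the retrieve() and generate() helpers are toy stand-ins invented for this example, not Asai’s implementation or any particular library’s API.

```python
# Illustrative sketch of retrieval-augmented generation (RAG); all pieces are toy stand-ins.

DATASTORE = [
    "Retrieval-augmented generation (RAG) pairs a language model with an external datastore.",
    "Self-RAG uses reflection tokens to decide when to retrieve and to critique its own output.",
    "OpenScholar helps researchers navigate and synthesize scientific literature.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank datastore passages by simple word overlap with the query (toy retriever)."""
    q_words = set(query.lower().split())
    scored = sorted(DATASTORE, key=lambda p: len(q_words & set(p.lower().split())), reverse=True)
    return scored[:k]

def generate(prompt: str) -> str:
    """Placeholder for an LLM call; a real system would query a language model here."""
    return f"[model answer conditioned on]\n{prompt}"

def rag_answer(user_query: str) -> str:
    passages = retrieve(user_query)                   # pull supporting evidence first
    context = "\n".join(f"- {p}" for p in passages)   # ground the prompt in retrieved text
    prompt = f"Context:\n{context}\n\nQuestion: {user_query}\nAnswer using only the context."
    return generate(prompt)

print(rag_answer("What is retrieval-augmented generation?"))
```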

Asai took that research a step further: she and her collaborators introduced Self-Reflective RAG, or Self-RAG, which improves LLMs’ quality and factual accuracy through retrieval and self-reflection. With Self-RAG, a model uses reflection tokens to decide when to retrieve relevant external information and to critique the quality of its own generations. While standard RAG retrieves information only a fixed number of times regardless of need, Self-RAG can retrieve on demand and multiple times, making it useful for diverse downstream queries, including instruction following.
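The control flow can be sketched as follows, reusing the toy retrieve(), generate(), and rag_answer() helpers from the previous example. In the real Self-RAG, the retrieve-or-not decision and the critique are special reflection tokens the model learns to emit during training; the hand-written heuristics here merely stand in for them.

```python
def needs_retrieval(query: str) -> bool:
    """Stand-in for the model emitting a retrieve/no-retrieve reflection token."""
    return any(w in query.lower() for w in ("who", "what", "when", "where", "why", "how"))

def is_supported(draft: str, passages: list[str]) -> bool:
    """Stand-in for critique tokens: crude word-overlap check that the draft
    shares content with the retrieved passages."""
    evidence = " ".join(passages).lower()
    return any(word in evidence for word in draft.lower().split())

def self_rag_answer(query: str, max_rounds: int = 3) -> str:
    if not needs_retrieval(query):          # adaptive: skip retrieval when it is not needed
        return generate(query)
    passages: list[str] = []
    draft = ""
    for _ in range(max_rounds):             # unlike one-shot RAG, retrieval can repeat
        passages += retrieve(query)
        draft = rag_answer(query)           # regenerate, grounded in retrieved evidence
        if is_supported(draft, passages):   # self-critique: stop once the draft looks supported
            break
    return draft

print(self_rag_answer("What does Self-RAG use reflection tokens for?"))
```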

She is interested in applying these retrieval-augmented language models to real-world problems. In 2024, Asai introduced OpenScholar, a model that helps scientists more effectively and efficiently navigate and synthesize scientific literature. She has also investigated how retrieval-augmented language models can aid code generation, and she helped develop resources that improve information access across languages, such as AfriQA, the first cross-lingual question answering dataset focused on African languages.

“Akari is among the pioneers in advancing retrieval-augmented language models, introducing several paradigm shifts in this area of research,” said Allen School professor Hannaneh Hajishirzi, Asai’s Ph.D. advisor and also senior director at Ai2. “Akari’s work not only provides a foundational framework but also highlights practical applications, particularly in synthesizing scientific literature.”

This award comes on the heels of another MIT Technology Review recognition. Last year, Asai was named one of the publication’s Innovators Under 35 Japan. She has also received an IBM Ph.D. Fellowship and was selected as one of this year’s Forbes 30 Under 30 Asia in the Healthcare and Science category.

How It Works

RAG models enhance traditional LLMs by integrating a retrieval system that pulls information from an external datastore in response to user prompts, supporting more up-to-date and factual responses. A minimal code sketch follows the steps below.

  • Step 1: User inputs a query into the LLM.
  • Step 2: The LLM generates a query to retrieve relevant information from an external datastore.
  • Step 3: The model incorporates the retrieved information into its response, improving accuracy.
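For concreteness, the three steps can be strung together end to end, again reusing the toy retrieve() and generate() helpers from the first sketch; make_retrieval_query() is a hypothetical placeholder for the retrieval query the LLM would generate in Step 2.

```python
def make_retrieval_query(user_query: str) -> str:
    """Step 2 (toy version): turn the user's input into a datastore query.
    A real system would have the LLM generate this search query."""
    return user_query.rstrip("?")

def answer(user_query: str) -> str:
    search_query = make_retrieval_query(user_query)   # Step 2: model-generated retrieval query
    evidence = " ".join(retrieve(search_query))       # pull passages from the external datastore
    prompt = f"Evidence: {evidence}\nQuestion: {user_query}"
    return generate(prompt)                           # Step 3: answer grounded in the evidence

print(answer("How does OpenScholar help researchers?"))  # Step 1: the user's query
```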

Implications & Use Cases

Asai’s research has significant implications for various fields. For instance, in academia, OpenScholar aids researchers in efficiently navigating scientific literature, while in software development, retrieval-augmented models enhance code generation accuracy. These advancements help bridge gaps in information access, especially in multilingual contexts.

Limits & Unknowns

Not specified in the source.

What’s Next

Asai’s ongoing projects focus on refining OpenScholar and expanding the capabilities of retrieval-augmented language models to further tackle challenges in scientific literature synthesis and more effective code generation.
