Tuesday, June 24, 2025

Assistant Professor Secures $549K NSF Grant to Enhance Deep Learning Model Testing Efficiency

Enhancing Reliability in Deep Neural Networks: The Quest for Better Testing Methods

As artificial intelligence (AI) increasingly permeates critical infrastructure, from healthcare diagnostics and traffic systems to autonomous vehicles, the reliability of the deep neural network (DNN) models behind it is coming under scrutiny. As these technologies evolve, the stakes are higher than ever. Consider the consequences of a misdiagnosis in medical imaging or a traffic camera misreading a license plate; such errors could have dire, real-world implications.

In this high-pressure landscape, ensuring the reliability of DNNs is imperative. The prospect of an autonomous vehicle mistaking another car for open road is not just hypothetical; it is a legitimate concern that underscores the need for better methods of validating DNN performance. To meet these challenges, researchers are revisiting techniques such as mutation analysis, which can serve as a crucial tool for assessing the quality of the test data used to validate DNNs.

Understanding Mutation Analysis

Mutation analysis is a technique that evaluates DNN models and their tests by injecting artificial defects, known as mutants, into the models. By simulating potential failures and vulnerabilities, researchers can better understand how a model might behave under less-than-ideal conditions and whether the test data would expose such faults. Although mutation analysis has proven useful across many software domains, its high cost and intensive computational demands have so far limited its broad application to DNNs.
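
The article does not detail Ghanbari's tooling, but the core loop of mutation analysis is simple to sketch. In the toy Python example below (every name and parameter is illustrative, not taken from the project), mutants are created by perturbing individual weights of a tiny linear "model," and the mutation score is the fraction of mutants the test set can tell apart from the original:

```python
import numpy as np

def predict(weights, X):
    """Toy one-layer 'model': class = argmax of a linear map."""
    return np.argmax(X @ weights, axis=1)

def mutate(weights, rng, scale=0.5):
    """Create a mutant by perturbing one randomly chosen weight."""
    mutant = weights.copy()
    i, j = rng.integers(mutant.shape[0]), rng.integers(mutant.shape[1])
    mutant[i, j] += rng.normal(scale=scale)
    return mutant

def mutation_score(weights, X_test, n_mutants=100, seed=0):
    """Fraction of mutants whose predictions differ from the
    original model on at least one test input ('killed' mutants)."""
    rng = np.random.default_rng(seed)
    baseline = predict(weights, X_test)
    killed = sum(
        np.any(predict(mutate(weights, rng), X_test) != baseline)
        for _ in range(n_mutants)
    )
    return killed / n_mutants

rng = np.random.default_rng(42)
W = rng.normal(size=(8, 3))    # 8 input features -> 3 classes
X = rng.normal(size=(50, 8))   # 50 test inputs
print(f"mutation score: {mutation_score(W, X):.2f}")
```

Even this toy version exposes the cost problem the project targets: every mutant requires re-running the model on the entire test set, which quickly becomes prohibitive at the scale of production DNNs and datasets.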

The Research Breakthrough

Addressing these challenges, Ali Ghanbari, an assistant professor in the Department of Computer Science and Software Engineering, has embarked on a groundbreaking project titled "Practical Mutation Analysis for Quality Assurance of Deep Learning Systems." His approach focuses on developing techniques that expedite mutation analysis while lowering the associated costs. This innovative methodology aims to generate toolsets that simplify and streamline the quality assurance process for DNNs, making it more accessible to researchers and organizations alike.

The significance of Ghanbari’s research has not gone unnoticed; the National Science Foundation awarded him a substantial grant of $549,000 over three years, highlighting the importance of his contributions to the field.

Insights from a Leading Academic

“Dr. Ghanbari’s research has the potential to transform how deep learning AI systems are engineered, tested, and deployed in real-world scenarios,” remarked CSSE Chair Hari Narayanan. In Narayanan’s view, the work tackles a critical challenge in the widespread adoption of AI systems: the need for quality assurance testing that ensures robustness and reliability without overwhelming computational resources, a concern that is especially acute when even minor errors can have severe outcomes.

The Size Challenge in DNN Testing

One of the significant hurdles in testing DNN models is their sheer size. Ghanbari emphasizes that this complexity often makes it challenging for organizations and researchers to accurately predict how DNNs will perform in real-world contexts. “The testing phase frequently requires expensive hardware, and running large DNN models with vast datasets can quickly spiral into inefficiency and high costs,” he notes.

Harnessing the Power of Mathematical Techniques

To tackle these obstacles, Ghanbari is pursuing a novel approach built on the Fast Fourier Transform (FFT), a mathematical method renowned for analyzing and approximating the behavior of functions. A familiar example of this family of transforms at work is JPEG image compression: where uncompressed bitmap images are large, JPEG applies the Discrete Cosine Transform (DCT), a close relative of the FFT, to shrink file sizes dramatically without sacrificing essential information.
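
To make the idea concrete, here is a minimal NumPy sketch (an illustration of transform-based compression in general, not of Ghanbari's method): a signal is moved into the frequency domain, only its handful of dominant coefficients are kept, and it is reconstructed almost perfectly from a fraction of the original data:

```python
import numpy as np

# A signal built from two sine waves, sampled at 256 points.
t = np.linspace(0, 1, 256, endpoint=False)
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)

# Move to the frequency domain and keep only the 10 largest
# coefficients, zeroing the rest -- the essence of transform-based
# compression such as JPEG's DCT step.
coeffs = np.fft.fft(signal)
kept = np.argsort(np.abs(coeffs))[-10:]
compressed = np.zeros_like(coeffs)
compressed[kept] = coeffs[kept]

# Reconstruct from 10 of 256 coefficients and measure the loss.
recovered = np.fft.ifft(compressed).real
print(f"max reconstruction error: {np.max(np.abs(signal - recovered)):.2e}")
```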

By applying FFT-based approximation to DNN models, Ghanbari aims to compress them without losing their critical characteristics. The gain in efficiency lets researchers test models with far fewer resources, significantly reducing costs while accelerating the testing cycle and making it feasible to assess how DNNs will behave in real-world situations.
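
Again purely as an illustration, the same trick can be applied to a matrix, the basic building block of a DNN layer. The sketch below compresses a synthetic "weight matrix" with smooth structure by keeping 5% of its 2-D Fourier coefficients; real model weights, and the project's actual techniques, are of course far more involved:

```python
import numpy as np

def fft_compress(W, keep_ratio=0.05):
    """Approximate a matrix by keeping only its largest-magnitude
    2-D Fourier coefficients and transforming back."""
    F = np.fft.fft2(W).ravel()
    k = max(1, int(keep_ratio * F.size))
    top = np.argsort(np.abs(F))[-k:]    # indices of the k largest
    F_kept = np.zeros_like(F)
    F_kept[top] = F[top]
    return np.fft.ifft2(F_kept.reshape(W.shape)).real

# A synthetic 'weight matrix': smooth low-frequency structure
# plus a small irregular component.
rng = np.random.default_rng(0)
x = np.linspace(0, 1, 64)
W = np.outer(np.sin(2 * np.pi * x), np.cos(2 * np.pi * 3 * x))
W += 0.01 * rng.normal(size=W.shape)

W_approx = fft_compress(W, keep_ratio=0.05)
rel_err = np.linalg.norm(W - W_approx) / np.linalg.norm(W)
print(f"relative error keeping 5% of coefficients: {rel_err:.1%}")
```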

Future Aspirations and the Public Good

Ghanbari is dedicated not just to advancing academic knowledge but also to putting his findings into practice. “My team’s goal is to publish our results and make our prototypes publicly available,” he shares. He is optimistic that some of the techniques his team develops will be adopted by the software industry, driving positive impacts in everyday life.

The work being done in this realm signifies a monumental step forward in making AI systems more reliable, especially in high-stakes applications. With researchers like Ghanbari pioneering innovation at the intersection of software engineering and artificial intelligence, the future of DNN reliability looks promising.
