Beyond the black box

Cynthia Rudin of Duke University discusses interpretable deep learning at SPIE Medical Imaging
09 February 2024
Cynthia Rudin of Duke University
Credit: Duke University

“We have a lot going on!” says Cynthia Rudin, head of the Interpretable Machine Learning Lab at Duke University. “My lab develops interpretable neural networks — these can analyze EEG signals from critically ill patients who may have seizures, and PPG/ECG signals from wearables for heart monitoring. Those networks explain their reasoning processes, so that humans don’t have to blindly trust them.”

Rudin’s lab has also recently made important breakthroughs on tabular data. They have been studying the Rashomon Effect, which occurs when a dataset admits many equally accurate predictive models that differ from one another. “We figured out how to enumerate all the good sparse decision trees from a dataset (quantifying the Rashomon Effect),” says Rudin. “This allows us to answer questions like ‘can I make a model that is equally accurate but more algorithmically fair?’, ‘can I incorporate a constraint into my model without losing accuracy?’, and ‘is this variable important to every good model?’ We also work on dimension reduction for data visualization, which gives amazing insight into high-dimensional data before doing any modeling at all.”
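For readers who want a concrete picture of what a Rashomon-set question looks like, the sketch below samples many sparse decision trees and keeps those whose accuracy falls within a small tolerance of the best. It is only a rough illustration built with scikit-learn; the dataset, the 0.02 tolerance, and the depth limit are arbitrary assumptions, and random sampling stands in for the exact enumeration method developed in Rudin’s lab.

```python
# Illustrative sketch only: approximates a "set of good models" by sampling
# many sparse trees, rather than enumerating them exactly.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Fit many sparse (shallow) trees with randomized splits so the candidates differ.
candidates = [
    DecisionTreeClassifier(max_depth=3, splitter="random", random_state=seed)
    .fit(X_tr, y_tr)
    for seed in range(500)
]
scores = [t.score(X_te, y_te) for t in candidates]

# Keep every tree within epsilon of the best accuracy: a rough stand-in for a Rashomon set.
epsilon = 0.02
best = max(scores)
rashomon = [t for t, s in zip(candidates, scores) if s >= best - epsilon]

# Ask a Rashomon-style question: does feature 0 matter to every good model?
uses_feature_0 = [t.feature_importances_[0] > 0 for t in rashomon]
print(f"{len(rashomon)} good trees; {sum(uses_feature_0)} of them use feature 0")
```

An exact enumeration would let the same question (“is this variable important to every good model?”) be answered definitively rather than approximately over a sample.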

Rudin, also the Earl D. McLean, Jr. Professor of Computer Science and Engineering at Duke, will discuss applying interpretable neural networks to analyze mammograms and EEG signals in a keynote talk at SPIE Medical Imaging.

What led to your interest in working with artificial intelligence?
I loved the idea of predicting the future using only data from the past. I started out in applied math, which is in some ways the opposite of machine learning. In applied math, you start with a model that’s supposed to describe the world. In machine learning, it’s all about the data; the model serves the data. The model can be very flexible and doesn’t need to have a hypothesis about the world behind it. I still adore both fields.

How would you define “interpretable machine learning” as opposed to “explainable artificial intelligence”?
The difference between interpretable machine learning and explainable artificial intelligence (XAI) is extremely important! In interpretable machine learning, we work with predictive models that are *constrained* so that they can explain *their own* reasoning processes to humans. In XAI, researchers usually try to explain black box models post-hoc (after those models have been developed, rather than during model development). There are a lot of problems with post-hoc explanations — they are generally not faithful to the underlying model, or they are so incomplete that they become useless anyway.

What do you see as the most important aspect of your research at this time?
There are very, very few labs that focus only on interpretable machine learning. This topic is absolutely critical to trust in machine learning, to making models useful in practice, and to ensuring fairness. There is no such thing as fairness without transparency in my view. Just telling people how critical interpretability is — and showing that it can be done in many domains without sacrificing accuracy — is the most important thing I think I can do right now.

What do you see as the future of AI in medical imaging? What would you like to see?
We need interpretability in medical imaging, particularly for high-stakes decisions that are not obvious from the image. Interpretable machine learning is the bridge between radiologists and data. I’ll be talking about mammography in my presentation, specifically the decision of whether to biopsy a lesion, as well as five-year risk prediction. There are so many more important prediction problems out there.

What would you like attendees to learn from your talk at SPIE Medical Imaging?
First, I would like them to understand that interpretable machine learning can provide radiologists with actual medical insight. Usually, machine learning algorithms are just designed to mimic what humans would do anyway, but in this case they go beyond that and tell humans something they didn’t know before.

Also, I want to share how prototypical neural networks operate. These networks use case-based reasoning: each new image is compared with several prototypical images, and the algorithm points out the similarities. That’s how the network is able to explain its reasoning to humans.
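To make the case-based reasoning concrete, here is a minimal PyTorch sketch of a prototype similarity layer in the spirit of prototype-based networks. The feature-map shapes, prototype count, and similarity function are illustrative assumptions rather than the exact architecture discussed in the talk; in a full model, a convolutional backbone would produce the feature maps and a final linear layer would turn the prototype scores into a prediction.

```python
# Simplified, assumption-laden sketch of a prototype similarity layer.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeLayer(nn.Module):
    def __init__(self, num_prototypes=10, channels=128):
        super().__init__()
        # Each prototype is a learned latent "patch" the network compares against.
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, channels, 1, 1))

    def forward(self, features):
        # features: (batch, channels, H, W) from a convolutional backbone.
        b, c, h, w = features.shape
        p = self.prototypes.view(1, -1, c, 1, 1)   # (1, P, C, 1, 1)
        f = features.unsqueeze(1)                  # (B, 1, C, H, W)
        # Squared L2 distance between every image patch and every prototype.
        dist = ((f - p) ** 2).sum(dim=2)           # (B, P, H, W)
        # Convert distance to similarity; the closest patch gives the prototype's score.
        sim = torch.log((dist + 1) / (dist + 1e-4))
        scores = F.max_pool2d(sim, kernel_size=(h, w)).flatten(1)  # (B, P)
        return scores, dist  # dist shows where on the image each prototype matches best

# Toy usage: 2 images, 128-channel 7x7 feature maps, 10 prototypes.
layer = PrototypeLayer()
scores, dist = layer(torch.randn(2, 128, 7, 7))
print(scores.shape)  # torch.Size([2, 10])
```

Because each score comes from the image patch closest to a learned prototype, the distance map can be visualized to show which part of a new image “looks like” which prototypical case.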

 
