The AI in 800 Words column explores artificial intelligence and its relevance to research libraries through brief interviews that give attention to opportunities, challenges, governance, ethics, and the research enterprise.
In this installment, I speak with Geralyn Miller, senior director of the Health and Life Science Product Team at Microsoft.
Tell us a little about yourself.
My focus is on health and life science organizations including academic medical centers and research entities, both biotech and pharma. Health care is hungry for data and the advances AI can bring. At Microsoft, we’re investing in a platform for multimodal data management. We’re doing it with software as a service, based on a product called Microsoft Fabric. There’s nothing to install. It’s ready to go and comes with a set of capabilities you can run to do data transforms.
What’s the problem you’re focused on solving with the platform?
The platform we’re building brings disparate types of data together so that researchers and clinicians can use harmonized data more immediately. In health care, the data modalities and formats were created 20, 30, sometimes 40 years ago. Clearly, these formats are pre-AI. They’re standard and ubiquitous, but researchers have to shoehorn them into today’s workflows. Over and over, we see health researchers spend years building infrastructure to harmonize data, with limited grant funding that gets spent down before they can even get to their actual research question.
Tell us more about what motivates this work.
An ER patient may have 10 years of medical history, but a trauma surgeon has only a short time to get their head around it. The platform can understand voice data and ingest output from Dragon Ambient eXperience, a software tool that automatically turns doctor-patient conversations into medical notes, as well as X-rays, CT scans, PET scans, claims data, and social determinants of health data. So it’s useful for creating patient summaries based on electronic health records and clinical notes. Other uses involve calculating length of stay and improving operational efficiency. We’re also working with partners on predicting different diseases using omics data.
Who inspires you?
Satya Nadella! It’s his vision, and the humanity he shows and brings, which then becomes part of the culture of our company.
How do you meet new people and communities in the AI space?
The Coalition for Health AI (CHAI). CHAI has a diverse community, from patient-facing people, to technologists like me, to people running ethics and governance boards. Different institutions are represented: academic research hospitals, technology companies, and biopharma. A governance playbook we’ve worked on, which will be released next month, aims to help organizations that develop and implement AI with questions like: How do you analyze a model or a use case? What are the measures and metrics?
What work is still to be done?
One key question is: how do we ensure that all of the institutions that really need this technology have access to it? Not all hospitals have the resources to do a model evaluation, to spend the time to know if it will work in their community. Community hospitals have very real and immediate challenges with high workloads and practitioner burnout. Large academic medical centers that want to bring in a new model can do that, but even they will have challenges with clinician attrition rates and shortages.
If half a dozen people would help you for a month, what would you most want to do?
I would do something that’s needed adjacent to public health, where there are always resource constraints. You can’t get at the fundamental problems without taking a public health lens and seeing the bigger picture: not only contagious disease but also chronic health problems, with an eye toward advances in areas that are ripe for innovation.
Do you have any advice for research librarians vis-à-vis AI?
Be conscious of a wide diversity of perspectives. Some researchers you support will be ready to use AI, while others will say “no way.” Be ready to guide someone who is jumping in feet first, so they know about the frameworks for determining appropriate uses of AI. And, on the flip side, for those nervous about adoption, be ready to explain human-in-the-loop AI governance mechanisms.
What keeps you up at night about AI?
I do wonder, what are we giving up with the advent of AI? That’s not obvious up front. I had a computer science instructor who I remember saying: “There’s no win-win in CS.” (He was talking about performance optimization, but still.) I’ve been through many of these waves. I remember life before mobile phones! There is something you get, but there’s also something you give up. I’m sold on the benefits of AI; it makes me more productive and I see perspectives that I might not otherwise. That’s the plus.
But there’s also a minus. Before mobile phones, you could be off the grid. Now that device is always there. I wonder, what will I be thinking in 10 years about what we’ve given up? In some cases, it takes a long time to realize what you’ve lost.