Whether via sensors or health records, the volume of medical data being recorded is now far beyond what human experts can analyze, especially at the bedside. At the same time, much essential information goes unrecorded. Thus, to make a positive impact in healthcare, tools to assist clinical decision-making must not only robustly synthesize (often low-quality) inputs but also communicate their limitations and assumptions to the ultimate human decision-maker.
My lab addresses this challenge by building models relevant for decision-making, deriving treatment strategies from these models, and exposing enough of a strategy's rationale so clinicians can trust it appropriately. My talk will touch upon each of these three elements: probabilistic modeling, decision-making under uncertainty, and interpretable machine learning systems. I will discuss how the pursuit of assisting clinicians with antidepressant recommendations led us to finding (and resolving) a decade-old failing of supervised topic models; how we identified (seemingly) better strategies for management in critical care settings, realized they were (perhaps) bogus, and regained (some of) our trust in them; and how our recommendations for interpretability and robustness are being heard throughout the world. Along the way, I will emphasize the many interesting technical questions whose answers have potential for real impact in health, as well as the importance of performing this research carefully.
Finale Doshi-Velez is an assistant professor of computer science at Harvard University. She completed her MSc at the University of Cambridge as a Marshall Scholar, her PhD in computer science at MIT, and her postdoc at Harvard Medical School. Her research focuses on probabilistic modeling, reinforcement learning, and interpretable machine learning, with applications to healthcare.