Unity Health interview part 3 of 3
In this third interview, Derek Beaton (Director of Advanced Analytics at Unity Health Toronto) shares his experiences of applying data science and advanced analytics to solve healthcare problems.
This is the third in a series (other parts: part 1 and part 2) of interviews with leaders from Unity Health Toronto, including Muhammad Mamdani (Vice-President of Data Science and Advanced Analytics), Michael Page (Director of AI Commercialization), and Derek Beaton (Director of Advanced Analytics), discussing the organization's pioneering efforts in integrating artificial intelligence into healthcare. They share their strategies, challenges, and successes in deploying AI-driven solutions that improve patient outcomes, enhance operational efficiency, and address real-world clinical problems. In collaboration with clinicians, researchers, and technology partners, Unity Health demonstrates how AI is reshaping the future of healthcare.
Read more about AI at Unity Health (unityhealth.to).
Derek Beaton, what data sources are most critical to Unity Health’s AI projects, and how do you manage the challenges of data quality, consistency, and integration?
- Instead of targeting specific data sources, our approach is to focus on the problem at hand and then identify the data sources necessary to address it. This could range from medical imaging data for certain diagnostic problems, to lab and vitals data for clinical prognostication problems, to text notes for treatment-related problems.
- All sources of data are viewed as critical, given the wide variety of problems that can be tackled with AI.
- We embrace the challenges of data quality, consistency, and integration in real-world settings! Generally, we manage this by understanding how and why data may have issues, and the differences we usually see between historical and real-time data. We manage these issues by building our solutions around these challenges (a toy illustration of this kind of check follows below).
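As a purely illustrative sketch (assuming pandas-style tabular data; the column names and thresholds are hypothetical, and this is not Unity Health's actual tooling), the snippet below shows the kind of lightweight check that can surface differences between a historical extract and an incoming real-time batch before they reach a model.

```python
# Illustrative sketch only: compare a historical (training) extract against an
# incoming real-time batch, reporting missingness and a crude drift signal.
import pandas as pd

def compare_batches(historic: pd.DataFrame, realtime: pd.DataFrame, cols: list[str]) -> dict:
    report = {}
    for col in cols:
        hist, live = historic[col], realtime[col]
        report[col] = {
            "missing_rate_hist": float(hist.isna().mean()),
            "missing_rate_live": float(live.isna().mean()),
            # Crude drift signal: shift in mean, scaled by the historical spread
            "mean_shift_in_sds": float(abs(live.mean() - hist.mean()) / (hist.std() or 1.0)),
        }
    return report

# Hypothetical vitals columns, for illustration only
historic = pd.DataFrame({"heart_rate": [72, 80, 65, 90], "sbp": [120, 135, 110, 140]})
realtime = pd.DataFrame({"heart_rate": [75, None, 88], "sbp": [118, 132, 145]})
print(compare_batches(historic, realtime, ["heart_rate", "sbp"]))
```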
How do you evaluate the performance of your AI/ML models, and what metrics do you consider most important in a healthcare context?
- This kind of question is one of my favorites, because I get to play the role of a good statistician: It depends.
- We have a lot of models across both classification and regression, as well as optimization. Each type of problem requires careful thought about how to evaluate performance.
- But regardless of approach, there are always fairly standard metrics to rely on (e.g., precision/recall, various error or fit metrics).
- For example, some of our clinical deployments must make conservative judgements, so we focus more on not missing cases (e.g., with negative predictive value and false negative rates). In other cases we have benchmarks of clinician performance, so the goal there is to be at least as good in some ways (e.g., sensitivity) while outperforming in others (e.g., positive predictive value). A small sketch of these metrics follows this list.
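To make the metrics mentioned above concrete, here is a minimal sketch (hypothetical labels, not Unity Health's evaluation code) computing sensitivity, false negative rate, positive predictive value, and negative predictive value from a confusion matrix with scikit-learn.

```python
# Minimal sketch: standard classification metrics from predicted vs. true labels.
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical binary labels: 1 = event we must not miss, 0 = no event
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 0, 1])
y_pred = np.array([1, 0, 1, 0, 0, 0, 1, 1, 0, 1])

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

sensitivity = tp / (tp + fn)          # recall: how few true cases are missed
false_negative_rate = fn / (fn + tp)  # complement of sensitivity
ppv = tp / (tp + fp)                  # positive predictive value (precision)
npv = tn / (tn + fn)                  # negative predictive value

print(f"Sensitivity: {sensitivity:.2f}, FNR: {false_negative_rate:.2f}, "
      f"PPV: {ppv:.2f}, NPV: {npv:.2f}")
```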
“AI literacy is critical to the successful adoption of AI solutions.”
What role do you think explainability plays in AI in healthcare?
- My perspective on explainability in AI may be a bit contrarian, but I’m not convinced it plays as big of a role as some people believe. Explainable AI (XAI) has become a stand-in for trust, yet there are many aspects of healthcare—and life in general—that we don't fully understand or can’t explain, but we still trust. For instance, there are common medications that patients take every day, the mechanisms of which are not fully understood, even by many clinicians. Despite this, these medicines are trusted because they’ve been rigorously tested, clinically validated, and regulated.
- In the same way, I believe AI tools don’t always need to be fully explainable for us to trust and use them, as long as they are valid and reliable and subjected to rigorous validation and oversight. While explainability is valuable for understanding and improving these complex systems, I don’t think it's always a necessity for successful implementation in healthcare. What matters most is ensuring these AI tools are safe, effective, and reliable in practice.
- Much of what we work with consists of proxies for other measures. For example, the reason a model flags that a patient will return to the emergency department is not necessarily the reason that patient actually returns. So in practice, explainability can be misleading if misunderstood.
- But when explainability is important, in many cases there are approaches for inference and causality that are much better suited than some of today's XAI approaches.
How do you ensure that AI-driven insights are actionable and understandable to healthcare professionals who may not have a technical background?
- AI literacy is critical to the successful adoption of AI solutions. As we integrate clinicians into our process, they gain a progressively deeper understanding of data science, machine learning, and AI adoption through ‘real-world’ experience.
- We have created a learning environment where not only do our data scientists learn clinical concepts, but our clinicians also learn about data science and AI through real-world application. Clinicians are often provided learning materials on data science and AI relevant to the problem being solved and are encouraged to ask questions to grow their knowledge. Similarly, our data scientists are encouraged to ask clinical questions to gain a deeper understanding of the problem that the AI solutions aim to address.
How do you keep your team updated with the latest advancements in data science and AI, ensuring that Unity Health remains at the forefront of this rapidly evolving field?
- I only mean this in the most positive way: we are a bunch of nerds. Reading and keeping up with the latest developments in AI in medicine is not as hard as it used to be; the content is very accessible, although the volume has significantly increased.
- Another side of ensuring we remain at the forefront is sort of the opposite of the latest advancements. There are a lot of tried-and-true methods that just work, and there are approaches that are extremely well suited to solving certain types of problems. While we keep up to date, we also make sure we are well versed in techniques that have been around for a long time, in some cases for many decades, and in the case of many well-established regression approaches, approaching a century.
- That said, relatively few providers are actually deploying solutions yet, so we maintain many global networks, formal and informal, to keep in touch with technologists, clinicians, and corporations on how they are approaching AI in medicine.
What exciting AI projects or initiatives can we expect from Unity Health in the near future?
- Where we are trying to move really quickly is into high-frequency multimodal AI. If you imagine all of the monitors and devices at the bedside, we are interested in streaming all of these different data types in real time into algorithms that can improve care (a toy sketch of the idea follows below).
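As a purely illustrative sketch of that idea (the signal names and windowing are hypothetical, not an actual Unity Health pipeline), the snippet below shows one simple way to time-align asynchronous bedside streams into fixed windows that a downstream model could consume.

```python
# Toy sketch only: group asynchronous (timestamp, source, value) readings from
# different bedside devices into fixed time windows for a downstream model.
from collections import defaultdict

def window_events(events, window_seconds=60):
    """Group (timestamp_seconds, source, value) tuples by window start time."""
    windows = defaultdict(lambda: defaultdict(list))
    for ts, source, value in events:
        window_start = int(ts // window_seconds) * window_seconds
        windows[window_start][source].append(value)
    return windows

# Hypothetical readings from different bedside devices
events = [
    (0.5, "heart_rate", 78), (12.0, "spo2", 97), (30.2, "heart_rate", 82),
    (61.0, "resp_rate", 18), (75.4, "spo2", 95),
]
for start, signals in sorted(window_events(events).items()):
    print(start, {source: values for source, values in signals.items()})
```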
Note! The content on this blog reflects my personal opinions and does not represent my employer. As the publisher, I am not responsible for the comments section. Each commenter is responsible for their own posts.