Healthcare artificial intelligence (AI) has scored impressive wins in recent years. These include DeepMind's AlphaFold software for predicting protein structures, which secured science awards and mindshare in research circles. On the clinical side, with less fanfare, machine learning (ML) systems cleared hurdles in infectious diseases, diagnostics, and other specialized contexts.
To be sure, there have also been headline-grabbing disappointments and failed collaborations between big tech and hospital systems. Against that backdrop, how have the most successful AI development teams navigated pitfalls and improved patient care? Part of the secret is to start with a defined, unambiguous objective. Research teams must avoid pressure to “go large” with programs where goals end up outstripping the developer’s ability to design an algorithm with transparency or train it with precision. Clear focus, trainability, and transparency are essential for three reasons:
- To ensure that end users, including providers and health systems, can clearly understand risks, such as embedded bias.
- To facilitate real-world assessments that will prove the software has value.
- To help AI innovators and users analyze what went wrong if/when patients are harmed.
Some important milestones over the past 12 months meet these criteria and will have major implications in 2023. One example is the large-scale validation of an ML-based alert system known as the targeted real-time early warning system (TREWS) for deadly sepsis infections. Each year in the US, 1.7 million people develop sepsis, resulting in 350,000 deaths, notes the Centers for Disease Control and Prevention (CDC). A growing number of hospital emergency departments rely on ML to spot such cases early and decide which require immediate treatment. But how well do these ML systems work?
Last summer, researchers at Johns Hopkins University completed the largest-ever study to assess an AI sepsis alert system—in this case, TREWS, which monitored more than half a million patients. Researchers found that the software reduced patient mortality, improved scores in sequential organ failure assessments, and shortened hospital stays among survivors.
TREWS is one data point in a long-running and encouraging trend. An earlier example is the success story at the Aravind Eye Hospital in Madurai, India. There, more than seven years ago, researchers began using an algorithm to expand access to treatment for diabetic retinopathy, a leading cause of blindness. The research team, led by Google's Verily unit, built a system that, in seconds, detects disease signals in medical scans. The project started with a human-labeled training set of more than 128,000 images of human eyes that ophthalmologists had graded by likelihood of disease. That training set helped algorithms find patterns, leading to a system that can identify each stage of disease and keep learning over time.
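The workflow described above—clinicians grade a large set of images, and an algorithm learns to map new images to a disease stage—can be illustrated with a toy supervised classifier. This is a minimal sketch, not the Verily system: the "scans" are synthetic feature vectors, the classifier is a simple nearest-centroid model, and all names and parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_per_grade, n_features, n_grades = 200, 64, 5

# Synthetic stand-in for ophthalmologist-graded scans: each severity
# grade (0 = no disease .. 4 = advanced disease) clusters around its
# own feature centroid, mimicking a labeled training set.
centroids = rng.normal(scale=3.0, size=(n_grades, n_features))
X = np.vstack([c + rng.normal(size=(n_per_grade, n_features))
               for c in centroids])
y = np.repeat(np.arange(n_grades), n_per_grade)

# "Training": estimate one centroid per grade from the labeled examples.
learned = np.vstack([X[y == g].mean(axis=0) for g in range(n_grades)])

def predict(scans):
    # Assign each scan the grade of its nearest learned centroid.
    dists = np.linalg.norm(scans[:, None, :] - learned[None, :, :], axis=2)
    return dists.argmin(axis=1)

accuracy = (predict(X) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

In a production system the feature vectors would come from a deep network trained on real retinal images, but the supervised loop—expert labels in, a learned grade-predictor out—is the same shape as the one sketched here.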
In the coming year, we expect AI to intersect with healthcare priorities in areas such as population health based on what some have called ambient testing. Patients receiving elective surgery, for example, might be prescribed further medical scans based on unrelated symptoms flagged by algorithms. AI will also improve clinical trial execution, helping sponsors understand which trial sites outperform, or which healthcare providers can best help sponsors enroll patients from minority or underserved communities.
Companies will adopt more agile AI approaches, some running hundreds of experiments per year in rapid 12-week sprints. What we will not see are systems running important, cross-disciplinary institutional functions or programs unaided by human experts. Safe and effective AI initiatives will hinge on equipping human teams with the tools they need—getting the best of what machines can do, combined with the best humans have to offer.
About the Author
Leigh Householder is EVP/Managing Director, Technology and Data Science at Syneos Health.