Artificial intelligence (AI) has proven valuable in the COVID-19 pandemic and shows promise for mitigating future healthcare crises. During the pandemic’s first wave in New York, for example, Mount Sinai Health System used an algorithm to help identify patients ready for discharge. Such systems can help overburdened hospitals manage personnel and the flow of supplies in a medical crisis so they can continue to provide superior patient care.1
Pandemic applications have demonstrated AI’s potential not only to lift administrative burdens, but also to give physicians back what Eric Topol, MD, founder and director of Scripps Research Translational Institute and author of Deep Medicine, calls “the gift of time.”2 More time with patients contributes to clear communication and positive relationships, which lower the odds of medical errors, enhance patient safety, and potentially reduce physicians’ risks of certain types of litigation.3
However, physicians and health systems will need to approach AI with caution. Many unknowns remain—including potential liability risks and the potential for worsening pre-existing bias. The law will need to evolve to account for AI-related liability scenarios, some of which are yet to be imagined.
Like any emerging technology, AI brings risk, but its promise of benefit should outweigh the probability of negative consequences—provided we remain aware of and mitigate the potential for AI-induced adverse events.
AI’s Pandemic Success Limited Due to Fragmented Data
Innovation is the key to success in any crisis, and many healthcare providers have shown their ability to innovate with AI during the pandemic. For example, researchers at the University of California, San Diego (UCSD) health system who were designing an AI program to help doctors spot pneumonia on a chest x-ray retooled their application to assist physicians fighting coronavirus.4
Meanwhile, AI has been used to identify COVID-19-specific symptoms: It was a computer sifting medical records that elevated anosmia, the loss of the sense of smell, from an anecdotal connection to an officially recognized early symptom of the virus.5 This information now helps physicians distinguish COVID-19 from influenza.
However, holding back more innovation is the fragmentation of healthcare data in the U.S. Most AI applications for medicine rely on machine learning; that is, they train on historical patient data to recognize patterns. Therefore, “Everything that we’re doing gets better with a lot more annotated datasets,” Dr. Topol says. Unfortunately, due to our disparate systems, we don’t have centralized data.6 And even if our data were centralized, researchers lack enough reliable COVID-19 data to perfect algorithms in the short term.
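To make the idea of "training on annotated datasets" concrete, the sketch below shows supervised learning at its most minimal: a nearest-centroid classifier that learns one average feature value per diagnosis label from labeled records, then assigns new cases to the closest average. All of the numbers and labels here are synthetic and purely illustrative; real clinical models train on far richer data.

```python
# Minimal sketch of supervised pattern learning: the model "trains" on
# annotated (feature, label) records, then generalizes to new cases.
# The dataset below is synthetic and hypothetical, not clinical data.

def train(records):
    """Learn one mean feature value per label (a nearest-centroid classifier)."""
    sums, counts = {}, {}
    for x, label in records:
        sums[label] = sums.get(label, 0.0) + x
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(model, x):
    # Assign the label whose learned mean is closest to the new observation.
    return min(model, key=lambda label: abs(x - model[label]))

# Hypothetical annotated dataset: a single lab value paired with a diagnosis.
annotated = [(3.1, "flu"), (2.8, "flu"), (3.4, "flu"),
             (7.2, "covid"), (6.9, "covid"), (7.5, "covid")]

model = train(annotated)
print(predict(model, 7.0))  # → covid
print(predict(model, 3.0))  # → flu
```

The quality of the learned means, and therefore of every prediction, depends entirely on how many annotated records exist and how representative they are, which is why fragmented, siloed data limits what such systems can achieve.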
Or, put in bleaker terms by the Washington Post: “One of the biggest challenges has been that much data remains siloed inside incompatible computer systems, hoarded by business interests and tangled in geopolitics.”7
The good news is that machine learning and data science platform Kaggle is hosting the COVID-19 Open Research Dataset, or CORD-19, which contains well over 100,000 scholarly articles on COVID-19, SARS, and other relevant infections.8 In lieu of a true central repository of anonymized health data, such large datasets can help train new AI applications in search of new diagnostic tools and therapies.
AI Introduces New Questions around Liability
While AI may eventually be assigned legal personhood, it is not, in fact, a person: It is a tool wielded by individual clinicians, by teams, by health systems, even by multiple systems collaborating. Our current liability laws are not ready for the era of digital medicine.
AI algorithms are not perfect. Because we know that diagnostic error is already a major allegation in malpractice claims, we must ask: What happens when a patient alleges that diagnostic error occurred because a physician or physicians leaned too heavily on AI?
In the U.S., testing delays have threatened the safety of patients, physicians, and the public by delaying diagnosis of COVID-19. But again, healthcare providers have applied real innovation—generating novel and useful ideas and applying those ideas—to this problem. For example, researchers at Mount Sinai became the first in the country to combine AI with imaging and clinical data to produce an algorithm that can detect COVID-19 based on computed tomography (CT) scans of the chest, in combination with patient information and exposure history.9
AI in Healthcare Can Help Mitigate Bias—or Worsen It
Machine learning is only as good as the information provided to train the machine. Models trained on partial datasets can skew toward demographics that turned up more often in the data, such as Caucasians or men over 60. There is concern that “analyses based on faulty or biased algorithms could exacerbate existing racial gaps and other disparities in health care.”10 Already during the pandemic’s first waves, multiple AI systems used to classify x-rays were found to show racial, gender, and socioeconomic biases.11
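The mechanism behind this skew can be shown with a toy example. In the sketch below, a simple threshold classifier is trained on a synthetic dataset in which a majority group supplies 90 of 100 records and a minority group, whose cases look different, supplies only 10. The decision rule that minimizes overall training error fits the majority group and fails the minority. All groups, features, and labels here are invented for illustration.

```python
# Toy illustration of dataset bias: a rule trained to minimize overall error
# on demographically skewed data serves the majority group at the minority's
# expense. All data below is synthetic.

def best_threshold(data):
    """Pick the threshold t minimizing training error for the rule: predict 1 if x >= t."""
    candidates = sorted({x for x, _, _ in data})
    def errors(t):
        return sum((x >= t) != (y == 1) for x, y, _ in data)
    return min(candidates, key=errors)

def error_rate(data, t, group=None):
    rows = [(x, y) for x, y, g in data if group is None or g == group]
    return sum((x >= t) != (y == 1) for x, y in rows) / len(rows)

# Majority group "A" (90 records): negatives near 3, positives near 7.
data = [(2.0 + i * 0.045, 0, "A") for i in range(45)]
data += [(6.0 + i * 0.045, 1, "A") for i in range(45)]
# Minority group "B" (10 records): its negatives overlap A's positives.
data += [(6.5 + i * 0.1, 0, "B") for i in range(5)]
data += [(9.0 + i * 0.1, 1, "B") for i in range(5)]

t = best_threshold(data)
print(f"learned threshold: {t:.2f}")            # → 6.00
print(f"group A error: {error_rate(data, t, 'A'):.0%}")  # → 0%
print(f"group B error: {error_rate(data, t, 'B'):.0%}")  # → 50%
```

Overall accuracy looks excellent (95%), which is exactly why such bias can hide in aggregate performance metrics and only surfaces when results are broken out by group.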
Such bias creates high potential for poor recommendations, including false positives and false negatives. It is critical that system builders be able to explain and qualify their training data, and that those who best understand AI-related system risks be the ones who influence healthcare systems or alter applications to mitigate AI-related harms.12
AI Can Help Spot the Next Outbreak
More than a week before the World Health Organization (WHO) released its first warning about a novel coronavirus, the AI platform BlueDot, created in Toronto, Canada, spotted an unusual cluster of pneumonia cases in Wuhan, China. Meanwhile, at Boston Children’s Hospital, the AI application Healthmap was scanning social media and news sites for signs of disease clusters, and it, too, flagged the first signs of what would become the COVID-19 outbreak—days before the WHO’s first formal alert.13
These innovative applications of AI in healthcare demonstrate real promise for detecting future outbreaks of new viruses early. This will allow healthcare providers and public health officials to get information out sooner, reducing the load on health systems and, ultimately, saving lives.