Artificial Intelligence's Promise for Radiology: Reducing Risks
Bradley N. Delman, MD, MS, is associate professor of radiology at the Icahn School of Medicine at Mount Sinai; vice chairman for quality, performance, and clinical research in the Department of Diagnostic, Molecular, and Interventional Radiology; and the site director of Radiology at Mount Sinai Hospital in New York City. The following is an interview in which he discusses the findings in a recent study of malpractice claims against radiologists, how artificial intelligence (AI) may help reduce the risk of patient injury, and—despite the benefits—the concerns he has with AI.
In the past few years we have seen many interesting applications of AI in mitigating underdiagnoses in imaging. Most methods, including those that rely on deep learning, reach conclusions through processes very different from a radiologist's intellectual approach. Furthermore, whereas human radiologists add intuition, flexibility, and creativity to diagnosis, computer algorithms offer consistency, resistance to fatigue, and instant availability day or night. So for the foreseeable future, radiologists and algorithms will complement each other. Neither will be 100 percent perfect, but together we expect diagnosis to grow even more accurate and, hopefully, more efficient. Humans will, of course, need to adjudicate results, sorting through the wealth of information algorithms generate to reject what is clearly inaccurate while promoting what is plausible. AI will also take over some of the mundane human tasks: summarizing data that is easily retrieved and processed, and formatting draft results into systematic, consistent, and readable form. This will allow the radiologist to concentrate on the more intellectual tasks of detecting and describing an abnormality while providing thoughtful diagnosis.
Of the many promising aspects of AI in healthcare, the potential for systems and routines to more rapidly discover and diagnose disease will continually reinforce the importance of AI. Even the most straightforward of diagnoses requires a clinician's time to understand and manage. AI algorithms working in the background, monitoring patient data, could minimize many diagnostic delays we have historically considered acceptable. A real-world example: Whereas the diagnosis of subarachnoid hemorrhage on a head CT has historically required a physician's eye, convolutional neural networks can now detect hemorrhage with enough accuracy to prioritize cases in the radiologist's queue for formal interpretation. As a result, cases with the highest urgency can be elevated for more prompt attention. Everyone will benefit from more streamlined diagnosis.
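The triage workflow described above can be sketched in a few lines: a model assigns each study a hemorrhage probability, and the worklist is reordered so the highest-risk studies are read first. This is a minimal illustrative sketch, not any vendor's actual implementation; the accession numbers, scores, and `triage` helper are all hypothetical.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Study:
    priority: float                       # negated model score, so highest risk pops first
    accession: str = field(compare=False) # study identifier, not used for ordering

def triage(worklist):
    """Reorder studies so the highest hemorrhage scores are read first.

    `worklist` is a list of (accession, score) pairs, where `score` stands in
    for a CNN's estimated probability of hemorrhage (hypothetical values).
    """
    heap = [Study(priority=-score, accession=acc) for acc, score in worklist]
    heapq.heapify(heap)
    return [heapq.heappop(heap).accession for _ in range(len(heap))]

# The 0.91-score study jumps to the front of the reading queue.
queue = triage([("CT-1001", 0.05), ("CT-1002", 0.91), ("CT-1003", 0.40)])
```

In practice the model's score would be one input among several (acuity of the order, wait time, patient location), but the same priority-queue idea applies.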
Diseases will be diagnosed more uniformly and more rapidly than they have been in the past. However, physicians must still confirm the validity and plausibility of a diagnosis based on the clinical context. AI may enhance diagnosis of uncommon or rare diseases, and perhaps unusual manifestations of common diseases. By definition, rare diseases are encountered infrequently, so human diagnosticians may not think of them first among all possibilities. Algorithms can learn the properties of common and uncommon diseases alike, and factor in likelihood and prevalence to score the most probable diagnoses. The clinician will then be able to gauge the plausibility of the top three or four possibilities, ranked not on commonality, as they are today, but on how closely they match the imaging, laboratory, and clinical pattern. And, by combing through millions of records, AI is likely to uncover subtle connections between diseases that we do not yet understand.
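The idea of weighing pattern match against prevalence can be expressed as a simple Bayesian-style score: how well the findings fit each candidate diagnosis, multiplied by that diagnosis's prior prevalence. The sketch below is purely illustrative; the disease names, probabilities, and `rank_diagnoses` helper are hypothetical, and real systems would estimate these quantities from data.

```python
def rank_diagnoses(match_likelihood, prevalence, top_n=3):
    """Rank candidate diagnoses by match quality weighted by prevalence.

    `match_likelihood[d]` approximates P(findings | d) and `prevalence[d]`
    approximates the prior P(d); their product is an unnormalized posterior,
    in the spirit of Bayes' rule. Inputs are hypothetical.
    """
    scores = {d: match_likelihood[d] * prevalence.get(d, 1e-6)
              for d in match_likelihood}
    total = sum(scores.values())
    posterior = {d: s / total for d, s in scores.items()}
    return sorted(posterior.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

# A rare disease with a very strong pattern match can outrank a
# weakly matching common one.
ranked = rank_diagnoses(
    match_likelihood={"common_dx": 0.05, "rare_dx": 0.95},
    prevalence={"common_dx": 0.10, "rare_dx": 0.01},
)
```

This is why such a system can surface rare diseases that a human might not consider first: prevalence alone no longer dominates the ranking once the pattern match is taken into account.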
AI will extend quality care to underserved populations. By advising providers who may lack specific subspecialty expertise, software will help historically remote providers make subspecialty-quality diagnoses.
I hope that AI will also help to reduce physician burnout. Providers are now required to perform an ever-growing list of menial tasks as part of their responsibilities, when many would rather spend that time being doctors. Algorithms could complete many of these menial tasks automatically, leaving more time for physicians to think and interact, which is what they do best and find most rewarding.
Although AI will offer increasing options to streamline diagnosis and management, we should be careful not to overapply before technologies are proven and limitations are understood. Systems must integrate adequate human oversight. Furthermore, modern AI might not anticipate the human element so well—behavioral variability and social determinants of health can be very difficult to quantify and apply.
While AI competition will be healthy, various solutions with overlapping domains will make the AI marketplace very confusing. We need to ensure that all players in the AI arena follow common standards to maximize interoperability and minimize conflicts.
As quality lead for our department, I continually seek options that will enhance efficiency, safety, and accuracy. We have already seen successes in radiology with computer-aided diagnosis for more than a decade, and algorithms are becoming increasingly sophisticated. Even with these advances, providers will need to curate AI output to ensure patient management follows an integrated plan. In addition, mining of progress notes and consults by natural language processing will allow AI of the future to cull relevant clinical information from patients' records in the background, ensuring that appropriate diagnostics are performed and that the most likely diagnoses aren't overlooked. AI will also help streamline billing by employing logic that identifies specific language in specific contexts to prepopulate codes and standardize the billing process. And quality metrics, which are becoming such an important part of documentation and increasingly required for payment, will be aggregated with less user input.
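The billing idea above, at its simplest, is phrase matching: find specific language in the report and propose the corresponding codes for human review. The sketch below uses made-up placeholder codes and a hypothetical `suggest_codes` helper; production coding relies on far richer NLP and the official CPT/ICD code sets.

```python
import re

# Hypothetical mapping of report phrases to placeholder billing codes.
# Real systems use official CPT/ICD codes and context-aware NLP,
# including negation handling ("no evidence of hemorrhage").
CODE_RULES = [
    (re.compile(r"\bwithout contrast\b", re.I), "CT-HEAD-NC"),
    (re.compile(r"\bsubarachnoid hemorrhage\b", re.I), "DX-SAH"),
]

def suggest_codes(report_text):
    """Return candidate codes for phrases found in the report, for human review."""
    return [code for pattern, code in CODE_RULES if pattern.search(report_text)]

codes = suggest_codes("CT head without contrast shows acute subarachnoid hemorrhage.")
```

Keeping the output as *suggestions* for a human coder, rather than final codes, mirrors the curation role described above.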
In our department we are implementing a home-grown data integrity system called the Clinical Radiology Early Warning System (CREWS). It sits behind the scenes monitoring for potentially conflicting data, notifying physicians or technologists of potential hazards before they can affect patients. Since implementation, the system has expanded to monitor critical value notifications and now even helps to ensure appropriate reporting compliance. In short, it synthesizes what we never had the resources to do well, effectively performing the work of full-time employees we never had to hire.
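The internals of CREWS are not described here, but the general pattern of a background data-integrity monitor can be sketched as a set of rules evaluated against incoming orders and results. Everything below is an assumption for illustration: the rule thresholds, field names, and `check_order` helper are hypothetical, and real systems would consume HL7/FHIR feeds rather than plain dictionaries.

```python
def check_order(order, labs):
    """Flag potential hazards in an imaging order before it reaches the patient.

    `order` and `labs` are simple dicts standing in for structured clinical
    data. The rules are illustrative examples only, not CREWS logic.
    """
    alerts = []
    # Example rule: IV contrast ordered despite markedly reduced renal function.
    if order.get("iv_contrast") and labs.get("egfr", 100) < 30:
        alerts.append("Low eGFR: review IV contrast order")
    # Example rule: MRI ordered for a patient flagged with a non-MR-safe implant.
    if order.get("modality") == "MRI" and order.get("implant_mr_unsafe"):
        alerts.append("MR-unsafe implant on file: verify before scanning")
    return alerts

alerts = check_order({"iv_contrast": True, "modality": "CT"}, {"egfr": 25})
```

The value of such a system lies less in any single rule than in running every rule on every order, tirelessly, which is exactly the kind of consistency discussed earlier.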
The guidelines suggested here are not rules, do not constitute legal advice, and do not ensure a successful outcome. The ultimate decision regarding the appropriateness of any treatment must be made by each healthcare provider considering the circumstances of the individual situation and in accordance with the laws of the jurisdiction in which the care is rendered.
02/20