The Algorithm Will See You Now: How AI’s Healthcare Potential Outweighs Its Risk

AI proved its value during the COVID-19 pandemic and shows promise for mitigating future healthcare crises. During the pandemic’s first wave in New York, Mount Sinai Health System used an algorithm to help identify patients ready for discharge. Such systems can help overburdened hospitals manage personnel and the flow of supplies during a medical crisis so they can continue to provide superior patient care.1

However, concerns about liability risks and the potential for worsening pre-existing bias give reason to approach AI with caution: Lack of trust in AI is a significant barrier to its full deployment.2 U.S. liability laws are not ready for the era of digital medicine, which is already here and still growing.3 Meanwhile, AI’s tendency to amplify biases when trained on partial datasets raises the risk of poor recommendations, including false positives and false negatives.

Still, pandemic applications have demonstrated AI’s potential to lift administrative burdens and give physicians back more time with patients,4 enhancing patient safety and potentially reducing physicians’ risks of certain types of litigation.

Further, these innovative applications of AI in healthcare demonstrate real promise in detecting future outbreaks of new viruses early: At Boston Children’s Hospital, the AI application HealthMap was scanning social media and news sites for signs of disease clusters when it flagged the first signs of what would become the COVID-19 outbreak, days before the WHO’s first formal alert.5

Like any emerging technology, AI brings risk, but its promise of benefit should outweigh the probability of negative consequences—provided we remain aware of and mitigate the potential for AI-induced adverse events.


  1. Gold A. Coronavirus tests the value of artificial intelligence in medicine. Fierce Biotech. Published May 22, 2020. Accessed October 19, 2020. https://www.fiercebiotech.com/medtech/coronavirus-tests-value-artificial-intelligence-medicine
  2. Landi H. Healthcare is ramping up AI investments during COVID. But the industry is still on the fence about Google, Amazon. Here's why. Fierce Healthcare. Published November 4, 2020. Accessed November 13, 2020. https://www.fiercehealthcare.com/tech/healthcare-ramping-up-investments-ai-during-covid-but-industry-still-fence-about-google-amazon
  3. Landi H. Healthcare is ramping up AI investments during COVID. But the industry is still on the fence about Google, Amazon. Here's why. Fierce Healthcare. Published November 4, 2020. Accessed November 13, 2020. https://www.fiercehealthcare.com/tech/healthcare-ramping-up-investments-ai-during-covid-but-industry-still-fence-about-google-amazon
  4. Topol E. Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. New York, NY: Hachette Book Group; 2019:285.
  5. Sewalk K. Innovative disease surveillance platforms detected early warning signs for novel coronavirus outbreak (nCoV-2019). The Disease Daily. Published January 31, 2020.

Executive Summary

By Richard E. Anderson, MD, FACP, Chairman and Chief Executive Officer, The Doctors Company

Startup companies using artificial intelligence (AI) for healthcare raised a record $864 million across 75 deals in the second quarter of 2019.1 This signals strong confidence within the industry that healthcare organizations will adopt AI more broadly in the near future. AI in healthcare is poised to change physicians’ practices and patients’ experiences in fundamental ways and holds great promise for improving patient safety.

At the moment, the sweeping benefits of health technology are still emerging: Improved accuracy of diagnosis, precision medicine, early detection, personalized medicine, and cheaply reproducible drugs are just a few of the ways the healthcare community can expect AI to make the practice of medicine safer and more efficient. As the nation’s largest physician-owned medical malpractice insurer, we advance the practice of good medicine by reducing risk. To this end, we support the integration of AI for its potential to reduce litigation risk, particularly from misdiagnosis, the leading allegation in medical malpractice suits.2

Our analysis of more than 25,000 claims and suits revealed that in cases with incorrect diagnoses, inadequate assessments were the most common contributing factor. A tool like AI could support physicians with a second opinion or a deeper layer of understanding. To give just one example, machine learning, a subset of AI concerned with pattern recognition, can help accurately identify the key indicators of a particular illness or injury, including symptoms that might otherwise be missed or diagnosed much further down the line. AI systems are already capable of detecting minute anomalies that would be imperceptible to even the most experienced physicians.
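
To make the train-on-history, advise-on-new-patient pattern concrete, here is a minimal sketch in Python with scikit-learn. The data, feature layout, and threshold are entirely synthetic and hypothetical; this illustrates the mechanism, not any deployed diagnostic product.

```python
# A minimal sketch of machine-learning "second opinion" support.
# All data is synthetic and the features are hypothetical stand-ins
# for findings such as lab values or vitals.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic "historical" cases: rows are patients, columns are findings;
# labels mark a confirmed diagnosis.
X_history = rng.normal(size=(500, 4))
y_history = (X_history[:, 0] + 0.5 * X_history[:, 2] > 0.8).astype(int)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_history, y_history)

# A new patient's findings: the model returns a probability that can be
# surfaced to the physician as one more data point, not a verdict.
new_patient = rng.normal(size=(1, 4))
risk = model.predict_proba(new_patient)[0, 1]
print(f"Model-estimated probability of condition: {risk:.2f}")
```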

If AI tools can consistently deliver these levels of precision, they could have the additional advantage of reducing the practice of defensive medicine—medical responses undertaken primarily to avoid liability. In a near future where misdiagnoses and associated malpractice suits are markedly reduced, physicians should be less inclined to order superfluous tests, procedures, or visits that their judgment deems unnecessary (but which the legal climate often requires).


A third of U.S. physicians are already using AI in their practices, and many believe this advanced technology can help address diagnostic errors, the largest cause of malpractice claims.3 However, there are still unresolved questions about the risks. AI technology is in the early stages of deployment in clinical practice throughout the U.S., but the number of users is likely to rise in the coming years.4 Leading healthcare institutions see AI as the front-runner among new technologies for reducing risk.5


53%

of physicians surveyed are optimistic about the prospects of AI in medicine.

“The most exciting thing about AI is that by using big data, we can better see associations between objective and subjective findings and diagnosis. Also, it is exciting to be able to create assessment tools for early warning signs for the progression of illnesses.”

—Michael Brodman, MD
Ellen and Howard C. Katz Chairman's Chair
Department of Obstetrics, Gynecology, and Reproductive Science
Icahn School of Medicine at Mount Sinai

35%

of physicians surveyed are using AI in their practices.

“While AI applications aimed at clinical care already exist, they are not yet widely adopted. Businesses advancing AI applications have tended to focus on tractable problems that deliver economic value to customers such as revenue capture. By contrast, the financial impact of AI applications that focus on frontline clinical care has yet to be proven to an extent where investment by purchasers is justified. There are other factors that contribute to the slow adoption. Clinical data are messy, and acceptable means of inserting these insights into workflows are still not well established. These challenges will ultimately be addressed, permitting scalable adoption of these new tools.”

—Peter Bonis, MD
Chief Medical Officer
Division of Clinical Effectiveness
Wolters Kluwer


A majority of physicians believe AI will ultimately benefit both patients and physicians when it comes to the speed and accuracy of diagnoses.

66%

of physicians surveyed believe that AI will lead to faster diagnoses.

66%

of physicians surveyed believe that AI will lead to more accurate diagnoses.

The foreseeable benefits from healthcare AI include:

  • Assistance with case triage
  • Enhanced image scanning and segmentation
  • Improved detection (speed and accuracy)
  • Supported decision making
  • Integration and improvement of workflow
  • Personalized care
  • Automatic tumor tracking
  • Disease development prediction
  • Disease risk prediction
  • Patient appointment and treatment tracking
  • Easing workload to prevent physician burnout and distractions that compromise doctor-led diagnosis
  • Making healthcare delivery more accessible, humane, and equitable
  • Increasing physician competency to enable patient-physician trust6

The foreseeable risks include:

  • False positives/negatives
  • System errors
  • Overreliance
  • Unexplainable results
  • Unclear lines of accountability
  • New skill requirements
  • Network systems vulnerable to malicious attack
  • Seeing things that don’t exist (AI hallucination)
  • Augmenting biased or unorthodox behavior

Initial Wins from Healthcare AI


“AI will impact almost every area of healthcare. The most promising areas are where machines can automate the processing of large volumes of data when it is not practical for people. Examples include reading an entire patient record and surfacing the relevant data in context, prereading and auditing images for radiologists, automatic identification of gaps in care, risk stratification of patient cohorts, and automation of prior authorization and claims processing based on understanding accepted treatment pathways relative to a specific patient's condition.”

—Dan Cerutti
General Manager, Watson Health Platform
IBM Watson Health


Reading Diagnostic Images

Of all medical specialties, radiology is likely to be affected most directly by initial applications of AI. Diagnosis-related claims accounted for 67 percent of all diagnostic radiology claims in a study of claims closed between 2013 and 2018 conducted by The Doctors Company. In interventional radiology claims, the second-highest case type was “improper management of treatment course.” Many of those cases were related to primary care physicians’ management of treatment.

In diagnosis-related radiology claims, patient assessment was a contributing factor in 85 percent of the claims, including misinterpretation of diagnostic studies and failure to appreciate and reconcile relevant signs, symptoms, and test results. The top injury in diagnosis-related cases was undiagnosed malignancy, occurring in 35 percent of cases. AI may offer a way to significantly reduce both the incidence of failure to diagnose and the misinterpretation of diagnostic studies.

The advent of systems that can quickly and accurately read diagnostic images will undoubtedly redefine the work of radiologists and assist in the prevention of misdiagnoses. The majority of AI healthcare applications use machine learning algorithms that train on historical patient data to recognize the patterns and indicators that point to a particular condition. Although the best machine learning systems may currently be only on par with humans in the accuracy of image-based diagnoses,7 experts are confident that this will improve over time as developers train AI systems on millions-strong databanks of labeled images showing fractures, embolisms, tumors, and other findings. Eventually these systems will be able to recognize the most subtle abnormalities in patient image data, even those indiscernible to the human eye.
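
As an illustration of the training process just described, the following sketch fits a toy convolutional network in PyTorch to a batch of labeled stand-in “images” (random tensors). Everything here, from the TinyCNN architecture to the labels, is a hypothetical simplification; clinical-grade systems train on millions of curated, annotated studies.

```python
# A toy example of training an image classifier: a small convolutional
# network learns from labeled examples. Random tensors stand in for,
# e.g., radiographs labeled normal/abnormal. Illustrative only.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(16, 2)  # normal vs. abnormal

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = TinyCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(32, 1, 64, 64)   # stand-in image batch
labels = torch.randint(0, 2, (32,))   # stand-in ground-truth labels

for _ in range(5):                    # minimal training loop
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```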

Radiologists have sought ways to help primary care physicians provide the best care, with strategies such as placing the most important findings first in the report and calling attending physicians with serious or confusing findings. AI can be an additional tool to help attending physicians better understand the findings and follow through with recommended tests or referrals.

Though there are legitimate concerns about radiologists being replaced by AI, those concerns should not distract from the undeniable potential of these tools to assist physicians in identifying patients for screening examinations, prioritizing patients for immediate interpretation, standardizing reporting, and characterizing diseases.8

Initial research on AI applications in radiology shows success in:

  • Performing automatic segmentation of various structures on CT or MR images, potentially providing higher accuracy and reproducibility.9
  • Automatically detecting polyps during colonoscopy, which assists in increasing adenoma detection, especially diminutive adenomas and hyperplastic polyps.10 Investigators found that the AI system significantly increased the adenoma detection rate (ADR) (29.1 percent vs. 20.3 percent; P < .001), as well as the mean number of adenomas detected per patient (0.53 vs. 0.31; P < .001).
  • Making better diagnostic decisions through the use of a radiologist-trained tool that provides a summary view of patient information in the electronic health record (EHR) so radiologists can easily uncover relevant underlying issues.11
  • Prioritizing interpretation of critical findings that a radiologist might otherwise be unaware of until the study is opened. Such solutions allow for faster reading of cases that have high suspicion for significant abnormalities (a sketch of one such prioritization scheme follows this list).
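
The prioritization item above can be sketched with a simple priority queue: an algorithm assigns each incoming study a suspicion score, and the worklist surfaces the highest-scoring studies first. The scores and study IDs below are invented for illustration; real worklist engines are considerably more involved.

```python
# A sketch of AI-assisted worklist prioritization (an assumption of how
# such a tool could be wired up, not any vendor's implementation).
import heapq

worklist = []  # holds (negated score, study id) pairs

def enqueue_study(study_id, suspicion_score):
    # heapq is a min-heap, so negate the score to pop highest first.
    heapq.heappush(worklist, (-suspicion_score, study_id))

enqueue_study("CT-1041", 0.12)   # routine follow-up
enqueue_study("CT-1042", 0.97)   # suspected intracranial hemorrhage
enqueue_study("XR-0977", 0.45)

while worklist:
    neg_score, study_id = heapq.heappop(worklist)
    print(f"Read next: {study_id} (suspicion {-neg_score:.2f})")
```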

Case Study12
A stroke protocol patient is brought in from the emergency department (ED). The CT scanner has a brain hemorrhage detector built into its display software and is able to immediately notify the team that there is a hemorrhage. At that point, the radiologist confers with the ED physician and other clinical team members so that CT angiography can be performed while the patient is still on the table, enhancing workflow and efficiency for the patient.

Case Study13
A 60-year-old male with no prior imaging is admitted to the ED for shortness of breath. A chest radiograph is obtained as part of the initial workup. The algorithm evaluates the image and determines whether the patient’s heart is enlarged. The radiologist is informed of this categorization at the time of interpretation. Additionally, the algorithm is able to evaluate for enlargement of the left atrium (or other specific chambers).

Case Study (from email correspondence with Bradley N. Delman, MD, MS, August 2019)
A female patient is scheduled for a scan to investigate a right-sided rib lesion, but existing imaging data shows that the lesion is on the left. A new data integrity system called CREWS (Clinical Radiology Early Warning System) is being developed at Mount Sinai to detect numerous classes of discordant data and advise a patient's physician before scanning to ensure imaging addresses the correct clinical scenario.


“Even the most straightforward of diagnoses requires a clinician’s time to understand and manage. AI algorithms working in the background, monitoring patient data, could minimize many diagnostic delays we have historically considered acceptable. Here is a real-world example: Whereas the diagnosis of subarachnoid hemorrhage on a head CT has historically required a radiologist’s eye, convolutional neural networks can now detect many instances of hemorrhage with reasonable enough accuracy to prioritize in the radiologist’s queue for a formal interpretation. As a result, cases with the highest urgency can be elevated for more prompt attention. Everyone will benefit from more streamlined diagnosis.”

—Bradley N. Delman, MD, MS
Associate Professor, Vice Chairman for Quality, Performance and Clinical Research
Department of Diagnostic, Molecular and Interventional Radiology
Icahn School of Medicine at Mount Sinai


Radiology AI brings the promise of quicker, more integrated tools that provide accurate diagnostic support for physicians, while easing workflow and lightening the administrative burden in clinical settings. Though the healthcare community can expect to wait for the technology to improve (and for relevant approvals) before these tools are considered mainstream, early studies are promising for patients and physicians alike.

Detecting and Predicting Cancer

AI systems are also yielding promising results in the diagnosis, and even the treatment, of a range of cancer types. A recent closed claims study by The Doctors Company found that “undiagnosed malignancies” were the third most common alleged injury in medical and surgical oncology claims. In 29 percent of oncology claims, patients alleged a failure or delay in diagnosing their illness, and inadequate patient assessments were a contributing factor in 46 percent of the claims, suggesting an opportunity for AI to assist physicians in diagnosing cancer.

Oncology-related AI is showing success in:

  • Detecting metastatic breast cancer. Google AI boasted a 99 percent success rate in detecting this form of cancer.14
  • Diagnosing the two most common types of lung cancer, which can be challenging even for experienced physicians. In 2018, a team of computational researchers reported a 97 percent accuracy rate from a system trained to diagnose these types of cancer.15
  • Predicting the development of a variety of diseases, including cancers of the prostate, rectum, and liver, with 93 percent accuracy overall.16 Using natural language processing techniques, Mount Sinai Hospital developed a deep learning algorithm that models the EHR to predict the development of these diseases.
  • Predicting a woman’s future risk of breast cancer.17 Using deep learning models, researchers have achieved substantially improved risk discrimination over the current clinical standard, which relies on breast density in factoring risk (see the sketch after this list).
  • Detecting skin cancers. While AI systems to detect skin cancer are still in their early stages,18 a study showed that a form of AI known as a deep learning convolutional neural network (CNN) misdiagnosed malignant melanomas less often than a group of 58 dermatologists.19
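
“Risk discrimination,” as reported in studies like the breast cancer work above, is commonly measured with the area under the ROC curve (AUC): the probability that a model ranks a randomly chosen future case above a randomly chosen non-case. The sketch below compares a hypothetical baseline score against a hypothetical improved model on synthetic outcomes; the comparison method, not the numbers, is the point.

```python
# A minimal sketch of comparing risk discrimination between two models
# via AUC. Outcomes and scores are synthetic.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
outcomes = rng.integers(0, 2, size=1000)   # 1 = later developed disease

# Hypothetical risk scores: the "new" model tracks outcomes more closely
# than the "baseline" (e.g., a density-based clinical score).
baseline = outcomes * 0.4 + rng.normal(0, 1.0, size=1000)
new_model = outcomes * 1.2 + rng.normal(0, 1.0, size=1000)

print(f"Baseline AUC:  {roc_auc_score(outcomes, baseline):.2f}")
print(f"New model AUC: {roc_auc_score(outcomes, new_model):.2f}")
```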

Alleviating Physician Burnout

Aside from the more direct ways in which AI can help improve diagnoses, many new and emerging healthcare systems are also designed to be assistive. By helping physicians tackle their workload with greater speed and efficiency, these automated technologies will free up time for doctors to focus on patients—which could help to improve communication and ultimately diagnosis. Nearly half of all physicians believe documentation burdens or workload are the leading cause of burnout, according to the Future of Healthcare survey by The Doctors Company.20

Among the AI tools that are helping lessen the pressures on practicing physicians are those that can:

  • Manage workflow
  • Provide a second opinion
  • Help with preliminary triage
  • Allow remote examination
  • Assist with treatment management and dosage
  • Allow voice control

“One of the most important potential outgrowths of AI in medicine is the gift of time. More than half of all doctors have burnout, a staggering proportion (more than one in four in young physicians) suffer frank depression. . . . Burnout leads to medical errors, and medical errors in turn promote burnout. Something has to give. A better work-life balance—including more time with oneself, with family, friends, and even patients—may not be the fix. But it’s certainly a start.”

—Eric Topol, MD, from Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again21


AI could also help improve everyday physician diagnosis by creating a better managed, more streamlined environment. Miscommunication is a driver of misdiagnosis and malpractice claims, and AI benefits like reduced administrative burden, fewer unnecessary patient visits, and informed second opinions could help optimize physician-patient time and improve communication during visits.

Risks of Healthcare AI Will Emerge

Even as we acknowledge the promise of AI, we must remember that AI-driven technologies will almost inevitably introduce new risks for patients and clinicians, and with them, new reasons for patients to sue. In anticipation of these risks, the medical community must make important decisions about the regulation of AI and the necessary physician education.

AI may introduce liability scenarios we have never considered. As our board member Robert M. Wachter, MD, stated in a recent opinion piece, “In some cases, such as in the use of decision aids for straightforward acute problems, AI in health care may obviate the need for care by a human.”22 In such a scenario, how is liability assessed?

The exact nature of the risks from AI will become clear only when wide application of the technology yields enough related malpractice claims for a valid assessment. But the medical professional liability insurance industry is not waiting for that day to begin assessing the risks. It is already clear that our current laws and boundaries will not be appropriate in an era of digital medicine.

Here we present some of the inherent risks that must be examined when dealing with healthcare AI:

Models trained on partial or poor datasets can show bias toward particular demographics that are more fully represented in the data (e.g., Caucasian patients). This raises the risk of poor recommendations, such as false positives. (All datasets are partial, as there is no central reserve of health data.) It is critical that system builders be able to explain and qualify their training data.
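
One practical check for this kind of bias is a subgroup audit: rather than a single aggregate accuracy figure, compute sensitivity and specificity separately for each demographic group and look for gaps. The sketch below simulates a model that performs worse on an under-represented group; all data and group labels are synthetic.

```python
# A sketch of a subgroup audit that can surface dataset bias.
import numpy as np

def sens_spec(y_true, y_pred):
    # Sensitivity = true positive rate; specificity = true negative rate.
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    return tp / (tp + fn), tn / (tn + fp)

rng = np.random.default_rng(2)
groups = rng.choice(["A", "B"], size=2000, p=[0.85, 0.15])
y_true = rng.integers(0, 2, size=2000)
# Simulate a model that errs more often on the under-represented group B.
error_rate = np.where(groups == "B", 0.3, 0.05)
y_pred = np.where(rng.random(2000) < error_rate, 1 - y_true, y_true)

for g in ("A", "B"):
    mask = groups == g
    sens, spec = sens_spec(y_true[mask], y_pred[mask])
    print(f"Group {g}: sensitivity={sens:.2f}, specificity={spec:.2f}")
```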


“One major concern is trying to solve complex socially, culturally, and economically consequential problems using a high-power, intellectually potent machine that does not embody our values, and our cultural and emotional sensibilities. If not recognized and governed and supervised properly, such high-power machinery can result in socioeconomic displacement and injustice and inequality and do more harm than good in the long run.”

—Parsa Mirhaji, MD, PhD
Director, Center for Health Data Innovations
Associate Professor, Systems and Computational Biology
Albert Einstein College of Medicine,
Montefiore Medical Center
Institute for Clinical and Translational Research (ICTR)


General misdiagnosis is also possible in a well-trained system: Although a manufacturer may report a high accuracy rate, there will inevitably be times when AI gets it wrong. This is why it is important to keep a human expert in the loop. We must also determine where liability would sit if such a mistake is carried through to a misdiagnosis.

Overreliance on AI recommendations could become problematic in the long run. As accuracy levels improve and are more widely publicized, there is a danger that health workers will refrain from challenging AI results even when their own education and experience suggest a different conclusion.

Black box algorithms can generate suggestions without being able to provide justification for them, which creates problems for the chain of accountability when something goes wrong.
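
Explainability tooling offers a partial remedy. As one hedged example, permutation importance (available in scikit-learn) measures how much a model’s performance degrades when each input feature is shuffled, providing at least a coarse, post hoc account of which inputs drove a suggestion. The data below is synthetic.

```python
# A sketch of one partial mitigation for black-box opacity:
# permutation importance ranks features by how much shuffling them
# degrades accuracy. Synthetic data; illustrative only.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(3)
X = rng.normal(size=(400, 5))
y = (X[:, 1] - X[:, 3] > 0).astype(int)   # only features 1 and 3 matter

model = GradientBoostingClassifier().fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=3)

for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:+.3f}")
```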

Cybersecurity issues will likely develop, as they have with other technologies. Cyber criminals, for example, could manipulate inputs so that machine learning–based medical systems deliver misclassified predictions.23
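
One well-documented attack class is the adversarial perturbation. The sketch below uses the fast gradient sign method (FGSM), which nudges an input in the direction that most increases the model’s loss, sometimes enough to flip a prediction. The model and “patient” vector here are toy stand-ins chosen purely for illustration.

```python
# A toy FGSM sketch: perturb an input to raise the model's loss.
# The prediction may flip even though the change is small.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 2))    # toy "medical" classifier
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 10, requires_grad=True) # a patient's feature vector
true_label = torch.tensor([1])

loss = loss_fn(model(x), true_label)
loss.backward()                            # gradients w.r.t. the input

epsilon = 0.25                             # attacker's perturbation budget
x_adv = x + epsilon * x.grad.sign()        # FGSM step

print("original prediction:   ", model(x).argmax().item())
print("adversarial prediction:", model(x_adv).argmax().item())
```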


“Healthcare is widely considered to be an easy and soft target because ‘who in their right mind would attack the weak and defenseless?’ . . . or so the thought goes! The fact is that healthcare presents a rich target for cyber criminals because of the value of the data hosted and processed.

The big question is how do we understand what we have on our networks, assess and quantify their threats and vulnerabilities, and remediate those risks in such a way that patients are not placed at potential harm from attack by medical device. How do we identify when one of these devices is behaving abnormally so we can swap it out before attempting to treat a patient based upon false data? How can we identify when a device has been compromised and is being used to attack the hospital? These are things that physicians, nurses, and biomedical technicians are not currently trained to look for!

Treat Cybersecurity risk in the same way you treat Patient Safety because the two are inextricably linked in today’s connected digital healthcare environment. Many hospital CEOs, Boards of Directors and Ministers of Health haven’t realized this yet. The sooner they do the better for all of us.”24

—Richard Staynings
Chief Security Strategist
Cylera


We know that these risks will be supplemented or expanded once the technology is more widely adopted and enough data exists on patient safety and malpractice claims related to its use. This was the pattern as EHRs were widely adopted in healthcare: Our early assessment of the potential risks and benefits proved largely accurate, yet other types of risk became clear only once that technology became commonplace. Telemedicine is following a similar pattern.

We have already seen an increase in claims related to unintended consequences of the widespread adoption of EHRs. An analysis by The Doctors Company identified 216 claims from 2010 through 2018 in which EHRs contributed to injury; annual claim counts rose sharply from a low of seven in 2010 to an average of 22.5 per year in 2017 and 2018.25

The Doctors Company and other insurers, along with EHR and telemedicine vendors, are responding to those risks now, and it is likely we will see the same sequence of events with AI. Just as with EHRs and telemedicine, the legitimate promises of AI will draw the attention of healthcare providers right away, but unanticipated liability risks will rear their heads later.


“Physicians and other healthcare providers have an obligation to educate themselves about the risks, benefits, and alternatives to all innovations in medical practice. At The Doctors Company, we are committed to assisting the medical professions in this endeavor, and are also concerned about liability implications, especially when algorithms play a bigger role in determining treatment.”

—David L. Feldman, MD, MBA, FACS
Chief Medical Officer
The Doctors Company and Healthcare Risk Advisors


Conclusion: Before Wholesale Deployment, Know the Risks

Clearly AI has the potential to reduce the frequency of medical malpractice litigation by improving the speed and accuracy of diagnoses. Nonetheless, the healthcare industry must have good foreknowledge of the risks before embracing wholesale deployment.

Errors are not always preventable, and it is important to have a clear understanding of the liability implications for physicians who choose to augment their practice with machine intelligence. U.S. law is still ambiguous, and legal scholars are studying how incidents of malpractice related to AI should be handled. Their suggestions range from creating the new status of “AI personhood,” which would require that the technology be insured for such an eventuality, to an extension of common enterprise liability, which would hold all parties involved in the system’s use liable.26


“There are issues related to liability and assignment of blame that are not well understood in healthcare and beyond (similar to those issues related to liability of self-driving cars). For example, at the moment the role and value of AI-generated insight is not well understood as part of the electronic medical records and standard of care. It is often accepted that a provider would dismiss or ignore an AI-generated risk score implying a serious condition. I.e., do nothing or do differently are currently acceptable options in the presence of an AI-driven contradictory insight. But what if this high score indicating a future adverse outcome, paired with a contradictory action (or lack thereof) by a provider, was cited as evidence of clinical error in a malpractice claim?”

—Parsa Mirhaji, MD, PhD


Physician-led bodies like the American Medical Association (AMA) have also called for oversight and regulation of healthcare AI systems. They call for aligning liability and incentives so that those who best understand an AI system’s risks, and who are best positioned to avert or mitigate AI-related harm, can do so through design, development, validation, and implementation.27

Physicians must seek training in the use of AI and adhere to the standards provided by the device companies. Training will also enable physicians to fully and clearly articulate potential harms to patients28 in order to obtain true informed consent.29 The AMA also has proposed that AI training should be incorporated as a standard component of medical education.30 Others have observed that hospitals and other practices are also key to ensuring proper development, implementation, and monitoring of protocols and best practices for use of AI systems in healthcare.31

Thoughtful physicians need to anticipate not only the exciting potential for AI to improve patient care, but also the dangerous unintended consequences that may arise. Only in practice will we truly understand the potential of this powerful new tool. The medical professional liability insurance industry faces distinct challenges with the implementation of AI in healthcare. What are the risks? How do we ensure that healthcare providers have the necessary insurance coverage? How do we fight for healthcare providers in court when they are threatened by frivolous claims involving AI?

The Doctors Company will take the lead in answering these questions and helping healthcare providers anticipate risks from AI before they become problems. We will watch closely for trends in AI-related claims and advise the healthcare community on how to avoid these risks and enhance patient safety. The Doctors Company will assess whether existing coverages are adequate to cover AI-related claims32 or if new types of liability insurance will be needed, all with the goal of allowing physicians to focus on caring for their patients instead of defending claims.

Glossary

Black box systems
An opaque system that can be viewed only in terms of its inputs and outputs, with no knowledge regarding its internal workings or how it generates its inferences or classifications.

Deep learning algorithms
A kind of machine learning that runs data through several “layers” of artificial neural networks. Unlike traditional machine learning, a deep learning network takes raw data and a task to perform, such as image classification, and learns how to accomplish it automatically.

AI or computer hallucinations
An interpretation error, for instance with machine learning, that can cause AI systems to misclassify what they might otherwise classify correctly.

Machine learning algorithms
Machine learning is a subset of artificial intelligence that “learns” as it identifies patterns in labeled data. The machine learning algorithm turns what it learns into a model, which can then be applied to new data to make predictions or inferences.


References

  1. Taylor NP. Healthcare AI funding hits new high as sector matures. MedTechDive. https://www.medtechdive.com/news/healthcare-ai-funding-hits-new-high-as-sector-matures/560396/. Published August 7, 2019. Accessed September 26, 2019.
  2. Saber Tehrani AS, Lee H, Mathews SC, Shore A, Makary MA, Pronovost PJ, Newman-Toker DE. 25-Year summary of US malpractice claims for diagnostic errors 1986–2010: An analysis from the National Practitioner Data Bank. BMJ Quality & Safety 2013;22:672-680.
  3. Survey of physicians conducted by The Doctors Company in July 2019 via Twitter and outreach to members: 1,786 respondents to question 1; 734 respondents to question 2; 755 respondents to question 3; and 643 respondents to question 4.
  4. Park A. Nearly 90% of healthcare orgs are experimenting with emerging tech: AI, VR, blockchain. Becker’s Health IT & CIO Report. https://www.beckershospitalreview.com/healthcare-information-technology/nearly-90-of-healthcare-orgs-are-experimenting-with-emerging-tech-ai-vr-blockchain.html/. Published June 5, 2019. Accessed September 26, 2019.
  5. Digital health top concern of leading healthcare institutions [news release]. Napa, CA: The Doctors Company; August 8, 2019. https://www.thedoctors.com/about-the-doctors-company/newsroom/press-releases/2019/digital-health-top-concern-of-leading-healthcare-institutions/. Accessed September 26, 2019.
  6. Nundy S, Montgomery T, Wachter RM. Promoting trust between patients and physicians in the era of artificial intelligence. JAMA. Published online July 15, 2019;322(6):497–498. doi:10.1001/jama.2018.20563.
  7. Liu X, Faes L, Kale AU, et al. A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: A systematic review and meta-analysis. The Lancet Digital Health. 2019 October;1(6):e271-e297. https://www.thelancet.com/journals/landig/article/PIIS2589-7500(19)30123-2/fulltext. Accessed October 3, 2019.
  8. Loria K. Putting the AI in radiology. Radiology Today 19(1):10. https://www.radiologytoday.net/archive/rt0118p10.shtml. Accessed September 26, 2019.
  9. Cuocolo R, Ugga L. Imaging applications of artificial intelligence. HealthManagement. 2018;18(6):484. https://healthmanagement.org/c/healthmanagement/issuearticle/imaging-applications-of-artificial-intelligence. Accessed September 27, 2019.
  10. Wang P, et al. AI colonoscopy system may detect clues physicians ‘not tuned in to recognize.’ Healio Gastroenterology. https://www.healio.com/gastroenterology/interventional-endoscopy/news/online/%7Bf6f5e8c9-818a-4352-a3e6-89f21cac1227%7D/ai-colonoscopy-system-may-detect-clues-physicians-not-tuned-in-to-recognize. Published March 15, 2019. Accessed September 27, 2019.
  11. IBM Watson Health. IBM Watson imaging patient synopsis. https://www.ibm.com/products/watson-imaging-patient-synopsis. Published May 2019. Accessed March 12, 2021.
  12. Loria K. Putting the AI in radiology. Radiology Today 19(1):10. https://www.radiologytoday.net/archive/rt0118p10.shtml. Accessed September 27, 2019.
  13. Data Science Institute, American College of Radiology. Cardiomegaly Detection. https://www.acrdsi.org/DSI-Services/Define-AI/Use-Cases/Cardiomegaly-Detection. Accessed September 27, 2019.
  14. Wiggers, K. Google AI claims 99% accuracy in metastatic breast cancer detection. VentureBeat. https://venturebeat.com/2018/10/12/google-ai-claims-99-accuracy-in-metastatic-breast-cancer-detection/. Published October 12, 2018. Accessed September 30, 2019.
  15. National Cancer Institute. Using artificial intelligence to classify lung cancer types, predict mutations. https://www.cancer.gov/news-events/cancer-currents-blog/2018/artificial-intelligence-lung-cancer-classification. Published October 10, 2018. Accessed September 30, 2019.
  16. Kann B, Thompson R, Thomas C, Dicker A, Aneja S. Artificial intelligence in oncology: Current applications and future directions. Oncology (Williston Park) 2019 February;33(2): 46-53.
  17. New AI tool predicts breast cancer risk. HealthManagement.org. https://healthmanagement.org/c/imaging/news/new-ai-tool-predicts-breast-cancer-risk. Accessed October 1, 2019.
  18. Artificial intelligence shows promise for skin cancer detection [news release]. Washington: American Academy of Dermatology; March 1, 2019. https://www.aad.org/media/news-releases/ai-and-skin-cancer-detection. Accessed October 1, 2019.
  19. Man against machine: AI is better than dermatologists at diagnosing skin cancer [news release]. European Society for Medical Oncology; May 28, 2018. https://www.eurekalert.org/pub_releases/2018-05/esfm-mam052418.php. Accessed October 1, 2019.
  20. The Doctors Company. The future of healthcare: A national survey of physicians—2018. https://www.thedoctors.com/contentassets/23c0cee958364c6582d4ba95afa47fcc/11724b_fohc-survey_0918_nomarks_spread_fr-1.pdf. Published September 2018. Accessed October 1, 2019.
  21. Topol E. Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. New York, NY: Hachette Book Group; 2019:285-286.
  22. Emanuel E, Wachter R. Artificial intelligence in health care: Will the value match the hype? JAMA. 2019;321(23):2281-2282. doi:10.1001/jama.2019.4914. https://jamanetwork.com/journals/jama/article-abstract/2734581. Accessed October 3, 2019.
  23. Polyakov A, Forbes Technology Council. How AI-driven systems can be hacked. Forbes. https://www.forbes.com/sites/forbestechcouncil/2018/02/20/how-ai-driven-systems-can-be-hacked/#58be892179df. Published February 20, 2018. Accessed October 1, 2019.
  24. Koh D. Healthcare cybersecurity—the impact of AI, IoT-related threats and recommended approaches. https://www.healthcareitnews.com/news/asia-pacific/healthcare-cybersecurity-impact-ai-iot-related-threats-and-recommended-approaches. Published September 18, 2019. Accessed October 1, 2019.
  25. Ranum D. Electronic health records continue to lead to medical malpractice suits. The Doctors Company. https://www.thedoctors.com/articles/electronic-health-records-continue-to-lead-to-medical-malpractice-suits/. Published August 2019. Accessed October 1, 2019.
  26. Sullivan H, Schweikart S. Are current tort liability doctrines adequate for addressing injury caused by AI? AMA Journal of Ethics. 2019 February;21(2):E160-166. https://journalofethics.ama-assn.org/article/are-current-tort-liability-doctrines-adequate-addressing-injury-caused-ai/2019-02. Accessed October 1, 2019.
  27. AMA: Put augmented intelligence in practice of medicine [news release]. Chicago: American Medical Association; June 12, 2019. https://www.ama-assn.org/press-center/press-releases/ama-put-augmented-intelligence-practice-medicine. Accessed October 1, 2019.
  28. Schiff D, Borenstein J. How should clinicians communicate with patients about the roles of artificially intelligent team members? AMA Journal of Ethics. 2019 February;21(2):E138-145. https://journalofethics.ama-assn.org/article/how-should-clinicians-communicate-patients-about-roles-artificially-intelligent-team-members/2019-02. Accessed October 1, 2019.
  29. Sullivan H, Schweikart S. Are current tort liability doctrines adequate for addressing injury caused by AI? AMA Journal of Ethics. 2019 February;21(2):E160-166. https://journalofethics.ama-assn.org/article/are-current-tort-liability-doctrines-adequate-addressing-injury-caused-ai/2019-02. Accessed October 1, 2019.
  30. AMA adopt policy, integrate augmented intelligence in physician training [news release]. Chicago: American Medical Association; June 12, 2019. https://www.ama-assn.org/press-center/press-releases/ama-adopt-policy-integrate-augmented-intelligence-physician-training. Accessed October 1, 2019.
  31. Schiff D, Borenstein J. How should clinicians communicate with patients about the roles of artificially intelligent team members? AMA Journal of Ethics. 2019 February; 21(2):E138-145. https://journalofethics.ama-assn.org/article/how-should-clinicians-communicate-patients-about-roles-artificially-intelligent-team-members/2019-02. Accessed October 1, 2019.
  32. Wilkinson C. Tech E&O, cyber coverage most likely to pay AI-related claims. Business Insurance. https://www.businessinsurance.com/article/20190401/NEWS06/912327577/Tech-E&O-cyber-coverage-most-likely-to-pay-artificial-intelligence-related-claim. Published April 1, 2019. Accessed October 1, 2019.

The guidelines suggested here are not rules, do not constitute legal advice, and do not ensure a successful outcome. The ultimate decision regarding the appropriateness of any treatment must be made by each healthcare provider considering the circumstances of the individual situation and in accordance with the laws of the jurisdiction in which the care is rendered.

10/19