

Losses from AI in Healthcare: Who is Liable?

Artificial intelligence (AI) has come a long way in the healthcare industry since its first applications in the field in the 1970s.

While there has been continual growth in this area year over year, we began to see significant investment in AI in the healthcare sector at the onset of the COVID-19 pandemic. Since then, we’ve continued to see a steady increase, which is projected to carry on. The global healthcare AI market size was valued at $10.4 billion in 2021 and is expected to expand at a compound annual growth rate of 38.4% from 2022 to 2030.
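As a rough illustration of what that growth rate implies, compounding the 2021 market size at 38.4% per year gives a ballpark 2030 figure. This is a back-of-the-envelope sketch only; the cited report's own projection may use a different base year and methodology.

```python
# Back-of-the-envelope projection of the healthcare AI market size,
# compounding the 2021 figure at the cited 38.4% CAGR through 2030.
base_2021 = 10.4   # market size in billions of USD (2021)
cagr = 0.384       # compound annual growth rate (38.4%)
years = 2030 - 2021

projected_2030 = base_2021 * (1 + cagr) ** years
print(f"Projected 2030 market size: ~${projected_2030:.0f}B")
```

Compounding over the nine years lands near $194 billion, which is in the same range as published market projections.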

Currently, the healthcare industry utilizes AI in many different ways, such as for patient monitoring and smart devices, like smartwatches, intended to monitor and collect users’ personal health data. There were 202.58 million smartwatch users at the end of 2021. That number is expected to increase to 216.43 million by the end of 2023.

There are also administrative uses for AI in healthcare, such as automated appointments and test results or voice recognition technology for dictation to the electronic medical record (EMR).

Other examples include computer-aided diagnosis (CAD), which is the computerized interpretation of imaging and pathology, and clinical decision support (CDS), which is a health information technology system explicitly designed to provide staff, patients, clinicians, and other individuals with data and patient-specific information that has been effectively filtered to help inform decisions about the patient’s care.

Potential issues with AI usage in healthcare

AI has significant potential to improve healthcare systems, but as with any technology, its use brings both benefits and pitfalls. For example, smart devices can assist with monitoring and prediction tasks, such as contact tracing, but they may place additional responsibilities on the healthcare provider, such as ensuring the patient is properly trained to use the devices, the devices are properly calibrated, and the patient has adequate internet access.


Speech recognition and medical dictation software should give the physician more time to focus on the patient. However, voice recognition is not always accurate, and AI-backed programs lack contextual understanding. Data from the National Library of Medicine shows that AI voice recognition accuracy ranges from 88.9% to 96%. By comparison, human dictation and transcription services have a 99.6% accuracy rate, making them a more effective way to reduce the risk of error and malpractice in healthcare.
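To put those accuracy figures in concrete terms, the gap translates into a large difference in expected errors per document. This is a simple word-level arithmetic sketch; real error rates depend on vocabulary, audio quality, and how "accuracy" is measured.

```python
# Expected misrecognized words per 1,000 words of dictation
# at the accuracy rates cited above (rough word-level approximation).
def errors_per_1000(accuracy: float) -> float:
    return (1 - accuracy) * 1000

rates = [
    ("AI voice recognition (low end)", 0.889),
    ("AI voice recognition (high end)", 0.960),
    ("Human transcription", 0.996),
]

for label, acc in rates:
    print(f"{label}: ~{errors_per_1000(acc):.0f} errors per 1,000 words")
```

At the low end, AI speech recognition could produce more than 25 times as many errors as human transcription, which is why proofreading remains essential.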

AI-backed programs transcribe words individually, so long or uncommon terminology and medical or text abbreviations may be misrecognized or mistranscribed. It is therefore crucial for physicians to proofread their documentation or engage human medical transcriptionists to correct errors.

CAD is an essential part of medical imaging and radiology-based diagnostics. It identifies significant visual markers, which saves time on analysis and thus eases the workload for providers. However, inaccuracies are still possible and require human oversight. CDS in healthcare can decrease medical errors and improve efficiency and patient care, but CDS and CAD should only expand a physician’s decision-making, not replace it. CDS is not immune from bias and error, and there is also very little FDA oversight in implementing CDS.

When a loss from the use of AI occurs, what insurance coverages could potentially be impacted?

There is essentially no case law on liability involving medical AI because it is relatively new. If AI is involved in the provision of healthcare (or other) services, it is possible that both the developer and provider of the services may have liability under a variety of tort law principles.

Under theories of strict liability, a developer may be held liable for defects in their AI that are unreasonably dangerous to users. The AI itself will most likely not be held liable, but as AI evolves, it is reasonable to assume that tort theories may also evolve. As a result, the developer and provider will likely be exposed to liability associated with AI, which could impact product liability or professional liability coverages, depending on the functions the AI is performing.

It is important to mention that, depending on how the AI is used, a provider may be required to disclose the use of AI to their patients as part of the informed consent process.

Privacy and security risks impact all healthcare entities, given that they have access to and store patients’ personally identifiable information (PII) and protected health information (PHI), including medical records, billing information, test results, and other confidential information. Healthcare organizations have an obligation to protect this information under HIPAA.

A violation of HIPAA, such as a data breach, can result in significant fines and penalties regardless of whether a healthcare organization was responsible for the breach.

AI processes and has access to massive amounts of data. As a result, it is inevitable that using AI may implicate HIPAA and state-level privacy and security laws and regulations. The healthcare industry tends to be a target for ransomware attacks and data breaches—patient information tends to be worth a lot of money to hackers, and connected medical devices such as X-ray machines, pacemakers, insulin pumps, and defibrillators can be an easy point of entry for attackers.

The use of AI in the healthcare industry is changing rapidly, and organizations are adopting new technologies every day. What we know today could surely change tomorrow, and until we see some case law in this area, we can only speculate.

As always, it is essential to talk with your insurance broker to ensure your insurance program is keeping pace with changes in your organization and to remain current on AI best practices when used in your business.


Resources and References

  1. Artificial Intelligence In Healthcare Market Size Report, 2030
    www.grandviewresearch.com
  2. Smartwatch Statistics 2023: How Many People Use Smartwatches?
    www.demandsage.com/smartwatch-statistics/
  3. Clinical Decision Support (CDS)
    www.talkinghealthtech.com/glossary/clinical-decision-support-cds
  4. Why Medical Dictation is Still Better Than Voice Recognition For Now
    www.healthitoutcomes.com
  5. Pros & Cons of Artificial Intelligence in Medicine | Drexel CCI
    www.drexel.edu/cci/stories/artificial-intelligence-in-medicine-pros-and-cons/
  6. The Impact of Clinical Decision Support Systems (CDSS) on Physicians: A Scoping Review – PubMed
    www.nih.gov
  7. AI in Healthcare: Top Ten Legal Considerations
    www.natlawreview.com

The views and opinions expressed within are those of the author(s) and do not necessarily reflect the official policy or position of Parker, Smith & Feek. While every effort has been made in compiling this information to ensure that its contents are accurate, neither the publisher nor the author can accept liability for any inaccuracies or changed circumstances of any information herein or for the consequences of any reliance placed upon it.
