Artificial Intelligence in Medicine (Part 2)

By Alan Lembitz, MD

The use of artificial intelligence (AI) applications is the most significant new information technology to reshape healthcare in decades. Because its development and deployment are outpacing legal, medical, and business changes, healthcare systems face an ongoing need to regularly assess AI's impact and risks. The implications carry enormous benefits, risks, and unforeseen consequences.

Consider these important liability and safety issues:

1) The practitioner remains responsible for the practice of medicine.

Introducing another entity into the care process creates potential liability, and it does not necessarily reduce risk exposure for the provider-user.

It is still the licensed provider who is practicing medicine. Apportioning contributions to the outcome may take new forms when AI is used, but current regulatory theories tend to follow the model of “device safety.” Liability for AI mishaps potentially involves both device and human accountability. Fully autonomous systems will doubtless begin to appear, but humans are likely to remain in the accountability loop.

Legislatures, courts, and agencies like the FDA will determine whether additional liabilities attach to entities besides practitioners. There may be claims against vendors when product defects are not apparent or foreseeable. There may also be claims against organizations that fail to use diligence in specifying, acquiring, configuring, or maintaining systems or in training users.

Investigating an AI claim will require determining the exact version and configuration of the tool and the manner of its use, and reviewing its activity logs and possibly its operating code.

2) The use of AI needs to be transparent, verifiable, and reproducible.

Users of AI applications need to be able to explain, in general terms, how these tools work and what safety measures apply to them.

A foreseeable deposition question in a malpractice claim involving AI might be, “Please describe exactly how this event happened.” Answering this can be difficult with applications that operate as “black boxes,” offering no user visibility into algorithms that even their developers may not be able to fully explain.

Nevertheless, defending good care will require you to show what tool you used, how you reviewed the output from that tool, and the role it played in clinical care and decision making. Standards of care will begin to incorporate expectations for AI use, as they did for other technologies (like EKG and MRI). Guidelines will constantly evolve for what “a reasonable provider in similar circumstances” should do.

3) Credibility is a challenge; inaccuracies can propagate and be difficult to identify.

Credibility of the medical record and the decision-making process will face challenges, as the line blurs between what is generated by AI and what is contributed by human judgment.

As occurred with EHRs when copy-paste came into use, the credibility of an entire record can be called into question when content is fabricated or faulty. The fluency of AI-generated records may make documentation errors harder to spot. Rapid propagation of information across networks amplifies the impact of content errors and imposes a higher responsibility on users to proofread AI-assisted work products.

AI tools are only as good as their training data. Building them upon large medical record archives—which are well known to contain inaccuracies and biases—has been shown to produce outputs that can sometimes be strikingly inappropriate, discriminatory, or dangerously wrong. There is an urgent call for explanatory systems that allow users to audit and examine AI thought processes.

4) Privacy issues are complex and are already challenging current safeguards.

Large AI systems typically require data processing by remote cloud servers. When patient data, images, or recordings are transmitted to external parties, issues arise about how they are stored and processed, and how the information can be used. For example, many machine learning systems incorporate data they receive into their permanent training sets.

This is a different situation from “a transcriptionist in the back room.” Patient consent is not required for functions that are simply part of healthcare operations. But if protected health information is re-purposed for uses other than the benefit of a particular patient, there needs to be a specific disclosure and consent.

Providers need to understand their user agreements and HIPAA Business Associate Agreements with vendors of applications that involve PHI. Claims that “data are de-identified” need to be verified because it has been shown that large databases can be used to re-identify confidential information that is presumed to be anonymous. Providers who use AI-powered search engines, intelligent assistants, documentation, and decision support applications that have access to PHI need to inquire carefully how vendors comply with HIPAA and other privacy rules.

Alan Lembitz is with the COPIC Department of Patient Safety and Risk Management.

