Artificial Intelligence in Medicine

By Alan Lembitz, MD, and Michael Victoroff, MD

Artificial intelligence (AI) is the most important new information technology to affect healthcare in decades. Because its development and deployment are outpacing legal, medical, and business adaptation, healthcare systems face an ongoing need to regularly assess AI's impact and risks. The implications carry enormous benefits, risks, and unforeseen consequences.

AI Applications for Medical Providers

Providers are already using AI, and adoption and experimentation are accelerating. AI will be able to assist with nearly every cognitive task a healthcare provider performs. Its interactive dialog capabilities can be used for patient interactions such as inbox management, scheduling, history taking, surveys, prescription management, translation services, recall and follow-up, answering medical questions, diagnosing conditions, performing risk assessments, triage, and referrals. Some patients will be more comfortable interacting with AI than with humans on sensitive issues. Many are likely to have already interacted with a diagnostic AI tool online before seeing a healthcare professional.

Providers might use AI to help them in the informed consent process, disclosures of outcomes and results, adverse event reporting, apology and resolution discussions, and explaining medical information. AI can also be used for generating notes, reports, and correspondence, summarizing records and incoming reports, reviewing results, and creating task lists and reminders. Additional areas for AI applications include performance management, planning, procurement, research, literature review, and employee supervision.

The following highlights three potential uses of AI, their likelihood of adoption, benefits and risks, and some strategies to address those risks.

1. Virtual scribing and clinical documentation—The benefits of using a tool to quickly and accurately generate a record of clinical interactions are obvious. Patients will need to be aware of, and consent to, the recording devices used to generate the records. Providers will need to learn to “narrate their examinations” to populate the record. A policy of erasing the recording's work product at regular, short intervals, along with open access to the final record generated from that work product, will help allay patients' fears about how their information was captured and what goes into their permanent medical record.

Complying with the information-sharing processes required by the Cures Act will become even more important. One can also predict that patients' awareness of the record, and their requests to edit, amend, or delete material in it, will increase; providers and their staff will need to be cognizant of the HIPAA processes and documentation those requests require.

Finally, and most importantly, given how AI works and its inherent ability to produce fluent but possibly inaccurate, misleading, or even harmful output, the provider will absolutely need to read and verify the content of AI-generated notes. The old practice of “dictated but not read” becomes, in this case, “AI generated but not read,” and it will not be an accepted practice.

2. Office administration tasks—In the near future, AI tools will be adaptable to much of everyday office administration. Tools that accurately assign a visit type, schedule its length, collect the necessary pre-visit information, obtain authorizations, and so on will be adopted quickly by short-staffed practices and systems. Interactions with third-party payors are likely to be highly automated on the payors' end, which will necessitate efficient and accurate automation on the provider side. AI can generate requests for authorization effectively as it becomes more adept at extracting information from EHRs.

3. Office triage and AI telemedicine clinical assessment—Phone triage and pre-visit clinical assessment have the potential to become highly automated. These tools will be more sophisticated than the current algorithmically generated barrage of questions; they will process open-ended speech and generate interactions that look surprisingly human. Patients will need to be made aware that they are interacting with AI, and the option to reach a human should be available. In addition, the documentation from these interactions will need to be audited to ensure the system is performing as expected.

Ultimately, the machine-generated clinical assessment will be the work product of licensed providers, and the responsibility and liability will likely rest with them. However, issues of product liability will arise and create new concerns around responsibility for errors.

Alan Lembitz, MD, and Michael Victoroff, MD, are with the COPIC Department of Patient Safety and Risk Management.
