American Medical Association To Develop Recommendations For Augmented Intelligence

As augmented intelligence (AI) promises a new frontier in healthcare and medicine, the American Medical Association (AMA) is taking steps to advise on the immediate implications for the practice of medicine. Specifically, the AMA is encouraging a better understanding of how AI's vast potential can be appropriately harnessed to benefit patients -- and to decrease the administrative burden on physicians. At the Annual Meeting of the AMA House of Delegates, the nation's physicians agreed to develop principles and recommendations on the benefits and unforeseen consequences of relying on AI-generated medical advice and content that may or may not be validated, accurate, or appropriate -- and then to advise policymakers to take action that will protect patients from misinformation.

“AI holds the promise of transforming medicine. We don’t want to be chasing technology. Rather, as scientists, we want to use our expertise to structure guidelines and guardrails to prevent unintended consequences, such as baking in bias and widening disparities, dissemination of incorrect medical advice, or spread of misinformation or disinformation,” said AMA Trustee Alexander Ding, M.D., M.S., M.B.A. “We’re trying to look around the corner for our patients to understand the promise and limitations of AI. There is a lot of uncertainty about the direction and regulatory framework for this use of AI that has found its way into the day-to-day practice of medicine.”

In addition to making recommendations, the AMA House of Delegates voted to work with the federal government and other appropriate organizations to protect patients from false or misleading AI-generated medical advice. The AMA has begun -- and will continue -- to encourage physicians to educate patients about the benefits and risks of engaging with AI.

As a leader in medicine, the AMA has a unique opportunity to ensure that the evolution of AI in medicine benefits patients, physicians, and the health care community. While these tools show tremendous promise in helping alleviate physician administrative burdens and may ultimately be successfully used in direct patient care, OpenAI’s ChatGPT and other generative AI products have known issues and are not error free. These current limitations create potential risks for physicians and patients, and the tools should be used with appropriate caution at this time. AI-generated fabrications, errors, or inaccuracies can harm patients, and physicians need to be acutely aware of these risks and the added liability before they rely on unregulated machine-learning algorithms and tools.

“Moving toward creation of consensus principles, standards, and regulatory requirements will help ensure safe, effective, unbiased, and ethical AI technologies, including large language models (LLMs) and generative pre-trained transformers (GPT) [are developed] to increase access to health information and scale doctors' reach to patients and communities,” Dr. Ding said. “We are entering this brave new world with our eyes wide open and our minds engaged.”
