Summary:
The American Medical Association (AMA) has adopted a new policy requiring clinical AI tools to be explainable, independently validated, and transparent to ensure safety, efficacy, and informed physician-patient decision-making.

Key Takeaways:

  1. Explainable AI Required: The AMA mandates that clinical AI tools must provide accessible, understandable explanations for their outputs to support physician interpretation and patient care.
  2. Independent Oversight: The policy calls for third-party validation—rather than relying on developers—to assess whether AI tools meet explainability standards.
  3. Transparency Over IP Claims: While intellectual property should be protected, it must not override patients’ rights to transparency and physicians’ ability to critically assess AI-driven decisions.

As augmented intelligence tools continue to emerge in medical care, the American Medical Association (AMA) adopted a new policy during the Annual Meeting of its House of Delegates aimed at maximizing trust in, and increasing transparency around, how these tools arrive at their conclusions. Specifically, the new policy calls for explainable clinical AI tools that are accompanied by safety and efficacy data. To be considered explainable, these tools should provide explanations for their outputs that physicians and other qualified humans can access, interpret, and act on when deciding on the best possible care for their patients.

Furthering the AMA’s support for greater oversight and regulation of augmented intelligence (AI) and machine learning (ML) algorithms used in clinical settings, the new policy calls for an independent third party, such as a regulatory agency or medical society, to determine whether an algorithm is explainable, rather than relying on claims made by its developer. The policy states that explainability should not be used as a substitute for other means of establishing the safety and efficacy of AI tools, such as randomized clinical trials. Additionally, the new policy calls on the AMA to collaborate with experts and interested parties to develop and disseminate a list of definitions for key concepts related to medical AI and its oversight.

“With the proliferation of augmented intelligence tools in clinical care, we must push for greater transparency and oversight so physicians can feel more confident that the clinical tools they use are safe, based on sound science, and can be discussed appropriately with their patients when making shared decisions about their health care,” says AMA Board Member Alexander Ding, MD, MS, MBA. “The need for explainable AI tools in medicine is clear, as these decisions can have life or death consequences. The AMA will continue to identify opportunities where the physician voice can be used to encourage the development of safe, responsible, and impactful tools used in patient care.”

The AMA Council on Science and Public Health report that served as the basis for this policy noted that when clinical AI algorithms are not explainable, the clinician’s training and expertise are removed from decision-making: the clinician is presented with information they may feel compelled to act on without knowing where it came from or being able to assess the accuracy of the conclusion. The report also noted that intellectual property concerns, when offered as a rationale for not explaining how an AI device produced its output, should not nullify a patient’s right to transparency and autonomy in making medical decisions. To this end, the new policy states that while intellectual property should be afforded a certain level of protection, concerns about infringement should not outweigh the need for explainability for AI with medical applications.

For more information about the AMA, visit ama-assn.org.