When used in health care, artificial intelligence (AI) holds the promise of improving patient outcomes, reducing costs, and advancing medical research. These tools can analyze patient images for disease, detect patterns in large sets of health data, and automate certain administrative tasks. But many companies develop AI-enabled medical products in what is essentially a “black box,” disclosing little to the public about their inner workings. Just as doctors and patients need to know what’s in a prescription medication, AI users need information about the tools that may be used to help make life-or-death medical decisions.
Not all AI-enabled tools fall under the purview of the Food and Drug Administration, but the agency regulates any software intended to treat, diagnose, cure, mitigate, or prevent disease or other conditions before it can be marketed and sold commercially. In recent years, FDA has been considering an updated approach to oversight of these products, including steps to improve how developers communicate about four key factors: a product’s intended use, how it was developed, how well it performs, and the logic it uses to generate a result or recommendation.
If companies do not disclose these details, prescribers and patients may be more likely to use the products inappropriately, which can lead to inaccurate diagnoses, improper treatment, and harm. That is why information about a product’s intended use, development, performance, and logic matters to both patients and prescribers.
FDA can promote increased transparency by requiring more and better information on AI-enabled tools in the agency’s public database of approvals. Currently, the details that companies publicly report about their products vary. For example, in an analysis of public summaries for the 10 FDA-cleared AI products for breast imaging, only one provided information about the racial demographics of the data used to validate the product. Requiring developers to publicly report basic demographic information—and where appropriate, data on how the product performed in key subgroups—could help providers and patients select the most appropriate products. This is especially important when treating conditions with disparate impacts on underserved populations, such as breast cancer, a disease more likely to be fatal for Black women.
As it does for drug labeling, the agency could also require developers to provide more detailed information on product labels so that these tools can be properly evaluated before being purchased by health care facilities or patients. Researchers at Duke University and the Mayo Clinic have suggested an approach akin to a nutrition label that would describe how an AI tool was developed and tested and how it should be used. This would allow end users to better assess products before they are used on patients. The information could also be integrated into an institution’s electronic health record system, making the data easily available to busy providers at the point of care.
AI can save lives and reduce health care costs, but providers and patients need to know more about these products to use them safely and effectively. FDA should continue its crucial work to increase the transparency of these revolutionary tools.
Liz Richardson directs The Pew Charitable Trusts’ health care products project.