Research Reveals Gaps in Oversight of Artificial Intelligence for Radiology
Survey shows that about a quarter of hospitals were using FDA-cleared, AI-enabled medical imaging tools in 2022, often without comprehensive piloting and monitoring

Health care providers have been using medical devices enabled by artificial intelligence (AI) for decades. Some uses are improving health outcomes, but others may be putting patients at risk. Just as developers must create safe and effective products, health care providers and facilities also need to take steps to ensure that they are using the products responsibly. However, users take a wide range of approaches to adopting best practices when it comes to assessing, implementing, and monitoring AI-enabled devices.
To get a sense of how that process had been unfolding, The Pew Charitable Trusts conducted research and convened experts in 2022 to examine how AI was being used in radiology, which is where most AI-enabled medical devices approved or cleared by the U.S. Food and Drug Administration are used.
Pew partnered with research firm SSRS to survey hospital leadership, radiologists, IT staff, and other health professionals from May 17 to Aug. 11, 2022. Of 491 respondents, 44% said they were using AI-enabled medical imaging tools for clinical purposes. Some hospitals were more likely to use AI than others: About 54% of hospitals with more than 100 beds reported using AI, as did 54% of teaching hospitals—significantly more than smaller hospitals (35%) and nonteaching hospitals (42%), respectively.
In the years since this survey, the use of AI has expanded greatly, both in society as a whole and in health care, but the results provide a window on an early step in the implementation process. More recent data from 2025 show that the FDA has authorized far more AI-enabled devices, still concentrated heavily in radiology.
Of the hospitals using AI-enabled medical imaging tools in 2022, 54% said that they were using only products cleared by the FDA. Just 3% said they were using exclusively homegrown AI tools, 8% said they were using both kinds, and 35% did not specify the regulatory status of the products they were using.
Of the hospitals using FDA-cleared, AI-enabled medical imaging tools (either exclusively or alongside homegrown AI), 82% said that they were using just one or two FDA-cleared products. About 62% of these institutions said that they used the tools solely in patients age 22 and older, while 24% said they used AI in populations both above and below that age. At the time of the survey, hospitals most commonly reported using the technology for image interpretation and analysis (82%) and worklist prioritization (48%), followed by opportunistic screening and population health management (19%), clinical workflow process improvements (12%), and image enhancements (7%).
Officials at the hospitals using FDA-cleared AI were also asked how they assess, implement, and monitor FDA-cleared, AI-enabled medical imaging tools. Although these practices, too, have changed significantly since 2022, the responses offer a detailed picture of the landscape at that time:
- 34% said that they received information about training and validation sets used to create algorithms for all tools they used, 20% received it for some tools they used, 13% did not receive it for any, and 23% were not sure.
- 26% reported that they tested AI tools with sample images from their patients, 18% did so for some tools, and 21% did not do so for any tools; 24% were not sure.
- 26% said that they piloted all tools within their facility before using them widely, 17% did so for some tools, and 27% did not do so for any tools; 20% were not sure.
- 28% said that they always evaluated specific AI tools by consulting with peer institutions that were already using them, 18% did so for some tools, and 16% did not do so for any tools; 26% were not sure.
- 31% said that, once the products were in use, they monitored them at set intervals based on specific protocols or predefined measures, 34% said they monitored them periodically or as issues arose, and 23% said they did not conduct any monitoring.
- Of the 95 hospitals that reported monitoring AI tools, 20% said that they monitored performance internally (e.g., via a dashboard or registry), 16% said they received monitoring services from the AI vendor, 45% said they did both, and 9% said they did something else.
When asked about specific benefits of AI-enabled medical imaging tools, most respondents—AI users and nonusers alike—said that improved quality of care and diagnostic accuracy and greater clinician efficiency and productivity were “major benefits.” Most respondents also said that improved workflow, enhanced patient experience, and market differentiation were “major” or “minor” benefits.
When asked about barriers to AI implementation, users and nonusers most often cited high costs and the need to generate a return on investment. Respondents were also concerned about workflow integration, technological infrastructure and training needs, AI performance, and cybersecurity.
Following the quantitative survey, 27 clinicians, AI developers, health IT experts, researchers, and federal officials gathered Nov. 1-2, 2022, to examine the results, identify gaps between how products are being developed, used, and monitored and how they should be, and recommend actions. The discussion was organized around five checkpoints in an AI product’s life cycle: model development, product review, procurement, implementation, and post-deployment surveillance. It highlighted several common threads:
- Participants agreed that, in the development phase, standards are needed to define best practices, including transparency, bias mitigation, and explainability. Likewise, they said that prospective users should examine how AI products are developed, on which data they are trained, and the logic that the algorithms use to produce outputs, among other inner workings, to ensure that the products will be effective for their institutions’ patient populations. For example, a hospital that treats a high proportion of Black patients should know if an AI tool designed to help doctors detect skin cancer was trained on sufficient data from Black people, as research demonstrates that AI trained on images of skin lesions of lighter-skinned people produces less accurate results when it is used to identify lesions on people with darker skin tones.
- They widely recognized that FDA review and clearance are needed to ensure product safety and efficacy and to promote trust among health care providers and patients. However, participants also acknowledged that AI-enabled software does not fit neatly into the agency’s current review processes and expressed concern that the FDA would struggle to scale its oversight as the number of AI products grows.
- Participants agreed that, even if products receive FDA clearance or approval, institutions should perform their own testing to ensure that the products are safe and effective for their specific patient populations. For example, each site could evaluate the product with a retrospective dataset, comparing the AI outputs with the providers’ actual diagnoses and actions (see the illustrative sketch after this list).
- Participants proposed development of universal standards for implementing AI in clinical practice and agreed that multidisciplinary governance teams should oversee this process to ensure that AI tools are integrated seamlessly into existing clinical workflows.
- Finally, participants noted that as AI tools learn from real-world data, their performance can degrade—a phenomenon known as “data drift.” As a result, users need standards and resources to implement structured surveillance of AI performance in the clinical environment.
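To make the last two recommendations concrete, the sketch below shows, in Python, one way a hospital team might run the kind of retrospective comparison the participants described and then track performance month by month for signs of drift. It is a minimal illustration only: the file name, the column names (study_month, ai_flag, final_read), and the sensitivity and specificity floors are hypothetical assumptions, not elements of the survey or the expert discussion, and any real evaluation would require clinical and statistical review.

```python
# Minimal sketch of a site-level retrospective evaluation and drift check.
# Assumes a hypothetical CSV export with one row per imaging study, containing
# the AI tool's binary flag, the radiologist's final finding, and the study month.
# Column names and performance floors are illustrative, not from the survey.
import csv
from collections import defaultdict

def load_studies(path):
    """Read (study_month, ai_flag, radiologist_flag) rows from a CSV export."""
    studies = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            studies.append((
                row["study_month"],                # e.g., "2022-06"
                row["ai_flag"] == "positive",      # AI tool's output
                row["final_read"] == "positive",   # radiologist's diagnosis
            ))
    return studies

def confusion_counts(pairs):
    """Count true/false positives and negatives for (ai, truth) pairs."""
    tp = sum(1 for ai, truth in pairs if ai and truth)
    fp = sum(1 for ai, truth in pairs if ai and not truth)
    fn = sum(1 for ai, truth in pairs if not ai and truth)
    tn = sum(1 for ai, truth in pairs if not ai and not truth)
    return tp, fp, fn, tn

def sensitivity_specificity(pairs):
    """Compute sensitivity and specificity, returning NaN when undefined."""
    tp, fp, fn, tn = confusion_counts(pairs)
    sens = tp / (tp + fn) if (tp + fn) else float("nan")
    spec = tn / (tn + fp) if (tn + fp) else float("nan")
    return sens, spec

def monthly_drift_report(studies, sens_floor=0.85, spec_floor=0.80):
    """Group studies by month and flag months that fall below local floors."""
    by_month = defaultdict(list)
    for month, ai, truth in studies:
        by_month[month].append((ai, truth))
    for month in sorted(by_month):
        sens, spec = sensitivity_specificity(by_month[month])
        alert = " <-- review" if sens < sens_floor or spec < spec_floor else ""
        print(f"{month}: sensitivity={sens:.2f} specificity={spec:.2f}{alert}")

if __name__ == "__main__":
    studies = load_studies("ai_vs_radiologist.csv")  # hypothetical export
    overall = [(ai, truth) for _, ai, truth in studies]
    sens, spec = sensitivity_specificity(overall)
    print(f"Overall: sensitivity={sens:.2f} specificity={spec:.2f}")
    monthly_drift_report(studies)
```

Grouping by month keeps the surveillance check simple to read; a site could instead use rolling windows or stratify by scanner, protocol, or patient demographics, which is closer to the structured monitoring the participants called for.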
Since 2022, AI use has become much more common, particularly in radiology. As of February 2025, 778 of the 1,016 AI-enabled devices that the FDA had authorized were in radiology. The introduction of ChatGPT, launched just a few weeks after the research concluded, also ushered in a wave of new generative AI tools. Built on large language models, these products are now being used to perform administrative tasks and show potential for clinical and research applications. The American College of Radiology has expanded efforts to help its members navigate this evolving AI terrain.
The government’s approach to AI has also evolved since 2022. In April 2023, the FDA published draft guidance for developers seeking clearance of AI-enabled products. Then, in March 2024, in response to the White House’s October 2023 executive order, the FDA published a white paper describing how it is collaborating with external partners to develop regulations, standards, guidance, research, and other tools that support the “development, deployment, use, and maintenance” of AI-enabled medical products.
Separately, in January 2024, the Office of the National Coordinator for Health IT published the final Health Data, Technology, and Interoperability rule, which includes federal requirements for AI- and machine learning-based predictive software in health care.
These developments represent significant progress, and the research by Pew provides important baseline information for developers, users, and regulators. As AI in health care continues to evolve, more initiatives will be needed to understand how and why providers are using AI, including generative tools; how the products are performing; and how federal regulations and professional standards and guidelines can be strengthened in ways that balance technological innovation and patient safety.
Kathy Talkington is the director of The Pew Charitable Trusts’ public health programs and Josh Wenderoff is a senior officer working on Pew’s health programs.