Mini-Review

Current state of Food and Drug Administration-approved artificial intelligence/machine learning medical devices: pathways, transparency, and evidence gaps

Aditya Loganathan, Michael Friedman, Tayab Waseem, Kaylee Wahl-Cox, Shreya Kodu, Zara Sideeka, Sarah Shectman, Ali Watson, Aya Shehata, Nicolas Stanciu, Molly Rutenberg, Leslie Gailloud, Nicholas Melucci, Daniel I. Shpigel, Andrew C. Meltzer

Department of Emergency Medicine, The George Washington University School of Medicine and Health Sciences, Washington, DC, USA

Contributions: (I) Conception and design: A Loganathan, AC Meltzer; (II) Administrative support: A Loganathan, T Waseem; (III) Provision of study materials or patients: A Loganathan, N Stanciu, A Shehata, K Wahl-Cox, S Kodu, Z Sideeka, S Shectman; (IV) Collection and assembly of data: A Loganathan, N Stanciu, A Shehata, K Wahl-Cox, S Kodu, Z Sideeka, S Shectman; (V) Data analysis and interpretation: A Loganathan, M Friedman; (VI) Manuscript writing: All authors; (VII) Final approval of manuscript: All authors.

Correspondence to: Andrew C. Meltzer, MD, MS. Department of Emergency Medicine, The George Washington University School of Medicine and Health Sciences, 2120 L ST NW Suite 610, Washington, DC 20037, USA. Email: ameltzer@mfa.gwu.edu.

Abstract: Artificial intelligence (AI) and machine learning (ML) technologies are rapidly transforming healthcare, offering powerful tools for diagnostic support, image interpretation, and clinical decision-making. As AI/ML-enabled medical devices proliferate, regulatory oversight and evidence transparency have become key public health priorities. This study provides an updated review of all Food and Drug Administration (FDA)-approved AI/ML-enabled medical devices through May 2025, analyzing regulatory pathways, transparency of algorithmic reporting, and the quality of supporting clinical evidence. We conducted a narrative review of the FDA’s publicly available AI/ML-enabled medical devices database, capturing all devices with final approval through May 31, 2025. Data elements included device name, manufacturer, approval date, product code, medical specialty, regulatory pathway, and description of clinical validation. Premarket summaries were analyzed for mention of AI/ML, study type, and comparison to predicate devices. Six independent reviewers abstracted data, with discrepancies resolved by consensus. As of May 2025, 1,016 AI/ML-enabled devices have received FDA authorization, with 38.8% approved since January 2023. Most approvals (96.5%) occurred via the 510(k) pathway, with limited use of De Novo (3%) or premarket approval (PMA) (<1%) routes. Radiology dominated (76%), followed by cardiology (9.8%). Only 55.8% of devices disclosed algorithmic details beyond generic terms, and fewer than 20% referenced prospective validation data. Transparency and methodological rigor remain inconsistent across devices. AI/ML-enabled devices represent a major step in the digital transformation of medicine, yet the dominance of the 510(k) pathway and limited clinical validation highlight ongoing regulatory gaps. Strengthening standardized reporting, post-market monitoring, and international harmonization, especially for adaptive AI, will be critical for ensuring safe, effective, and equitable adoption.

Keywords: Artificial intelligence (AI); Food and Drug Administration (FDA); medical devices; regulation; transparency


Received: 18 August 2025; Accepted: 02 December 2025; Published online: 27 February 2026.

doi: 10.21037/jmai-2025-196


Introduction

Background

Artificial intelligence (AI) is rapidly changing healthcare, advancing how clinicians diagnose, manage, and treat a wide range of medical conditions. AI encompasses technologies that enable systems to perform tasks requiring human intelligence, such as pattern recognition and decision-making. Recent advances in deep learning (DL), big data, and computational power have driven an unprecedented surge in AI development. AI’s applications in healthcare range from generating realistic training data and personalized patient education to assisting clinicians with note summarization and treatment recommendations (1).

AI, machine learning (ML), and DL are related but distinct. ML is a subset of AI focused on learning from historical data, while DL employs multilayered neural networks inspired by aspects of brain function. AI/ML-enabled medical devices use these algorithms to support clinical decision-making, leveraging large datasets to improve outcomes and streamline care (2). Radiology and cardiology have especially benefited, using AI for image segmentation, anomaly detection, and electrocardiography (ECG) interpretation (3,4).

Objectives

Despite this rapid growth, the regulatory framework for AI/ML medical devices remains in flux. While previous studies examined earlier phases of device authorization, few have analyzed developments beyond 2020 or incorporated recent Food and Drug Administration (FDA) guidance on AI-enabled systems (5,6). The rapid evolution of these technologies calls for a comprehensive analysis of current regulatory mechanisms, transparency gaps, and evidence standards. Our review provides an updated evaluation of FDA-approved AI/ML-enabled devices, examining the type of AI used, associated medical specialty, and quality of supporting clinical evidence to inform regulatory policy and support safe clinical adoption.


Methods

  • Date of search: May 31, 2025; all AI/ML-enabled devices with final FDA approval through this date were included.
  • Databases and other sources searched: the FDA’s publicly available AI/ML-enabled medical devices list.
  • Search terms: “artificial intelligence”, “machine learning”, “deep learning”, “neural network”, and related terms. We extracted device name, company, approval date, medical specialty, and product code, as well as FDA premarket summaries (an illustrative data-handling sketch follows this list).
  • Selection process: six trained reviewers independently abstracted and verified entries; discrepancies were resolved by consensus.
  • Inclusion/exclusion: devices listed in the FDA’s AI/ML database with public documentation; no language restrictions.
  • Additional considerations: we recorded intended patient age (adult/pediatric), secondary specialty classifications, and evidence type (retrospective, prospective, or predicate-based).
  • Institutional review board (IRB): not required as all data were publicly available.
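To make the tabulation concrete, the sketch below shows one minimal way the public device spreadsheet could be filtered and tallied in Python with pandas. This is an illustration under stated assumptions, not the pipeline used in this review: the filename and the column names “Date of Final Decision”, “Submission Number”, and “Panel (lead)” are hypothetical and may differ from the FDA’s actual export.

```python
# Illustrative sketch only (not the authors' actual pipeline): filter the
# public FDA AI/ML-enabled device list to the study window and tally devices
# by regulatory pathway and lead specialty panel. Filename and column names
# are assumptions.
import pandas as pd

devices = pd.read_excel("ai-ml-enabled-devices.xlsx")  # hypothetical local copy

# Keep devices with a final decision on or before May 31, 2025
devices["Date of Final Decision"] = pd.to_datetime(devices["Date of Final Decision"])
in_window = devices[devices["Date of Final Decision"] <= "2025-05-31"].copy()

def pathway(submission_number: str) -> str:
    """Infer the regulatory pathway from the submission number prefix
    (K = 510(k), DEN = De Novo, P = PMA), a common convention for FDA
    submission identifiers."""
    if submission_number.startswith("DEN"):
        return "De Novo"
    if submission_number.startswith("P"):
        return "PMA"
    return "510(k)"

in_window["Pathway"] = in_window["Submission Number"].map(pathway)

# Percentage of devices per pathway, and the five most common lead panels
print(in_window["Pathway"].value_counts(normalize=True).mul(100).round(1))
print(in_window["Panel (lead)"].value_counts().head(5))
```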

Results

As of May 31, 2025, 1,016 FDA-approved AI/ML-enabled medical devices exist, with 394 (38.8%) approved since January 2023. The full device list is available at https://cdn.amegroups.cn/static/public/jmai-2025-196-1.xlsx.

AI technology transparency

Among all devices, 28.6% (n=291) referenced only basic AI terms without algorithmic details, whereas 55.8% (n=567) explicitly mentioned “ML”, “DL”, or “convolutional neural networks (CNNs)”; these specific disclosures were most common in radiology and cardiology. The remaining 15.6% (n=158) provided no AI-specific detail (Table 1).
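As an illustration of how such a three-tier coding could be operationalized, the sketch below assigns a premarket summary to one of the tiers described above by keyword matching. The keyword lists are our own illustrative assumptions, not the abstraction criteria actually applied by the reviewers.

```python
# Minimal sketch of a three-tier transparency coding (specific algorithmic
# detail / generic AI terms only / no AI-specific detail). Keyword lists are
# illustrative assumptions, not the review's actual criteria.
SPECIFIC_TERMS = ("machine learning", "deep learning",
                  "convolutional neural network", "cnn")
GENERIC_TERMS = ("artificial intelligence", "ai-enabled", "ai-based", "ai/ml")

def transparency_tier(summary_text: str) -> str:
    text = summary_text.lower()
    if any(term in text for term in SPECIFIC_TERMS):
        return "specific algorithmic detail"  # names an ML/DL/CNN method
    if any(term in text for term in GENERIC_TERMS):
        return "generic AI terms only"        # mentions AI without specifics
    return "no AI-specific detail"

# Example: a summary naming a CNN falls into the most transparent tier
print(transparency_tier("The device applies a convolutional neural network."))
```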

Table 1

Summary table: main findings

Finding                                      Value
Total FDA-approved AI/ML devices             1,016
% via 510(k) pathway                         96.5
% via De Novo pathway                        3
% via PMA pathway                            <1
% in radiology                               76
% with only basic AI/ML description          28.6
% with specific algorithmic information      55.8
% with no AI/ML detail                       15.6

AI, artificial intelligence; FDA, Food and Drug Administration; ML, machine learning; PMA, premarket approval.

Distribution by medical specialty

Radiology accounted for 76.1% (n=772) of all devices, using CNNs for segmentation and anomaly detection (Figure 1). Cardiovascular devices comprised 9.8% (n=100), focused on ECG interpretation and arrhythmia detection. Other specialties included neurology (3.5%, n=36) and hematology (1.8%, n=18). Further stratification of the radiology devices showed that 35.4% had multi-system applications, 14.5% were used in neurological imaging, and 11.8% in oncologic imaging (Figure 2).
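For scale, applying these subspecialty shares to the 772 radiology devices gives approximate device counts (our back-calculation, rounded to the nearest device; the source reports only percentages):

```latex
\[
0.354 \times 772 \approx 273 \text{ (multi-system)}, \quad
0.145 \times 772 \approx 112 \text{ (neurological)}, \quad
0.118 \times 772 \approx 91 \text{ (oncologic)}.
\]
```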

Figure 1 FDA-approved AI devices by specialty. The majority of AI/ML-enabled medical devices are used in radiology, accounting for 76.1% of devices; cardiovascular devices were the second most common type, at 9.8%. A total of 19 specialties were represented. AI, artificial intelligence; FDA, Food and Drug Administration; ML, machine learning.
Figure 2 Subspecialties among FDA-approved radiological AI devices. Distribution of FDA-approved AI devices across radiological subspecialties: multi-system devices constitute the largest proportion (35.4%), followed by neurological (14.5%) and oncologic (11.8%) imaging. AI, artificial intelligence; FDA, Food and Drug Administration.

Regulatory pathways

Most devices (96.5%) were cleared via the 510(k) pathway, which relies on substantial equivalence rather than direct trial evidence. The De Novo (3%) and premarket approval (PMA) (<1%) routes, which require more rigorous validation, were used infrequently (Table 1).


Discussion

AI/ML-enabled devices are increasingly prevalent, especially in data-intensive fields like radiology (7). However, the lack of standardized, transparent reporting on algorithms, training data, and validation methods remains a significant gap in FDA regulation (7,8). Without detailed disclosures, clinicians cannot fully assess device safety or reliability, emphasizing the need for systematic, transparent oversight of AI in healthcare.

The 510(k) process, used for nearly all AI/ML device approvals, relies on predicate equivalence rather than direct demonstration of safety and efficacy for novel features (1,9). The pathway uses predicate devices as a point of comparison, with limited clinical evaluation required (10,11). Identifying appropriate predicates can be challenging, and one study found that as many as one-third of the predicate devices for AI/ML clearances are not AI-based at all (12). Although the 510(k) pathway shortens development timelines, it may not adequately address adaptive AI systems whose performance evolves with new data (11), and it is poorly suited to many AI models, including the generative AI increasingly used in AI/ML-enabled devices (13). Many manufacturers also withhold algorithmic details citing intellectual property, further limiting transparency (14).

Evidence quality of AI/ML-enabled devices

Fewer than one in five devices referenced prospective validation studies, with the majority relying on retrospective datasets or predicate-based comparisons. This raises concerns about real-world generalizability, particularly for adaptive systems. Recent analyses (7,8,14) highlight that insufficient dataset disclosure impedes clinical reproducibility and physician trust. Enhanced FDA guidance mandating prospective validation, demographic reporting of training data, and post-market outcome tracking would strengthen safety and reliability.

Recent FDA communications indicate progress toward stronger regulation, including post-market surveillance and real-time performance monitoring (15). The STARD reporting framework and the APPRAISE-AI tool promote standardized quality metrics, aiding clinicians in assessing AI reliability (13,17).

Global context

A persistent concern is the lack of harmonization across international AI regulations. The European Union’s (EU’s) Medical Device Regulation (MDR) and Artificial Intelligence Act (AIA), and Singapore’s adaptive frameworks, illustrate varied risk-based approaches (18). The EU’s AIA requires devices to be classified by risk, with higher-risk devices subject to more intensive approval requirements (16). Although this framework has led to fewer AI/ML-enabled device approvals in the EU in recent years, proponents argue that it increases transparency, safety, and trust (15,19). Singapore has adopted a hybrid approach, drawing on patient and clinician input as well as interdisciplinary feedback from data scientists, lawyers, and ethicists to implement AI in practice in a balanced way (15). The FDA is developing a framework to harmonize international standards for AI/ML-enabled devices (20), and its forthcoming Quality Management System Regulation (effective 2026) represents an important step toward global alignment (21,22).

Recommendations

  • The FDA should adopt standardized reporting requirements for all AI/ML-enabled devices, mandating disclosure of algorithmic architecture, training data, and performance metrics.
  • Regulatory pathways should require clinical testing for adaptive and “learning” algorithms whose performance changes over time.
  • Manufacturers should be incentivized to enhance transparency, positioning it as a competitive advantage.
  • Post-market surveillance and real-world evaluation should be strengthened with periodic recertification for major AI updates.
  • International harmonization is needed to enhance the global implementation of AI in clinical practice (12).
  • Establishing a national AI device registry that includes performance results and safety risks will facilitate informed decisions regarding the implementation of AI in practice.

Limitations

This review is limited to FDA-listed devices and publicly available documentation; unlisted or non-U.S. devices are not captured. Because the literature is evolving rapidly, some recent clinical evidence may not be included.


Conclusions

The rapid expansion of FDA-approved AI/ML-enabled devices represents a major milestone in the integration of AI into clinical care. However, significant challenges remain in regulatory transparency, clinical validation, and ongoing oversight. Addressing these issues with robust, standardized regulatory policies will be essential to maximize patient benefit and foster responsible innovation. Future research should extend to international AI/ML-enabled devices to compare regulatory policies and harmonization efforts worldwide. Global harmonization and collaborative regulatory advances are critical for safe and ethical AI oversight.


Acknowledgments

None.


Footnote

Peer Review File: Available at https://jmai.amegroups.com/article/view/10.21037/jmai-2025-196/prf

Funding: None.

Conflicts of Interest: All authors have completed the ICMJE uniform disclosure form (available at https://jmai.amegroups.com/article/view/10.21037/jmai-2025-196/coif). The authors have no conflicts of interest to declare.

Ethical Statement: The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the non-commercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.


References

  1. Aboy M, Crespo C, Stern A. Beyond the 510(k): The regulation of novel moderate-risk medical devices, intellectual property considerations, and innovation incentives in the FDA's De Novo pathway. NPJ Digit Med 2024;7:29. [Crossref] [PubMed]
  2. Rahman A, Debnath T, Kundu D, et al. Machine learning and deep learning-based approach in smart healthcare: Recent advances, applications, challenges and opportunities. AIMS Public Health 2024;11:58-109. [Crossref] [PubMed]
  3. Geras KJ, Mann RM, Moy L. Artificial Intelligence for Mammography and Digital Breast Tomosynthesis: Current Concepts and Future Perspectives. Radiology 2019;293:246-59. [Crossref] [PubMed]
  4. Najjar R. Redefining Radiology: A Review of Artificial Intelligence Integration in Medical Imaging. Diagnostics (Basel) 2023;13:2760. [Crossref] [PubMed]
  5. Benjamens S, Dhunnoo P, Meskó B. The state of artificial intelligence-based FDA-approved medical devices and algorithms: an online database. NPJ Digit Med 2020;3:118. [Crossref] [PubMed]
  6. Ebrahimian S, Kalra MK, Agarwal S, et al. FDA-regulated AI Algorithms: Trends, Strengths, and Gaps of Validation Studies. Acad Radiol 2022;29:559-66. [Crossref] [PubMed]
  7. Lee JT, Moffett AT, Maliha G, et al. Analysis of Devices Authorized by the FDA for Clinical Decision Support in Critical Care. JAMA Intern Med 2023;183:1399-401. [Crossref] [PubMed]
  8. Chouffani El Fassi S, Abdullah A, Fang Y, et al. Not all AI health tools with regulatory authorization are clinically validated. Nat Med 2024;30:2718-20. [Crossref] [PubMed]
  9. Lin JC, Jain B, Iyer JM, et al. Benefit-Risk Reporting for FDA-Cleared Artificial Intelligence-Enabled Medical Devices. JAMA Health Forum 2025;6:e253351. [Crossref] [PubMed]
  10. Lee B, Kramer P, Sandri S, et al. Early Recalls and Clinical Validation Gaps in Artificial Intelligence-Enabled Medical Devices. JAMA Health Forum 2025;6:e253172. [Crossref] [PubMed]
  11. Bressman E, Shachar C, Stern AD, et al. Software as a Medical Practitioner-Is It Time to License Artificial Intelligence? JAMA Intern Med 2026;186:5-6. [Crossref] [PubMed]
  12. Muehlematter UJ, Bluethgen C, Vokinger KN. FDA-cleared artificial intelligence and machine learning-based medical devices and their 510(k) predicate networks. Lancet Digit Health 2023;5:e618-26. [Crossref] [PubMed]
  13. Bossuyt PM, Reitsma JB, Bruns DE, et al. STARD 2015: An Updated List of Essential Items for Reporting Diagnostic Accuracy Studies. Clin Chem 2015;61:1446-52. [Crossref] [PubMed]
  14. Shick AA, Webber CM, Kiarashi N, et al. Transparency of artificial intelligence/machine learning-enabled medical devices. NPJ Digit Med 2024;7:21. [Crossref] [PubMed]
  15. Goh WWB, Tan CH, Tan C, et al. Regulating, implementing and evaluating AI in Singapore healthcare: AI governance roundtable's view. Ann Acad Med Singap 2025;54:428-36. [Crossref] [PubMed]
  16. Tang D, Xi X, Li Y, et al. Regulatory approaches towards AI Medical Devices: A comparative study of the United States, the European Union and China. Health Policy 2025;153:105260. [Crossref] [PubMed]
  17. Kwong JCC, Khondker A, Lajkosz K, et al. APPRAISE-AI Tool for Quantitative Evaluation of AI Studies for Clinical Decision Support. JAMA Netw Open 2023;6:e2335377. [Crossref] [PubMed]
  18. Reddy S. Global Harmonization of Artificial Intelligence-Enabled Software as a Medical Device Regulation: Addressing Challenges and Unifying Standards. Mayo Clin Proc Digit Health 2025;3:100191. [Crossref] [PubMed]
  19. Kalodanis K, Feretzakis G, Rizomiliotis P, et al. Evaluating the Impact of the EU AI Act on Medical Device Regulation. Stud Health Technol Inform 2025;323:40-4. [Crossref] [PubMed]
  20. Warraich HJ, Tazbaz T, Califf RM. FDA Perspective on the Regulation of Artificial Intelligence in Health Care and Biomedicine. JAMA 2025;333:241-7. [Crossref] [PubMed]
  21. Federal Register. Medical Devices; Quality System Regulation Amendments. 2024. Available online: https://www.federalregister.gov/documents/2024/02/02/2024-01709/medical-devices-quality-system-regulation-amendments
  22. Aimer O, Baldridge C. Navigating Medical Device Safety: Current Status, Challenges, and Future Regulatory Directions. 2025. [cited 2026 Jan 23]. Available online: https://link.springer.com/10.1007/s40264-025-01599-6
Cite this article as: Loganathan A, Friedman M, Waseem T, Wahl-Cox K, Kodu S, Sideeka Z, Shectman S, Watson A, Shehata A, Stanciu N, Rutenberg M, Gailloud L, Melucci N, Shpigel DI, Meltzer AC. Current state of Food and Drug Administration-approved artificial intelligence/machine learning medical devices: pathways, transparency, and evidence gaps. J Med Artif Intell 2026;9:38.
