Overcoming adoption challenges: bridging the gap between artificial intelligence algorithm development and implementation in healthcare
Introduction
Integrating artificial intelligence (AI) into healthcare systems has promised transformation, from enhancing diagnostics to streamlining clinical decision-making. Before recent advancements, systems relied on rule-based logic and expert-defined criteria to automate simple tasks (1). However, reliance on predefined rules limited the scalability and effectiveness of these tools.
With generative AI (genAI), systems can analyze, summarize, and generate content in a manner resembling human output, achieved by learning patterns and features from large corpora of existing data (1). These tools open new frontiers in medical research, diagnosis, treatment, and personalized patient care (2,3). For example, generative adversarial networks (GANs) can generate synthetic medical images resembling actual patient data, augmenting radiologists’ accuracy and efficiency by providing additional imaging data for analysis and for training AI-driven diagnostic algorithms (4). An AI-guided targeted screening approach leverages existing clinical data to enhance atrial fibrillation (AF) detection, potentially improving the effectiveness of AF screening (5). Notably, computational modeling combined with machine learning has been used to design virtual hearts for guiding ablation in patients with persistent AF, using personalized atrial geometric models created from magnetic resonance images obtained before the procedure (6).
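At the core of a GAN, two networks are trained adversarially: a generator learns to produce synthetic samples, and a discriminator learns to distinguish them from real data. The minimal Python (PyTorch) sketch below illustrates that loop on a toy feature distribution; the architecture, dimensions, and hyperparameters are invented for illustration and are unrelated to the imaging models cited above.

```python
# Schematic GAN sketch (illustrative only): a generator learns to mimic a
# toy "imaging feature" distribution while a discriminator learns to tell
# real from synthetic. Real medical-imaging GANs use convolutional
# architectures and curated datasets; all values here are invented.
import torch
import torch.nn as nn

torch.manual_seed(0)
LATENT_DIM, FEATURE_DIM, BATCH = 8, 16, 64  # stand-ins, not from (4)

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 32), nn.ReLU(), nn.Linear(32, FEATURE_DIM)
)
discriminator = nn.Sequential(
    nn.Linear(FEATURE_DIM, 32), nn.ReLU(), nn.Linear(32, 1)  # real-vs-fake logit
)
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

def sample_real(n):
    # Placeholder for real patient-derived features (a shifted Gaussian)
    return torch.randn(n, FEATURE_DIM) * 0.5 + 2.0

for step in range(1000):
    real = sample_real(BATCH)
    fake = generator(torch.randn(BATCH, LATENT_DIM))

    # Discriminator step: push real -> 1, synthetic -> 0
    opt_d.zero_grad()
    d_loss = bce(discriminator(real), torch.ones(BATCH, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(BATCH, 1))
    d_loss.backward()
    opt_d.step()

    # Generator step: fool the discriminator into predicting "real"
    opt_g.zero_grad()
    g_loss = bce(discriminator(fake), torch.ones(BATCH, 1))
    g_loss.backward()
    opt_g.step()
```

In practice, medical-imaging GANs replace these linear layers with convolutional architectures and are trained on curated, governed datasets, which is precisely where the implementation challenges discussed below arise.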
Despite these rapid advancements, implementing AI in clinical practice remains challenging, as multiple factors require concerted effort to ensure responsible, ethical, and equitable deployment. This commentary highlights the importance of a collaborative effort among healthcare professionals (HCPs), AI developers, regulatory bodies, policymakers, and patients to address the obstacles hindering AI adoption in distinct healthcare systems.
The complexities of implementation
The transition from algorithm development to real-world implementation in healthcare settings has met multiple challenges (7). Technical, ethical, and regulatory considerations are abundant, with challenges surrounding patient privacy, data security, and bias paramount in healthcare settings (7-9). Ensuring compliance with existing regulatory frameworks, such as the Health Insurance Portability and Accountability Act (HIPAA) in the United States and the General Data Protection Regulation in Europe, requires careful attention to data governance and transparency in AI-driven decision-making processes (10). Organizations are developing guidance and regulations. For example, the British Standards Institution’s BS 30440 consolidates prior guidance and frameworks into a validation framework for AI development in healthcare, helping to ensure that AI products conform to efficacy, safety, and ethical standards and can be integrated into the healthcare environment (11). However, outside of BS 30440, guidance remains fragmented, primarily focusing on products considered medical devices.
Additionally, technical challenges include interoperability issues with existing healthcare systems, data quality and format variability, and the need for a robust infrastructure to support new workflows (12,13). The unstructured and heterogeneous nature of healthcare data poses substantial challenges for AI algorithm development and genAI interpretation (14). Moreover, mitigating the risk of bias and ensuring equitable access to AI-driven healthcare interventions are critical to building trust among patients, healthcare providers, and policymakers (15).
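To make the format-variability problem concrete, the hypothetical Python sketch below harmonizes records from two invented source systems, which report the same measurement under different field names and units, into one common schema before model input; all field names and codes here are assumptions for illustration, not drawn from any real system.

```python
# Hypothetical harmonization sketch: two invented source systems report the
# same measurement under different field names and units; records must be
# mapped to one common schema before any algorithm can consume them.
from dataclasses import dataclass

KPA_TO_MMHG = 7.50062  # 1 kPa ~= 7.50062 mmHg

@dataclass
class Patient:
    age_years: int
    systolic_bp_mmhg: float

def from_system_a(rec: dict) -> Patient:
    # "System A" (invented) stores systolic blood pressure in mmHg as "sbp"
    return Patient(age_years=rec["age"], systolic_bp_mmhg=float(rec["sbp"]))

def from_system_b(rec: dict) -> Patient:
    # "System B" (invented) stores it in kPa as "sbp_kpa"; convert to mmHg
    return Patient(age_years=rec["age_y"],
                   systolic_bp_mmhg=rec["sbp_kpa"] * KPA_TO_MMHG)

cohort = [from_system_a({"age": 71, "sbp": 142}),
          from_system_b({"age_y": 68, "sbp_kpa": 18.9})]
print(cohort)
```

Multiply this two-field example by thousands of structured and unstructured fields across dozens of systems, and the scale of the interoperability burden becomes apparent.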
Integrating algorithms into clinical workflows also necessitates organizational and cultural changes, including training HCPs in AI literacy, fostering interdisciplinary collaboration between data scientists and clinicians, and addressing concerns about job displacement and workflow disruption (15-17). Overcoming these barriers necessitates a multifaceted approach prioritizing collaboration, transparency, and patient-centered care. Active engagement and cooperation are required among diverse stakeholders, including healthcare providers, administrators, patients, and policymakers. Physician buy-in is crucial, and initiatives such as the National Institute for Health and Care Research’s support for collaboration in applied health research exist to encourage implementation (18).
Understanding the gap
Owing to these challenges, a notable gap exists between AI algorithm development, genAI utilization, and implementation within healthcare systems. Disparities exist between the expertise, resources, and infrastructure required for development and those available for implementation, and collaborative strategies are needed to bridge this gap.
A recent systematic review of 59 publications on implementing AI in healthcare identified barriers covering ethical, technological, liability and regulatory, workforce, patient safety, and social factors (19). Unresolved concerns of HCPs may contribute to the AI development-implementation gap; in particular, the attribution of liability in cases of adverse outcomes stemming from AI systems requires clarification. Currently, liability rests with HCPs even when they employ an AI algorithm that they may not understand well, or at all (20). Consequently, the profound implications of using such algorithms deter the adoption of AI by HCPs. Building trust is essential for the adoption of AI in clinical settings; it involves addressing numerous ethical questions, such as the potential for AI to replace human judgment, how to handle AI errors, and the implications of AI for the patient-physician relationship (21). Patient perspectives and endorsements are also critical, with ethical, trust, and equity concerns requiring consideration, potentially by making AI more explainable and accessible to individuals from diverse backgrounds (22-24).
Using these algorithms effectively is a significant challenge and demands different skills and resources. Educators with the necessary expertise on the significance and application of AI in healthcare environments are scarce (25). Diverse levels of technological literacy, and varying satisfaction with the knowledge and application of current technologies, must be addressed before the workforce is ready for and adept at employing these tools in clinical settings (25). Existing healthcare technologies already challenge clinicians; learning to integrate AI tools into their practice might intensify their frustration (25). Additionally, healthcare organizations often face budget constraints and competing priorities, making resource allocation challenging. Healthcare institutions must also navigate regulatory obstacles, integrate AI systems with existing electronic health records, and ensure compliance with patient privacy regulations such as HIPAA.
The Prediction of Undiagnosed atriaL fibrillation using a machinE learning AlgorIthm (PULsE-AI) initiative in England exemplifies the stark divide between the development of a traditional AI algorithm and its practical deployment in healthcare settings. This initiative was conducted across six general practices in England to evaluate the implementation of the PULsE-AI AF risk screening algorithm. PULsE-AI uses data-driven machine-learning techniques to build upon existing AF risk-prediction models, uncovering relationships between known AF risk factors and AF incidence (26). Despite proven efficacy in a multicenter randomized controlled trial and further validation through real-world data, the widespread integration of PULsE-AI into primary care workflows is still pending (26,27). Understanding and addressing the reasons for this reluctance is crucial.
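For readers unfamiliar with this class of tool, the Python sketch below trains a simple risk classifier on synthetic data with invented coefficients. It is a conceptual illustration of the general risk-prediction pattern only, not the PULsE-AI algorithm, whose actual features, methods, and validation are described in (26,27).

```python
# Conceptual sketch only -- NOT the PULsE-AI algorithm. Illustrates the
# general pattern of a risk-prediction classifier trained on known AF risk
# factors; the data, coefficients, and features here are synthetic/invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
age = rng.normal(65, 12, n)           # years
bmi = rng.normal(28, 5, n)            # kg/m^2
hypertension = rng.integers(0, 2, n)  # 0/1 flag

# Synthetic outcome: risk rises with age, BMI, and hypertension
logit = 0.06 * (age - 65) + 0.04 * (bmi - 28) + 0.8 * hypertension - 2.5
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([age, bmi, hypertension])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
print("Held-out AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```

Building such a model is the comparatively easy part; as the trial experience shows, deployment into live primary care workflows is where progress stalls.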
One significant hurdle is the lack of seamless integration with the General Practice Management Systems (GPMS) used in the primary care setting. There is considerable variability in how different general practices approach AF detection and diagnosis, complicating the adoption of a standardized tool. Many practices also report capacity constraints, leading to a lower priority on engaging with the tool’s results and dedicating resources to act on them. Furthermore, the current reimbursement structures may not promote efforts to increase AF detection (28).
A cultural shift is required from a reactive, treatment-focused model to a proactive approach emphasizing early screening and identifying high-risk patients. The current information and guidance may not be sufficient to persuade general practitioners (GPs) to allocate their limited resources toward this change. A multifaceted approach is needed to address these challenges, including better integration with existing GPMS, alignment of incentives, dedicated funding, and comprehensive education and support to facilitate the transition to proactive care.
PULsE-AI is a pertinent case study illustrating the gap between AI algorithm development and implementation. Although the potential benefits of AI-powered diagnostics are evident, the practical challenges of integrating such solutions into existing healthcare workflows are formidable. Disparities in expertise, resources, and infrastructure further exacerbate the gap between AI algorithm development and implementation, underscoring the need for a comprehensive strategy to bridge the divide.
Strategies for successful implementation
Implementing best practices and tailored strategies is imperative to overcome these challenges. An AI implementation strategy must include interdisciplinary collaboration among data scientists, HCPs, policymakers, patients, and industry stakeholders to ensure that AI solutions are developed with real-world applicability and address clinical needs. A user-centered design approach and rigorous usability testing ensure that AI solutions are intuitive and can be integrated into clinical workflows (29). Thorough evaluation and the generation of evidence of clinical effectiveness, including publication of clinical impact and cost-effectiveness, are also critical to evidence-based adoption strategies.
The allocation of resources for the necessary infrastructure, including hardware, software, and personnel, will be needed to support implementation. Healthcare practices will need a clear understanding of regulatory guidelines representing best practices for implementing AI to ensure patient safety, privacy, and data security while minimizing barriers to adoption.
Investment in change management initiatives and education programs is crucial to enhance the technological literacy of HCPs and empower them to foster a culture of innovation. This education should include training on data interpretation, system integration, and ethical considerations. Healthcare institutions must create specialized training programs tailored for clinicians to use AI systems, aiming to address the challenges posed by unfamiliar technology. The necessity for such education and training has been highlighted by the current lack of expertise and the absence of this subject in existing curricula. Moreover, healthcare leaders and clinicians must collaborate with developers and providers to ensure sufficient funding and time are dedicated to these initiatives. Without the active involvement of clinical staff and healthcare leaders, even the most advanced AI tools may face resistance to adoption and integration, ultimately failing to enhance clinical outcomes.
Learnings from the PULsE-AI practical implementation trials resulted in recommendations for successful implementation, as outlined below (30). These may be leveraged for other AI algorithms.
- Make the case:
    - Make a compelling case to GPs in primary care, specifying the benefits of changing their current approach to AF detection.
    - Consider available resources.
    - Ensure that national guidelines present consistent positive recommendations around AF screening.
- Ensure the screening algorithm is easy to use:
    - Engage with GPMS providers to ensure effective algorithm integration, and ensure GPs have the functionality they need to link the screening tool with their patients’ records and prompt appropriate action at every patient contact opportunity.
    - Provide GPs with clinical guidelines for interpreting and acting on the screening results.
- Aid alignment of incentives for disease detection:
    - Engage healthcare commissioning groups nationally and locally to align incentives (payments and measures) to reward screening and treatment.
    - Align generic incentives in GP contracts with broader national incentives around detection. Work to influence local commissioners to align focus and targets around disease planning, using economic and patient benefit levers.
    - Provide a compelling case for using local funding to drive significant uptake of screening activity and a shift toward proactive care.
- Support initiatives to improve the screening and diagnosis pathways:
    - Work with delivery groups to increase the focus on standardizing best practice in the screening and diagnosis pathways.
    - Ensure that the technology developers’ actions are aligned with those of other healthcare bodies seeking improvement, so that there is no duplication of effort.
Overcoming challenges—start with the end in mind
There is a significant need for robust governance of AI implementation in healthcare, and for developers of AI algorithms to consider implementation obstacles during the early stages of development, including issues around algorithm opacity (20). Regulators must expedite their understanding of AI and promptly provide adequate guidance, enabling efficient regulation and widespread adoption in everyday clinical practice (19). This governance may involve establishing a consensus framework to serve as a standard for evaluating these algorithms, together with continuous monitoring, particularly during modifications and updates. Regulatory bodies face the challenge of balancing stringent policies, which could hinder innovation and dissuade developers, against a flexible approach that encourages progress. Collaborating with developers and bolstering their expertise are essential for regulators to provide relevant guidance. In response to these needs, there have been encouraging developments. For example, in 2021, the UK Government issued guidelines that set forth its expectations for AI development within the National Health Service (31). These guidelines address the necessity for algorithm transparency and accountability. However, more extensive and sustained efforts are needed to support the integration of AI algorithms into healthcare systems.
Despite the barriers and challenges, successful AI integration into healthcare settings has occurred. For instance, Viz.ai has shown the practical benefits of AI algorithms in the cardiovascular setting. Implementing Viz.ai in a comprehensive stroke center produced an immediate improvement in communications for both direct-arriving and telemedicine-transfer stroke cases (32). A HIPAA-compliant communication platform facilitated a more efficient and secure coordination process, providing a centralized, secure text messaging thread that kept the entire care team informed of patient status, decisions made, and location. This developer has also submitted a de novo classification request to the United States Food and Drug Administration for a new AI algorithm [Viz™ hypertrophic cardiomyopathy (HCM)] to identify and triage patients suspected of having HCM (33,34).
Following the successful deployment of AI, ongoing vigilance and maintenance will be vital to promote sustainability and longevity (35-37). This would involve, but not be limited to, refreshing underpinning datasets in line with a dynamic clinical environment, updating software algorithms, and ensuring hardware compatibility (35,38). A support system is recommended, consisting of multidisciplinary teams (including AI developers and super-users) that would supervise these activities through regular periodic meetings (39). Other strategies include establishing an internal service desk for arising issues, having a cross-functional governance committee, and an external data safety monitoring board (39). Facilitating these processes requires human capital and financial resources; thus, funding is a critical component that requires consideration (35).
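As a simple illustration of what such continuous monitoring might look like in practice, the hypothetical Python sketch below tracks a deployed model’s discrimination (AUC) over successive review periods and flags degradation against a pre-agreed threshold; the threshold, cadence, and escalation path are placeholders that a governance committee and data safety monitoring board would define.

```python
# Hypothetical post-deployment monitoring sketch: recompute a deployed
# model's AUC each review period and escalate if it drifts below a
# governance-agreed threshold. All values here are placeholders.
import numpy as np
from sklearn.metrics import roc_auc_score

AUC_ALERT_THRESHOLD = 0.75  # placeholder threshold set by governance

def review_period(y_true, y_scores) -> dict:
    """Summarize one review period and decide whether to escalate."""
    auc = roc_auc_score(y_true, y_scores)
    return {"auc": auc, "escalate": auc < AUC_ALERT_THRESHOLD}

# Toy usage: simulate two review periods, the second with degraded scores
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 500)
good_scores = np.clip(y + rng.normal(0, 0.45, 500), 0, 1)  # informative
drifted_scores = rng.random(500)                           # near-random
for label, scores in [("Q1", good_scores), ("Q2", drifted_scores)]:
    result = review_period(y, scores)
    print(label, f"AUC={result['auc']:.2f}",
          "ESCALATE" if result["escalate"] else "ok")
```

An escalation would then trigger the human processes described above: review by the multidisciplinary support team, dataset refresh, and, where necessary, retraining and revalidation.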
Addressing the challenges of AI implementation requires a concerted effort from all stakeholders. Regulatory bodies and policymakers are pivotal in establishing clear frameworks that balance innovation with accountability, fostering trust and confidence in AI-driven healthcare solutions. Real-world case studies offer valuable insights and lessons learned, demonstrating the feasibility and potential impact of successful AI implementation initiatives.
Conclusions
AI and genAI hold considerable potential, from expediting drug discovery and development and streamlining screening and diagnostics to supporting patient management decisions and predictive analytics of patient outcomes. Despite rapid advancements, their integration into clinical practice faces technical, ethical, and regulatory barriers, and because of these, a notable gap exists between the development of tools for healthcare purposes and their implementation within healthcare systems. GenAI has provided tools accessible to all, and integrating these into healthcare is more crucial than ever to unlock their transformative capabilities. A comprehensive strategy prioritizing interdisciplinary collaboration, education, regulatory clarity, infrastructure investment, and evidence-based adoption is essential to bridge this gap. Addressing these considerations while fostering innovation, encouraging collaboration, and building trust is imperative.
Acknowledgments
Medical writing support was provided by Vicky Ware, MSc, of LATITUDE (Powered by AXON).
Footnote
Peer Review File: Available at https://jmai.amegroups.com/article/view/10.21037/jmai-24-385/prf
Funding: This work was supported by
Conflicts of Interest: The author has completed the ICMJE uniform disclosure form (available at https://jmai.amegroups.com/article/view/10.21037/jmai-24-385/coif). N.R.H. is an employee and shareholder of Bristol Myers Squibb. The author has no other conflicts of interest to declare.
Ethical Statement: The author is accountable for all aspects of the work and ensures that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the non-commercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.
References
- Reddy S. Generative AI in healthcare: an implementation science informed translational path on application, integration, and governance. Implement Sci 2024;19:27. [Crossref] [PubMed]
- Alowais SA, Alghamdi SS, Alsuhebany N, et al. Revolutionizing healthcare: the role of artificial intelligence in clinical practice. BMC Med Educ 2023;23:689. [Crossref] [PubMed]
- Johnson KB, Wei WQ, Weeraratne D, et al. Precision Medicine, AI, and the Future of Personalized Health Care. Clin Transl Sci 2021;14:86-93. [Crossref] [PubMed]
- Paladugu PS, Ong J, Nelson N, et al. Generative Adversarial Networks in Medicine: Important Considerations for this Emerging Innovation in Artificial Intelligence. Ann Biomed Eng 2023;51:2130-42. [Crossref] [PubMed]
- Noseworthy PA, Attia ZI, Behnken EM, et al. Artificial intelligence-guided screening for atrial fibrillation using electrocardiogram during sinus rhythm: a prospective non-randomised interventional trial. Lancet 2022;400:1206-12. [Crossref] [PubMed]
- Boyle PM, Zghaib T, Zahid S, et al. Computationally guided personalized targeted ablation of persistent atrial fibrillation. Nat Biomed Eng 2019;3:870-9. [Crossref] [PubMed]
- Petersson L, Larsson I, Nygren JM, et al. Challenges to implementing artificial intelligence in healthcare: a qualitative interview study with healthcare leaders in Sweden. BMC Health Serv Res 2022;22:850. [Crossref] [PubMed]
- Gooding P, Kariotis T. Ethics and Law in Research on Algorithmic and Data-Driven Technology in Mental Health Care: Scoping Review. JMIR Ment Health 2021;8:e24668. [Crossref] [PubMed]
- Beil M, Proft I, van Heerden D, et al. Ethical considerations about artificial intelligence for prognostication in intensive care. Intensive Care Med Exp 2019;7:70. [Crossref] [PubMed]
- Rezaeikhonakdar D. AI Chatbots and Challenges of HIPAA Compliance for AI Developers and Vendors. J Law Med Ethics 2023;51:988-95. [Crossref] [PubMed]
- British Standards Institution. BS 30440:2023. Validation framework for the use of artificial intelligence (AI) within healthcare. Specification. 2023. Last accessed: Feb 12, 2025. Available online: https://knowledge.bsigroup.com/products/validation-framework-for-the-use-of-artificial-intelligence-ai-within-healthcare-specification?version=standard
- Mandl KD, Gottlieb D, Mandel JC. Integration of AI in healthcare requires an interoperable digital data ecosystem. Nat Med 2024;30:631-4. [Crossref] [PubMed]
- van der Vegt A, Campbell V, Zuccon G. Why clinical artificial intelligence is (almost) non-existent in Australian hospitals and how to fix it. Med J Aust 2024;220:172-5. [Crossref] [PubMed]
- Létinier L, Jouganous J, Benkebil M, et al. Artificial Intelligence for Unstructured Healthcare Data: Application to Coding of Patient Reporting of Adverse Drug Reactions. Clin Pharmacol Ther 2021;110:392-400. [Crossref] [PubMed]
- Asan O, Bayrak AE, Choudhury A. Artificial Intelligence and Human Trust in Healthcare: Focus on Clinicians. J Med Internet Res 2020;22:e15154. [Crossref] [PubMed]
- O'Neill C. Is AI a threat or benefit to health workers? CMAJ 2017;189:E732. [Crossref] [PubMed]
- Alami H, Lehoux P, Denis JL, et al. Organizational readiness for artificial intelligence in health care: insights for decision-making and practice. J Health Organ Manag 2020; [Crossref] [PubMed]
- National Institute for Health and Care Research. Artificial intelligence funding. 2021. Last accessed: Feb 12, 2025. Available online: https://www.nihr.ac.uk/explore-nihr/funding-programmes/ai-award.htm
- Ahmed MI, Spooner B, Isherwood J, et al. A Systematic Review of the Barriers to the Implementation of Artificial Intelligence in Healthcare. Cureus 2023;15:e46454. [Crossref] [PubMed]
- Holm EA. In defense of the black box. Science 2019;364:26-7. [Crossref] [PubMed]
- Mittelstadt B. The impact of artificial intelligence on the doctor-patient relationship. Report by the Steering Committee for Human Rights in the fields of Biomedicine and Health. 2021. Last accessed: Feb 12, 2025. Available online: https://rm.coe.int/inf-2022-5-report-impact-of-ai-on-doctor-patient-relations-e/1680a68859
- Camaradou JCL, Hogg HDJ. Commentary: Patient Perspectives on Artificial Intelligence; What have We Learned and How Should We Move Forward? Adv Ther 2023;40:2563-72. [Crossref] [PubMed]
- Freeman S, Stewart J, Kaard R, et al. Health consumers’ ethical concerns towards artificial intelligence in Australian emergency departments. Emerg Med Australas 2024;36:768-76. [Crossref] [PubMed]
- Moy S, Irannejad M, Manning SJ, et al. Patient Perspectives on the Use of Artificial Intelligence in Health Care: A Scoping Review. J Patient Cent Res Rev 2024;11:51-62. [Crossref] [PubMed]
- Singh RP, Hom GL, Abramoff MD, et al. Current Challenges and Barriers to Real-World Artificial Intelligence Adoption for the Healthcare System, Provider, and the Patient. Transl Vis Sci Technol 2020;9:45. [Crossref] [PubMed]
- Hill NR, Groves L, Dickerson C, et al. Identification of undiagnosed atrial fibrillation using a machine learning risk-prediction algorithm and diagnostic testing (PULsE-AI) in primary care: a multi-centre randomized controlled trial in England. Eur Heart J Digit Health 2022;3:195-204. [Crossref] [PubMed]
- Sekelj S, Sandler B, Johnston E, et al. Detecting undiagnosed atrial fibrillation in UK primary care: Validation of a machine learning prediction algorithm in a retrospective cohort study. Eur J Prev Cardiol 2021;28:598-605. [Crossref] [PubMed]
- Abràmoff MD, Roehrenbeck C, Trujillo S, et al. A reimbursement framework for artificial intelligence in healthcare. NPJ Digit Med 2022;5:72. [Crossref] [PubMed]
- Seneviratne MG, Li RC, Schreier M, et al. User-centred design for machine learning in health care: a case study from care management. BMJ Health Care Inform 2022;29:e100656. [Crossref] [PubMed]
- Pollock KG, Dickerson C, Kainth M, et al. Undertaking multi-centre randomised controlled trials in primary care: learnings and recommendations from the PULsE-AI trial researchers. BMC Prim Care 2024;25:7. [Crossref] [PubMed]
- Department of Health & Social Care. Guidance: A guide to good practice for digital and data-driven health technologies. 2021. Last accessed: Feb 12, 2025. Available online: https://www.gov.uk/government/publications/code-of-conduct-for-data-driven-health-and-care-technology/initial-code-of-conduct-for-data-driven-health-and-care-technology
- Figurelle ME, Meyer DM, Perrinez ES, et al. Viz.ai Implementation of Stroke Augmented Intelligence and Communications Platform to Improve Indicators and Outcomes for a Comprehensive Stroke Center and Network. AJNR Am J Neuroradiol 2023;44:47-53. [Crossref] [PubMed]
- Viz.ai. Viz.ai announces agreement with Bristol Myers Squibb to enable earlier detection and management of suspected hypertrophic cardiomyopathy (HCM). 2023. Last accessed: Feb 12, 2025. Available online: https://www.viz.ai/news/agreement-bristol-meyers-squibb-hcm
- Viz.ai. Viz HCM. End-to-end approach to detecting and directing hypertrophic cardiomyopathy (HCM) patients. 2024. Last accessed: Feb 12, 2025. Available online: https://www.viz.ai/hypertrophic-cardiomyopathy
- He J, Baxter SL, Xu J, et al. The practical implementation of artificial intelligence technologies in medicine. Nat Med 2019;25:30-6. [Crossref] [PubMed]
- Davis SE, Embí PJ, Matheny ME. Sustainable deployment of clinical prediction tools—a 360° approach to model maintenance. J Am Med Inform Assoc 2024;31:1195-8. [Crossref] [PubMed]
- Bajwa J, Munir U, Nori A, et al. Artificial intelligence in healthcare: transforming the practice of medicine. Future Healthc J 2021;8:e188-94. [Crossref] [PubMed]
- Feng J, Phillips RV, Malenica I, et al. Clinical artificial intelligence quality improvement: towards continual monitoring and updating of AI algorithms in healthcare. NPJ Digit Med 2022;5:66. [Crossref] [PubMed]
- Nair M, Svedberg P, Larsson I, et al. A comprehensive overview of barriers and strategies for AI implementation in healthcare: Mixed-method design. PLoS One 2024;19:e0305949. [Crossref] [PubMed]
Cite this article as: Hill NR. Overcoming adoption challenges: bridging the gap between artificial intelligence algorithm development and implementation in healthcare. J Med Artif Intell 2025;8:55.