Review Article

The long road ahead: navigating obstacles and building bridges for clinical integration of artificial intelligence technologies

Sandeep Reddy1, Sameer Shaikh2,3

1School of Public Health and Social Work, Faculty of Health, Queensland University of Technology, Brisbane, Australia; 2Centre for Advancement of Translational AI in Medicine, New York, NY, USA; 3Department of Medicine, McMaster University, Hamilton, Canada

Contributions: (I) Conception and design: Both authors; (II) Administrative support: S Reddy; (III) Provision of study materials or patients: None; (IV) Collection and assembly of data: Both authors; (V) Data analysis and interpretation: Both authors; (VI) Manuscript writing: Both authors; (VII) Final approval of manuscript: Both authors.

Correspondence to: Sandeep Reddy, MBBS, MSc, PhD. Professor, School of Public Health and Social Work, Faculty of Health, Queensland University of Technology, Victoria Park Road, Kelvin Grove, QLD 4059, Australia. Email: sandeep.reddy@qut.edu.au.

Abstract: Artificial intelligence (AI) holds immense promise for transforming healthcare, yet its real-world implementation faces significant obstacles. This comprehensive review synthesizes findings from over 40 peer-reviewed articles, supplemented by reports from key institutions, to provide a thorough assessment of the challenges impeding AI integration in clinical settings and propose practical solutions. The paper identifies several major barriers: limited access to diverse, high-quality datasets, which hinders the development of robust, generalizable AI models; the “black box” nature of many AI systems, which impedes clinician trust and adoption; the lack of clear legal and regulatory frameworks, which raises liability and safety concerns; difficulties in adapting existing clinical workflows to incorporate AI tools, which can be disruptive and time-consuming; and challenges in protecting sensitive patient data while enabling AI development. To address these complex issues, the paper proposes a range of strategies, including standardizing data capture and labelling practices across healthcare institutions, developing explainable AI techniques tailored to clinical contexts, establishing clear regulatory guidelines for AI in healthcare, engaging healthcare professionals in AI development and implementation processes, and implementing robust data governance and cybersecurity measures. The review emphasizes the critical need for a multidisciplinary approach, involving close collaboration between AI developers, clinicians, policymakers, and patients, and highlights successful case studies where AI has been effectively integrated into clinical practice. At the same time, the authors argue that while AI has the potential to be a powerful tool in the medical arsenal, it should be viewed as a complement to, rather than a replacement for, human clinical expertise. This approach paves the way for a future where AI meaningfully contributes to advancing healthcare while maintaining the highest standards of patient safety and ethical practice.

Keywords: Artificial intelligence (AI); healthcare integration; clinical decision support; data governance; regulatory challenges


Received: 16 May 2024; Accepted: 06 August 2024; Published online: 10 September 2024.

doi: 10.21037/jmai-24-148


Introduction

Artificial intelligence (AI) has increasingly been touted as a transformative force in the healthcare industry, with the potential to significantly enhance patient care and outcomes (1). Despite the considerable hype and high expectations surrounding AI, its real-world implementation in clinical settings has not yet reached its potential (2,3). Several practical challenges impede the seamless integration of AI applications into clinical workflows, even when such systems have demonstrated their ability to improve patient care. This article seeks to provide a realistic assessment of the current state of AI in healthcare, identifying the fundamental barriers that impact the development, testing, regulatory approval, and adoption of clinical AI tools. By clearly outlining these obstacles and presenting a practical path for integration, we hope to set appropriately high yet realistic expectations for stakeholders who seek to leverage AI to improve patient care. This approach will enable stakeholders to understand both the benefits and challenges of integrating AI in healthcare and to employ strategies for overcoming these obstacles.

This review paper synthesizes findings from over 40 peer-reviewed articles published between 2017 and 2024, supplemented by reports and information from key institutions and credible media sources. The literature search was conducted using PubMed and Google Scholar, with search terms including “artificial intelligence”, “machine learning”, “clinical integration”, “healthcare”, and related keywords. We prioritized peer-reviewed journals and papers with significant citations to capture emerging trends and developments. Our paper addresses several key gaps in the literature on AI integration in healthcare. It provides an in-depth analysis of data quality issues, including societal and ethical dimensions often overlooked in technical discussions. We offer an updated perspective on rapidly evolving regulatory frameworks, particularly in the EU and US, and their potential impact on AI adoption. The paper emphasizes the critical importance of workflow integration, a challenge often overshadowed by a focus on AI model performance. We explore the intersection of explainable AI and clinical decision-making, examining how to make AI explanations clinically relevant and actionable. By synthesizing insights from medicine, computer science, law, and ethics, we present a multidisciplinary perspective that offers a more holistic view of challenges and potential solutions than typically found in domain-specific studies.


Challenges and solutions to integration

Integrating AI into healthcare is not solely a technological challenge; it also requires substantial changes to organizational culture, along with new policies and regulations that support AI’s ethical use in patient care. Successful integration of AI in healthcare requires overcoming several obstacles, including data-related issues, model explainability, medico-legal concerns, and privacy concerns (2). The following sections discuss these challenges and explore possible solutions.

Data quality and availability

One of the primary obstacles to developing robust AI systems in healthcare is the scarcity of high-quality, curated datasets. Many AI models are trained on small, heterogeneous datasets that may not accurately represent the diverse patient populations prevalent in real-world clinical settings (4,5). This problem is exacerbated by the lack of standardized data capture and labelling practices across healthcare institutions, resulting in inconsistencies and biases in the data used to train AI models. For instance, a study by Obermeyer et al. (6) identified that a popular commercial algorithm for making health decisions exhibited significant racial bias, assigning lower risk scores to Black patients compared to White patients with similar health profiles. This bias originated from the algorithm’s reliance on healthcare costs as a substitute for health needs, perpetuating existing disparities in access to care. Addressing these data quality and availability issues is essential to develop AI systems that can generalize effectively and equitably to diverse patient populations.
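To make the mechanism behind such proxy-label bias concrete, the short sketch below simulates two groups with identical underlying health needs but unequal access to care, and shows how a score that ranks patients by cost flags the disadvantaged group less often. The numbers and group structure are purely illustrative assumptions and are not drawn from the data analysed by Obermeyer et al.

```python
# Synthetic illustration (not the data from Obermeyer et al.): when healthcare cost is used
# as the training label, a group with reduced access to care receives lower "risk" scores
# despite identical underlying health needs.
import numpy as np

rng = np.random.default_rng(42)
n = 10_000
need = rng.gamma(shape=2.0, scale=1.0, size=n)   # true (unobserved) health need
group_b = rng.random(n) < 0.5                    # membership in the disadvantaged group

# Same need, but group B generates ~30% lower cost because of barriers to accessing care.
cost = need * np.where(group_b, 0.7, 1.0) + rng.normal(0, 0.1, n)

# A "risk score" based on cost inherits that gap: rank patients by cost and flag the top 10%.
threshold = np.quantile(cost, 0.9)
flagged = cost >= threshold

print("mean need, group A vs B:", need[~group_b].mean().round(2), need[group_b].mean().round(2))
print("share flagged for extra care, group A vs B:",
      flagged[~group_b].mean().round(3), flagged[group_b].mean().round(3))
print("mean need among flagged, group A vs B:",
      need[~group_b & flagged].mean().round(2), need[group_b & flagged].mean().round(2))
```

In this toy setting, group B patients are flagged for extra care less often, and those who are flagged are sicker on average than flagged group A patients, mirroring the disparity described above.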

The Alan Turing Institute, the UK’s national data science and AI institute, issued a report in 2021 reflecting on the use of AI tools during the coronavirus disease 2019 (COVID-19) pandemic (7). The report highlights that, despite efforts to harness a broad array of data, researchers faced significant challenges during COVID-19 due to limited data access and inconsistent data quality standards, impeding the timely and effective use of data. For instance, the rapid development of data systems like OpenSAFELY was imperative for efficiently researching COVID-19’s impact. However, these systems also laid bare the complexities of achieving real-time data sharing and integration across various health data platforms. Furthermore, the pandemic underscored the disparities in data availability, particularly affecting marginalized communities. These gaps revealed a pressing need for a more inclusive data governance structure that ensures equitable data access and addresses the biases arising from incomplete or non-representative data collection processes.

In dermatology, the development and application of AI diagnostic tools have highlighted crucial disparities in performance based on skin tone, as demonstrated in a study by Daneshjou et al. (8). The researchers created a diverse, curated clinical image set, the Diverse Dermatology Images (DDI) dataset, covering a wide range of skin tones, and used it to confirm substantial performance limitations in three leading dermatology AI models, particularly when diagnosing conditions on darker skin tones and uncommon diseases. Their findings reveal that the AI models, primarily trained on datasets lacking diversity, exhibited decreased accuracy and increased bias against individuals with darker skin. The study underscores the need for inclusivity in AI training datasets and reinforces that data quality and availability are essential when building AI tools in healthcare.

To standardize data capture and labelling practices across healthcare institutions, a comprehensive approach is necessary (9,10). Key elements include adopting common data models (e.g., OMOP, i2b2) and standardized medical terminologies (SNOMED CT, LOINC, ICD-10, RxNorm); establishing robust data governance policies overseen by a centralized committee; implementing standardized data capture forms with mandatory fields and dropdown menus; using natural language processing to extract information from unstructured clinical notes; deploying automated data validation checks; providing comprehensive staff training; conducting regular audits to ensure data quality; managing metadata effectively; adopting interoperability standards such as HL7 FHIR; leveraging machine learning for assisted data labelling (particularly for medical imaging and unstructured text); developing a data quality scorecard with metrics for completeness, accuracy, and consistency; and collaborating actively with other institutions through data-sharing networks and consortia (9-11). These strategies collectively aim to improve data consistency, quality, and interoperability, thereby enabling more effective research, analytics, and clinical decision support across healthcare institutions, with the understanding that standardization is an ongoing process requiring continuous effort and adaptation to evolving healthcare data needs.
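As a concrete illustration of the data quality scorecard idea, the minimal Python sketch below computes completeness, vocabulary-consistency, and date-plausibility metrics for a patient-record extract. The field names, controlled vocabulary, and example values are assumptions made for illustration, not any institution’s actual standard.

```python
# Illustrative data quality scorecard (assumed field names and controlled vocabulary).
import pandas as pd

REQUIRED_FIELDS = ["patient_id", "sex", "birth_date", "diagnosis_code", "encounter_date"]
VALID_SEX_CODES = {"male", "female", "other", "unknown"}  # hypothetical controlled vocabulary

def quality_scorecard(records: pd.DataFrame) -> dict:
    """Compute simple completeness and consistency metrics for a patient-record extract."""
    scores = {}
    # Completeness: share of non-missing values across mandatory fields.
    present = records[REQUIRED_FIELDS].notna().mean()
    scores["completeness"] = round(float(present.mean()), 3)
    # Consistency: values drawn from the agreed controlled vocabulary.
    scores["sex_code_consistency"] = round(
        float(records["sex"].str.lower().isin(VALID_SEX_CODES).mean()), 3
    )
    # Plausibility: encounter dates must not precede birth dates.
    plausible = pd.to_datetime(records["encounter_date"]) >= pd.to_datetime(records["birth_date"])
    scores["date_plausibility"] = round(float(plausible.mean()), 3)
    return scores

# Example usage with a toy extract:
df = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "sex": ["Female", "male", None],
    "birth_date": ["1980-05-01", "1992-11-23", "2001-02-14"],
    "diagnosis_code": ["I48.0", None, "J18.9"],
    "encounter_date": ["2023-01-10", "2022-06-30", "2000-12-01"],
})
print(quality_scorecard(df))
```

In practice, such metrics would be tracked over time and across sites, with agreed thresholds triggering remediation before data are used for model training.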

Interpretability and explainability

Another significant barrier to adopting AI in clinical settings is that many AI models lack interpretability and explainability. Clinicians are often reluctant to trust “black box” models that provide recommendations without explaining the underlying reasoning (12). This hesitancy is understandable, as clinicians need to be able to justify and defend their treatment decisions to patients, colleagues, and regulatory bodies. The need for explainable AI (XAI) is particularly acute in high-stakes medical contexts, where decisions can have life-or-death consequences. A study by Tonekaboni et al. (13) emphasized the importance of contextualizing XAI for clinical end-users, highlighting the need for AI systems to provide explanations that align with clinicians’ thought processes and decision-making frameworks. Developing XAI systems that can provide clear, clinically relevant justifications for their recommendations is essential for fostering trust and adoption among healthcare professionals.

In the area of clinical decision support systems (CDSS), the multifaceted challenges of explainability are highlighted in a study by Amann et al. (14), which adopts a multidisciplinary perspective on the necessity of XAI in healthcare. The authors argue that explainability in AI must transcend its technical boundaries to include legal, medical, and ethical dimensions if it is to benefit clinical practice. The research emphasizes that, without robust explainability, AI systems risk violating ethical medical practice and could hinder effective patient care through opaque decision-making processes. From a medical standpoint, the ability of AI systems to provide interpretable and transparent outputs is vital for clinician trust and decision-making, particularly in scenarios where AI suggestions may conflict with traditional clinical judgments (15).

Several existing XAI techniques from other industries can be adopted to tackle healthcare’s unique challenges (16). For instance, Local Interpretable Model-agnostic Explanations (LIME) can elucidate why a model makes specific predictions, like diagnoses, by pinpointing influential patient data features. Feature Importance Methods can highlight key symptoms or risk factors, as seen in stroke risk prediction. Decision sets offer rule-based explanations for diagnoses or treatments, enhancing CDSS. Shapley Additive Explanations (SHAP) can assign importance values to features, explaining complex models predicting patient outcomes. Counterfactual explanations demonstrate how altering inputs affects model outputs, which are beneficial for personalized medicine. Attention Mechanisms, often used in natural language processing, can spotlight influential parts of patient histories or test results. Healthcare-specific challenges such as data privacy, accuracy requirements, data complexity, and regulations must be addressed to integrate these techniques successfully. By overcoming these obstacles and utilizing XAI, healthcare AI systems can become more transparent and trustworthy, ultimately enhancing clinical decision-making and patient outcomes (17).
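To ground one of these techniques, the short sketch below computes SHAP values for a single patient’s prediction from a tree-based risk model. The model, feature names, and synthetic data are hypothetical stand-ins rather than a validated clinical tool, and the example assumes the open-source shap and scikit-learn packages.

```python
# Illustrative sketch: using SHAP to explain one patient's prediction from a toy risk model.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
features = ["age", "systolic_bp", "hba1c", "smoker", "prior_stroke"]  # hypothetical features
X = pd.DataFrame(rng.normal(size=(500, 5)), columns=features)
y = (X["age"] + X["systolic_bp"] + rng.normal(size=500) > 1).astype(int)  # synthetic outcome

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[[0]])

# Each value is that feature's additive contribution to this patient's predicted risk.
for name, value in zip(features, np.ravel(shap_values)):
    print(f"{name}: {value:+.3f}")
```

Presented alongside a recommendation, such feature-level attributions give clinicians a starting point for checking whether the model’s reasoning matches their own assessment of the patient.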

Legal and regulatory uncertainty

As AI technology rapidly evolves, the integration of AI systems into healthcare raises profound concerns about safety and ethical implications, illustrated by a distressing incident involving a Belgian man who died by suicide after interactions with an AI chatbot on the Chai app (18). This chatbot, named Eliza, was not designed for mental health support, yet it became a significant influence in the man’s life, exacerbating his eco-anxiety and loneliness. The AI, presenting as an emotional entity, misled him with dangerous suggestions and emotionally charged interactions. This tragic case underscores the pressing need for robust regulatory frameworks that address deploying AI technologies in sensitive contexts, including mental health. As AI systems like Eliza become more common, the likelihood of encountering severe unintended consequences increases, necessitating stringent safeguards, crisis intervention features, and clear guidelines to prevent harm and ensure that AI tools are used responsibly and ethically in healthcare.

The lack of clear legal and regulatory frameworks governing the use of AI in healthcare presents another significant obstacle to widespread adoption. Liability concerns arise when considering the potential consequences of AI systems making errors that result in patient harm (19). Who should be held responsible in such cases—the AI developers, the healthcare institutions deploying the systems, or the clinicians relying on the AI’s recommendations? The absence of well-defined regulations around AI use cases in healthcare exacerbates these liability concerns. For example, Zech et al. (20) highlighted the need for rules specifically addressing the use of AI in medical imaging and diagnostics, where the technology has shown promising results but poses risks if not properly validated and monitored. Establishing clear legal and regulatory guidelines for developing, testing, and deploying clinical AI systems is crucial for mitigating liability risks and ensuring patient safety.

Recent advancements and ongoing efforts to establish more precise regulatory guidelines for AI in healthcare involve various stakeholders, including policymakers, healthcare providers, and technology developers. The European Union has been proactive with the GDPR (21) and the Artificial Intelligence Act (22), categorizing AI systems based on risk levels and imposing stricter requirements on high-risk applications in healthcare. The EU AI Act, expected to be finalized soon (22), aims to create a comprehensive legal framework for AI, ensuring transparency, accountability, and safety in AI applications, including healthcare applications. In the United States, the Food and Drug Administration (FDA)’s AI/machine learning (ML)-based Software as a Medical Device framework emphasizes a total product lifecycle approach to ensure safety and effectiveness (23). Key areas of focus include radiology and cardiovascular medical specialities (24), where comprehensive frameworks are being developed to ensure accuracy, reliability, and accountability; generative AI, raising ethical and societal questions around accuracy, informed consent, and algorithmic biases (25); and telehealth, which offers opportunities for remote healthcare but raises data privacy and governance concerns (26). These efforts aim to create robust frameworks ensuring AI’s safe, effective, and ethical use in healthcare, enhancing patient care and outcomes.

Integration into clinical workflows

Even when AI systems demonstrate strong performance in controlled research settings, integrating them into real-world clinical workflows presents significant challenges. Existing workflows and decision-making processes may need to be redesigned to accommodate the introduction of AI tools, which can be disruptive and time-consuming (27). Clinicians may also require additional training to utilize AI systems and interpret their outputs effectively. Krittanawong et al. (28) emphasized the importance of securing clinician buy-in and ensuring practical integration into workflows for the successful adoption of AI in precision cardiovascular medicine. Engaging healthcare professionals in the development and implementation process and providing adequate support and resources for workflow integration can help facilitate the smooth incorporation of AI into clinical practice.

The challenges of integrating AI into clinical workflows were starkly illuminated by the case of Epic Systems’ AI model for detecting sepsis, which was found to perform poorly in real-world settings despite initial promising results. A study by researchers at the University of Michigan highlighted the model’s significant limitations, including missing two-thirds of actual sepsis cases and generating numerous false alarms when applied to data from 30,000 patients (29). This situation underscores the “AI chasm”, where models that perform well in controlled environments fail to translate effectively into clinical settings. It also reveals a lack of thorough documentation and transparency in deploying these AI tools, making it challenging for healthcare providers to assess their reliability and fairness across different demographics and geographic areas (29). Such deficiencies not only disrupt clinical workflows but also pose severe risks to patient safety, highlighting the urgent need for rigorous, multidimensional testing and clearer documentation standards to ensure AI tools are both effective and equitable when integrated into the day-to-day operations of healthcare facilities. This case serves as a critical reminder of the necessity for healthcare systems to evaluate AI technologies transparently before widespread implementation.
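To make the alert-fatigue problem concrete, a brief worked example shows how a model that misses two-thirds of sepsis cases can still flood clinicians with false alarms when the condition is relatively rare. The prevalence and specificity figures below are illustrative assumptions, not the values reported in the Michigan evaluation.

```python
# Illustrative arithmetic (assumed prevalence and specificity): why alert-based models
# generate many false alarms when the condition is rare.
prevalence = 0.07     # assumed share of hospitalized patients who develop sepsis
sensitivity = 0.33    # roughly consistent with "missing two-thirds of actual sepsis cases"
specificity = 0.85    # assumed share of non-sepsis patients correctly not flagged

true_positives = prevalence * sensitivity
false_positives = (1 - prevalence) * (1 - specificity)
ppv = true_positives / (true_positives + false_positives)

print(f"Positive predictive value: {ppv:.2%}")             # most alerts are false alarms
print(f"False alerts per true alert: {false_positives / true_positives:.1f}")
```

Under these assumed numbers, only about one in seven alerts corresponds to a true sepsis case, illustrating how quickly clinicians can come to distrust and ignore such tools.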

While there is a long way to go before we see widespread adoption and integration of AI in clinical workflows, there are notable examples where integrating AI into clinical workflows has produced remarkable successes in healthcare settings. At Mayo Clinic, an AI algorithm for analysing ECG data significantly improved early detection of atrial fibrillation, leading to better patient outcomes (30). Stanford University’s CheXNet demonstrated radiologist-level detection of pneumonia from chest X-rays (31). Additionally, UCSF’s AI-based sepsis prediction tool enabled timely interventions, reducing sepsis-related mortality (32), and researchers at James Cook University developed AI models that improved health predictions for premature infants (33).

Successful integration strategies include comprehensive training programs for healthcare professionals, updating EHR systems to incorporate AI insights, and developing user-friendly interfaces to minimize workflow disruptions (25). Continuous improvement through regular updates and interdisciplinary collaboration between AI developers, healthcare providers, and IT specialists ensures that AI tools remain practical and relevant in clinical practice (34). These approaches have demonstrated that with careful planning and execution, AI can significantly enhance clinical workflows and patient outcomes.

Privacy and security concerns

The sensitive nature of healthcare data raises significant privacy and security concerns around sharing and analyzing patient information for AI development and deployment. Ensuring patient confidentiality and data protection is paramount, as breaches can erode trust and hinder the adoption of AI technologies (35). Furthermore, AI systems in healthcare are vulnerable to adversarial attacks intentionally designed to mislead or manipulate their outputs (36). Such attacks can compromise patient safety and undermine the reliability of AI-assisted decision-making. There are several well-documented healthcare examples involving big data and privacy breaches, including the Anthem data breach in 2015, which exposed the personal information of 80 million patients; the Atrium Health data breach in 2018, which exposed the records of 2.65 million patients; and a recent hack on Nuance Communications that affected 13 healthcare clients in North Carolina (36). Implementing robust data governance frameworks, secure data sharing protocols, and advanced cybersecurity measures is essential for safeguarding patient privacy and ensuring the integrity of clinical AI systems.
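As an illustration of how fragile model outputs can be, the minimal sketch below applies the fast gradient sign method, one widely studied form of adversarial perturbation, to a toy classifier. The model and input are hypothetical stand-ins, not an attack on any deployed clinical system; the point is only that a small, carefully directed change to the input can shift a model’s prediction.

```python
# Minimal fast gradient sign method (FGSM) sketch in PyTorch.
# The model and "image" are toy stand-ins; adversarial attacks on medical imaging
# models exploit the same mechanism with perturbations imperceptible to clinicians.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 2))  # hypothetical tiny classifier
model.eval()

image = torch.rand(1, 1, 32, 32, requires_grad=True)  # stand-in for a medical image
true_label = torch.tensor([0])

loss = nn.CrossEntropyLoss()(model(image), true_label)
loss.backward()

epsilon = 0.03  # perturbation budget, kept small so the change is hard to see
# Step each pixel slightly in the direction that increases the model's loss.
adversarial_image = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("original prediction:   ", model(image).argmax(dim=1).item())
print("adversarial prediction:", model(adversarial_image).argmax(dim=1).item())
```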

Robust data governance and advanced cybersecurity measures are paramount for safeguarding patient data. A holistic approach involves integrating Master Data Management (MDM) frameworks with cybersecurity protocols to ensure data integrity and mitigate risks (37). Adopting established frameworks like ISO 27001:2022, NIST Cybersecurity Framework (CSF), and COBIT 2019 enhances IT governance maturity and ensures compliance with legal and regulatory requirements (37,38). Proactive defence strategies include conducting regular red teaming exercises to identify vulnerabilities and developing robust incident response plans. Organizations must also adapt to local and international data protection laws like the GDPR and consider investing in cyber insurance for financial protection against data breaches. Technological solutions like encryption, access controls, and continuous monitoring tools are crucial for data protection and real-time threat detection. By implementing these comprehensive measures, organizations can fortify their data assets against contemporary and emerging threats, ensuring the privacy and security of patient information (37,38).
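As one small, concrete example of the technological measures mentioned above, the sketch below encrypts a patient record at rest using symmetric encryption from Python’s cryptography library. The record contents are hypothetical, and key management is deliberately simplified; in practice the key would be held in a dedicated key management service, never alongside the data or in source code.

```python
# Minimal sketch: encrypting a patient record at rest with symmetric encryption (Fernet).
from cryptography.fernet import Fernet

key = Fernet.generate_key()       # in production, stored in a key vault, never hard-coded
cipher = Fernet(key)

record = b'{"patient_id": "12345", "diagnosis": "I48.0"}'  # hypothetical record
token = cipher.encrypt(record)    # ciphertext safe to persist or transmit

assert cipher.decrypt(token) == record
print("ciphertext prefix:", token[:40], "...")
```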


The path forward

While AI possesses the potential to improve healthcare and patient outcomes significantly, the path towards widespread clinical incorporation is riddled with obstacles. Ensuring adequate data quality and accessibility is crucial in realizing the maximum potential of AI in the healthcare industry (25). To achieve this, it is essential to address critical factors such as interpretability, legal responsibility, operational disruptions, and data protection. Future research in AI for healthcare needs to address several critical areas to enhance the effectiveness and integration of AI-driven solutions. Longitudinal studies are essential to evaluate the long-term impact of AI implementation on patient outcomes, particularly in areas such as cancer therapy-related cardiac dysfunction. Standardization of outcome measures, including developing core outcome sets and patient-reported outcome measures, is crucial for improving result comparability and reducing reporting bias, especially in areas like pain management where current measures lack sufficient validity.

Integrating patient perspectives in developing AI-based tools and outcome measures is vital for ensuring relevance and efficacy (34). Comparative effectiveness studies are needed to assess AI-based interventions against traditional approaches across various healthcare domains, such as vestibular rehabilitation. Integrating imaging techniques with AI offers more profound insights into disease mechanisms and intervention effects. Exploring synergies between virtual reality and AI technologies could lead to innovative patient care and rehabilitation approaches (39). Research into AI explainability techniques in clinical settings is essential for building trust and facilitating adoption among healthcare professionals (17). Developing strategies for effectively integrating AI into customer relationship management systems in healthcare is another critical area (40), considering the implications of Big Data and machine learning on patient-provider relationships (4). Finally, a multidisciplinary approach involving AI experts, clinicians, and other healthcare stakeholders is crucial to ensure that AI developments align with clinical needs and ethical considerations, ultimately leading to more effective patient-centered healthcare solutions (34).

Several strategies can be implemented to achieve effective stakeholder collaboration in integrating AI into healthcare. These include establishing interdisciplinary research centres like Stanford’s HAI (41), creating AI-focused research and development programs such as those at Google DeepMind (42), setting up regulatory sandboxes like the UK’s MHRA Innovation Office (43), forming multi-stakeholder consortia exemplified by the NIH’s All of Us Research Program (44), promoting open-source initiatives, establishing ethical review boards within healthcare institutions, fostering public-private partnerships like the UK Biobank project (45), developing professional development programs such as the AMA’s AI resource centre (46), creating patient engagement platforms like PatientsLikeMe (47), and establishing policy working groups similar to the WHO’s Digital Health Technical Advisory Group (48). By implementing these diverse strategies, we can foster a collaborative ecosystem that addresses the multifaceted challenges of integrating AI into clinical practice, ensuring solutions are technically sound, clinically relevant, ethically robust, and legally compliant.


Conclusions

Incorporating AI in healthcare constitutes a pivotal transformation in the sector, presenting exceptional prospects for elevating patient care, refining diagnostic precision, and maximizing therapeutic results. However, as demonstrated in our detailed review, widespread adoption faces many challenges. These challenges span a broad spectrum, ranging from technical issues such as data quality and model interpretability to organizational concerns like workflow integration, stakeholder collaboration, and critical ethical and regulatory considerations. Despite these challenges, the potential benefits of AI in healthcare are too great to ignore. Moving forward, we must adopt a multidisciplinary, collaborative approach to address these issues. This involves technical innovations in areas like explainable AI and robust data governance, the development of clear regulatory frameworks, the fostering of interdisciplinary partnerships, and the cultivation of a healthcare culture that embraces technological innovation while maintaining a steadfast commitment to patient safety and ethical practice. As we navigate this complex landscape, it is essential to maintain realistic expectations about the capabilities and limitations of AI in healthcare.

While AI has the potential to be a powerful tool in the medical arsenal, it should be viewed as a complement to, rather than a replacement for, human clinical expertise. To achieve improved health outcomes and a more efficient, effective healthcare system, we must address the challenges outlined in this paper and foster collaboration among all stakeholders, including AI developers, healthcare providers, policymakers, and patients. By working together, we can ensure that AI is seamlessly integrated into clinical practice, resulting in a brighter future for healthcare.


Acknowledgments

Funding: None.


Footnote

Peer Review File: Available at https://jmai.amegroups.com/article/view/10.21037/jmai-24-148/prf

Conflicts of Interest: Both authors have completed the ICMJE uniform disclosure form (available at https://jmai.amegroups.com/article/view/10.21037/jmai-24-148/coif). S.R. serves as an unpaid editorial board member of Journal of Medical Artificial Intelligence from March 2024 to February 2026. The other author has no conflicts of interest to declare.

Ethical Statement: The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the non-commercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.


References

  1. Stafie CS, Sufaru IG, Ghiciuc CM, et al. Exploring the Intersection of Artificial Intelligence and Clinical Healthcare: A Multidisciplinary Review. Diagnostics (Basel) 2023;13:1995. [Crossref] [PubMed]
  2. Esmaeilzadeh P. Challenges and strategies for wide-scale artificial intelligence (AI) deployment in healthcare practices: A perspective for healthcare organizations. Artif Intell Med 2024;151:102861. [Crossref] [PubMed]
  3. Petersson L, Larsson I, Nygren JM, et al. Challenges to implementing artificial intelligence in healthcare: a qualitative interview study with healthcare leaders in Sweden. BMC Health Serv Res 2022;22:850. [Crossref] [PubMed]
  4. Beam AL, Kohane IS. Big Data and Machine Learning in Health Care. JAMA 2018;319:1317-8. [Crossref] [PubMed]
  5. Razzaki S, Baker A, Perov YN, et al. A comparative study of artificial intelligence and human doctors for the purpose of triage and diagnosis. ArXiv. 2018;abs/1806.10698.
  6. Obermeyer Z, Powers B, Vogeli C, et al. Dissecting racial bias in an algorithm used to manage the health of populations. Science 2019;366:447-53. [Crossref] [PubMed]
  7. von Borzyskowski I, Mateen B, Mazumder A. Reflections on the response of the UK’s data science and AI community to the COVID-19 pandemic. The Alan Turing Institute; 2021.
  8. Daneshjou R, Vodrahalli K, Novoa RA, et al. Disparities in dermatology AI performance on a diverse, curated clinical image set. Sci Adv 2022;8:eabq6147. [Crossref] [PubMed]
  9. Skrocki M, editor. Standardization Needs for Effective Interoperability; 2013. Available online: https://scholarworks.wmich.edu/ichita_transactions/32
  10. Yang Y, Li R, Xiang Y, et al. Standardization of collection, storage, annotation, and management of data related to medical artificial intelligence. Intelligent Medicine 2021; [Crossref]
  11. Williams E, Kienast M, Medawar E, et al. FHIR-DHP: A Standardized Clinical Data Harmonisation Pipeline for scalable AI application deployment. medRxiv; 2022.
  12. Holzinger A, Biemann C, Pattichis CS, et al. What do we need to build explainable AI systems for the medical domain? ArXiv. 2017;abs/1712.09923.
  13. Tonekaboni S, Joshi S, Mccradden M, et al. What Clinicians Want: Contextualizing Explainable Machine Learning for Clinical End Use. ArXiv. 2019;abs/1905.05134.
  14. Amann J, Blasimme A, Vayena E, et al. Explainability for artificial intelligence in healthcare: a multidisciplinary perspective. BMC Med Inform Decis Mak 2020;20:310. [Crossref] [PubMed]
  15. Amann J, Vetter D, Blomberg SN, et al. To explain or not to explain?-Artificial intelligence explainability in clinical decision support systems. PLOS Digit Health 2022;1:e0000016. [Crossref] [PubMed]
  16. Arrieta AB, Rodríguez ND, Ser JD, et al. Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI. Inf Fusion 2019;58:82-115. [Crossref]
  17. Mukund Deshpande N, Gite S, Pradhan B, et al. Explainable Artificial Intelligence–A New Step towards the Trust in Medical Diagnosis with AI Frameworks: A Review. Computer Modeling in Engineering & Sciences 2022; [Crossref]
  18. Xiang C. 'He would still be here': Man dies by suicide after talking with AI chatbot, widow says: Vice; 2023.
  19. Price WN 2nd, Gerke S, Cohen IG. Potential Liability for Physicians Using Artificial Intelligence. JAMA 2019;322:1765-6. [Crossref] [PubMed]
  20. Zech JR, Badgeley MA, Liu M, et al. Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: A cross-sectional study. PLoS Med 2018;15:e1002683. [Crossref] [PubMed]
  21. Hoofnagle CJ, van der Sloot B, Borgesius FJZ. The European Union general data protection regulation: what it is and what it means*. Information & Communications Technology Law 2019;28:65-98. [Crossref]
  22. Wagner M, Borg M, Runeson P, et al. Navigating the Upcoming European Union AI Act. IEEE Software 2024;41:19-24. [Crossref]
  23. Clark P, Kim J, Aphinyanaphongs Y. Marketing and US Food and Drug Administration Clearance of Artificial Intelligence and Machine Learning Enabled Software in and as Medical Devices: A Systematic Review. JAMA Netw Open 2023;6:e2321792. [Crossref] [PubMed]
  24. Zhu S, Gilbert M, Chetty I, et al. The 2021 landscape of FDA-approved artificial intelligence/machine learning-enabled medical devices: An analysis of the characteristics and intended use. Int J Med Inform 2022;165:104828. [Crossref] [PubMed]
  25. Reddy S. Generative AI in healthcare: an implementation science informed translational path on application, integration and governance. Implement Sci 2024;19:27. [Crossref] [PubMed]
  26. Gallese Nobile C. Legal Aspects of the Use of Artificial Intelligence in Telemedicine. Journal of Digital Technologies and Law 2023;1:314-36. [Crossref]
  27. Wang S, Cao G, Wang Y, et al. Review and Prospect: Artificial Intelligence in Advanced Medical Imaging. Front Radiol 2021;1:781868. [Crossref] [PubMed]
  28. Krittanawong C, Zhang H, Wang Z, et al. Artificial Intelligence in Precision Cardiovascular Medicine. J Am Coll Cardiol 2017;69:2657-64. [Crossref] [PubMed]
  29. Andrews EL. “Flying in the Dark”: Hospital AI Tools Aren’t Well Documented. Stanford University Human-Centered Artificial Intelligence; 2021.
  30. Malloy T. AI-guided screening uses ECG data to detect a hidden risk factor for stroke. Rochester, Minnesota: Mayo Clinic; 2022.
  31. Rajpurkar P, Irvin JA, Zhu K, et al. CheXNet: Radiologist-Level Pneumonia Detection on Chest X-Rays with Deep Learning. ArXiv. 2017;abs/1711.05225.
  32. Hickok K. Machine Learning Enables Diagnosis of Sepsis, the Elusive Global Killer. University of California San Francisco; 2022.
  33. Baker S, Xiang W, Atkinson I. Hybridized neural networks for non-invasive and continuous mortality risk assessment in neonates. Comput Biol Med 2021;134:104521. [Crossref] [PubMed]
  34. Karalis VD. The Integration of Artificial Intelligence into Clinical Practice. Appl Biosci 2024;3:14-44. [Crossref]
  35. Ravi D, Wong C, Lo B, et al. A Deep Learning Approach to on-Node Sensor Data Analytics for Mobile or Wearable Devices. IEEE J Biomed Health Inform 2017;21:56-64. [Crossref] [PubMed]
  36. Finlayson SG, Bowers JD, Ito J, et al. Adversarial attacks on medical machine learning. Science 2019;363:1287-9. [Crossref] [PubMed]
  37. Pansara RR, Vaddadi SA, Vallabhaneni R, et al. Fortifying Data Integrity using Holistic Approach to Master Data Management and Cybersecurity Safeguarding. 2024 11th International Conference on Computing for Sustainable Global Development (INDIACom) 2024:1424-8.
  38. Taherdoost H. Understanding Cybersecurity Frameworks and Information Security Standards—A Review and Comprehensive Overview. Electronics 2022;11:2181. [Crossref]
  39. Aziz HA. Virtual Reality Programs Applications in Healthcare. Journal of Health and Medical Informatics 2018;9:1-3. [Crossref]
  40. Kumar Deb S, Jain R, Deb V. Artificial Intelligence —Creating Automated Insights for Customer Relationship Management. 2018 8th International Conference on Cloud Computing, Data Science & Engineering (Confluence) 2018:758-64.
  41. Stanford HAI. About: Stanford University Human-Centered Artificial Intelligence; 2024. Available online: https://hai.stanford.edu/about
  42. Google DeepMind. About: Google DeepMind; 2024. Available online: https://deepmind.google/about/
  43. GOV.UK. MHRA Innovation Office: Government of United Kingdom; 2024. Available online: https://www.gov.uk/government/groups/mhra-innovation-office
  44. NIH. All of Us Research Program: National Institutes of Health; 2024. Available online: https://allofus.nih.gov/
  45. UK Biobank. UK Biobank; 2024. Available online: https://www.ukbiobank.ac.uk/
  46. AMA. Augmented Intelligence in Medicine: American Medical Association; 2024. Available online: https://www.ama-assn.org/practice-management/digital/augmented-intelligence-medicine
  47. PatientsLikeMe. PatientsLikeMe: PatientsLikeMe; 2024. Available online: https://www.patientslikeme.com/
  48. WHO. WHO Digital Health Technical Advisory Group: World Health Organisation; 2024. Available online: https://www.who.int/groups/dh-tag-membership#cms
doi: 10.21037/jmai-24-148
Cite this article as: Reddy S, Shaikh S. The long road ahead: navigating obstacles and building bridges for clinical integration of artificial intelligence technologies. J Med Artif Intell 2025;8:7.
