Editorial

Unlocking the potential of qualitative research for the implementation of artificial intelligence-enabled healthcare

Henry David Jeffry Hogg1,2,3^, Mark Philip Sendak4, Alastair Keith Denniston5,6,7^, Pearse Andrew Keane3,8^, Gregory Maniatopoulos1,9

1Population Health Sciences Institute, Faculty of Medical Sciences, Newcastle University, Newcastle Upon Tyne, UK; 2Newcastle Eye Centre, Newcastle upon Tyne Hospitals NHS Foundation Trust, Newcastle Upon Tyne, UK; 3Moorfields Eye Hospital NHS Foundation Trust, London, UK; 4Duke Institute for Health Innovation, Durham, NC, USA; 5Institute of Inflammation and Ageing, University of Birmingham, Birmingham, UK; 6University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK; 7National Institute for Health and Care Research Birmingham Biomedical Research, Birmingham, UK; 8Institute of Ophthalmology, University College London, London, UK; 9School of Business, University of Leicester, Leicester, UK

^ORCID: Henry David Jeffry Hogg, 0000-0001-8044-7790; Alastair Keith Denniston, 0000-0001-7849-0087; Pearse Andrew Keane, 0000-0002-9239-745X.

Correspondence to: Henry David Jeffry Hogg, MBBS, MRes. Population Health Sciences Institute, Faculty of Medical Sciences, Baddiley-Clark Building, Newcastle University, Newcastle upon Tyne, NE2 4AX, UK. Email: Jeffry.hogg@ncl.ac.uk.

Comment on: Schouten B, Schinkel M, Boerman AW, et al. Implementing artificial intelligence in clinical practice: a mixed-method study of barriers and facilitators. J Med Artif Intell 2022;5:12.


Keywords: Implementation; qualitative research; computerised decision support tools; artificial intelligence (AI); machine learning


Received: 05 April 2023; Accepted: 08 June 2023; Published online: 30 June 2023.

doi: 10.21037/jmai-23-28


Artificial intelligence (AI)-enabled clinical decision support tools (CDSTs) are complicated technologies, which form the basis of complex AI-enabled healthcare interventions. Research of AI-enabled CDSTs has proliferated, with 57,844 model development studies and 5,073 comparative or real-world evaluation studies readily identifiable on PubMed at the time of writing (1). Despite this proliferation of evidence, a notable translational gap persists, with little real-world implementation of AI-enabled healthcare interventions (2). While research communities have acknowledged the value and importance of studying AI implementation in real-world clinical settings, there is limited evidence on how to translate the potential of AI into everyday healthcare practice. This persistent translational failure is multifactorial, but there is a clear opportunity for the research community to deliver the evidence that healthcare systems’ decision makers need to fully evaluate complex interventions such as those involving AI-enabled CDSTs (2). This need for a holistic evidence base exists because AI-enabled CDSTs cannot be considered inert and isolated technologies; they are components of a complex system which shape, and are shaped by, the adopters and organisations which enable their impact. The complexity surrounding the clinical implementation of AI tools and applications therefore requires a better understanding of the interplay between agency, social processes, and the contextual conditions shaping implementation. Qualitative research provides a valuable approach to studying AI implementation because it allows research communities to explore the interplay between social processes and the contextual factors shaping the implementation of change (3). Qualitative research can also surface how these factors may be anticipated or modified to support judicious and successful implementation efforts across varied sociotechnical contexts. In so doing, it helps to answer complex questions such as how and why efforts to implement best practices may succeed or fail, and how patients and providers experience and make decisions in care (4).

Research of AI-enabled CDSTs using qualitative methods represents a minority of the literature (4). An update to October 2022 of a search strategy for a qualitative evidence synthesis of AI implementation identified just 201 studies, 98 of which focused on machine learning-enabled CDSTs (4). Schouten et al.’s mixed-methods study of the barriers and facilitators to clinical AI implementation represents a valuable contribution to this qualitative evidence base (5). Drawing upon qualitative interviews and focus groups with 15 physicians, the authors aim to enhance the generalisability of their findings by employing a widely used framework to guide assessment of the contextual determinants of implementation (6). Additionally, they go beyond a descriptive analysis of their data to support distant adopters of AI-enabled healthcare interventions in answering the key question of “how” they might succeed in their own context (7). Schouten et al.’s approach to clinical AI implementation research enhances the actionability of qualitative research and aligns with established guidelines in implementation science (8). Due to the sociotechnical complexity of AI-enabled healthcare interventions, however, there are unavoidable limits to the transferability of learnings from one specific pairing of intervention and context to another (9). This important caveat is not always made explicit in the outputs of theoretical approaches, and risks misplaced reductionism among the many in the clinical AI community for whom the evaluation of qualitative research is unfamiliar. This editorial will discuss how the contexts on which qualitative research focuses, and the theoretical approaches applied to the data, can respect these limits whilst delivering actionable and proportionately transferable insights. The aim is to support a wider range of current and potential adopters of AI-enabled healthcare interventions in unlocking the value of qualitative research within their scope of practice, and to propose priorities for researchers progressing this valuable evidence base.


The role of implementation theory

There is a great and growing breadth of theoretical approaches for implementation researchers and practitioners to choose from (9). These theoretical approaches can be categorised under various taxonomies and put to various uses but are united by their purpose to abstract empirical insights from research to make them more transferable across implementation efforts (10).

Transferability is a valuable contribution which helps to compensate for the relative scarcity of qualitative research. This value is derived from the production of insights which can transcend differences in technological, clinical and social aspects to deliver impact beyond the specifications of the study itself. The extent to which insights generated through implementation efforts are transferable is influenced by the theoretical approach selected and its alignment with the underlying qualitative research (9). This is not to say that there is a “correct” theoretical approach to choose. However, a considered choice of theoretical approach is likely to enhance the degree and legitimacy with which insights derived from one implementation effort can be translated to another (Table 1). Understanding researchers’ rationale for selecting a theoretical approach can be useful in evaluating how this has been addressed in qualitative studies of AI-enabled clinical interventions, but such rationales are not commonly reported (4). Even where a rationale is reported, it seems likely that ready access to the full range of candidate theoretical approaches, and confidence in selecting between them, is not common in the AI research community. As yet there are no clear trends of improvement in these areas, but there are well-established mechanisms by which improvement could occur. Journal editors and reviewers have a role to play in advocating for relevant guidelines (8). In doing so, they promote a detailed and transparent explanation of all methodological aspects of a qualitative study, including its guiding theoretical aims and methodological principles. There are also systematically searchable libraries of theoretical approaches relevant to implementation science, and emerging training programmes in implementation science principles, which can improve accessibility across a greater variety of theoretical approaches (9,16).

Table 1

Three diverse examples of considered selection and application of theoretical approaches in qualitative research of AI-enabled interventions

First author and year | Research aim | Theoretical approach | Role in research | Relevant characteristics (9)
Buck 2022 (11) | “To investigate which determinants influence GPs’ attitudes toward AI-enabled systems in diagnosis” | Two-component model of attitude (12) | Informing data analysis | A process model focusing on individuals’ (GPs’) characteristics and attitudes
Fujimori 2022 (13) | “To evaluate the acceptance, barriers, and facilitators to implementing AI-based CDSSs in the emergency care setting” | Consolidated Framework for Implementation Research (6) | Informing data collection | A determinant framework focusing on factors influencing implementation across policy, organisational and individual levels
Chen 2021 (14) | “To explore the knowledge, awareness and attitudinal responses related to AI amongst professional groups in radiology, and to analyse the implications for the future adoption of these technologies into practice” | Innovation-decision process framework (15) | Informing data analysis | A classic theory arising from change management, focusing on networks and relationships between individuals

AI, artificial intelligence; GP, general practitioner; CDSS, computerised decision support systems.


The importance and scarcity of authentic insight

The accuracy with which qualitative data can represent stakeholder perceptions of real-world implementations of AI-enabled interventions is another important consideration in unlocking the value of qualitative research. Few AI-enabled interventions have been implemented in real-world care (2). Qualitative research of these real-world implementation efforts, and the authentic insights it can provide, is scarcer still. Drawing on authentic insights from adjacent interventions involving technologies such as rule-based CDSTs represents an opportunity to mitigate this scarcity (4). AI-enabled CDSTs do hold certain sociotechnical distinctions, but there is a great deal of overlap with adjacent innovations such as rule-based CDSTs, and authentic insights from their real-world implementation should not be undervalued. This opportunity is increasingly important because hypothetical AI-enabled healthcare scenarios make up the majority of the qualitative evidence base for implementation. Insights derived from studying hypothetical scenarios have clear limitations in guiding the implementation of specific AI-enabled interventions into specific contexts. Already, strong triangulation has been achieved on the themes raised by clinician and patient participants in diverse studies of hypothetical AI-enabled healthcare scenarios, which risks research waste from repeating similar studies (4). This need for authentic insights is not unique to AI implementation but a general consideration across qualitative research. The methodology of phenomenology highlights this by requiring that the essence of a phenomenon be understood through the perspectives of individuals with lived experience of it (17). Before committing to qualitative study settings and designs, researchers should ask themselves where their resources and expertise can be applied most productively (4).

We would suggest two priorities for AI-enabled healthcare to be addressed through qualitative research. Firstly, in the common absence of clinically integrated AI-enabled interventions to study, hypothetical studies should pursue a narrow focus with a specified AI-enabled CDST and use case. This is exemplified by Schouten et al.’s work, which presented potential users with clinical vignettes involving a real pre-clinical AI-enabled CDST for predicting the outcome of blood cultures (5). Secondly, opportunities to explore perspectives from all stakeholders in clinically integrated AI-enabled healthcare interventions should be pursued. Whilst the insights will inevitably originate from a single specific context, they will have a high level of authenticity, and the use of theoretical approaches can make their value transferable to other implementation efforts (Table 2). This offers a means to move the field beyond abstract syntheses of generalised perspectives and improve the actionability of the insights for practitioners seeking to close the translational gap for AI-enabled healthcare (2).

Table 2

Five example qualitative research studies of clinically integrated AI-enabled healthcare interventions

First author and year | Intervention | Context | Insight
Sandhu 2020 (18) | Electronic medical record-based sepsis diagnosis | A US emergency department | Having staff dedicated to the intervention improved the interface between clinicians and the tool
Lebovitz 2022 (19) | Plain radiograph fracture identification | A US radiology department | An established mechanism for users to ‘interrogate’ AI outputs improved trust
Beede 2020 (20) | Diabetic retinopathy screening on fundus photographs | A nurse-led Thai community clinic | End users were sensitive to the time and cost burden of false positives to patients, which led to use-case drift
Singer 2022 (21) | Electronic medical record-based prediction of hospital bed capacity and readmission risk | Adult medical, surgical and paediatric inpatient services at a US hospital | Place-based iterative collaborative development of AI tools between users and developers mitigated against tool abandonment
Barakat-Johnson 2022 (22) | Wound segmentation on sequential photographs | Inpatient and outpatient adult wound care in Australia | Assistive AI tools supporting tasks that are already intuitive for clinicians offer little value and have low uptake

AI, artificial intelligence; US, United States.


How can things improve?

Qualitative research has already made valuable contributions to AI-enabled healthcare interventions, and is approaching consensus on how stakeholders may feel about hypothetical scenarios (4). Further investments in qualitative research need to avoid replicating these insights if the field is to continue progressing (2). This progress will depend upon qualitative research that improves the design of a wide range of specific AI-enabled healthcare interventions and tailors strategies for their implementation across a range of contexts (7). Supporting such a breadth of intervention and context pairings requires insights from qualitative research to be transferable outside of the studies from which they arose, whilst remaining accessible to a broad community of implementation practitioners (9). Rather than creating more “novel” theoretical approaches, the research community can promote the accessibility of the insights they provide by applying more established theoretical approaches to satisfy the need for transferability (9). Researchers should also exercise restraint in pursuing hypothetical research questions, which may fail to deliver new insights. Instead, perspectives from any stakeholders with lived experience of integrated AI-enabled healthcare interventions should be prioritised (Table 2). A considered selection of theoretical approaches can then be applied to maximise the transferability of these authentic insights whilst respecting the diversity of AI-enabled interventions and contexts for their implementation (Table 1). Whilst the resource requirements of identifying integrated AI-enabled interventions and applying methods such as ethnography may be high, the depth and authenticity of the insights they provide may represent the most efficient means of progressing implementation from its current state (23).

In addition to the evidence elicited by dedicated qualitative researchers, there is an opportunity for other stakeholders such as developers to share important qualitative and quantitative insights, such as those derived from post-market surveillance (24). It is important that developers and regulators understand the extent to which this performance and usability data may be helpful in our understanding of implementation (both for the specific CDST and more generally), and support the more open sharing of this data. Providers themselves could also work with developers to design their local procurement and implementation procedures to incorporate a local evaluation of the intervention (including implementation issues) as part of a trial phase prior to full contracting (25). Networks between providers could also offer peer support and authentic insights into AI-enabled healthcare interventions and their implementation (25). These adaptations would help to leverage existing implementation insights arising outside of the research setting, but funding and strategic shifts from healthcare policy makers, leaders and managers to integrate researchers within the practice of AI implementation could expand this opportunity further (25).

It is time for the community of stakeholders in clinical AI to focus on qualitative research that is grounded in the real-world integration of AI-enabled interventions. This practical emphasis could unlock more of the value of qualitative research of AI-enabled healthcare interventions to secure and expedite scalable benefit for patients and providers across sociotechnical contexts.


Acknowledgments

Funding: None.


Footnote

Provenance and Peer Review: This article was commissioned by the editorial office, Journal of Medical Artificial Intelligence. The article did not undergo external peer review.

Conflicts of Interest: All authors have completed the ICMJE uniform disclosure form (available at https://jmai.amegroups.com/article/view/10.21037/jmai-23-28/coif). HDJH is funded by the National Institute for Health Research (NIHR) through a doctoral fellowship award (NIHR301467). The funder had no role in the design or delivery of this study. MPS’s work is supported by grants from the National Institute of Health, the Gordon and Betty Moore Foundation and Patrick J. McGovern Foundation. MPS is listed as a co-inventor for intellectual property licenced to Cohere Med and Clinetic, and has received honoraria from Roche for conference presentation and NBER for textbook contributions and is an unpaid board member for Machine Learning for Health Care. AKD’s work is supported by the NIHR Birmingham Biomedical Research Centre (BRC). The views expressed are those of the author(s) and not necessarily those of the NIHR, Department of Health and Social Care or other funders. PAK’s work is supported by grants from UK Research and Innovation and the Moorfields Eye Charity. PAK is also listed on patents held by Google, participates on an advisory board for RetinAI and holds stock for Big Picture Medical and Bitfount. GM has no conflicts of interest to declare.

Ethical Statement: The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the non-commercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.


References

  1. Zhang J, Whebell S, Gallifant J, et al. An interactive dashboard to track themes, development maturity, and global equity in clinical artificial intelligence research. Lancet Digit Health 2022;4:e212-3. [Crossref] [PubMed]
  2. Sendak M, Gulamali F, Balu S. Overcoming the Activation Energy Required to Unlock the Value of AI in Healthcare. In: Agrawal A, Gans J, Goldfarb A, et al., Editors. The Economics of Artificial Intelligence: Health Care Challenges. Chicago: University of Chicago Press; 2022.
  3. Maniatopoulos G, Hunter DJ, Erskine J, et al. Implementing the New Care Models in the NHS: Reconfiguring the Multilevel Nature of Context to Make It Happen; 2020:3-27.
  4. Hogg HDJ, Al-Zubaidy M; Technology Enhanced Macular Services Study Reference Group; et al. Stakeholder Perspectives of Clinical Artificial Intelligence Implementation: Systematic Review of Qualitative Evidence. J Med Internet Res 2023;25:e39742. [Crossref] [PubMed]
  5. Schouten B, Schinkel M, Boerman AW, et al. Implementing artificial intelligence in clinical practice: a mixed-method study of barriers and facilitators. J Med Artif Intell 2022;5:12. [Crossref]
  6. Damschroder LJ, Aron DC, Keith RE, et al. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci 2009;4:50. [Crossref] [PubMed]
  7. Powell BJ, Waltz TJ, Chinman MJ, et al. A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implement Sci 2015;10:21. [Crossref] [PubMed]
  8. Hull L, Goulding L, Khadjesari Z, et al. Designing high-quality implementation research: development, application, feasibility and preliminary evaluation of the implementation science research development (ImpRes) tool and guide. Implement Sci 2019;14:80. [Crossref] [PubMed]
  9. Rabin B, Swanson K, Glasgow R, et al. Dissemination and Implementation Models in Health Research and Practice 2021. Available online: https://dissemination-implementation.org
  10. Nilsen P. Making sense of implementation theories, models and frameworks. Implement Sci 2015;10:53. [Crossref] [PubMed]
  11. Buck C, Doctor E, Hennrich J, et al. General Practitioners' Attitudes Toward Artificial Intelligence-Enabled Systems: Interview Study. J Med Internet Res 2022;24:e28916. [Crossref] [PubMed]
  12. Rosenberg MJ, Hovland CI. Cognitive, Affective and Behavioral Components of Attitudes, in Attitude Organization and Change: An Analysis of Consistency Among Attitude Components. New Haven: Yale University Press; 1960.
  13. Fujimori R, Liu K, Soeno S, et al. Acceptance, Barriers, and Facilitators to Implementing Artificial Intelligence-Based Decision Support Systems in Emergency Departments: Quantitative and Qualitative Evaluation. JMIR Form Res 2022;6:e36501. [Crossref] [PubMed]
  14. Chen Y, Stavropoulou C, Narasinkan R, et al. Professionals' responses to the introduction of AI innovations in radiology and their implications for future adoption: a qualitative study. BMC Health Serv Res 2021;21:813. [Crossref] [PubMed]
  15. Battilana J, Casciaro T. The network secrets of great change agents. Harv Bus Rev 2013;91:62-8, 132. [PubMed]
  16. Schultes MT, Aijaz M, Klug J, et al. Competences for implementation science: what trainees need to learn and where they learn it. Adv Health Sci Educ Theory Pract 2021;26:19-35. [Crossref] [PubMed]
  17. Neubauer BE, Witkop CT, Varpio L. How phenomenology can help us learn from the experiences of others. Perspect Med Educ 2019;8:90-7. [Crossref] [PubMed]
  18. Sandhu S, Lin AL, Brajer N, et al. Integrating a Machine Learning System Into Clinical Workflows: Qualitative Study. J Med Internet Res 2020;22:e22421. [Crossref] [PubMed]
  19. Lebovitz S, Lifshitz-Assaf H, Levina N. To Engage or Not to Engage with AI for Critical Judgments: How Professionals Deal with Opacity When Using AI for Medical Diagnosis. Organization Science 2022;33:126-48. [Crossref]
  20. Beede E, Baylor E, Hersch F, et al. A Human-Centered Evaluation of a Deep Learning System Deployed in Clinics for the Detection of Diabetic Retinopathy. CHI '20: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 2020:1-12.
  21. Singer SJ, Kellogg KC, Galper AB, et al. Enhancing the value to users of machine learning-based clinical decision support tools: A framework for iterative, collaborative development and implementation. Health Care Manage Rev 2022;47:E21-31. [Crossref] [PubMed]
  22. Barakat-Johnson M, Jones A, Burger M, et al. Reshaping wound care: Evaluation of an artificial intelligence app to improve wound assessment and management amid the COVID-19 pandemic. Int Wound J 2022;19:1561-77. [Crossref] [PubMed]
  23. Rapport F, Smith J, Hutchinson K, et al. Too much theory and not enough practice? The challenge of implementation science application in healthcare practice. J Eval Clin Pract 2022;28:991-1002. [Crossref] [PubMed]
  24. United States Food and Drug Administration, 522 Postmarket Surveillance Studies Program. 2022.
  25. Sandhu S, Sendak MP, Ratliff W, et al. Accelerating health system innovation: principles and practices from the Duke Institute for Health Innovation. Patterns (N Y) 2023;4:100710. [Crossref] [PubMed]
Cite this article as: Hogg HDJ, Sendak MP, Denniston AK, Keane PA, Maniatopoulos G. Unlocking the potential of qualitative research for the implementation of artificial intelligence-enabled healthcare. J Med Artif Intell 2023;6:8.