Review Article

Clinician interaction with artificial intelligence systems: a narrative review

Yasmine Madan1, Argyrios Perivolaris2, Robert Chris Adams-McGavin3, James J. Jung4,5

1Department of Health Sciences, McMaster University, Hamilton, Ontario, Canada; 2Institute of Medical Sciences, University of Toronto, Toronto, Ontario, Canada; 3Department of Surgery, Temerty Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada; 4Department of Surgery, Duke University, Durham, NC, USA; 5Department of Biostatistics and Bioinformatics, Duke University, Durham, NC, USA

Contributions: (I) Conception and design: All authors; (II) Administrative support: JJ Jung; (III) Provision of study materials or patients: None; (IV) Collection and assembly of data: All authors; (V) Data analysis and interpretation: Y Madan, A Perivolaris, JJ Jung; (VI) Manuscript writing: Y Madan, A Perivolaris, JJ Jung; (VII) Final approval of manuscript: All authors.

Correspondence to: James J. Jung, MD, PhD. Department of Surgery, Duke University, 407 Crutchfield Street, Durham, NC 27704, USA; Department of Biostatistics and Bioinformatics, Duke University, Durham, NC, USA. Email: james.jung@duke.edu.

Background and Objective: Artificial intelligence (AI) has the potential to significantly enhance healthcare systems by improving diagnostic accuracy, treatment efficacy, and operational efficiency. Despite these advantages, AI adoption in clinical practice remains limited, primarily due to challenges with data integration, workflow disruptions, and distrust of AI recommendations. This study aims to provide a narrative review of the existing literature on clinician-AI interactions, focusing on how AI affects the clinician experience and identifying gaps in current research for future investigation.

Methods: We conducted a narrative review using a comprehensive literature search of the Medline, PsycINFO, Embase, and Scopus databases for studies published from the inception of each database to June 2022. Keywords related to clinician-AI interactions were used. Relevant data were extracted and thematically analyzed, focusing on the elements that play key roles in the interaction between clinicians and AI systems.

Key Content and Findings: We identified six thematic groups involving clinician interaction with AI systems. They were: (I) user satisfaction, learnability, and usability, demonstrating varied levels of clinician satisfaction and challenges in AI system usability; (II) accuracy of AI system outputs, with many studies validating high accuracy but noting occasional discrepancies; (III) perceived usefulness, showing that clinicians acknowledge potential benefits of AI but remain cautious; (IV) impact on clinician workflow, highlighting both improvements in efficiency and occasional workflow disruptions; (V) trust and acceptance, with trust in AI systems varying significantly among clinicians; and (VI) ethical and professional concerns, focusing on issues such as data privacy, bias, and the potential for AI to affect clinical judgment.

Conclusions: This review elucidates the multifaceted nature of clinician-AI interactions, underscoring AI’s potential to enhance clinician experiences through improved diagnostic accuracy and workflow efficiency. However, it also highlights significant barriers, including ethical concerns, trust issues, and challenges in system learnability. Further research is needed to address these barriers, focusing on developing transparent AI systems, ensuring robust data privacy, and improving AI usability and integration into clinical workflows.

Keywords: Narrative review; artificial intelligence (AI); clinicians; healthcare; perception


Received: 13 August 2024; Accepted: 05 November 2024; Published online: 18 December 2024.

doi: 10.21037/jmai-24-279


Introduction

Artificial intelligence (AI) is a rapidly advancing field with increasing influence across various healthcare domains. AI applications in healthcare include AI-driven algorithms for interpreting diagnostic images, such as the Aidence tool for lung nodule detection on computed tomography (CT) scans, and clinical decision support systems like IBM Watson for Oncology, which aid in diagnosis, recommending treatments, and predicting patient outcomes (1-4). These technologies promise to enhance clinicians’ clinical acumen and task efficiency (4,5). Despite these promising advancements, AI adoption in clinical practice remains limited. One predominant obstacle lies in the complexity of the healthcare environment, characterized by numerous interconnected roles and diverse tasks essential for delivering effective patient care. Key challenges impeding AI adoption include the need for seamless integration with existing electronic health records (EHRs), concerns about data privacy, and varying levels of technology literacy among clinicians. Importantly, there is a significant gap in understanding how AI implementation affects human experiences and interactions within this complex environment.

The interaction between clinicians and AI systems is a complex exchange. This interaction may entail clinicians using AI systems for clinical decision support, thereby delivering more efficient and effective patient care. At the same time, AI systems learn and refine their algorithms using real-world clinical data and reinforcement from clinicians’ input. The clinician-AI interaction shapes clinical workflows, decision-making processes, and patient experiences. Thus, it becomes imperative to evaluate clinicians’ perspectives on various elements of interaction, such as system usability, trust, and ethical concerns, when engaging with AI platforms.

This narrative review aims to explore existing literature on clinician-reported experiences after interacting with an AI system. By doing so, it seeks to characterize AI’s potential to improve clinician experiences while identifying gaps for future study. We acknowledge that our review is based on literature prior to the advent of generative AI, and therefore primarily pertains to traditional machine learning approaches. First, we summarize the literature on clinician perspectives of system usability, including aspects such as learnability, accuracy of AI, and its impact on clinical workflow. Second, we discuss the impact of AI on clinical decision making and patient interactions, highlighting both positive and negative outcomes. Lastly, we explore clinicians’ trust in AI systems and the ethical and professional concerns arising from their use, such as bias in AI algorithms and implications for patient privacy. We present this article in accordance with the Narrative Review reporting checklist (available at https://jmai.amegroups.com/article/view/10.21037/jmai-24-279/rc).


Methods

We searched the Medline (OVID), PsycINFO (OVID), Embase (OVID), and Scopus (ELSEVIER) databases from database inception to June 2022 (Table 1). Our rationale for including all articles from inception, spanning early AI technologies to contemporary applications, was to capture the evolution of key themes such as ethical and professional concerns, trust, and clinician-AI interaction. These foundational themes remain relevant today, despite technological advancements in AI. By including these earlier studies, we aim to provide a more comprehensive context for our review, offering insights into the historical and ongoing challenges in clinician-AI interactions. Additionally, we recognize that AI has continued to evolve rapidly since June 2022, and we recommend that future research build upon our review by incorporating newer studies to capture emerging developments and insights. We used Medical Subject Headings (MeSH) related to AI and its subdomains (e.g., deep learning), which were combined with terms related to health personnel, medicine, and perceptions. For example, search terms included “Clinician”, “Healthcare Provider”, “Artificial Intelligence”, “Deep Learning”, “Computer Vision”, “Computer-Assisted”, “Perception”, “Trust”, and “Attitude”. The search was filtered to exclude non-human studies, editorials, and letters [full search strategy in supplementary appendix (available at https://cdn.amegroups.cn/static/public/jmai-24-279-1.pdf)]. From a total of 17,660 publications found, we selected studies that reported on clinician experience and quality of interaction with an AI-enabled system (Figure 1). Included studies were original research employing a randomized controlled trial, cohort, case-control, cross-sectional, case-series, qualitative, or mixed methods design. We excluded abstracts, dissertation/thesis work, unpublished reports or data, reviews, protocols, opinions, letters to the editor, and studies published in languages other than English.
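As a simplified illustration of how such concept blocks combine, the sketch below assembles a Boolean query from hypothetical term lists; it is not the published strategy, which is more extensive and uses database-specific MeSH syntax (see the supplementary appendix).

```python
# Hypothetical, simplified sketch of Boolean query construction;
# term lists here are illustrative, not the published search strategy.
ai_terms = ["Artificial Intelligence", "Deep Learning", "Computer Vision", "Computer-Assisted"]
clinician_terms = ["Clinician", "Healthcare Provider", "Health Personnel"]
perception_terms = ["Perception", "Trust", "Attitude"]

def or_block(terms):
    """Join synonyms for one concept into a parenthesized OR block."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

# Concepts are intersected: a record must match at least one term from every block.
query = " AND ".join(or_block(b) for b in (ai_terms, clinician_terms, perception_terms))
print(query)
```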

Table 1

The search strategy summary

Items | Specification
Date of search | 2022-06-06
Databases and other sources searched | Medline, PsycINFO, Embase, and Scopus
Search terms used | See supplementary appendix (available at https://cdn.amegroups.cn/static/public/jmai-24-279-1.pdf)
Timeframe | From database inception to June 2022
Inclusion criteria | Original research, published in English, using a randomized trial, cohort, case-control, cross-sectional, case-series, qualitative, or mixed methods design
Selection process | Information was extracted by two reviewers independently
Figure 1 Flow chart of included studies.

Two reviewers (Y.M. and A.P.) independently screened the titles and abstracts using the Covidence software to identify studies eligible for full-text review. During the full-text screening, the same reviewers independently applied the inclusion and exclusion criteria (Table 2) to determine which studies would proceed to data extraction. Data extracted included study characteristics such as publication year, country, study design, clinician population and specialties, methods used to assess interaction quality (e.g., surveys, interviews, or questionnaires), details of the AI tool, and specific traits reflecting the quality of clinician-AI interactions. Both reviewers conducted data extraction separately, and any disagreements were resolved through discussion with a third reviewer (J.J.J.). Pertinent findings on the elements of interaction between clinicians and AI systems were summarized. We performed a thematic analysis, which involved coding the data, identifying patterns, and developing themes that described the elements of clinician-AI interaction.
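The review resolved screening disagreements by discussion with a third reviewer; an agreement statistic was not reported. As a minimal sketch of how agreement between two independent screeners could be quantified, the following computes Cohen’s kappa on made-up include/exclude decisions.

```python
# Illustrative only: Cohen's kappa for two independent screeners.
# Kappa is a common agreement metric; it is NOT reported in this review,
# and the decision lists below are fabricated for demonstration.
def cohens_kappa(a, b):
    assert len(a) == len(b)
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n           # raw agreement
    labels = set(a) | set(b)
    expected = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)  # chance agreement
    return (observed - expected) / (1 - expected)

# Hypothetical screening decisions from two reviewers
reviewer_1 = ["include", "exclude", "exclude", "include", "exclude"]
reviewer_2 = ["include", "exclude", "include", "include", "exclude"]
print(round(cohens_kappa(reviewer_1, reviewer_2), 2))  # 0.62: moderate agreement
```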

Table 2

Study inclusion and exclusion criteria

Inclusion criteria
   • Population: healthcare workers (examples: physicians, surgeons, residents, nurses, allied health, etc.)
   • Intervention: use or interaction with an artificial intelligence clinical decision support tool
   • Outcome: measurement of clinician reported experience and quality of interaction with the AI system. This can include but is not limited to trust, perception of clinicians, interpretability, impact on workflow, impact on culture, etc.
Exclusion criteria
   • The study does not use an artificial intelligence-based clinical decision support tool (example: a robotic system that works independently of clinician input)
   • The study does not measure clinician-reported experience with the AI clinical decision support tool. This can include but is not limited to trust, perception of clinicians, interpretability, impact on workflow, impact on culture, feasibility of system, etc.
   • The AI system does not provide support for a clinical decision (examples: budgetary or scheduling AI systems)
   • The study only includes students in healthcare
   • The study is not an original study, or is only a conference abstract (reviews, opinions, case reports, and letters were excluded)
   • The study was not reported in English

AI, artificial intelligence.


Results

User satisfaction, learnability, and usability

User satisfaction

Most studies reported high user satisfaction scores (6-13), with contributing factors including ease of learning (7,12) and perceived clinical usefulness (8). Conversely, low satisfaction was often due to inaccuracies and limited practical usefulness of the AI outputs (14). Notably, the longer clinicians used an AI system, the more positive their perceptions became (15,16). User satisfaction was also higher among clinicians who worked at institutions with a long history of using AI technology in practice or who faced higher workloads (7,17,18).

Learnability

Several studies have reported that clinicians generally perceived AI systems as easy to learn (6,8,19-22). Notably, clinicians who received proper training on AI system usage demonstrated quicker learning (17). However, the quality of training varied considerably across studies, with clinicians often expressing a desire for more extensive training before employing AI systems in clinical practice (17,23-26). There appeared to be opportunities to optimize training, as several clinicians reported moderate to high stress levels while learning to use AI systems (17).

Usability

Several studies have reported that clinicians generally perceive AI systems as easy to use (6,8-10,15,16,20,22,27-30). However, certain features have been identified as obstacles to ease of use, such as increased time and effort required for data entry (7,31) and having to wear a virtual reality (VR) headset (32). Despite these issues, AI systems often reduced workload, enhancing usability (33-35).

Accuracy of AI system outputs

The effectiveness of AI in healthcare relies on the accuracy of its outputs. Most studies demonstrated high accuracy levels, corroborated by clinicians (9,10,30,36-39). However, low accuracy due to poorly tailored recommendations or high false positive rates was also noted (14). It is imperative to address this, as clinician trust in AI depends on system accuracy. Furthermore, the perceived accuracy and utility of system outputs varied between clinicians in different medical specialties (9,40). For instance, in one study of an AI-enabled risk assessment tool for clinical decision making on antithrombotic therapy in geriatric patients, cardiologists were 3.5 times more likely than geriatricians to agree with the AI outputs (9).
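To make this accuracy vocabulary concrete, the arithmetic below uses made-up counts (not data from any cited study) to show how a high false positive rate can undermine perceived usefulness even when sensitivity looks good.

```python
# Illustrative arithmetic with fabricated counts: tp/fp/tn/fn from a
# hypothetical diagnostic AI evaluated against a reference standard.
tp, fp, tn, fn = 45, 30, 120, 5

sensitivity = tp / (tp + fn)           # 0.90: the system misses few true cases...
false_positive_rate = fp / (fp + tn)   # 0.20: ...but flags one in five normals
ppv = tp / (tp + fp)                   # 0.60: 40% of alerts the clinician sees are wrong

print(sensitivity, false_positive_rate, round(ppv, 2))
```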

Perceived usefulness

Usefulness in clinical decision making

Most studies reported that clinicians perceive AI as useful for clinical decision-making (7,8,15,16,19,27,30,35,36,41-43), influencing clinicians to change their clinical decisions (8,27,36). For instance, the use of a VR system for visualizing patient anatomy during preoperative planning of ascending aortic surgery resulted in some surgeons switching their anticipated surgical approach from an open to a clamped distal anastomosis, a less invasive approach with a lower complication rate (8). AI systems have also influenced clinical decisions by informing clinicians of treatments they had not previously considered. This has led to treatment diversification and optimization for conditions such as benign prostatic hyperplasia and depression when using AI-enabled clinical decision support systems (10,15,31). Furthermore, AI systems can remind clinicians of information they may have overlooked, as seen in an AI-driven clinical decision support system for sepsis intervention (44). Many clinicians have reported that AI positively reinforced their clinical decisions (15,43,45). However, a minority of studies reported that clinicians saw no benefit (18,25), indicating a need for further evaluation of the effect of AI-driven decision changes on patient outcomes.

Impact on patient interaction with the healthcare system

Clinicians have reported improvements in patient understanding, rapport, and trust during clinical encounters when recommendations and predictions from AI systems were incorporated (15,19,24,40). Notably, AI predictions were beneficial for communicating patient outcomes, disease severity and the need for intensive care, as seen in an AI-enabled system to predict mortality in coronavirus disease 2019 (COVID-19) patients (19). In surgical settings, the use of mixed reality (MR) systems that allowed patients to visualize their injuries before surgery enhanced preoperative patient counselling by improving patient understanding (32,36). Furthermore, AI systems that provide automated feedback to clinicians on their quality of care based on audio recordings of patient interactions encouraged clinicians to reflect on their practices and facilitated a greater awareness of certain interpersonal dynamics (26,46).

Impact on clinician workflow

Most studies in our review showed that AI systems improved clinician workflow by enhancing communication between clinician colleagues (19,32,47,48) and streamlining clinical tasks (15,24,33,40,49,50), thereby increasing productivity (49). However, a few reported negative impacts, attributed to some AI systems being more time-consuming (51,52). For instance, an AI system aiding lung cancer treatment decisions required clinicians to fill in multiple variables to obtain a result, hindering workflow because of the time needed to search for those data (51).

Trust and acceptance

Trust

While most clinicians reported trusting AI (15,24,44,53), some were more reluctant, requiring evidence to support the validity, reliability, and accuracy of AI system outputs (18,29,33). To build trust, clinicians suggested implementing AI systems that report regulatory conformance, such as Conformité Européenne (CE) mark certification or Food and Drug Administration (FDA) clearance, as well as providing performance data and peer-reviewed publications supporting system accuracy (33). Notably, clinicians expressed that they trusted the system only to assist them and would not depend on the AI system to complete tasks independently (18,19,23,33,54). Clinicians expressed that explainable AI, which refers to AI systems that describe their rationale and decision-making process, would improve trust (15,20,47). For example, clinicians noted that corroborating explanations, such as heatmaps on radiographs indicating the likelihood of abnormality, can improve confidence in AI decisions (20,55). These AI-generated explanations must be both precise and complete, as weak explanations were shown to decrease clinician trust even more than no explanation at all (56).
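As one generic illustration of how such a heatmap can be produced, the sketch below implements occlusion sensitivity: each image patch is masked in turn, and the drop in the model’s abnormality score marks the regions the model relies on. This is an assumed stand-in technique with toy data, not the method used by the systems in the cited studies.

```python
import numpy as np

def occlusion_heatmap(image, predict, patch=8):
    """Score each patch by how much masking it lowers the model's output."""
    h, w = image.shape
    baseline = predict(image)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = image.mean()  # occlude one patch
            heat[i // patch, j // patch] = baseline - predict(masked)
    return heat

# Toy stand-in for a trained radiograph classifier: the "abnormality score"
# is simply the mean intensity of a bright region (a simulated lesion).
rng = np.random.default_rng(0)
img = rng.normal(size=(64, 64))
img[16:32, 16:32] += 3.0  # simulated lesion

def score(x):
    return float(x[16:32, 16:32].mean())

print(occlusion_heatmap(img, score).round(1))  # high values localize the lesion
```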

Acceptance

Clinicians believe that AI has an important place in healthcare (29), expressing a high clinical acceptance of AI systems (37,39,57). Many reported that using AI was better than using conventional methods without AI assistance to complete tasks (8,22,23,30,50). Furthermore, upon introduction to AI systems, clinicians were keen on implementing AI in their practice (10,11,15,20,24,28,30,58-60) and recommending it to their colleagues (11,30,33,40).

Ethical and professional concerns

Although AI has tremendous potential to transform present clinical decision-making practices, clinicians have raised valid concerns that must be addressed; it is imperative to be aware of these concerns and to create systems and policies that overcome them. Professional concerns also create barriers to AI use, including cost, technological limitations, and the need for adequate training.

Ethical concerns

We defined ethical concerns as involving a sense of what is morally right or wrong, encompassing bias, patient privacy, and accountability for AI errors. One of the foremost ethical concerns reported by clinicians pertains to the possibility of security breaches of AI systems, posing risks to patient privacy (46,49). Given the immense volume of patient data these systems hold, solutions such as proper data security regulations or effective encryption technology are needed to safeguard patient information against privacy attacks. Furthermore, clinicians expressed concerns regarding algorithmic bias arising from inadequate diversity and outdatedness in the datasets used to train AI systems (9,18,19). Recognizing and avoiding algorithmic bias, specifically bias that reinforces past healthcare inequities affecting marginalized groups, is an enduring challenge for AI systems in medicine.

Professional concerns

Many clinicians expressed concerns about the high costs of AI systems, including initial purchase, user training, and troubleshooting expenses (17,18,54,61). It is important to note, however, that AI can result in more efficient clinical care, which can ultimately lower long-term costs (62). The introduction of AI into medicine has also raised concerns about technological barriers to clinical care, particularly if AI systems that clinicians come to rely on are not accessible in certain areas or healthcare systems (42). In addition, some clinicians worried that AI may eventually be capable of completing the tasks associated with their job and decrease staffing needs (58). Clinicians also raised concerns about the psychological impact of AI on their self-perception and self-confidence, expressing that AI could cause distress by undermining their confidence in their own clinical plans (49). Moreover, clinicians using AI systems that provide automated feedback on their quality of care reported that receiving low performance scores could lower their self-confidence (26,46). Lastly, clinicians expressed concerns about potential liability for errors caused by the use of an AI system (58). These uncertainties present a barrier to the use of AI in medicine; thus, health policy that addresses accountability for AI-related errors is imperative.


Limitations

Our study is limited by the relatively small number of eligible articles (55 studies), which may restrict the generalizability of our review. As a result, many of our findings are based on few studies, which may limit the strength of the evidence supporting our conclusions. Moreover, given the rapid evolution of AI, our search ending in June 2022 may not capture the most recent technological advancements. Lastly, our study is limited by its exclusion of studies published in languages other than English and those from unsearched databases, which may have led to the omission of some relevant studies. Despite these limitations, our study used a comprehensive search strategy across multiple databases and a systematic process in which two authors independently assessed study eligibility. This approach promotes comprehensiveness and intersubjectivity, providing valuable insights into clinician-AI interactions and establishing a foundation for future research.


Conclusions

The themes explored in this narrative review underscore AI’s significant potential to transform clinician experiences by reducing workload, saving time, and enhancing clinical decision-making. Our findings demonstrate high levels of user satisfaction, usability, and perceived usefulness among clinicians, particularly when AI systems are accurate and easy to learn. However, persistent ethical and professional concerns, such as data privacy, algorithm bias, and the high cost of AI systems, may hinder widespread adoption. To fully realize AI’s promise in healthcare, future studies must address these barriers by exploring effective strategies to improve system learnability, such as standardized and comprehensive training programs. Building trust through explainable AI systems, which provide clear and precise explanations for their recommendations, is essential. Additionally, robust data security measures must be implemented to safeguard patient information and maintain confidentiality.

A comprehensive understanding of AI’s impact on the healthcare environment, informed by diverse clinician perspectives, is crucial. Developing AI systems tailored to address the unique challenges and needs of the healthcare sector will be key to fostering widespread acceptance and integration. By addressing these critical areas, AI has the potential to significantly enhance the future of medicine, ultimately improving patient care and outcomes.


Acknowledgments

Funding: None.


Footnote

Reporting Checklist: The authors have completed the Narrative Review reporting checklist. Available at https://jmai.amegroups.com/article/view/10.21037/jmai-24-279/rc

Peer Review File: Available at https://jmai.amegroups.com/article/view/10.21037/jmai-24-279/prf

Conflicts of Interest: All authors have completed the ICMJE uniform disclosure form (available at https://jmai.amegroups.com/article/view/10.21037/jmai-24-279/coif). The authors have no conflicts of interest to declare.

Ethical Statement: The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the non-commercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.


References

  1. Pinto-Coelho L. How Artificial Intelligence Is Shaping Medical Imaging Technology: A Survey of Innovations and Applications. Bioengineering (Basel) 2023;10:1435. [Crossref] [PubMed]
  2. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med 2019;25:44-56. [Crossref] [PubMed]
  3. Davenport T, Kalakota R. The potential for artificial intelligence in healthcare. Future Healthc J 2019;6:94-8. [Crossref] [PubMed]
  4. Alowais SA, Alghamdi SS, Alsuhebany N, et al. Revolutionizing healthcare: the role of artificial intelligence in clinical practice. BMC Med Educ 2023;23:689. [Crossref] [PubMed]
  5. Krishnan G, Singh S, Pathania M, et al. Artificial intelligence in clinical medicine: catalyzing a sustainable global healthcare paradigm. Front Artif Intell 2023;6:1227091. [PubMed]
  6. Abdel-Rahman SM, Breitkreutz ML, Bi C, et al. Design and Testing of an EHR-Integrated, Busulfan Pharmacokinetic Decision Support Tool for the Point-of-Care Clinician. Front Pharmacol 2016;7:65. [PubMed]
  7. Abidi SR, Stewart S, Shepherd M, et al. Usability evaluation of family physicians’ interaction with the Comorbidity Ontological Modeling and ExecuTion System (COMET). Stud Health Technol Inform 2013;192:447-51. [PubMed]
  8. Abjigitova D, Sadeghi AH, Peek JJ, et al. Virtual Reality in the Preoperative Planning of Adult Aortic Surgery: A Feasibility Study. J Cardiovasc Dev Dis 2022;9:31. [Crossref] [PubMed]
  9. Bajorek BV, Masood N, Krass I. Development of a Computerised Antithrombotic Risk Assessment Tool (CARAT) to optimise therapy in older persons with atrial fibrillation. Australas J Ageing 2012;31:102-9. [Crossref] [PubMed]
  10. Bouhadana D, Nguyen DD, Raizenne B, et al. Evaluating the acceptability of an online patient decision aid for the surgical management of lower urinary tract symptoms secondary to benign prostatic hyperplasia. Can Urol Assoc J 2021;15:247-54. [Crossref] [PubMed]
  11. Chen RC, Jiang HQ, Huang CY, et al. Clinical Decision Support System for Diabetes Based on Ontology Reasoning and TOPSIS Analysis. J Healthc Eng 2017;2017:4307508. [Crossref] [PubMed]
  12. Dunsmuir D, Daniels J, Brouse C, et al. A knowledge authoring tool for clinical decision support. J Clin Monit Comput 2008;22:189-98. [Crossref] [PubMed]
  13. Wong J, Huang V, Wells D, et al. Implementation of deep learning-based auto-segmentation for radiotherapy planning structures: a workflow study at two cancer centers. Radiat Oncol 2021;16:101. [Crossref] [PubMed]
  14. Romero-Brufau S, Wyatt KD, Boyum P, et al. A lesson in implementation: A pre-post study of providers’ experience with artificial intelligence-based clinical decision support. Int J Med Inform 2020;137:104072. [Crossref] [PubMed]
  15. Tanguay-Sela M, Benrimoh D, Popescu C, et al. Evaluating the perceived utility of an artificial intelligence-powered clinical decision support system for depression treatment using a simulation center. Psychiatry Res 2022;308:114336. [Crossref] [PubMed]
  16. Trivedi MH, Kern JK, Grannemann BD, et al. A computerized clinical decision support system as a means of implementing depression guidelines. Psychiatr Serv 2004;55:879-85. [Crossref] [PubMed]
  17. Silveira Thomas Porto C, Catal E. A comparative study of the opinions, experiences and individual innovativeness characteristics of operating room nurses on robotic surgery. J Adv Nurs 2021;77:4755-67. [Crossref] [PubMed]
  18. Allen B, Agarwal S, Coombs L, et al. 2020 ACR Data Science Institute Artificial Intelligence Survey. J Am Coll Radiol 2021;18:1153-9. [Crossref] [PubMed]
  19. Abdulaal A, Patel A, Al-Hindawi A, et al. Clinical Utility and Functionality of an Artificial Intelligence-Based App to Predict Mortality in COVID-19: Mixed Methods Analysis. JMIR Form Res 2021;5:e27992. [Crossref] [PubMed]
  20. Calisto FM, Santiago C, Nunes N, et al. Introduction of human-centric AI assistant to aid radiologists for multimodal breast image classification. International Journal of Human-Computer Studies 2021;150:102607. [Crossref]
  21. Im EO, Chee W. Nurses’ acceptance of the decision support computer program for cancer pain management. Comput Inform Nurs 2006;24:95-104. [Crossref] [PubMed]
  22. Iqbal H, Tatti F, Rodriguez y Baena F. Augmented reality in robotic assisted orthopaedic surgery: A pilot study. J Biomed Inform 2021;120:103841. [Crossref] [PubMed]
  23. Alberdi E, Gilhooly K, Hunter J, et al. Computerisation and decision making in neonatal intensive care: a cognitive engineering investigation. J Clin Monit Comput 2000;16:85-94. [Crossref] [PubMed]
  24. Benrimoh D, Tanguay-Sela M, Perlman K, et al. Using a simulation centre to evaluate preliminary acceptability and impact of an artificial intelligence-powered clinical decision support system for depression treatment on the physician-patient interaction. BJPsych Open 2021;7:e22. [Crossref] [PubMed]
  25. Berrondo C, Makari JH. Current Practice in Robotic Surgery Among Pediatric Urologists: A Survey Study. J Endourol 2022;36:740-4. [Crossref] [PubMed]
  26. Hirsch T, Soma C, Merced K, et al. “It’s hard to argue with a computer:” Investigating Psychotherapists’ Attitudes towards Automated Evaluation. DIS (Des Interact Syst Conf) 2018;2018:559-71. [PubMed]
  27. Carlile M, Hurt B, Hsiao A, et al. Deployment of artificial intelligence for radiographic diagnosis of COVID-19 pneumonia in the emergency department. J Am Coll Emerg Physicians Open 2020;1:1459-64. [Crossref] [PubMed]
  28. Moret-Tatay C, Radawski HM, Guariglia C. Health Professionals’ Experience Using an Azure Voice-Bot to Examine Cognitive Impairment (WAY2AGE). Healthcare (Basel) 2022;10:783. [Crossref] [PubMed]
  29. Scheetz J, Koca D, McGuinness M, et al. Real-world artificial intelligence-based opportunistic screening for diabetic retinopathy in endocrinology and indigenous healthcare settings in Australia. Sci Rep 2021;11:15808. [Crossref] [PubMed]
  30. Wang H, Wang F, Newman S, et al. Application of an innovative computerized virtual planning system in acetabular fracture surgery: A feasibility study. Injury 2016;47:1698-701. [Crossref] [PubMed]
  31. Zhang X, Svec M, Tracy R, et al. Clinical decision support systems with team-based care on type 2 diabetes improvement for Medicaid patients: A quality improvement project. Int J Med Inform 2021; Epub ahead of print. [Crossref] [PubMed]
  32. Lu L, Wang H, Liu P, et al. Applications of Mixed Reality Technology in Orthopedics Surgery: A Pilot Study. Front Bioeng Biotechnol 2022;10:740507. [Crossref] [PubMed]
  33. Thodberg HH, Thodberg B, Ahlkvist J, et al. Autonomous artificial intelligence in pediatric radiology: the use and perception of BoneXpert for bone age assessment. Pediatr Radiol 2022;52:1338-46. [Crossref] [PubMed]
  34. Juluru K, Shih HH, Keshava Murthy KN, et al. Integrating AI Algorithms into the Clinical Workflow. Radiol Artif Intell 2021;3:e210013. [Crossref] [PubMed]
  35. Kumar A, Aikens RC, Hom J, et al. OrderRex clinical user testing: a randomized trial of recommender system decision support on simulated cases. J Am Med Inform Assoc 2020;27:1850-9. [Crossref] [PubMed]
  36. Checcucci E, Amparore D, Pecoraro A, et al. 3D mixed reality holograms for preoperative surgical planning of nephron-sparing surgery: evaluation of surgeons’ perception. Minerva Urol Nephrol 2021;73:367-75. [Crossref] [PubMed]
  37. Garrett Fernandes M, Bussink J, Stam B, et al. Deep learning model for automatic contouring of cardiovascular substructures on radiotherapy planning CT images: Dosimetric validation and reader study based clinical acceptability testing. Radiother Oncol 2021;165:52-9. [Crossref] [PubMed]
  38. Jones CM, Danaher L, Milne MR, et al. Assessment of the effect of a comprehensive chest radiograph deep learning model on radiologist reports and patient outcomes: a real-world observational study. BMJ Open 2021;11:e052902. [Crossref] [PubMed]
  39. Künzel LA, Nachbar M, Hagmüller M, et al. Clinical evaluation of autonomous, unsupervised planning integrated in MR-guided radiotherapy for prostate cancer. Radiother Oncol 2022;168:229-33. [Crossref] [PubMed]
  40. Scheder-Bieschin J, Blümke B, de Buijzer E, et al. Improving Emergency Department Patient-Physician Conversation Through an Artificial Intelligence Symptom-Taking Tool: Mixed Methods Pilot Observational Study. JMIR Form Res 2022;6:e28199. [Crossref] [PubMed]
  41. Jaber D, Hajj H, Maalouf F, et al. Medically-oriented design for explainable AI for stress prediction from physiological measurements. BMC Med Inform Decis Mak 2022;22:38. [Crossref] [PubMed]
  42. Bairstow PJ, Mendelson R, Dhillon R, et al. Diagnostic imaging pathways: development, dissemination, implementation, and evaluation. Int J Qual Health Care 2006;18:51-7. [Crossref] [PubMed]
  43. Cheikh AB, Gorincour G, Nivet H, et al. How artificial intelligence improves radiological interpretation in suspected pulmonary embolism. Eur Radiol 2022;32:5831-42. [Crossref] [PubMed]
  44. Long D, Capan M, Mascioli S, et al. Evaluation of User-Interface Alert Displays for Clinical Decision Support Systems for Sepsis. Crit Care Nurse 2018;38:46-54. [Crossref] [PubMed]
  45. Onega T, Aiello Bowles EJ, Miglioretti DL, et al. Radiologists’ perceptions of computer aided detection versus double reading for mammography interpretation. Acad Radiol 2010;17:1217-26. [Crossref] [PubMed]
  46. Creed TA, Kuo PB, Oziel R, et al. Knowledge and Attitudes Toward an Artificial Intelligence-Based Fidelity Measurement in Community Cognitive Behavioral Therapy Supervision. Adm Policy Ment Health 2022;49:343-56. [Crossref] [PubMed]
  47. Ginestra JC, Giannini HM, Schweickert WD, et al. Clinician Perception of a Machine Learning-Based Early Warning System Designed to Predict Severe Sepsis and Septic Shock. Crit Care Med 2019;47:1477-84. [Crossref] [PubMed]
  48. Aldughayfiq B, Sampalli S. Patients’, pharmacists’, and prescribers’ attitude toward using blockchain and machine learning in a proposed ePrescription system: online survey. JAMIA Open 2022;5:ooab115. [Crossref] [PubMed]
  49. Zhai H, Yang X, Xue J, et al. Radiation Oncologists’ Perceptions of Adopting an Artificial Intelligence-Assisted Contouring Technology: Model Development and Questionnaire Study. J Med Internet Res 2021;23:e27122. [Crossref] [PubMed]
  50. Calisto FM, Santiago C, Nunes N, et al. BreastScreening-AI: Evaluating medical intelligent agents for human-AI interactions. Artif Intell Med 2022;127:102285. [Crossref] [PubMed]
  51. Ankolekar A, van der Heijden B, Dekker A, et al. Clinician perspectives on clinical decision support systems in lung cancer: Implications for shared decision-making. Health Expect 2022;25:1342-51. [Crossref] [PubMed]
  52. Kim EY, Kim YJ, Choi WJ, et al. Concordance rate of radiologists and a commercialized deep-learning solution for chest X-ray: Real-world experience with a multicenter health screening cohort. PLoS One 2022;17:e0264383. [Crossref] [PubMed]
  53. Choudhury A, Asan O, Medow JE. Effect of risk, expectancy, and trust on clinicians’ intent to use an artificial intelligence system -- Blood Utilization Calculator. Appl Ergon 2022;101:103708. [Crossref] [PubMed]
  54. Agharezaei Z, Bahaadinbeigy K, Tofighi S, et al. Attitude of Iranian physicians and nurses toward a clinical decision support system for pulmonary embolism and deep vein thrombosis. Comput Methods Programs Biomed 2014;115:95-101. [Crossref] [PubMed]
  55. Mahomed N, van Ginneken B, Philipsen RHHM, et al. Computer-aided diagnosis for World Health Organization-defined chest radiograph primary-endpoint pneumonia in children. Pediatr Radiol 2020;50:482-91. [Crossref] [PubMed]
  56. Goel K, Sindhgatta R, Kalra S, et al. The effect of machine learning explanations on user trust for automated diagnosis of COVID-19. Comput Biol Med 2022;146:105587. [Crossref] [PubMed]
  57. Dontchos BN, Yala A, Barzilay R, et al. External Validation of a Deep Learning Model for Predicting Mammographic Breast Density in Routine Clinical Practice. Acad Radiol 2021;28:475-80. [Crossref] [PubMed]
  58. Hogue SC, Chen F, Brassard G, et al. Pharmacists’ perceptions of a machine learning model for the identification of atypical medication orders. J Am Med Inform Assoc 2021;28:1712-8. [Crossref] [PubMed]
  59. Jordan D, Rose SE. Multimedia abstract generation of intensive care data: the automation of clinical processes through AI methodologies. World J Surg 2010;34:637-45. [Crossref] [PubMed]
  60. Worlton TJ, Braden J, Gadbois K, et al. The Impact of Robotic-Assisted Technology on Attitudes of Host Nation Individuals Participating in Pacific Partnership 2018: Improving Partnerships Through Technology. Mil Med 2020;185:368-70. [Crossref] [PubMed]
  61. Ahmad A, Ahmad ZF, Carleton JD, et al. Robotic surgery: current perceptions and the clinical evidence. Surg Endosc 2017;31:255-63. [Crossref] [PubMed]
  62. Khanna NN, Maindarkar MA, Viswanathan V, et al. Economics of Artificial Intelligence in Healthcare: Diagnosis vs. Treatment. Healthcare (Basel) 2022;10:2493. [Crossref] [PubMed]
Cite this article as: Madan Y, Perivolaris A, Adams-McGavin RC, Jung JJ. Clinician interaction with artificial intelligence systems: a narrative review. J Med Artif Intell 2025;8:22.