Health technology assessment framework of artificial intelligence in radiology—perspectives from strategic decision makers
Highlight box
Key findings
• We developed a health technology assessment (HTA) framework for artificial intelligence (AI) in radiology.
• The framework was referenced from international tools.
• Initial statistical validation of the framework was done in Kenya.
What is known and what is new?
• Low adoption and implementation of AI in radiology is multifactorial.
• Co-creating supportive decision-making tools, e.g., HTA, improves AI literacy & adoption.
• Academia and research are critical for stakeholder capacity building.
What is the implication and what should change now?
• The HTA framework will support governance levers in the deployment of AI in radiology.
• The findings encourage clinical validation and contextual modification of the framework.
• The study fosters implementation of the Kenya AI Strategy 2025–2030.
Introduction
Strategic investments in healthcare are necessary to navigate complex, dynamic and uncertain healthcare systems (1). Artificial intelligence (AI) represents a landmark innovation within healthcare technological advancement, with radiology leading the field by having the highest number of approved and operational AI tools (2). AI in radiology augments diagnostics, clinical care and health system performance (2-4). However, these benefits are sometimes undermined by human errors, fragmented workflows and user training gaps (5). Reviews of over 200 AI tools in radiology demonstrate benefits such as improved lesion detection on various imaging modalities, alongside supporting diagnostic workflow and operational excellence (2). Kenya is exploring the use of AI in radiology by importing or developing relevant tools. However, this process is often dysregulated, costly, fragmented and siloed, partly due to a lack of clarity regarding the clinical and economic value proposition of these tools (6-9).
Automation of the radiology information system (RIS) is a proxy indicator of operational efficiency in workflow integration, robust data architecture, updated infrastructure, advanced technology and skills (4). Africa faces challenges including healthcare data poverty, meager infrastructure, limited skills and underdeveloped policy frameworks, all of which undermine the diffusion, development and implementation of advanced technology in radiology (5-7). Kawooya et al. contend that these can be circumvented through AI, research and collaboration (6). In Kenya, AI is reportedly applied in 12.6% of routine radiology practice, and 67.8% of radiologists express willingness to train AI models (8). This signals an opportunity, particularly as the same study highlighted a gap in familiarity with AI and AI-related concepts among radiologists and trainees in Kenya.
Health technology assessment (HTA) is an analytical framework used for healthcare priority setting, evaluating interventions such as software as medical devices, medical devices, medical procedures, medicines and population health programs on clinical, ethical, economic and social factors (9). Strategic decision makers support medical practice, clinical governance and knowledge management of AI implementation in radiology (10).
Systems thinking addresses interconnectedness within the healthcare system and highlights the need for multiple perspectives in strategic decision-making (1). Multicriteria decision analysis (MCDA) provides a structured approach to decision-making by systematically and simultaneously evaluating multiple alternatives (11). MCDA complements traditional HTA approaches, which are limited in the comprehensive analysis of multiple attributes involving multiple stakeholders. While MCDA facilitates rationality in individual and group decision-making, psychological traps and intuition cannot be fully eliminated from the process (12). This process identifies policy gaps, supporting governance levers through policy recommendations.
A scoping review by Farah et al. on existing HTA frameworks highlights heterogeneity concerning AI evolution, data requirements, complexity, clinical validation, safety requirements, economic evaluation, and regulatory and ethical considerations (13). This explains barriers to the effective clinical translation of AI in radiology in Kenya. Despite the availability of internationally approved AI-enabled radiology devices, factors such as infrastructural incompatibility, the lack of a regulatory structure and poor understanding of AI and related concepts limit its diffusion and public access as a strategic investment (8,14). These barriers are replicated across African nations. Therefore, capacity building, the creation of governance frameworks and the establishment of national AI plans are key priority pillars in the Continental AI Strategy 2025–2030 (15).
Ghana acknowledges inadequate infrastructure, moderate awareness, and limited expertise in the use of AI applications in radiology (16). South Africa notes that AI in radiology is in its infancy and highlights the need to upskill radiologists and trainees (17). Nigeria highlights that the use of AI systems in medical imaging is still limited and that system acceptability depends on user knowledge of their application (18). These insights were corroborated by Antwi et al. in a study of 475 radiologists from Africa, who are willing to accept AI in medical imaging if supported by clear implementation roadmaps and AI-related training (19).
Local adaptation is encouraged for defining HTA expertise and contextualizing HTA processes to ensure comparability across different jurisdictions in policy, decision making and clinical practice (9,10). There is a paucity of local frameworks and tools for AI assessment in healthcare within Kenya (8,14). The main aim of the study is to develop an HTA tool for AI in radiology within Kenya, informed by international references and local contextual considerations.
This objective aligns with the Kenya Artificial Intelligence Strategy 2025–2030, which is premised on governance enablers of ethical AI through stakeholder collaboration and policy frameworks (15). Effective implementation of the strategy begins with foundational investment in capacity building and infrastructure, with key milestones including the development of a national AI policy and the establishment of AI research and innovation hubs. Specific to healthcare, the core of the strategy seeks to establish robust adoption frameworks that foster safe, ethical, sustainable and inclusive AI, positioning Kenya locally and regionally as an AI research and application leader.
Methods
The study was conducted in accordance with the Declaration of Helsinki and its subsequent amendments. Ethical approval was granted by the Strathmore University Institutional Science Research and Ethics Committee (reference number SU-ISERC2327/24), and the study was licensed by the National Commission for Science, Technology and Innovation (license No. NACOSTI/P/24/38065). Written informed consent was obtained from each participant, relevant data handling procedures were followed, and confidentiality was preserved through access control and anonymity of the participants. This study applied action research to iterate a local HTA framework for AI in radiology. Key stakeholders involved in strategic decision-making for AI implementation in radiology include radiographers, radiologists, clinicians and healthcare managers, with governance support from policymakers and regulators, and knowledge management contributions from academics and researchers (10). Strategic decision makers gave expert opinions on deductive themes derived from existing frameworks. The opinions were analyzed using sequential elimination and multi-criteria decision analysis.
The Pharmacy and Poisons Board of Kenya (PPB) is the regulator for software as a medical device in Kenya. PPB does not have an explicit operational HTA evaluation framework for clinical translation of AI in radiology (20). Purposive sampling was preferred to support pragmatic stakeholder validation required for action-oriented research through stakeholder representation and thematic saturation.
Recruitment was conducted through an open invitation to the professional societies to mitigate participant self-selection. Member recruitment was based on practitioner recognition by the Kenya Medical Practitioners and Dentists Council. The professional societies included the Kenya Association of Radiologists (KAR), the Society of Radiographers in Kenya (SORK) and the Kenya Medical Association (KMA). These practitioners are familiar with the radiology diagnostic workflow and speak English, a licensing prerequisite of the regulatory body. The study was stopped after two weeks of no new responses on the online platform, thematic saturation on the preferred choice in the ordinal scale and stakeholder representation among respondents.
Three technical domain experts, who are policy advocates in healthcare with more than five years of experience in clinical specialty practice, were recruited from the KAR, the SORK and the KMA. The experts were familiar with AI-enabled lesion detection software, with heterogeneous experience of its clinical accuracy and the autonomy of AI applications in radiology workflow integration. The tool was validated using a hypothetical AI lesion detection case.
Deductive themes were derived from the multi-society practical considerations (MSC) for the development, purchase, implementation and monitoring of AI tools in radiology (21). The multi-society representatives were drawn from the American College of Radiology (ACR), Canadian Association of Radiologists (CAR), European Society of Radiology (ESR), Royal Australian and New Zealand College of Radiologists (RANZCR), and Radiological Society of North America (RSNA).
The Radiology AI Deployment and Assessment Rubric (RADAR) is defined by seven attributes: RADAR-1 technical efficacy, RADAR-2 diagnostic accuracy efficacy, RADAR-3 diagnostic thinking, RADAR-4 therapeutic process, RADAR-5 actual patient outcomes, RADAR-6 cost effectiveness efficacy and RADAR-7 local efficacy. RADAR-1 and -2 predominantly assess clinical implementation, RADAR-3, -4 and -5 assess actual patient outcomes, and RADAR-6 and -7 assess technological and socio-economic items (22).
The HTA Core Model 3.0 provides an international framework for multi-dimensional value assessment of healthcare technologies, developed by more than 70 institutions. It can be applied wholly or partially at any stage of the technology product lifecycle: before, during and after deployment of the technology. The tool is versatile in value standardization both internationally and nationally (23).
Data were collected over three months (1st July 2024 to 30th September 2024). The original tool developed by the researcher was validated by three domain experts: a radiographer, a radiologist and a healthcare manager in active clinical practice. Anecdotally, radiology in Kenya is practiced as an independent specialty or integrated into a clinical practice. The commonest imaging modalities available are ultrasound and X-ray, complemented by a few computed tomography (CT) scanners and magnetic resonance imaging (MRI) scanners in advanced institutions. These few advanced institutions would also have the capacity to host a RIS favorable for AI deployment. CT is among the more advanced imaging modalities in Kenya, with better accessibility than the most advanced modalities in the country, MRI and positron emission tomography (PET) (24).
Respondents filled in a semi-structured 3-point Likert scale online questionnaire with 66 items derived from the MSC and the RADAR rubric. The first iteration was done in the first week of July 2024. The second iteration was done in the third week of July 2024. The final survey with 35 items was shared and filled out between August and September 2024 by the fifty-four decision makers. The link was closed on 3rd October 2024. All participants were allowed to provide independent/open-ended feedback to the research team through email or phone call. Data completeness, validity and iterations were confirmed by the researchers, and the data were entered into a spreadsheet for analysis using SPSS v25. Findings of the study were shared through email, between 6th and 11th March 2025, with participants who had consented to dissemination of the study findings.
Statistical analysis
The 54 decision makers filled in a 35-item semi-structured 3-point Likert scale online survey. Descriptive analytics were presented as proportions and visualized as bar graphs, radar charts and pie charts. The first three iterations of the framework were done by three domain experts through sequential elimination and MCDA (Appendix 1).
Sequential elimination of items from the original tool was based on domain expert preference through prioritization, voting consensus and respondent validation. MCDA complements HTA in making structured and transparent decisions (11) by defining a problem, selecting an evaluation criterion, determining criteria weights, dealing with uncertainty and deliberating on the final findings (12).
The problem was to decide on the HTA items. The evaluation criteria were provided by the MSC and the RADAR rubric. Criteria weights were linked to the 3-point Likert ordinal scale designating respondent preference/choice. Qualitative weighting of the ordinal scale designated “high” for the highest preference, “moderate” for moderate preference and “low” for the lowest preference. Systematic prioritization, voting consensus and respondent validation aimed to promote agreement rather than unanimity (25). Parameter uncertainty was addressed through statistical validation and decision maker deliberations, presented as the final HTA framework (Appendix 1).
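The mapping from respondents’ 3-point Likert choices to a qualitative weight can be sketched as follows. The counts are hypothetical, and the rule (taking the modal choice per item) is an illustrative reading of the weighting described above, not the study’s exact analysis pipeline:

```python
from collections import Counter

def qualitative_weight(responses):
    """Designate an item's qualitative weight from its 3-point Likert
    responses: the modal choice ('high'/'moderate'/'low')."""
    return Counter(responses).most_common(1)[0][0]

# Hypothetical respondent choices for two items
item_a = ["high"] * 47 + ["moderate"] * 3 + ["low"] * 4
item_b = ["moderate"] * 30 + ["high"] * 14 + ["low"] * 10

print(qualitative_weight(item_a))  # → high
print(qualitative_weight(item_b))  # → moderate
```

Items whose modal choice is “high” are retained as high-preference criteria; a tie-breaking rule would be needed in practice.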
Results
Participants’ profile
Fifty-four medical doctors and radiographers deliberated on the final tool, representing decision makers in the implementation of AI in radiology. Most participants had practiced for more than 10 years; however, most radiologists and radiographers had been practitioners for less than 10 years, as demonstrated in Figure 1.
Radiographers use AI in radiology more frequently than radiologists, as shown in Figure 2. There was no inquiry on the use of AI in radiology among the other cadres, since the hypothetical use case was an explicit radiological skill, a competence reserved for radiologists and radiographers.
All participants were either medical doctors or radiographers with additional non-clinical areas of practice in policy/regulation, digital health, academics/research or administration/management. Digital health and policy/regulation practitioners were the least represented cadres, as shown in Figure 3.
Iteration of the tools
Domain experts ranked the 66 items on the original tool as “high” for the highest preference, “moderate” for moderate preference and “low” for the lowest preference (Appendix 1). The 66 items were extracted from the RADAR rubric and the MSC through an unstructured elicitation technique involving the authors, radiographers, radiologists and AI software developers supporting an ongoing AI-assisted radiology research project. Unstructured elicitation was through document research and brainstorming.
Domain experts expressed that the hypothetical AI application is of moderate to high complexity and recommended a 3-point Likert scale for ease of use as opposed to 5- or 7-point Likert scales. The first iteration eliminated items, with a percent inter-rater agreement of 0.94. The highest dissent was on appraisal of long-term stability, implied software autonomy such as diagnostic/prognostic capacity, and software-facing considerations such as software model accuracy (Appendix 1). The second iteration on the remaining 40 items confirmed elimination of the software-facing considerations, which the respondents felt lacked objectivity and are best scored by IT teams or software developers, with a percent inter-rater agreement of 0.98 (Appendix 1). The final tool with 35 items was shared for respondent validation with the domain experts before it was shared with the decision makers.
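Percent inter-rater agreement can be computed as the average pairwise agreement across raters. A minimal sketch with hypothetical ratings follows; the 0.94 and 0.98 figures above come from the study’s own data, not this example:

```python
from itertools import combinations

def percent_agreement(ratings):
    """Average pairwise percent agreement across raters.
    `ratings`: one list of item-level ratings per rater."""
    pairs = list(combinations(ratings, 2))
    agree = sum(
        sum(a == b for a, b in zip(r1, r2)) / len(r1)
        for r1, r2 in pairs
    )
    return agree / len(pairs)

# Hypothetical: three domain experts rating five items
expert_1 = ["high", "high", "moderate", "low", "high"]
expert_2 = ["high", "high", "moderate", "low", "high"]
expert_3 = ["high", "moderate", "moderate", "low", "high"]

print(round(percent_agreement([expert_1, expert_2, expert_3]), 2))  # → 0.87
```

Unlike chance-corrected statistics such as Fleiss’ kappa, raw percent agreement can be inflated when one category dominates, which is worth noting when interpreting the >0.9 values reported.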
MCDA
MSC as an evaluation criterion was ranked by decision makers against an ordinal scale of high, moderate and low. The majority of the decision makers preferred items from the third iteration. The highest indication for AI was clinical benefit (47; 87%) and clinical risk management (46; 85%). Fairness (33; 61%) had the least consideration as a bioethical principle and decision makers considered peer recommendation moderately (22; 41%) in the implementation of AI in radiology.
Evaluation criteria proposed by the RADAR rubric were also ranked by decision makers against an ordinal scale, and the majority of the decision makers ranked the items as high (Figure 4). The highest decision maker preferences were for explainability of the results as a technical capacity (45; 83%), ability to detect lesions as a diagnostic capacity (43; 80%), and impact on patient outcomes as a contribution to patient management (41; 76%). Most decision makers assigned moderate scores to usefulness of AI results (21; 39%) and actionable insights from AI results (22; 41%). Affordability received the most low rankings among the cost-effectiveness items for AI implementation (14; 26%).
Deductive items on the tools were not aggregated to preserve the construct of the items so that no item is lost in translation. The overall performance of all items and domains is presented in Figures 4,5.
Reliability of the tool
Face validity was assessed using voting consensus and qualitative feedback. This was supported by respondent validation during the iterations on the tool. Construct validity was assessed using input from face validity and statistical parameter validation. Statistical parameter validation of the tool was done to check the reliability of the tool. Inter-rater agreement was used to assess the degree of agreement among the domain experts during the iteration process. The domain experts consistently had >0.9 agreement. Cronbach’s alpha (α) was used to measure the internal consistency of the constructs within each domain. α takes a value between 0 and 1, where a value <0.5 is poor, 0.5–0.7 is moderate and >0.7 is acceptable. Where internal consistency was <0.7, confirmatory factor analysis (CFA) of the items was done with data reduction and factor rotation to improve the internal consistency of the domain. The F test was used to calculate the statistical significance of the inter-rater reliability.
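Cronbach’s α is computed from the item variances and the variance of the respondents’ total scores: α = k/(k−1) · (1 − Σ var(itemᵢ)/var(total)). A minimal pure-Python sketch with hypothetical 3-point Likert data (the study’s own analysis was run in SPSS v25):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for one domain.
    `items`: one list per item, each holding one score per respondent."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    sum_item_vars = sum(var(col) for col in items)
    totals = [sum(col[i] for col in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum_item_vars / var(totals))

# Hypothetical four-item domain scored by six respondents
# (1 = low, 2 = moderate, 3 = high)
domain = [
    [3, 3, 2, 3, 1, 3],
    [3, 2, 2, 3, 1, 3],
    [2, 3, 2, 3, 1, 2],
    [3, 3, 1, 3, 2, 3],
]
print(round(cronbach_alpha(domain), 2))  # → 0.89 (acceptable, >0.7)
```

With only three response categories per item, α estimates carry wide confidence intervals at this sample size, which is consistent with the cautionary interpretation noted below.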
Cronbach’s alpha was calculated for the MSC and RADAR rubric domains. Internal consistency was predominantly acceptable (α>0.7), except for the “indication” domain (α≤0.7). “Indication” in the practical MSC had moderate reliability (α=0.64; 95% confidence interval: 0.46–0.77), as shown in Table 1. Table 2 shows acceptable internal consistency for all the RADAR rubric domains. To improve internal consistency of the MSC domain, data reduction of the line item “business as a competitive advantage” was done based on CFA. Furthermore, factor rotation of the line items “academic & research” and “cost effectiveness of the current practice” was done to improve interpretability of the line items aligned to the final HTA tool, as shown in Tables 3,4. The F test confirmed statistically significant inter-rater reliability in the “indication” domain, F (53, 212) = 2.76, P<0.001, rejecting the null hypothesis of no difference, as shown in Table 1.
Table 1
| MSC domains | Cronbach’s alpha | 95% CI lower limit | 95% CI upper limit | F value | df1 | df2 | P value |
|---|---|---|---|---|---|---|---|
| Indication | 0.638 | 0.4594 | 0.7709 | 2.7637 | 53 | 212 | <0.001 |
| Ethics | 0.710 | 0.5582 | 0.8196 | 3.4535 | 52 | 156 | <0.001 |
| Implementation | 0.784 | 0.6870 | 0.8601 | 4.6223 | 53 | 530 | <0.001 |
CI, confidence interval; k, number of items; MSC, multi-society practical considerations; n, sample size.
Table 2
| RADAR rubric domains | Cronbach’s alpha | 95% CI lower limit | 95% CI upper limit | F value | df1 | df2 | P value |
|---|---|---|---|---|---|---|---|
| Technical properties? | 0.745 | 0.5982 | 0.8446 | 3.9267 | 52 | 104 | <0.001 |
| Diagnostic accuracy/threshold in lesion detection? | 0.777 | 0.6501 | 0.8634 | 4.4900 | 53 | 106 | <0.001 |
| Preferred capacity in diagnostic thinking? | 0.794 | 0.6756 | 0.8734 | 4.8429 | 53 | 106 | <0.001 |
| Contribution of AI radiology to patient management? | 0.738 | 0.5882 | 0.8393 | 3.8149 | 53 | 106 | <0.001 |
| Rank cost effectiveness on implementation? | 0.780 | 0.6543 | 0.8651 | 4.5452 | 53 | 106 | <0.001 |
AI, artificial intelligence; CI, confidence interval; k, number of items; n, sample size; RADAR, radiology AI deployment and assessment rubric.
Table 3
| HTA core domains | HTA tool for AI in radiology | High, n [%] | Moderate, n [%] | Low, n [%] | P value |
|---|---|---|---|---|---|
| Health problem & current use of technology | Clinical benefit | 47 [87] | 3 [6] | 4 [7] | 0.009 |
| | Clinical risk management | 46 [85] | 5 [9] | 3 [6] | 0.009 |
| | Superiority to current practice | 31 [57] | 16 [30] | 7 [13] | <0.001 |
| | Adaptability of the AI solution to the local institution | 29 [54] | 14 [26] | 11 [20] | <0.001 |
| Description & technical characteristics | Frequency of AI detecting the lesion correctly | 44 [81] | 9 [17] | 1 [2] | 0.002 |
| | Ability of AI in detecting lesions correctly | 39 [72] | 14 [26] | 1 [2] | 0.005 |
| | Explainability of results from the AI software | 45 [83] | 8 [15] | 1 [2] | <0.001 |
| | Multi-dimensionality in lesion detection | 39 [72] | 9 [17] | 6 [11] | <0.001 |
| | Ease of using the AI in the diagnostic workflow | 41 [76] | 12 [22] | 1 [2] | <0.001 |
| | Willingness to use AI in actual radiology practice | 32 [60] | 10 [19] | 12 [22] | <0.001 |
| Safety | Recommendations from peer review publication | 21 [39] | 22 [41] | 11 [20] | <0.001 |
| | Ability to reduce diagnostic uncertainty | 34 [63] | 13 [24] | 7 [13] | <0.001 |
| Clinical effectiveness | Clinical validation of the AI software by radiologists | 42 [78] | 7 [13] | 5 [9] | 0.001 |
| | Ability to detect lesions accurately | 45 [83] | 8 [15] | 1 [2] | 0.004 |
| | Reliability of AI in detecting lesions accurately | 43 [80] | 4 [7] | 7 [13] | 0.002 |
| | Utility of the AI in clinical decision making | 33 [61] | 12 [22] | 9 [17] | <0.001 |
AI, artificial intelligence; HTA, health technology assessment.
Table 4
| HTA core domains | HTA items for AI in radiology | High, n [%] | Moderate, n [%] | Low, n [%] | P value |
|---|---|---|---|---|---|
| Cost & economic analysis | Affordability of AI implementation | 33 [61] | 7 [13] | 14 [26] | <0.001 |
| | Cost effectiveness of AI implementation | 33 [61] | 17 [31] | 4 [7] | <0.001 |
| | Adaptability of the AI solution to the local institution | 29 [54] | 14 [26] | 11 [20] | <0.001 |
| | Installation cost | 33 [61] | 15 [28] | 6 [11] | <0.001 |
| Ethical analysis | Doing good (beneficence) | 46 [85] | 7 [13] | 1 [2] | 0.007 |
| | Doing no harm (nonmaleficence) | 46 [85] | 5 [9] | 3 [6] | 0.008 |
| | Patient freedom to choose (autonomy) | 26 [48] | 19 [35] | 9 [17] | <0.001 |
| | Fairness (justice) | 33 [61] | 4 [7] | 17 [31] | <0.001 |
| Organizational aspects | Ease of hardware integration | 33 [61] | 16 [30] | 5 [9] | <0.001 |
| | Ease of software integration | 39 [72] | 12 [22] | 3 [6] | <0.001 |
| | Ease of using the AI in the diagnostic workflow | 41 [76] | 12 [22] | 1 [2] | <0.001 |
| | Willingness to use AI in actual radiology practice | 32 [59] | 10 [19] | 12 [22] | <0.001 |
| | Usefulness of the AI results | 30 [56] | 21 [39] | 3 [6] | <0.001 |
| | Ability to integrate AI in the radiology diagnostic workflow | 39 [72] | 7 [13] | 8 [15] | <0.001 |
| Patient & social aspects | Actionable insights from the AI results | 31 [57] | 22 [41] | 1 [2] | <0.001 |
| | Influence on therapeutic interventions | 30 [56] | 20 [37] | 4 [7] | <0.001 |
| | Impact on patient outcomes | 41 [76] | 9 [17] | 4 [7] | 0.001 |
| Legal aspects | Regulatory approval of the AI software | 33 [61] | 15 [28] | 6 [11] | <0.001 |
| New aspect | Academic and research | 32 [59] | 16 [30] | 6 [11] | <0.001 |
AI, artificial intelligence; HTA, health technology assessment.
Acceptable Cronbach’s α>0.75 was observed for diagnostic accuracy, preferred capacity in diagnostic thinking and cost-effective implementation of AI in radiology, with F tests showing high values and significant P values (<0.001), rejecting the null hypothesis of no inter-rater reliability, as shown in Table 2. All domains in the RADAR rubric had acceptable α>0.7, with cautionary interpretation inferred from the confidence intervals.
HTA tool for AI in radiology
Statistical significance was referenced to the median as the measure of central tendency, since the data were right-skewed and a reduced outlier effect was desired. The non-parametric Wilcoxon signed rank test was used to test statistical significance of all items in the final tool against the preferred choice “high” on the 3-point ordinal scale. A P value of less than 0.05 indicated statistical significance. All preferred choices had P values below this threshold, as shown in Tables 3,4.
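A minimal pure-Python sketch of this one-sample test, using the normal approximation with average ranks for ties (without a tie correction of the variance) and hypothetical response counts; the study itself used SPSS v25:

```python
from math import sqrt, erf

def wilcoxon_signed_rank(scores, hypothesized_median):
    """One-sample Wilcoxon signed-rank test against a hypothesized median.
    Normal approximation; zero differences dropped; average ranks for
    ties. Returns (W+, two-sided P value)."""
    diffs = [s - hypothesized_median for s in scores
             if s != hypothesized_median]
    n = len(diffs)
    order = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:  # assign average ranks within each tie group
        j = i
        while j + 1 < n and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1
        i = j + 1
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    mean = n * (n + 1) / 4
    sd = sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mean) / sd
    p = 1 - erf(abs(z) / sqrt(2))  # two-sided P value
    return w_plus, p

# Hypothetical item: 40 "high" (3), 10 "moderate" (2), 4 "low" (1),
# tested against the preferred choice "high" (3)
scores = [3] * 40 + [2] * 10 + [1] * 4
w, p = wilcoxon_signed_rank(scores, 3)
print(w, round(p, 4))
```

Because every non-“high” response lies below the hypothesized median, W+ is 0 and the P value falls well under 0.05, mirroring the pattern in Tables 3,4.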
Discussion
Radiology is a highly tech-enabled and data-rich medical field which provides fertile ground for AI innovation in healthcare. Limited literacy on AI technology in radiology and the fear of replacement by AI (8,16-19) explain the significant dissent on long-term plans and the knowledge gap on the frequency of AI use in radiology (Figure 1). Mwaniki et al. highlighted that almost 40% of radiologists are not willing to train AI/machine learning models in radiology, which explains the low participation by radiologists and radiographers in the study (8) (Figure 2). Kawooya et al. highlight meager technological infrastructure as a barrier to training and deployment of AI technologies (6).
Hua et al. posit that acceptability of AI in radiology is driven by AI literacy in value creation and technological adoption (26). This study provides a versatile sociotechnical assessment tool which promotes knowledge management as a driver of AI acceptability in radiology within Kenya and similar low-resource settings. The preferred choices and dissent items indicate that clinical stakeholders prefer clinical/patient-facing attributes over the technical AI and economic attributes, which are important considerations in the development and deployment of AI in radiology. This affects AI sustainability, hence the need for conversations between developers, investors, patients and healthcare practitioners to build stakeholder trust for strategic investment in AI in radiology (22,26).
Brady et al. invite decision makers to evaluate the appropriation of AI in radiology through collaborative action research (21). Boverhof et al., in the value assessment of AI in radiology, proffer clinical assessment through cross-sectional, longitudinal, randomized clinical and in silico trials (22). Boverhof’s approach muffles user-centeredness in the sociotechnical design. Rapid technological evolution is sustainably understood using the action research framework, which favors pragmatic philosophy in co-creation (27). Multiple perspectives from stakeholders enrich standards, credibility and comprehensiveness of inputs in the design. Multi-criteria decision analysis frames rational individual and group decision making anchored on objectivity (12). Gongora-Salazar et al., in a scoping review of HTA in healthcare, cite priority setting during decision making as a common application of MCDA (11), supported by statistical parameter validation (25).
Farah et al. highlight the lack of explicit tools for the HTA assessment of AI in radiology when interrogating the suitability of existing tools (13). Asia and Africa, including Kenya, lack explicit HTA tools (6,28,29). Decision makers and domain experts support stakeholder validation of the HTA framework for AI in radiology within Kenya (Appendix 1). External validation in regional contexts and in clinical settings will promote clinical validation and contextual modification supportive of AI inclusivity. Statistical parameter validation of the tool confirms scientific rigor, encouraging adoption and further iteration for deeper insight into the AI development lifecycle, including software development, business case development, policy recommendation and clinical implementation (23,25,26).
Literature from Kenya, Ghana, Nigeria and South Africa (8,16-18) highlights adoption inertia of AI in radiology due to a lack of AI knowledge. Academics and research, as a new aspect in the comprehensive HTA tool for radiology AI in Kenya, underscores the relevance of knowledge management of AI in radiology, which remains meager at best in Kenya and in Africa (8,16-19). Bounded awareness and bounded rationality between software developers, healthcare managers and radiologists corrupt the clinical and economic value delivery of AI in healthcare when stakeholders operate in silos (21,26). This research frames relevant domains that can inform AI strategy in radiology, addressing labor dynamics, technological sustainability and responsible AI (14).
Kawooya et al. propose collaboration and AI technology to improve safety in radiology within Africa (6). Safety as a deductive theme within the comprehensive HTA tool for radiology AI in Kenya draws line items from both the MSC and the RADAR rubric (Appendix 1). A positional statement from Asia supports Kawooya’s proposition on the adoption of safe AI as guided by bioethical principles (28). The HTA framework for radiology AI in Kenya provisions for the four common bioethical principles in medical practice (Appendix 1). This tool standardizes communication in favor of a collaborative framework responsive to Kawooya et al., Mwaniki et al., Antwi et al. and Wee et al. (6,8,19,28).
Europe and America conduct HTA through agencies and frameworks iterated in their own contexts. Leveraging existing research reduces research waste; however, where an HTA approach is not explicit, novelty is warranted. AI in healthcare introduces new intricacies into HTA as a governance lever beyond the technical development of software as medical devices and clinical implementation (13). Value delivery is anchored on strong foundational principles such as the HTA Core Model 3.0 in the holistic assessment of AI in radiology (10,13). The comprehensive HTA tool for radiology AI in Kenya proposes utility value points in low-resource setups, in the absence of robust e-governance structures in AI (14,15).
Value appropriation of the utility value points was delegated to the strategic decision-maker apex and articulated using MCDA. MCDA provides analytical versatility on complexity, skill, technology, modelling and iterations (11,12). A 2020 systematic literature review on the dominance of MCDA in healthcare complements our study on its application in priority setting during decision making (25). Our study confirms high priority on clinical/patient-facing attributes with low priority on cost and peer recommendation attributes. Potential bounded awareness of AI-related concepts, confirmed by Mwaniki et al., explains this divergence (8). Moreover, divergence in scoring and weighting in the systematic literature review was predominantly through modelling and linear regression techniques, as opposed to the expert opinions and Likert scales in this study. Our approach favors a small local strategic decision-making apex.
Generically, AI in radiology is a rapidly developing, value-laden technology supporting patient safety, operational efficiency and professional innovation. This warrants user acceptability by decision makers, demonstrated through its clinical and economic value on multiple attributes against local and international standards. This study provides a decision-making framework in low-resource setups for decision makers who influence the deployment of AI in radiology across healthcare management/clinical practice, digital health, policy/regulation, academics/research and radiology clinical practice.
Technically, the attributes of the tool support AI literacy, shared communication and credibility in rational, group decision making. Increased AI literacy lowers barriers to AI adoption once AI tools are perceived as enablers rather than competitors. Multidisciplinary collaboration further enriches AI literacy, informing sound policy recommendations, evidence-based iteration, and structured, systematic investment in AI tools and technology in healthcare.
Globally, AI favors business re-engineering, which confers competitive advantage on radiology as a discipline as well as on Kenya as a geographical location. This reduces the opportunity cost of AI adoption inertia by creating demand for AI tools and technology in healthcare. Systematic and structured development and deployment of AI in radiology, nested within relevant policies, yields digital commodities that can be developed locally and exported regionally, beyond the traditional competencies and technologies of radiology. Such a favorable innovative milieu, within and beyond healthcare, attracts resources for learning and collaboration.
Limitations of the study
Multistakeholder collaboration and multi-criteria attributes support inclusivity. However, this study excluded non-English-speaking participants, a potential source of language bias, which matters given our aspiration for regional dissemination of the tool. The HTA framework will need local translation in non-English-speaking contexts to ensure broader accessibility and contextual relevance.
Responses in this study were captured through a fixed-scoring HTA tool for AI deployment in radiology. Deeper insights could be explored through open-ended responses in qualitative in-depth interviews, which would enrich the robustness and clinical translation of the tool.
This study was based on a hypothetical HTA application of lesion-detection software. This provides a clinical and ethical safety net for consensus building and a structured analytical/decision-support framework. Clinical translation of the tool is best pursued through comparative studies, whether longitudinal, cross-sectional or randomized controlled trials, validating the tool within a radiology diagnostic workflow (22).
Conclusions
The newly developed HTA tool offers standardized evaluation of AI applications in radiology, supported by preliminary statistical validation. While stakeholder feedback confirms its relevance and adaptability in Kenya, additional clinical validation and in-depth qualitative interviews are recommended. These steps would yield richer insights while improving the tool’s accessibility and regional customization for broader use.
Acknowledgments
None.
Footnote
Data Sharing Statement: Available at https://jmai.amegroups.com/article/view/10.21037/jmai-25-48/dss
Peer Review File: Available at https://jmai.amegroups.com/article/view/10.21037/jmai-25-48/prf
Funding: None.
Conflicts of Interest: All authors have completed the ICMJE uniform disclosure form (available at https://jmai.amegroups.com/article/view/10.21037/jmai-25-48/coif). The authors have no conflicts of interest to declare.
Ethical Statement: The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. The study was conducted in accordance with the Declaration of Helsinki and its subsequent amendments. Ethical permission was granted by the Strathmore University-Institutional Science Research and Ethics, reference numbers SU-ISERC2327/24 and licensed by the National Commission for Science Technology and Innovation, license No. NACOSTI/P/24/38065. Written informed consent was granted by each participant with relevant data handling procedures, and confidentiality was preserved through rights control and anonymity of the participants.
Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the non-commercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.
References
- Huebner C, Flessa S. Strategic Management in Healthcare: A Call for Long-Term and Systems-Thinking in an Uncertain System. Int J Environ Res Public Health 2022;19:8617. [Crossref] [PubMed]
- Yordanova MZ. The Applications of Artificial Intelligence in Radiology: Opportunities and Challenges. Eur J Med Health Sci 2024;6:11-4.
- Alowais SA, Alghamdi SS, Alsuhebany N, et al. Revolutionizing healthcare: the role of artificial intelligence in clinical practice. BMC Med Educ 2023;23:689. [Crossref] [PubMed]
- Nabrawi E, Alanazi AT. Imaging in Healthcare: A Glance at the Present and a Glimpse Into the Future. Cureus 2023;15:e36111. [Crossref] [PubMed]
- Mwanza J, Telukdarie A, Igusa T. Impact of industry 4.0 on healthcare systems of low- and middle- income countries: a systematic review. Health Technol (Berl) 2023;13:35-52. [Crossref] [PubMed]
- Kawooya MG, Kisembo HN, Remedios D, et al. An Africa point of view on quality and safety in imaging. Insights Imaging 2022;13:58. [Crossref] [PubMed]
- Musa SM, Haruna UA, Manirambona E, et al. Paucity of Health Data in Africa: An Obstacle to Digital Health Implementation and Evidence-Based Practice. Public Health Rev 2023;44:1605821. [Crossref] [PubMed]
- Mwaniki EK, Onyambu CK, Rodrigues JC. Artificial Intelligence in Diagnostic Radiology: Knowledge, Attitude and Practice of Radiologists and Radiology Residents in Kenya. Int J Curr Microbiol App Sci 2024;13:59-72.
- Evidence and lessons on health technology assessment and health benefit packages in the WHO African Region. Brazzaville: WHO Regional Office for Africa; 2023.
- Farič N, Hinder S, Williams R, et al. Early Experiences of Integrating an Artificial Intelligence-Based Diagnostic Decision Support System into Radiology Settings: A Qualitative Study. Stud Health Technol Inform 2023;309:240-1. [Crossref] [PubMed]
- Gongora-Salazar P, Rocks S, Fahr P, et al. The Use of Multicriteria Decision Analysis to Support Decision Making in Healthcare: An Updated Systematic Literature Review. Value Health 2023;26:780-90. [Crossref] [PubMed]
- Government Analysis Function. An introductory guide to multicriteria decision analysis. 2024. Available online: https://analysisfunction.civilservice.gov.uk/policy-store/an-introductory-guide-to-mcda/
- Farah L, Borget I, Martelli N, et al. Suitability of the Current Health Technology Assessment of Innovative Artificial Intelligence-Based Medical Devices: Scoping Literature Review. J Med Internet Res 2024;26:e51514. [Crossref] [PubMed]
- Government of Kenya, Ministry of Information, Communication and the Digital Economy. Kenya Artificial Intelligence Strategy 2025-2030. 2025. Available online: https://ict.go.ke/sites/default/files/2025-03/Kenya%20AI%20Strategy%202025%20-%202030.pdf
- African Union. Continental Artificial Intelligence Strategy. Harnessing AI for Africa’s development and Prosperity, July 2024.
- Edzie EKM, Dzefi-Tettey K, Asemah AR, et al. Perspectives of radiologists in Ghana about the emerging role of artificial intelligence in radiology. Heliyon 2023;9:e15558. [Crossref] [PubMed]
- Nciki AI, Hlabangana LT. Perceptions and attitudes towards AI among trainee and qualified radiologists at selected South African training hospitals. SA J Radiol 2025;29:3026. [Crossref] [PubMed]
- Adetinuke AJ, Smart AE, Atalab MO. Knowledge, attitude, and perception of radiologists about artificial intelligence in Nigeria. West African Journal of Radiology 2022;29:112-7.
- Antwi WK, Akudjedu TN, Botwe BO. Artificial intelligence in medical imaging practice in Africa: a qualitative content analysis study of radiographers’ perspectives. Insights Imaging 2021;12:80. [Crossref] [PubMed]
- Pharmacy and Poisons Board of Kenya. Available online: https://web.pharmacyboardkenya.org/
- Brady AP, Allen B, Chong J, et al. Developing, purchasing, implementing and monitoring AI tools in radiology: Practical considerations. A multi-society statement from the ACR, CAR, ESR, RANZCR & RSNA. J Med Imaging Radiat Oncol 2024;68:7-26. [Crossref] [PubMed]
- Boverhof BJ, Redekop WK, Bos D, et al. Radiology AI Deployment and Assessment Rubric (RADAR) to bring value-based AI into radiological practice. Insights Imaging 2024;15:34. [Crossref] [PubMed]
- Kristensen FB, Lampe K, Wild C, et al. The HTA Core Model(®)-10 Years of Developing an International Framework to Share Multidimensional Value Assessment. Value Health 2017;20:244-50. [Crossref] [PubMed]
- Gathuru LM, Elias GDO, Pitcher RD. Analysis of registered radiological equipment in Kenya. Pan Afr Med J 2021;40:205. [Crossref] [PubMed]
- Martelli N, Hansen P, van den Brink H, et al. Combining multi-criteria decision analysis and mini-health technology assessment: A funding decision-support tool for medical devices in a university hospital setting. J Biomed Inform 2016;59:201-8. [Crossref] [PubMed]
- Hua D, Petrina N, Young N, et al. Understanding the factors influencing acceptability of AI in medical imaging domains among healthcare professionals: A scoping review. Artif Intell Med 2024;147:102698. [Crossref] [PubMed]
- Shani AB, Coghlan D. Action research in business and management: A reflective review. Action Research 2021;19:518-41.
- Wee NK, Git KA, Lee WJ, et al. Position Statements of the Emerging Trends Committee of the Asian Oceanian Society of Radiology on the Adoption and Implementation of Artificial Intelligence for Radiology. Korean J Radiol 2024;25:603-12. [Crossref] [PubMed]
- Ferizovik N, Rtveladze. HTA255 Recommendations on the Use of Artificial Intelligence and Machine Learning in Systematic Literature Reviews Submitted as Part of the Evidence Package in Health Technology Assessment. Value Health 2022;25:346.
Cite this article as: Miima M, Olukuru J, Onyango J. Health technology assessment framework of artificial intelligence in radiology—perspectives from strategic decision makers. J Med Artif Intell 2026;9:12.