Disability and algorithmic fairness in healthcare: a narrative review
Introduction
Background
The field of artificial intelligence (AI) has witnessed rapid development in recent years, pervading various areas including healthcare and medicine (1,2). It is estimated that this sector will grow by up to 41.4% between 2020 and 2027, reaching $51.3 billion (2). As such, AI is expected to ease the workload of many healthcare workers and to simplify workflows in order to improve healthcare quality, accessibility, and equity (3). Indeed, it is argued that AI-driven systems can surpass humans in their abilities (1,3-5).
AI describes algorithms that are able to learn and abstract patterns based on a given input (6) and subsumes other areas such as machine learning (ML), which in turn involves artificial neural networks (NN) as well as deep learning (DL) and natural language processing (NLP) (2,7). In healthcare, AI/ML models find use in diagnosis and prediction (8-11), medical image processing, visualization and virtual reality (5,9,10,12), therapeutic (10) or surgical robotics (5,12), automatic processing of electronic medical records (EMR), clinical notes or administrative workflows (5,9,12), chatbots and conversational AI (10,12), drug discovery and research (5,9,10), clinical decision support systems (2,9,13), or patient and health monitoring devices (2,5,9-11), to name a few. It is also employed in assistive technology (14), such as smart homes and living assistants (8), assistive robots, cognitive assistants or social simulators (5), or assistance with communication and treatment plans (15), and may thus extend deep into the private lives of people with disabilities (PWD).
Rationale and knowledge gap
The involvement of AI in healthcare and medical settings has often had the opposite effect, namely exacerbating health inequities, especially among minority groups (3). While many papers have discussed various biases and fairness issues with regard to ethnic minorities (16,17) or gender (18-20), there is comparatively little research that addresses algorithmic unfairness towards PWD in healthcare (1,14,21). A lack of AI fairness further exacerbates inequalities in healthcare, creates more barriers and results in more harm towards vulnerable groups such as disabled persons (14), who comprise more than a billion people in the world (i.e., 15% of the population) (22-24).
Fairness and bias can be interpreted along legal, ethical or technical lines (25), which are jointly and broadly referred to under the umbrella term “AI fairness” (26). There is generally no consensus on what constitutes (un)fairness (27), beyond avoiding “unjust, discriminatory, or disparate” outcomes (28). Nonetheless, technical definitions of fairness are predominant (4). How terms such as fairness, bias, justice or discrimination are defined influences how we develop measurements for them and assess AI systems. In a recent scoping review by El Morr et al. (29) on disability and AI, it was found that none of the studies in which AI models were built quantified or discussed AI bias.
El Morr et al. (29) further discovered that AI for disability is assessed from a technological, medical perspective, ignoring the greater picture formed by the social, economic, and ethical barriers that PWD face. In a medical model, disability is misconceptualized as a “deficit” inherent to the individual that needs “correction” (30). Because this model projects the issues that PWD face onto the individuals themselves and fails to account for the societal and systemic barriers placed upon them, the social and human rights models were introduced. These define disability as a condition arising from societal barriers and affirm that all persons deserve dignity and human rights (24,30), rendering a complex, dynamic and multi-faceted view of disability. Paccoud et al. (11) have also identified various socio-ethical issues connected to medical devices, such as lack of accessibility, lack of clinical testing, algorithmic fairness issues and data security problems. Overall, there is little research on the issues and solutions surrounding algorithmic fairness in healthcare and its effects on PWD from both the socio-ethical and the technological side.
Objective
Based on these preliminaries, the guiding question for this narrative review is: What does algorithmic bias in general healthcare applications for persons with disabilities involve and how can it be addressed? This will result in a synthesis of current issues and their potential solutions gathered from existing literature. This article is presented in accordance with the Narrative Review reporting checklist (available at https://jmai.amegroups.com/article/view/10.21037/jmai-24-415/rc).
Methods
A detailed description of the selection criteria and the literature search process is outlined in Table 1. Three databases were chosen: PubMed Central, Springer Link and Science Direct. The results of a preliminary search on Google Scholar were also included, which involved searching the first few result pages with the same research query and conditions. Using search keywords and phrases similar to those of El Morr et al. (29) and Paccoud et al. (11), the search queries for this review were:
- PubMed Central, Springer Link, Google Scholar: (“fair*” OR “discrimin*” OR “bias”) AND (“algorithm*” OR “machine learning” OR “data”) AND (“disability” OR “impairment”) AND (“healthcare”).
- Science Direct: (“fair” OR “discriminatory” OR “bias”) AND (“algorithm” OR “machine learning” OR “data”) AND (“disability” OR “impairment”) AND (“healthcare”).
Table 1
| Items | Specification |
| --- | --- |
| Date of search | 27/05/2024–02/09/2024; 01/01/2025–10/01/2025 |
| Databases and other sources searched | PMC, Springer Link, Google Scholar, Science Direct |
| Search terms used | PMC, Springer Link, Google Scholar: (“fair*” OR “discrimin*” OR “bias”) AND (“algorithm*” OR “machine learning” OR “data”) AND (“disability” OR “impairment”) AND (“healthcare”). Science Direct: (“fair” OR “discriminatory” OR “bias”) AND (“algorithm” OR “machine learning” OR “data”) AND (“disability” OR “impairment”) AND (“healthcare”) |
| Timeframe | January 2019 to December 2024 |
| Inclusion criteria | English language; minimum of three out of four search subparts (i.e., fairness, AI/data, disability, healthcare) present in title, abstract or to a great extent in the full text; publication format: articles, review articles, chapters; reputably published and peer-reviewed |
| Exclusion criteria | Papers presenting AI/ML models/devices in healthcare while ignoring fairness issues; grey literature; out of scope of the research question; PMC: only the first 3,000 results (sorted by relevance) were screened |
| Selection process | The study was conducted by the author only and independently; the selection process was based on the inclusion/exclusion criteria. Issues and solutions were collected from the papers and synthesized into an overview |
AI, artificial intelligence; ML, machine learning; PMC, PubMed Central.
This search query is not intended to be exhaustive, but it proved adequate for identifying a wide range of relevant papers. The search results were further filtered in accordance with the inclusion and exclusion criteria detailed in Table 1. Whenever applicable in the databases, the results were filtered by time (January 2019 to December 2024), language (English), type of publication (article, review article, book chapter), and subject area (Medicine & public health, computer science, science, humanities and social sciences, multidisciplinary, public health, life sciences, law, philosophy, and social sciences were all included). Since PubMed Central returned around 100,000 results sorted by relevance, only the first 3,000 instances were considered, after finding that the results on subsequent pages lacked relevance. The other databases—Springer Link and Science Direct—were searched in full. Papers that solely presented AI/ML models or systems—be it for the detection or prediction of specific illnesses or the showcasing of assistive technologies—were excluded if they predominantly ignored (i.e., devoted less than a paragraph to) issues of performance, fairness or bias in healthcare and focused on the development of the tools instead. Papers were selected based on the presence of at least three of the four key themes of the search queries: AI & Data, Fairness & Law & Ethics, Healthcare & Medicine, Disability & Diversity. They had to cover these key themes in the title, abstract, introduction, or conclusion. In cases of doubt, papers were downloaded and inspected further, as a key theme had to be covered in at least a paragraph if it was not a central topic of the paper. Likewise, a paper was considered to cover the topic of disability if it discussed the corresponding key theme in at least a paragraph. This paper adopts the social, human rights model for the definition of disability (24,30), and uses a generalized lens to refer to all PWD collectively. The resulting pool of papers was screened for issues and solutions regarding the fairness of AI in healthcare and its effects on PWD, which were then synthesized. For a full list of the final papers, please see Appendix 1.
Reviewing algorithmic fairness for PWD in healthcare
General trends
The vast majority of papers (n=49, 74%) do not cover disability to a great extent (i.e., do not devote at least a paragraph to the topic). Only a small subset of papers (n=17, 26%) was concerned with PWD. The results highlight a great lack of research concerning the specific issues PWD face with regard to AI-driven systems in healthcare settings and how these could be improved. Papers discussing algorithmic bias in healthcare without discussing PWD in particular are still relevant to portray a more comprehensive picture and were therefore included in the synthesis of issues and solutions. PWD are affected by issues that reach across all socio-demographic groups, in addition to those that arise specifically for them.
While many problems arise from algorithmic and data-driven issues, it was evident that these are also embedded within algorithm-external factors that contribute to inequity, i.e., socio-ethical factors. The resulting split into socio-ethical (Table 2) and algorithmic, data-driven topics (Table 3) should provide a better overview for future researchers, but is not meant to suggest that these topics are disconnected. In fact, most biases interact with one another; for example, privacy and legal issues influence the availability of data, which could be seen as a hybrid or interdependent issue. Likewise, measures of fairness and definitions of disability may affect the design, data and model components in algorithm development. Rather than existing in isolation, the biases amplify and multiply once they are introduced into the development cycle of AI models. Thus, the following tables should not be seen as strict classifications of the issues and solutions, but rather as overarching trends.
Table 2
Category | Problems | Solutions |
---|---|---|
Fairness definition & measure | • Various definitions of fairness (mostly statistical) leading to different fairness outcomes and lack of methodology in testing for bias; conceptually and non-technically identify greater social and systemic issues | • Context-dependent and socio-ethical definitions of fairness and its measures; fine-grained test scenarios with expected outcomes developed with PWD; testing with a diverse group of PWD and expected statistical “outliers” or other datasets; individual fairness before group fairness; mitigate greater social and systemic issues |
Regulations, laws & rights | • Lack of accountability and liability; lack of regulation of AI applications in healthcare; lack of data and privacy, identity protection and storage regulations (disability information inferable); lack of agency in and accessibility to healthcare or AI applications; financial burden of AI tools | • Clarification of responsibility and liability; regulatory oversight required; legal adjustments, policies, and frameworks needed for PWD; improved data security and privacy is needed; responsible open-data sharing initiatives, synthetic data, federated learning, added noise, generative, and blockchain methods; terms of use, and informed consent; improve accessibility to healthcare; funding options |
AI, artificial intelligence; PWD, people with disabilities.
Table 3
Category | Problems | Solutions |
---|---|---|
Design & usage | • Conceptualization of AI tools without considering needs or consequences for PWD; “one-model-fits-all” attitude towards PWD; exclusion of various stakeholders in conceptualization and lack of collaboration; AI as “black box” lacking transparency, practitioner understanding, or simplicity; human issues (maleficence, overreliance, misuse, etc.); trust issues; AI tool maintenance and monitoring: bias over time (data or concept drift), scalability, computational/energy cost, and rigorous testing | • PWD actively involved/PWD-centric approach to problem formulation, design, and development process; specialized systems for PWD; interdisciplinary research and collaboration; exploring new modelling frameworks; improving explainability and interpretability (e.g., explainable AI); educating practitioners/patients in AI use; continuously inspecting, rigorously testing, and adapting AI tools as part of a feedback loop; accounting for data growth/energy in design/infrastructure |
Training data | • Non-categorical, nuanced nature of disability data; non-standardized, incomplete, historical, or incorrect data from various sources; insufficient data; under- or misrepresentation of PWD before/after preprocessing; unanticipated proxies | • Narrative, context-rich data collections with natural language processing methods; meta-learning; shared (labelling) standards in healthcare; collect more data of PWD (specific to the model purpose) and create benchmark/diverse datasets; highlight gaps in data transparently; internet of medical things; employ techniques against bias (re-weighing, suppression, massaging, resampling, ensembling, cost functions, hyperparameter tuning, etc.) |
Algorithmic structure | • Model/algorithm structure can amplify or create bias; overfitting, catastrophic forgetting | • Adjusted learning algorithms, e.g., adapted loss functions, adversarial approaches; cautious input and output correction |
AI, artificial intelligence; PWD, people with disabilities.
Socio-ethical factors
Fairness definitions and measures
The socio-ethical factors broadly involve issues around fairness definitions and measures on the one hand and regulations, laws and rights on the other (Table 2). Problems with regard to fairness definitions and measures involve varying definitions of “fairness” (31-36). For an overview of different fairness definitions, see e.g., Verma and Rubin (37) or Starke et al. (32). The term “AI fairness” is often used as an umbrella term for related concepts such as (non-)discrimination, (in)justice, (in)equity, or (in)equality (26), which all encompass different aspects of fairness. Bennett and Keyes (21) argue that with regard to disability, “justice” is more appropriate in a clinical setting. Comparatively, Giovanola and Tiribelli (38) speak of an “equal right to justification”, and, according to Liu et al. (33), it is more appropriate to speak of “equity” in a healthcare context than “equality”. In addition, it is criticized that most studies define fairness, build measures, and formulate guidelines to check against bias from a purely technical or statistical perspective (31,32). This is also reflected in the various libraries that check for bias and model robustness [e.g., Javed et al. (9), FairMLHealth¹, Fairness Indicators (Google)²]. Fairness concepts often fail to unite multiple aspects such as ethical, human-centric or legal interpretations (25,31,32) into one model of fairness. While uniting all perspectives would be ideal, this may be difficult: according to the “impossibility theorem”, it is impossible to satisfy all aspects or criteria of fairness simultaneously (31), as they can be at odds with each other (39). Consequently, it is argued that fairness as a concept and its measures should be defined context-dependently (29,32,40) and multi-dimensionally (41), such that socio-ethical definitions are included as well (29,31,32,34). This should ensure that the algorithm does not create “unjust, discriminatory, or disparate consequences” (28) but rather enables equitable access, treatment and results for all patients, especially vulnerable ones such as PWD. In addition, when employing technical concepts of fairness, there are issues in implementing testing and measuring for bias from a non-technical, conceptual side (31,32,40,42). Testing methods such as binary splits of the data set (e.g., protected versus other groups) or applying group fairness principles (i.e., the same outcome across all groups) before individual fairness (i.e., similar outcomes for similar individuals) remain an issue for PWD (15,43). Group fairness is often the first solution in healthcare and public health settings due to its alignment with population health notions (33). It is, however, argued that individual fairness would be a better strategy for highly multi-faceted, variable and intersectional groups such as PWD (15,33,44). It is also more advisable to use the concept of “fairness through awareness”, rather than “fairness through unawareness”, as information on protected variables such as disability allows for measures against bias (44). Consequently, fine-grained and systematic means of testing are required (15), in addition to developing test scenarios and expected outcomes in collaboration with disabled people (15). Furthermore, fairness measures should be tested with a diverse group of PWD, especially with expected statistical “outliers” to the algorithm, or with multiple other data sets (1,15,29).
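To illustrate what a purely statistical, group-based notion of fairness looks like in practice, the following minimal Python sketch computes demographic parity and equalized-odds gaps over a binary protected attribute; the toy data, the binary “disability” flag and all variable names are illustrative assumptions made for this review and are not drawn from any of the cited studies.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    y_pred, g = np.asarray(y_pred), np.asarray(group, dtype=bool)
    return abs(y_pred[g].mean() - y_pred[~g].mean())

def equalized_odds_gap(y_true, y_pred, group):
    """Largest gap in true-positive or false-positive rate between two groups."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    g = np.asarray(group, dtype=bool)
    gaps = []
    for label in (1, 0):            # label 1 -> TPR gap, label 0 -> FPR gap
        mask = y_true == label
        gaps.append(abs(y_pred[mask & g].mean() - y_pred[mask & ~g].mean()))
    return max(gaps)

# Toy example: decisions of a hypothetical triage model with a binary,
# self-reported disability attribute (all values randomly generated).
rng = np.random.default_rng(0)
disability = rng.integers(0, 2, size=1000)   # 1 = person with disability
y_true = rng.integers(0, 2, size=1000)       # ground-truth need for referral
y_pred = rng.integers(0, 2, size=1000)       # model decisions (placeholder)

print("Demographic parity gap:", demographic_parity_gap(y_pred, disability))
print("Equalized odds gap:", equalized_odds_gap(y_true, y_pred, disability))
```

Such a binary split is precisely the kind of coarse grouping that the literature criticizes for PWD: it treats a highly heterogeneous and intersectional population as a single group and says nothing about individual fairness.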
On a larger scale, more research is necessary to identify and address the social and systemic determinants of bias (21,29,31,45-49). As Grote and Keeling (35) stress, there are no “quick fixes” in algorithms that could mend the “complexity of the nature and causes of social injustices in healthcare”.
Regulations, laws, and rights
The measurement of fairness also has implications for guidelines, law and rights. The context-driven and non-definitive wording of legal definitions is especially problematic, as the law has not yet adapted to the (sometimes inconspicuous) ways in which discrimination through AI can happen (40). Thus, a standardized repertoire of clearly defined means of measuring bias in AI is a necessary starting point for legal matters as well (40). The law needs to be further developed, for PWD and for people in general, to meet the growing need for protection against AI-caused harm. Similarly, the unclear status of AI as “discriminator” (40) and its legal personhood (50,51) in court creates a “liability gap” (40) between AI and non-discrimination law. It is necessary to clarify questions of accountability and liability (13,31,41,50-55). For example, it is suggested that AI systems and their operators be jointly liable (51).
Likewise, there is a need for the regulation and standardization of AI applications in healthcare by (inter-)national institutions that evaluate and validate such systems (9,41,49,50,53,54,56-58). The World Health Organization (WHO) began developing a framework for integrating technological advancements into healthcare in 2019³ (59), the European Union (EU) released ethics guidelines for trustworthy AI⁴ in 2019, and the AI Act was established in the EU in 2024⁵. However, much more needs to be done in terms of accountability and guidelines for responsible AI use in healthcare, especially in the context of PWD (26,57). In the United States of America (USA), the Food and Drug Administration (FDA) authorizes and offers guidance on AI-based medical tools (3,9,52,57). However, a study by Muralidharan et al. (60) reveals that the reporting data of market-approved medical devices is inconsistent, with socio-economic and demographic data underrepresented, which may worsen algorithmic bias.
Data privacy and equality law are central to PWD, but they are also potentially at odds when it comes to promoting fairness for PWD (26). Free access to data of disabled people would be beneficial for developing high-quality AI technology. The lack of data from PWD is thus often due to limited access—rightly so—to their sensitive healthcare data. In addition, the lack of accessibility to healthcare (2,45,61,62) and individual agency (i.e., people may prefer not to disclose their disability status) (21) can further impact the availability of healthcare data. To mitigate data availability issues, it has been suggested to improve and extend responsible open-data sharing initiatives (1,45,63), on top of improving accessibility to healthcare in general (2,61). However, it is also important that the data is sufficiently protected, securely stored, and inaccessible to any other party. The lack of clarity as to what happens with the data, how well it is protected, and who is in charge remains a highly relevant issue (2,9,13,41,42,50,52,54,55,64-67) that needs to be solved. The Health Insurance Portability and Accountability Act (HIPAA) and the General Data Protection Regulation (GDPR) offer guidelines for data collection, storage and handling (41), which should be adhered to and continually improved. Nonetheless, Di Bidino et al. (68) highlight that there is currently no standardized framework for evaluating AI medical technology systematically. While this would be crucial to establish, it might also be helpful to develop frameworks specialized to different healthcare settings involving PWD, together with PWD and other stakeholders.
Even if the data were securely protected against theft and similar threats, another problematic phenomenon is the potential for machine-learning methods to re-identify persons despite anonymization of the data (42,55). This is because sensitive attributes in data can be inferred (15,42), as missing attributes (e.g., ethnicity, sex, diseases, life expectancy) can become predictable (18). In particular, PWD are at greater risk of re-identification than other demographic groups (15), leading to the misuse of sensitive information (29). Possible solutions to the privacy issues and the lack of data include the use of synthetic data (i.e., artificial data generated with privacy-preserving generative adversarial networks) (9), federated learning (i.e., decentralized training that takes place on multiple devices simultaneously, with only the updates sent to the model) (47,69-72), adding noise to the data to hide private information (9), or using blockchain methods (65).
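To make the federated-learning idea mentioned above more concrete, the following sketch performs simple federated averaging of locally trained linear-regression weights, optionally perturbing each client’s update with Gaussian noise before sharing; the client setup, noise scale and data are illustrative assumptions and do not correspond to any specific system in the reviewed literature.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient steps of linear regression."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
        w -= lr * grad
    return w

def federated_round(global_w, clients, noise_scale=0.0, rng=None):
    """Train locally on each client and average the (optionally noised) updates."""
    rng = rng if rng is not None else np.random.default_rng()
    updates = []
    for X, y in clients:                         # raw data never leaves the client
        w_local = local_update(global_w, X, y)
        if noise_scale > 0:                      # crude privacy-style perturbation
            w_local = w_local + rng.normal(0, noise_scale, size=w_local.shape)
        updates.append(w_local)
    return np.mean(updates, axis=0)              # FedAvg-style unweighted mean

# Toy setup: three "devices", each holding its own private data.
rng = np.random.default_rng(1)
true_w = np.array([0.5, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(0, 0.1, size=50)
    clients.append((X, y))

w = np.zeros(2)
for _ in range(20):                              # 20 communication rounds
    w = federated_round(w, clients, noise_scale=0.01, rng=rng)
print("Learned weights:", w)
```

The design point is that raw patient records never leave the clients; only (optionally noised) parameter updates are aggregated into the shared model.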
Lastly, it could be useful to educate patients about AI tools and to inform them before introducing terms of use or consent forms (i.e., informed consent), providing them with control over how their data is used (2,9,18,52,54,57,58,73). El Morr et al. (29) stress, however, that PWD with intellectual, cognitive or technological difficulties are often not in a position to provide consent or to realize what consequences it could have. De Micco et al. (58) similarly refer to this as a “digital divide” within society, which strains the relationship between AI and PWD. Finally, medical AI—especially assistive AI—needs to be affordable and accessible for PWD (29,58). As PWD are frequently also economically disadvantaged, it is crucial to make assistive technologies and medical AI accessible through insurance coverage, tax breaks or subsidies (58). Therefore, El Morr et al. (29) argue for a “disability justice approach” centering on the needs and experiences of PWD alongside the social and political plane of AI ethics.
Data-driven and algorithmic factors
Design and usage
Beginning with the design and usage components in Table 3, the greatest issue is the lack of involvement of disabled people from conceptualization to deployment (1,2,15,29,41,45,46,59,74,75). This usually also implies that AI tools are created without considering the needs of PWD (1,3,15,33,38,42,43,50,63,76), with decisions about what is best for them made without their input. Paccoud et al. (11) stress that devices are predominantly built for able-bodied and digitally literate users. This raises questions about how disability, in all its facets and complexities, is characterized by AI models. Trewin (15) advocates for a PWD-centered design that is inclusive, participatory and value-sensitive. PWD should be actively included in the problem formulation, design and development process (1,2,15,29,63). Likewise, the current lack of collaboration among stakeholders (e.g., PWD, practitioners, care personnel, developers, policy makers) (1,2,10,15,29,41,45,46,49,54,58,59,66,67,74,75) makes it difficult to create tools that incorporate all perspectives that are crucial to the eventual performance of AI tools. While Bucher et al. (77) present design tools and frameworks that aim to increase equity in digital health tools for a range of under-represented groups, it would be crucial to also develop them for disability specifically. An inspiring and insightful approach has been taken by Smith and Smith (22), who reviewed existing literature and, as persons with disabilities, reported on the functionality of AI in the form of diaries exploring the usability of different tools. A user- and PWD-centric approach is needed (9,13,58,66,78) that is personalized to each setting and its specific needs. This could even involve moving away from “one-size-fits-all” mentalities towards more personalized devices that are developed or fine-tuned for the individual based on their specific needs (58,66).
Additionally, AI tools are obscure “black boxes” (1,6,28,38,41,48-51,54,61,73,75,78-80), which means that we do not know how the model arrives at a specific output. This causes problems in interpreting results, a lack of trust, and, ultimately, adverse outcomes for patients (78). It is argued that models with greater interpretability suffer in accuracy (47,78). Likewise, Chen et al. (47) note that algorithm performance might in fact drop across subgroups in general when aiming for fairness (i.e., an accuracy-fairness trade-off). To what extent such a trade-off must be accepted is contested. For instance, Liu et al. (81) developed a fairness-aware, interpretable modelling framework (FAIM) that does not drop in accuracy. Another way is to find methods that can be applied to models to make AI more transparent and understandable such that errors become more detectable (2,9,28,31,38,47,61,75,79,82,83), which is part of the field of Explainable Artificial Intelligence (XAI) in healthcare contexts (79). Moreover, to make AI more explainable, one needs to keep the “human in the loop” (54,56,58) and ensure a smooth integration into medical workflows (2,13,41,49,53,65,84). Aside from this, it is crucial to educate practitioners and patients in the use of AI healthcare tools to increase user competence (2,13,29,41,49,53,65,74) and diminish misuse, maleficence or overreliance (1,3,25,35,36,38,41,42,46,52,58,63,76). Gouveia (85) speaks of a “trust gap” between practitioners and the tools as well as between patients and practitioners. Trust between practitioner and patient is of particular importance in mental healthcare scenarios (54). There is also a “trust gap” between PWD and AI tools, which can only be bridged by maintaining autonomy and self-determination (58,66). De Micco et al. (58) stress that increased explainability of AI will result in more trust and better user competence, which can include end-user education and user documentation (49). Especially with regard to mental disabilities, the human element is crucial for analyzing and interpreting the patient’s reactions accordingly (57,66).
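As a minimal illustration of a model-agnostic transparency method of the kind referred to above (not one of the XAI approaches evaluated in the cited studies), the sketch below estimates permutation feature importance for a black-box classifier on synthetic tabular data; the model choice and data are illustrative assumptions.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for tabular clinical data; features are unnamed/illustrative.
X, y = make_classification(n_samples=1000, n_features=8, n_informative=4,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does shuffling each feature hurt performance?
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for i, (mean, std) in enumerate(zip(result.importances_mean, result.importances_std)):
    print(f"feature {i}: importance {mean:.3f} +/- {std:.3f}")
```

Global importance scores of this kind are only a first step towards the explainability called for in the literature; on their own they do not reveal whether a model relies on proxies for disability.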
Finally, maintenance and monitoring are crucial to prevent an AI tool from becoming biased over time through data or concept drift (1,9). Fairness measures and bias monitoring should not only be applied at the post-development stage or during data preparation, but should be employed constantly to monitor bias from conception to deployment (10,33,43,74,86), in addition to incorporating feedback on the performance at each stage (86). This includes rigorous testing of AI tools in versatile healthcare and clinical settings (9-11). In addition, scalability issues (e.g., increasing amounts of data) need to be accounted for with adequate model design (65). Scalability also concerns co-opting AI systems for contexts beyond those they were designed for. Bajwa et al. (10) state that most medical AI systems are designed around context-sensitive features, which makes them rigid in scalability and adaptability when used in different contexts and for different groups such as PWD. Collins et al. (86) further outline how generalizing AI systems across a broader population risks overlooking how they affect specific subgroups. It should be known how the AI was developed and trained before it is used for PWD, and it should be reconsidered whether building broad-scale, generalistic systems actually furthers the needs of PWD.
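The sketch below illustrates one simple form such monitoring could take, assuming a tabular deployment setting: a two-sample Kolmogorov–Smirnov test flags drift in an input feature between training and newly collected data, and a subgroup accuracy gap is re-computed on recent predictions. The feature, thresholds and data are hypothetical, not a validated monitoring protocol.

```python
import numpy as np
from scipy.stats import ks_2samp

def drift_alert(train_feature, live_feature, alpha=0.01):
    """Flag potential data drift if the two samples differ significantly."""
    res = ks_2samp(train_feature, live_feature)
    return res.pvalue < alpha, res.statistic, res.pvalue

def subgroup_accuracy_gap(y_true, y_pred, group):
    """Accuracy difference between a subgroup (e.g., PWD) and everyone else."""
    g = np.asarray(group, dtype=bool)
    return (y_true[g] == y_pred[g]).mean() - (y_true[~g] == y_pred[~g]).mean()

# Toy monitoring step: compare newly collected data against the training data.
rng = np.random.default_rng(2)
train_age = rng.normal(50, 12, size=5000)
live_age = rng.normal(56, 12, size=800)          # the served population has shifted

drifted, stat, p = drift_alert(train_age, live_age)
print(f"Drift detected: {drifted} (KS statistic={stat:.3f}, p={p:.2e})")

y_true = rng.integers(0, 2, size=800)            # placeholder outcomes
y_pred = rng.integers(0, 2, size=800)            # placeholder model decisions
group = rng.integers(0, 2, size=800)             # hypothetical disability attribute
print("Subgroup accuracy gap:", subgroup_accuracy_gap(y_true, y_pred, group))
```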
Training data
The biggest issue is arguably the data used to train the models. Both accuracy and fairness in AI models are strongly dependent on the training data, which is collected and curated by humans (36). As with the definition and measurement of fairness, creating a data set that does not bias PWD but allows one to draw fair conclusions is not straightforward. Just as the picture of disability is varied with respect to the definition of fairness, legal matters and design, so it is for the type of data that should be used. PWD are not categorically disabled or not: one’s status may change over time, multiple disabilities may be present, and PWD may intersect with already marginalized subgroups of society (e.g., gender, ethnicity, socio-economic status) (21,29). This non-categorical, nuanced nature of disability data (21,29) poses problems for traditional machine-learning methods (44). A solution would be to move away from binary, decontextualized data entries to narrative, context-rich data collections [e.g., narrative electronic health records (EHRs) instead of ticking categorical boxes] that are made interpretable through NLP methods (6,21,80). Meta-learning has also been shown to be a promising strategy against scarce and variable datasets and changing healthcare contexts (80). Generally, introducing a shared labelling standard in healthcare (84) would improve data quality and help mitigate biased decisions by the AI (41,84). Healthcare data for AI algorithms is drawn from various sources, such as wearable/monitoring devices, EMR (lab reports, medical reports, clinical notes, discharge notes, etc.), clinical trial databases, public health records, and health insurance companies (5,61). The data is therefore variable and multimodal (e.g., imagery, video, speech or text) (78). The resulting issue of non-standardized, incomplete, historical, or incorrect data from various sources poses one of the greatest problems for AI in healthcare (1,3,15,25,34,38,41,43,47,48,61,63,76,82). Likewise, the data privacy and accessibility problems mentioned above affect data sets in that many people, especially PWD, are not represented accurately due to under- or misrepresentation (1,9,34,38,41-43,48,58,63,87). Indeed, there is a great lack of data on disability, which ultimately causes algorithms to make decisions without considering PWD (1,15). One solution is to collect more data and create diverse benchmark data sets (14,15,33,61), or simply to highlight gaps in the data and process data transparently (15,49,59,73) without overselling what the algorithm can show and predict. Alderman et al. (88) encourage such transparency with the STANDING Together consensus recommendations for medical AI datasets. The Internet of Things (IoT) could allow for collecting standardized, personalized data from medical devices (67), which can help bridge the gap between healthcare providers and PWD (72) or support assistive technology specialized for different kinds of disability (23). An overview of different AI systems and the IoT in assistive technology for different kinds of impairments is given by De Freitas et al. (23). A study by Mennella et al. (89) quantified representation bias in healthcare datasets, showing that training a model on “heterogeneous distributions of disability attributes” achieved considerably higher accuracy in activity recognition algorithms, stressing the importance of diversifying datasets.
One can further employ techniques against bias (re-weighing, suppression, massaging, resampling, ensembling, cost functions, hyperparameter tuning, blinding, etc.) (1,14,43,47); a minimal sketch of re-weighing is given below. On top of this, further misrepresentation and skewing of diversity can be introduced during data preprocessing (e.g., through cleaning, feature selection and engineering) (1,15) or even after model training, when a discrepancy arises between the data set and the current population (i.e., data set shift) (47,56). Apart from this, the data can contain bias through unanticipated proxies for protected variables (38). Disability data can also, by chance, look similar to that of other subgroups in the data set, which might cause inaccurate associations (15). Explicitly checking the data for biases before training (1,15) is crucial, especially for less explicit ones such as social or historical bias (87), as it is in any other component of the process (59).
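As a concrete example of the re-weighing technique named above, the following sketch assigns each training example a weight equal to the expected joint frequency of its group and label under independence divided by the observed frequency, so that under-represented group-label combinations count more during training; the binary “disability” attribute and the toy data are illustrative assumptions.

```python
import numpy as np

def reweighing_weights(group, label):
    """Weight each example so that group membership and label become
    statistically independent in the weighted training set."""
    group, label = np.asarray(group), np.asarray(label)
    n = len(label)
    weights = np.empty(n, dtype=float)
    for g in np.unique(group):
        for lab in np.unique(label):
            mask = (group == g) & (label == lab)
            observed = mask.sum() / n                               # P(G=g, Y=y)
            expected = (group == g).mean() * (label == lab).mean()  # P(G=g)*P(Y=y)
            if observed > 0:
                weights[mask] = expected / observed
    return weights

# Toy data: a hypothetical binary disability attribute whose positive outcomes
# are under-represented relative to the rest of the population.
rng = np.random.default_rng(3)
disability = rng.integers(0, 2, size=1000)
label = (rng.random(1000) < np.where(disability == 1, 0.2, 0.5)).astype(int)

w = reweighing_weights(disability, label)
# `w` can be passed as sample_weight to most scikit-learn estimators, e.g.
# LogisticRegression().fit(X, label, sample_weight=w) for some feature matrix X.
print("Mean weight, PWD with positive label:",
      w[(disability == 1) & (label == 1)].mean())
```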
Algorithmic structure
The final component mentioned in the literature is the algorithm structure itself, which can either amplify or introduce bias (3,36,48,63). This can be counteracted with specific in-processing steps (e.g., optimization or regularization constraints) or adjusted learning algorithms (e.g., adapted loss functions, adversarial approaches) (25,36,43,47). Leist et al. (34) suggest using transfer learning methods in cases of biased data. Choosing the right algorithm structure therefore requires careful consideration and thorough testing against biases. Furthermore, overfitting, as in most AI scenarios, needs to be carefully monitored (1), and catastrophic forgetting must be counteracted (90). Final adjustments can also be made through output correction (e.g., relabeling, level adjustments) (36,43,47). However, this is problematic insofar as biases have to be recognized earlier in the system in order to address them effectively (86). A final issue is the computational cost of and energy consumed by AI-driven systems, which may impede real-time applicability in medical or assistive scenarios (65,67,78,80).
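To illustrate what an adapted loss function can look like as an in-processing measure, the sketch below adds a soft demographic-parity penalty (the squared gap between group-wise mean predicted scores) to a logistic-regression loss and minimizes it by gradient descent; the penalty weight, the protected attribute and the data are illustrative assumptions, not a method proposed in the cited works.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fairness_regularized_fit(X, y, group, lam=1.0, lr=0.1, epochs=500):
    """Logistic regression whose loss adds lam * (gap ** 2), where gap is the
    difference in mean predicted scores between the two groups."""
    g = np.asarray(group, dtype=bool)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        p = sigmoid(X @ w)
        grad_ce = X.T @ (p - y) / len(y)            # cross-entropy gradient
        s = p * (1 - p)                             # derivative of the sigmoid
        gap = p[g].mean() - p[~g].mean()
        grad_gap = (X[g] * s[g][:, None]).mean(axis=0) \
                 - (X[~g] * s[~g][:, None]).mean(axis=0)
        w -= lr * (grad_ce + lam * 2 * gap * grad_gap)
    return w

# Toy data with a hypothetical binary disability attribute as the protected group.
rng = np.random.default_rng(4)
X = rng.normal(size=(1000, 3))
group = rng.integers(0, 2, size=1000)
y = (X[:, 0] + 0.5 * group + rng.normal(0, 0.5, size=1000) > 0).astype(int)

w = fairness_regularized_fit(X, y, group, lam=5.0)
p = sigmoid(X @ w)
print("Score gap between groups:", p[group == 1].mean() - p[group == 0].mean())
```

Raising the penalty weight shrinks the score gap at some cost in predictive fit, which makes the accuracy-fairness trade-off discussed above explicit in a single hyperparameter.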
Discussion
The purpose of this paper was to provide a synthesized overview of the existing literature, outlining the problems and solutions discussed thus far in order to provide the respective stakeholders and researchers with a starting point towards further developing AI in healthcare that is inclusive towards PWD. It was found that there is a gap in the literature concerning the specific issues that PWD experience with AI tools in healthcare, both within the field of AI fairness and within healthcare research. This aligns broadly with the findings of El Morr et al. (29) in that most articles, even those discussing AI systems and disability, do not discuss the challenges of these systems or socio-ethical issues for PWD. It could also be shown that status-quo methods in modelling, error measurement or bias mitigation may indeed prove to be ineffective for algorithms that consider PWD (14).
In addition, it has become apparent that each stage in the development of an AI algorithm or model, from idea to deployment, can introduce bias (1,9,33,56,86). Algorithmic fairness is interdependent with socio-ethical issues as well. Health equity, particularly for PWD, requires a more holistic approach that considers and integrates socio-ethical and algorithmic-structural perspectives (29). As a result, development stages are neither purely cyclical nor isolated, but interdependent parts that extend far into society, ethics and politics. Indeed, one should speak of components within an interactive network of development and use rather than of lifecycles (3,75,86) or pipelines (9).
This is most evident in the scarcity and insufficient quality of the data, which inspires Mitra (91) to call for a “disability data revolution”. AI systems, their fairness and their accuracy are reflective of the data that humans provide (36,92). These data issues are not isolated, as they again reflect broader societal and legal issues in terms of disability visibility. This in turn intersects with another issue, the exclusion of PWD from the design process. A recent study in the form of diary entries by Smith and Smith (22) shows how AI tools often frustrate rather than help PWD, whilst also acknowledging how useful they could be if PWD were more included in the development. It is exactly this that we as a society should not ignore; we must ensure that those involved have a voice. This entails having regulations and laws in place that protect PWD and can hold entities responsible when biases occur.
In view of these results, the question also arises whether it is at all possible to consider all fairness aspects in AI-driven medical systems, especially across all demographic groups. As such, generalistic, large-scale AI healthcare algorithms may be less feasible and ultimately less fair than specialized, small AI tools that allow for oversight (14,56).
By addressing the issues listed in this review, AI-driven healthcare tools such as remote patient monitoring systems could be significantly improved for PWD by (I) personalizing them to the needs of PWD in a case-by-case manner by developing them together with PWD; (II) ensuring high data and privacy security standards; (III) evaluating them in real-life testing and validation scenarios with PWD who can provide feedback; and (IV) training PWD, care-personnel and healthcare staff in the use and limitations of the system.
The limitations of this narrative review include the use of a limited selection of databases and the consideration of only those papers meeting the specified criteria, which excluded, for example, seminal papers published before 2019, non-English papers, and publication formats beyond reviews, articles and book chapters. Therefore, it does not represent a comprehensive review. Furthermore, not all examples grouped under socio-ethical or data-driven, algorithmic factors were clearly categorizable, and it was stressed that the tables should only serve as a rough means of organizing recurrent themes in the literature. In addition, this paper analyzed disability through a broad, all-encompassing lens, without touching on issues that might arise for specific kinds of disability. Thus, future research should take a more fine-grained approach to disability and AI issues.
Despite these limitations, this narrative review’s strength lies in its synthesis of current issues and mitigation strategies around AI, healthcare and disability from various perspectives. It also aims to inspire greater inclusivity amongst researchers, encouraging them to include more perspectives, particularly those of PWD, in their research design and undertaking. Furthermore, it has identified research gaps and allows future researchers to draw connections between issues and solutions. It highlights that much more research is necessary to uncover to what extent PWD in particular are affected, and that fairness in medical AI for PWD is a nuanced and complex endeavor.
Conclusions
This narrative review has analyzed recent literature on the topic of AI fairness, disability and healthcare in order to identify the issues and solutions it poses for PWD. Future reviews could explore AI fairness and disability in more specialized healthcare settings, or how PWD are additionally disadvantaged by AI systems due to their ethnic or gender-related identity. It is necessary to see demographic groups not as categorically distinct, but rather as multidimensional. Fairness in AI healthcare is a complex, multi-faceted and highly sensitive area that can harm PWD more imminently than any other area involving AI. While AI is promising in its intention to relieve care workers and clinical practitioners and to support PWD, its remaining issues are manifold. It has become evident that more research and changes need to happen in multiple areas simultaneously for PWD before we can finally speak of true algorithmic fairness.
Acknowledgments
The author would like to thank Sarah Ebling for her invaluable support in this process.
Footnote
1 https://github.com/KenSciResearch/fairMLHealth/tree/integration (last accessed: 28 May 2024).
2 https://github.com/tensorflow/fairness-indicators (last accessed: 28 May 2024).
3 https://www.who.int/publications/i/item/9789241550505 (last accessed: 27 January 2025).
4 https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai (last accessed: 3 May 2024).
5 https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689 (last accessed: 27 January 2025).
Reporting Checklist: The author has completed the Narrative Review reporting checklist. Available at https://jmai.amegroups.com/article/view/10.21037/jmai-24-415/rc
Peer Review File: Available at https://jmai.amegroups.com/article/view/10.21037/jmai-24-415/prf
Funding: None.
Conflicts of Interest: The author has completed the ICMJE uniform disclosure form (available at https://jmai.amegroups.com/article/view/10.21037/jmai-24-415/coif). The author has no conflicts of interest to declare.
Ethical Statement: The author is accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the non-commercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.
References
- Nazer LH, Zatarah R, Waldrip S, et al. Bias in artificial intelligence algorithms and recommendations for mitigation. PLOS Digit Health 2023;2:e0000278. [Crossref] [PubMed]
- Rudd J, Igbrude C. A global perspective on data powering responsible AI solutions in health applications. AI Ethics 2023; Epub ahead of print. [Crossref] [PubMed]
- Abràmoff MD, Tarver ME, Loyo-Berrios N, et al. Considerations for addressing bias in artificial intelligence for health equity. NPJ Digit Med 2023;6:170. [Crossref] [PubMed]
- Wilhelm C, Steckelberg A, Rebitschek FG. Benefits and harms associated with the use of AI-related algorithmic decision-making systems by healthcare professionals: a systematic review. Lancet Reg Health Eur 2024;48:101145. [Crossref] [PubMed]
- Bohr A, Memarzadeh K. The rise of artificial intelligence in healthcare applications. Artificial Intelligence in Healthcare 2020;25-60.
- Ostherr K. Artificial Intelligence and Medical Humanities. J Med Humanit 2022;43:211-32. [Crossref] [PubMed]
- Koteluk O, Wartecki A, Mazurek S, et al. How Do Machines Learn? Artificial Intelligence as a New Era in Medicine. J Pers Med 2021;11:32. [Crossref] [PubMed]
- Rong G, Mendez A, Bou Assi E, et al. Artificial Intelligence in Healthcare: Review and Prediction Case Studies. Engineering 2020;6:291-301. [Crossref]
- Javed H, El-Sappagh S, Abuhmed T. Robustness in deep learning models for medical diagnostics: security and adversarial challenges towards robust AI applications. Artif Intell Rev 2025;58:12. [Crossref]
- Bajwa J, Munir U, Nori A, et al. Artificial intelligence in healthcare: transforming the practice of medicine. Future Healthc J 2021;8:e188-94. [Crossref] [PubMed]
- Paccoud I, Leist AK, Schwaninger I, et al. Socio-ethical challenges and opportunities for advancing diversity, equity, and inclusion in digital medicine. Digit Health 2024;10:20552076241277705. [Crossref] [PubMed]
- Davenport T, Kalakota R. The potential for artificial intelligence in healthcare. Future Healthc J 2019;6:94-8. [Crossref] [PubMed]
- Ouanes K, Farhah N. Effectiveness of Artificial Intelligence (AI) in Clinical Decision Support Systems and Care Delivery. J Med Syst 2024;48:74. [Crossref] [PubMed]
- Guo A, Kamar E, Vaughan JW, et al. Toward fairness in AI for people with disabilities: a research roadmap. ACM SIGACCESS Accessibility and Computing. doi: 10.1145/3386296.3386298.
- Trewin S, Basson S, Muller M, et al. Considerations for AI fairness for people with disabilities. AI Matters 2019;5:40-63. [Crossref]
- Jain A, Brooks JR, Alford CC, et al. Awareness of Racial and Ethnic Bias and Potential Solutions to Address Bias With Use of Health Care Algorithms. JAMA Health Forum 2023;4:e231197. [Crossref] [PubMed]
- Hussain SA, Bresnahan M, Zhuang J. The bias algorithm: how AI in healthcare exacerbates ethnic and racial disparities - a scoping review. Ethn Health 2025;30:197-214. [Crossref] [PubMed]
- Fosch-Villaronga E, Drukarch H, Khanna P, et al. Accounting for diversity in AI for medicine. Computer Law and Security Review 2022;47:105735. [Crossref]
- Locke LG, Hodgdon G. Gender bias in visual generative artificial intelligence systems and the socialization of AI. AI Soc 2024; [Crossref]
- Isaksson A. Mitigation measures for addressing gender bias in artificial intelligence within healthcare settings: a critical area of sociological inquiry. AI Soc 2024; [Crossref]
- Bennett CL, Keyes O. What is the Point of Fairness? Disability, AI and The Complexity of Justice. Special Interest Group on Accessible Computing 2020. doi: 10.1145/3386296.3386301.
- Smith P, Smith L. Artificial intelligence and disability: too much promise, yet too little substance? AI Ethics 2021;1:81-6. [Crossref]
- de Freitas MP, Piai VA, Farias RH, et al. Artificial Intelligence of Things Applied to Assistive Technology: A Systematic Literature Review. Sensors (Basel) 2022;22:8531. [Crossref] [PubMed]
- World Health Organization, World Bank. World Report on Disability 2011 [Internet]. World Health Organization; 2011. [cited 2025 Jan 19]. Available online: https://iris.who.int/handle/10665/44575
- Wang Y, Song Y, Ma Z, et al. Multidisciplinary considerations of fairness in medical AI: A scoping review. Int J Med Inform 2023;178:105175. [Crossref] [PubMed]
- Binns R, Kirkham R. How Could Equality and Data Protection Law Shape AI Fairness for People with Disabilities? ACM Trans Access Comput 2021;14:17. [Crossref]
- Srivastava M, Heidari H, Krause A. Mathematical notions vs. Human perception of fairness: A descriptive approach to fairness for machine learning. In: Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. Anchorage, AK, USA: Association for Computing Machinery; 2019:2459-68.
- Shin D, Park YJ. Role of fairness, accountability, and transparency in algorithmic affordance. Comput Human Behav 2019;98:277-84. [Crossref]
- El Morr C, Kundi B, Mobeen F, et al. AI and disability: A systematic scoping review. Health Informatics J 2024;30:14604582241285743. [Crossref] [PubMed]
- Brinkman AH, Rea-Sandin G, Lund EM, et al. Shifting the discourse on disability: Moving to an inclusive, intersectional focus. Am J Orthopsychiatry 2023;93:50-62. [Crossref] [PubMed]
- Sikstrom L, Maslej MM, Hui K, et al. Conceptualising fairness: three pillars for medical algorithms and health equity. BMJ Health Care Inform 2022;29:e100459. [Crossref] [PubMed]
- Starke C, Baleis J, Keller B, et al. Fairness perceptions of algorithmic decision-making: A systematic review of the empirical literature. Big Data Soc 2022; [Crossref]
- Liu M, Ning Y, Teixayavong S, et al. A translational perspective towards clinical AI fairness. NPJ Digit Med 2023;6:172. [Crossref] [PubMed]
- Leist AK, Klee M, Kim JH, et al. Mapping of machine learning approaches for description, prediction, and causal inference in the social and health sciences. Sci Adv 2022;8:eabk1942.
- Grote T, Keeling G. Enabling Fairness in Healthcare Through Machine Learning. Ethics Inf Technol 2022;24:39. [Crossref] [PubMed]
- Birzhandi P, Cho YS. Application of fairness to healthcare, organizational justice, and finance: A survey. Expert Syst Appl 2023;216:119465. [Crossref]
- Verma S, Rubin J. Fairness definitions explained. In: Proceedings of the International Workshop on Software Fairness (FairWare ‘18). IEEE Computer Society; 2018. doi: 10.1145/3194770.319477.
- Giovanola B, Tiribelli S. Beyond bias and discrimination: redefining the AI ethics principle of fairness in healthcare machine-learning algorithms. AI Soc 2023;38:549-63. [Crossref] [PubMed]
- Kleinberg J, Mullainathan S, Raghavan M. Inherent trade-offs in the fair determination of risk scores. In: Leibniz International Proceedings in Informatics, LIPIcs. Schloss Dagstuhl- Leibniz-Zentrum fur Informatik GmbH, Dagstuhl Publishing; 2017;67:43:1-43:23.
- Wachter S, Mittelstadt B, Russell C. Why fairness cannot be automated: Bridging the gap between EU non-discrimination law and AI. Computer Law and Security Review 2021;41:105567. [Crossref]
- Ueda D, Kakinuma T, Fujita S, et al. Fairness of artificial intelligence in healthcare: review and recommendations. Jpn J Radiol 2024;42:3-15. [Crossref] [PubMed]
- Grote T, Keeling G. On Algorithmic Fairness in Medical Practice. Camb Q Healthc Ethics 2022;31:83-94. [Crossref] [PubMed]
- Fletcher RR, Nakeshimana A, Olubeko O. Addressing Fairness, Bias, and Appropriate Use of Artificial Intelligence and Machine Learning in Global Health. Front Artif Intell 2021;3:561802. [Crossref] [PubMed]
- Trewin S. AI Fairness for People with Disabilities: Point of View. arXiv:1811.10670, 2018.
- Balagopalan A, Baldini I, Celi LA, et al. Machine learning for healthcare that matters: Reorienting from technical novelty to equitable impact. PLOS Digit Health 2024;3:e0000474. [Crossref] [PubMed]
- Kuiler EW, McNeely CL. Panopticon implications of ethical AI: equity, disparity, and inequality in healthcare. In: Batarseh FA, Freeman LJ, editors. AI Assurance: Towards Trustworthy, Explainable, Safe, and Ethical AI. Academic Press; 2023:429-51.
- Chen RJ, Wang JJ, Williamson DFK, et al. Algorithmic fairness in artificial intelligence for medicine and healthcare. Nat Biomed Eng 2023;7:719-42. [Crossref] [PubMed]
- Baumgartner R, Arora P, Bath C, et al. Fair and equitable AI in biomedical research and healthcare: Social science perspectives. Artif Intell Med 2023;144:102658. [Crossref] [PubMed]
- Labkoff S, Oladimeji B, Kannry J, et al. Toward a responsible future: recommendations for AI-enabled clinical decision support. J Am Med Inform Assoc 2024;31:2730-9. [Crossref] [PubMed]
- Rodrigues R. Legal and human rights issues of AI: Gaps, challenges and vulnerabilities. Responsib Technol 2020;4:100005. [Crossref]
- De Micco F, Grassi S, Tomassini L, et al. Robotics and AI into healthcare from the perspective of European regulation: who is responsible for medical malpractice? Front Med (Lausanne) 2024;11:1428504. [Crossref] [PubMed]
- Meskó B, Topol EJ. The imperative for regulatory oversight of large language models (or generative AI) in healthcare. NPJ Digit Med 2023;6:120. [Crossref] [PubMed]
- Saheb T, Saheb T, Carpenter DO. Mapping research strands of ethics of artificial intelligence in healthcare: A bibliometric and content analysis. Comput Biol Med 2021;135:104660. [Crossref] [PubMed]
- Thakkar A, Gupta A, De Sousa A. Artificial intelligence in positive mental health: a narrative review. Front Digit Health 2024;6:1280235. [Crossref] [PubMed]
- Rosic A. Legal implications of artificial intelligence in health care. Clin Dermatol 2024;42:451-9. [Crossref] [PubMed]
- Mittermaier M, Raza MM, Kvedar JC. Bias in AI-based models for medical applications: challenges and mitigation strategies. NPJ Digit Med 2023;6:113. [Crossref] [PubMed]
- Olawade DB, Wada OZ, Odetayo A, et al. Enhancing mental health with Artificial Intelligence: Current trends and future prospects. Journal of Medicine, Surgery, and Public Health 2024;3:100099. [Crossref]
- De Micco F, Tambone V, Frati P, et al. Disability 4.0: bioethical considerations on the use of embodied artificial intelligence. Front Med (Lausanne) 2024;11:1437280. [Crossref] [PubMed]
- Mennella C, Maniscalco U, De Pietro G, et al. Ethical and regulatory challenges of AI technologies in healthcare: A narrative review. Heliyon 2024;10:e26297. [Crossref] [PubMed]
- Muralidharan V, Adewale BA, Huang CJ, et al. A scoping review of reporting gaps in FDA-approved AI medical devices. NPJ Digit Med 2024;7:273. [Crossref] [PubMed]
- Kalina J. Appropriate artificial intelligence algorithms will ultimately contribute to health equity. In: De Pablos PO, Zhang X, editors. Artificial Intelligence, Big Data, Blockchain and 5G for the Digital Transformation of the Healthcare Industry: a Movement Toward more Resilient and Inclusive Societies. Academic Press; 2024:153-72.
- Newman-Griffis DR, Hurwitz MB, McKernan GP, et al. A roadmap to reduce information inequities in disability with digital health and natural language processing. PLOS Digit Health 2022;1:e0000135. [Crossref] [PubMed]
- Norori N, Hu Q, Aellen FM, et al. Addressing bias in big data and AI for health care: A call for open science. Patterns (N Y) 2021;2:100347. [Crossref] [PubMed]
- Mulvenna MD, Bond R, Delaney J, et al. Ethical Issues in Democratizing Digital Phenotypes and Machine Learning in the Next Generation of Digital Health Technologies. Philos Technol 2021;34:1945-60. [Crossref] [PubMed]
- Minocha S, Joshi K, Sharma A, et al. Research challenges and future work directions in smart healthcare using IoT and machine learning. Advances in Computers 2024;137:353-81. [Crossref]
- Alhuwaydi AM. Exploring the Role of Artificial Intelligence in Mental Healthcare: Current Trends and Future Directions - A Narrative Review for a Comprehensive Insight. Risk Manag Healthc Policy 2024;17:1339-48. [Crossref] [PubMed]
- Badawy M, Ramadan N, Hefny HA. A Survey on Deep Learning Techniques for Predictive Analytics in Healthcare. SN Comput Sci 2024;5:860. [Crossref]
- Di Bidino R, Daugbjerg S, Papavero SC, et al. Health technology assessment framework for artificial intelligence-based technologies. Int J Technol Assess Health Care 2024;40:e61. [Crossref] [PubMed]
- Giuffrè M, Shung DL. Harnessing the power of synthetic data in healthcare: innovation, application, and privacy. NPJ Digit Med 2023;6:186. [Crossref] [PubMed]
- Khalid N, Qayyum A, Bilal M, et al. Privacy-preserving artificial intelligence in healthcare: Techniques and applications. Comput Biol Med 2023;158:106848. [Crossref] [PubMed]
- Liu M, Li S, Yuan H, et al. Handling missing values in healthcare data: A systematic review of deep learning-based imputation techniques. Artif Intell Med 2023;142:102587. [Crossref] [PubMed]
- Lakhan A, Hamouda H, Abdulkareem KH, et al. Digital healthcare framework for patients with disabilities based on deep federated learning schemes. Comput Biol Med 2024;169:107845. [Crossref] [PubMed]
- Gerke S, Minssen T, Cohen G. Ethical and legal challenges of artificial intelligence-driven healthcare. Artificial Intelligence in Healthcare 2020;295-336.
- Chen Y, Clayton EW, Novak LL, et al. Human-Centered Design to Address Biases in Artificial Intelligence. J Med Internet Res 2023;25:e43251. [Crossref] [PubMed]
- Marabelli M, Newell S, Handunge V. The lifecycle of algorithmic decision-making systems: Organizational choices and ethical challenges. Journal of Strategic Information Systems 2021;30:101683. [Crossref]
- Bærøe K, Gundersen T, Henden E, et al. Can medical algorithms be fair? Three ethical quandaries and one dilemma. BMJ Health Care Inform 2022;29:e100445. [Crossref] [PubMed]
- Bucher A, Chaudhry BM, Davis JW, et al. How to design equitable digital health tools: A narrative review of design tactics, case studies, and opportunities. PLOS Digit Health 2024;3:e0000591. [Crossref] [PubMed]
- Ennab M, Mcheick H. Enhancing interpretability and accuracy of AI models in healthcare: a comprehensive review on challenges and future directions. Front Robot AI 2024;11:1444763. [Crossref] [PubMed]
- Caterson J, Lewin A, Williamson E. The application of explainable artificial intelligence (XAI) in electronic health record research: A scoping review. Digit Health 2024;10:20552076241272657. [Crossref] [PubMed]
- Rafiei A, Moore R, Jahromi S, et al. Meta-learning in Healthcare: A Survey. SN Comput Sci 2024;5:791. [Crossref]
- Liu M, Ning Y, Ke Y, et al. FAIM: Fairness-aware interpretable modeling for trustworthy machine learning in healthcare. Patterns (N Y) 2024;5:101059. [Crossref] [PubMed]
- Chen C, Feng X, Li Y, et al. Integration of large language models and federated learning. Patterns (N Y) 2024;5:101098. [Crossref] [PubMed]
- Hansson SO, Fröding B. Digital Technology in Healthcare—An Ethical Overview. Digital Society 2024;3:46. [Crossref]
- Grote T, Berens P. On the ethics of algorithmic decision-making in healthcare. J Med Ethics 2020;46:205-11. [Crossref] [PubMed]
- Gouveia SS. The Ethics of Artificial Intelligence in Medicine: Preliminary Remarks. Global Philosophy 2025;35:4. [Crossref]
- Collins BX, Bélisle-Pipon JC, Evans BJ, et al. Addressing ethical issues in healthcare artificial intelligence using a lifecycle-informed process. JAMIA Open 2024;7:ooae108. [Crossref] [PubMed]
- Fong N, Langnas E, Law T, et al. Availability of information needed to evaluate algorithmic fairness — A systematic review of publicly accessible critical care databases. Anaesth Crit Care Pain Med 2023;42:101248. [Crossref] [PubMed]
- Alderman JE, Palmer J, Laws E, et al. Tackling algorithmic bias and promoting transparency in health datasets: the STANDING Together consensus recommendations. Lancet Digit Health 2025;7:e64-88. [Crossref] [PubMed]
- Mennella C, Esposito M, De Pietro G, et al. Promoting fairness in activity recognition algorithms for patient’s monitoring and evaluation systems in healthcare. Comput Biol Med 2024;179:108826. [Crossref] [PubMed]
- Graham S, Depp C, Lee EE, et al. Artificial Intelligence for Mental Health and Mental Illnesses: an Overview. Curr Psychiatry Rep 2019;21:116. [Crossref] [PubMed]
- Mitra S. A data revolution for disability-inclusive development. Lancet Glob Health 2013;1:e178-9. [Crossref] [PubMed]
- Johnson SLJ. AI, Machine Learning, and Ethics in Health Care. J Leg Med 2019;39:427-41. [Crossref] [PubMed]
Cite this article as: Vogt Y. Disability and algorithmic fairness in healthcare: a narrative review. J Med Artif Intell 2025;8:56.