ABSTRACT
Artificial intelligence technologies are rapidly advancing and significantly impacting healthcare, particularly in critical care environments where rapid, precise decision-making is crucial. They promise reductions in clinical errors, enhanced diagnostic accuracy, optimized treatment plans, and better resource allocation. Artificial intelligence applications are widespread across medical fields, with numerous artificial intelligence/machine learning-enabled medical devices approved by regulatory bodies, like the US Food and Drug Administration, aiding in diagnosis, monitoring, and personalized patient care. However, integrating artificial intelligence into healthcare presents challenges, notably the potential to exacerbate existing biases and disparities, especially when systems are trained on homogeneous datasets lacking diversity. Biased artificial intelligence can negatively affect patient outcomes for underrepresented groups, perpetuating health disparities. Additional concerns include data privacy and security, lack of transparency, algorithmic bias, and regulatory hurdles. Addressing these risks requires ensuring diverse and representative datasets, implementing robust auditing and monitoring practices, enhancing transparency, involving diverse perspectives in artificial intelligence development, and promoting critical thinking among healthcare professionals. Furthermore, the environmental impact of artificial intelligence, particularly of large models reliant on energy-intensive data centers, poses challenges due to increased greenhouse gas emissions and resource consumption, disproportionately affecting low-income countries and exacerbating global inequalities. Systemic changes driven by corporate responsibility, government policy, and the adoption of sustainable artificial intelligence practices within healthcare are necessary. This narrative review explores the current landscape of artificial intelligence in healthcare, highlighting its potential benefits, delineating associated risks and challenges, and underscoring the importance of mitigating biases and environmental impacts to ensure equitable and sustainable integration of artificial intelligence technologies in healthcare settings.
Keywords:
Artificial intelligence; Machine learning; Bias; Transparency; Patient care; Critical care; Delivery of health care
INTRODUCTION
Artificial intelligence (AI) encompasses a range of technologies that enable computer systems to perform tasks traditionally requiring human intelligence.(1) These technologies include machine learning (ML), deep learning (DL), convolutional neural networks (CNN), and natural language processing (NLP).(2) Each of these forms of AI operates along a spectrum of automated decision-making, necessitating varying levels of human oversight.(3) Over the past decade, the integration of AI into healthcare has accelerated, driven by advancements in computational power (e.g., architectures such as Graphics Processing Units (GPUs)), digitization of healthcare data (e.g., electronic health records and medical images), and sophisticated algorithms.(4-7) This evolution is reshaping both clinical and academic settings, offering decision-making capabilities that often rival or surpass human performance.(8,9)
The rapid adoption of AI in healthcare is also fueled by the advent of enhanced cloud storage, which allows for the massive compilation, labeling, and retrieval of data.(10) These advancements promise reduced clinical errors, enable semi-automated outcome predictions, and empower patients to engage with their health data.(6,11,12) However, integrating AI into healthcare is challenging. The potential for AI to exacerbate existing biases and disparities in healthcare is significant. Artificial intelligence systems trained on homogeneous datasets, which often lack diversity in patient populations and are curated from limited clinical settings, may produce biased outcomes and limit generalizability.(13-17) This is particularly concerning in critical care, in which measurement biases from commonly used medical devices can affect model performance.(18) Recent studies show that the performance of implemented AI models varies across different hospitals in critical care settings.(19) Therefore, achieving the promise of AI in healthcare requires not only technological advancements but also a concentrated effort to address and mitigate biases, ensuring equitable benefits across diverse patient populations.(20)
Key artificial intelligence concepts
Artificial intelligence can be broadly categorized into generative AI and predictive AI based on their primary functions. Generative AI refers to models designed to create new content, such as text or images, by learning patterns from existing datasets. For example, large language models, like GPT-4, are considered a form of generative AI. Conversely, predictive AI analyzes existing data to make predictions or recommendations. Examples include algorithms that predict the likelihood of sepsis or forecast hemodynamic instability in ICU patients. These concepts can be applied to different AI methodologies, such as ML, DL, and NLP (Figure 1).
Machine learning
Machine learning is a subset of AI that involves training algorithms to learn from and make predictions or decisions based on data, using statistical methods to enable machines to improve at tasks without being explicitly programmed.(21)
Machine learning can be divided into two main approaches: supervised and unsupervised. In supervised learning, models are trained on labeled data, in which the outcome variable is explicitly known. This method is widely used in diagnostic tasks, such as identifying pneumonia from chest X-rays. In contrast, unsupervised learning involves training on unlabeled data to find hidden patterns or groupings, which can be helpful in patient segmentation or anomaly detection in datasets.
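To make this distinction concrete, the minimal sketch below contrasts the two approaches on synthetic data; the "vital sign" features, outcome labels, and model choices are illustrative assumptions, not tools drawn from the studies cited here.

```python
# A minimal sketch contrasting supervised and unsupervised learning.
# The synthetic "vital sign" data and labels are purely illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 3))            # e.g., standardized heart rate, SpO2, temperature
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # synthetic outcome label, e.g., "pneumonia: yes/no"

# Supervised learning: the outcome variable y is explicitly known.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
classifier = LogisticRegression().fit(X_train, y_train)
print("Supervised test accuracy:", classifier.score(X_test, y_test))

# Unsupervised learning: no labels; the algorithm searches for hidden
# groupings, as in patient segmentation or anomaly detection.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("Cluster sizes:", np.bincount(clusters))
```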
Deep learning
Deep learning is a subset of ML based on artificial neural networks with representation learning, which can model complex non-linear relationships. Neural networks are inspired by the structure of the human brain and consist of layers of nodes, with each layer progressively transforming the input data into more abstract representations. Deep learning has been instrumental in many AI breakthroughs, particularly in complex tasks like image recognition, speech recognition, and NLP.(22)
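To show what "layers of nodes" means in practice, here is a minimal sketch of a feed-forward network in PyTorch; the layer sizes and the single "risk score" output are arbitrary illustrative choices, not a model from the cited literature.

```python
# A minimal sketch of a feed-forward neural network: each layer transforms
# its input into a progressively more abstract representation.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(10, 32),  # layer 1: 10 raw features -> 32 learned features
    nn.ReLU(),
    nn.Linear(32, 16),  # layer 2: a more abstract representation
    nn.ReLU(),
    nn.Linear(16, 1),   # output layer: e.g., a single risk score
    nn.Sigmoid(),
)

x = torch.randn(4, 10)  # a batch of 4 patients with 10 input features each
risk = model(x)         # forward pass through the successive layers
print(risk.shape)       # torch.Size([4, 1])
```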
Convolutional neural networks
Convolutional neural networks are a type of DL neural network that is highly efficient at capturing spatial and temporal dependencies in data such as images or sequential signals.(23-25) Convolutional neural networks utilize convolutional layers, which filter input data to create a transformed feature map. These filters are automatically adjusted during training, based on learned parameters, to extract the most valuable features of interest, such as edges, textures, and shapes. This makes them particularly effective for image recognition, classification, and other visual tasks.(26)
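The sketch below, under the same illustrative assumptions, shows a single convolutional layer turning an image into a stack of feature maps; the filter count and image size are arbitrary.

```python
# A minimal sketch of a convolutional layer producing feature maps.
# Eight learned filters slide across the image, each responding to local
# patterns such as edges or textures; their weights are adjusted in training.
import torch
import torch.nn as nn

conv = nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3, padding=1)
image = torch.randn(1, 1, 64, 64)  # one grayscale image, e.g., a 64x64 X-ray patch
feature_maps = conv(image)
print(feature_maps.shape)          # torch.Size([1, 8, 64, 64]): 8 feature maps
```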
Natural language processing
Natural language processing is a field at the intersection of computer science, AI, and linguistics. It involves programming computers to process and analyze large amounts of natural language data. The goal is for computers to understand and generate human language meaningfully and usefully. Natural language processing uses computational techniques to learn, understand, and produce human language content, enabling applications like translation services, sentiment analysis, and customer service automation.(27)
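As a simple illustration of an NLP workflow, the sketch below converts invented clinical notes into numerical features and trains a classifier; the notes, labels, and pipeline are hypothetical and not a validated clinical system.

```python
# A minimal sketch of a basic NLP pipeline: free text is converted into
# numerical (TF-IDF) features and used to train a classifier. The notes
# and labels are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

notes = [
    "patient denies chest pain, breathing comfortably",
    "severe dyspnea and hypoxia, escalating oxygen requirements",
    "afebrile, hemodynamically stable, tolerating diet",
    "worsening hypotension despite fluid resuscitation",
]
labels = [0, 1, 0, 1]  # 0 = stable, 1 = deteriorating (illustrative)

nlp_model = make_pipeline(TfidfVectorizer(), LogisticRegression())
nlp_model.fit(notes, labels)
print(nlp_model.predict(["new onset respiratory distress and hypoxia"]))
```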
In summary, different AI tools have the potential to help humans by automating processes, enhancing decision-making, and personalizing user experiences. Their ability to learn from data and adapt to new environments makes them promising technologies in today's digital reality.(28,29)
Applications of artificial intelligence in healthcare
Artificial intelligence is increasingly being integrated into healthcare, providing tools developed to assist medical professionals in diagnosing, treating, and managing diseases. The impact of AI in healthcare is underscored by the growing number of AI/ML-enabled medical devices approved by the US Food and Drug Administration (FDA). In 2021, the FDA established a public database to track authorizations for medical devices that utilize AI or ML.(30) As of May 13, 2024, the FDA has approved 882 such devices.(31) The most common pathways for FDA approval of these Clinical Decision Support (CDS) devices are the 510(k) pathway and the de novo pathway. The 510(k) pathway involves demonstrating that a new device is substantially equivalent to an already authorized device and typically does not require clinical data.(32) The de novo pathway is used for novel devices that present low to moderate risk and provide reasonable assurance of safety and effectiveness.(33)
Many AI algorithms are already integrated into clinical practice, providing essential tools for patient care in critical care settings. For instance, the Analytic for Hemodynamic Instability software(34) is designed for patients receiving continuous electrocardiogram monitoring. Analytic for Hemodynamic Instability leverages electrocardiogram data to assess a patient's hemodynamic status and identify early signs of hemodynamic instability, aiding clinicians in managing potentially life-threatening conditions. Another example is the Clew ICU Server and Clew ICU Unit (CLEW ICU System),(35) which takes predictive analytics a step further by forecasting the likelihood of future hemodynamic instability in intensive care unit (ICU) patients. This AI-driven tool assists healthcare providers in anticipating and mitigating clinical deterioration.
In remote or distributed care scenarios, such as the eICU, the Tyto Stethoscope offers an AI-enabled solution.(36) This electronic stethoscope facilitates remote auscultation by allowing clinicians to hear and analyze heart, lung, and other body sounds from a distance. This capability is crucial for providing high-quality care in critical and resource-constrained environments, enhancing diagnostic accuracy and timely interventions.
These examples illustrate how AI is transforming critical care by improving patient monitoring, predicting adverse events, and enabling more precise and efficient decision-making. As AI technologies advance, their impact on the intensive care setting is poised to grow, offering novel tools to support the most vulnerable patients.
Potential benefits of artificial intelligence in healthcare
Artificial intelligence can transform the healthcare industry, offering significant strides in diagnostic accuracy, treatment optimization, surgical precision, and operational efficiency.(37) Some AI systems have been able to enhance diagnostic accuracy by analyzing medical data and images more effectively than traditional methods. For instance, a study found that using AI for mammogram interpretation reduced false positives by 5.7% and false negatives by 9.4% compared with radiologist readings.(38) This improvement in diagnostic accuracy might lead to earlier detection and treatment of breast cancer, potentially saving lives and reducing healthcare costs.
Moreover, AI can play a role in optimizing treatment plans and improving precision in critical care. For instance, in surgical procedures, studies have shown that these technologies can reduce the risk of complications and accelerate patient recovery time.(39) In particular, one study showed that surgeons were able to perform minimally invasive procedures with greater accuracy, leading to shorter hospital stays and lower healthcare costs.(39)
Artificial intelligence also has the potential to significantly enhance efficiency and productivity in healthcare processes. Artificial intelligence algorithms can quickly analyze medical images such as X-rays and magnetic resonance imaging (MRI) scans, drastically reducing the time required for radiologists to interpret results. This can expedite diagnosis and treatment planning, allowing for faster and more effective patient care.(40) According to a study by McKinsey & Company, AI could automate up to 45% of administrative tasks in healthcare, potentially freeing up $150 billion in annual costs.(40) By automating routine tasks, AI potentially allows healthcare professionals to focus more on patient care.
Finally, AI might also improve diagnostic accuracy by providing risk assessments and informing physicians when to act in a critical care scenario. For instance, AI can help physicians decide whether to use anticoagulation therapy based on the risk of developing blood clots or deep vein thrombosis.(41) Artificial intelligence can create a subset of patients meeting specific criteria and develop risk estimates by analyzing data from patients with similar symptoms and conditions. This data-driven approach might enable physicians to make more informed decisions, thereby improving patient outcomes. However, careful attention must be paid to issues of bias and equity, particularly in high-stakes critical care environments where rapid decisions impact patient outcomes.
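The sketch below reduces this kind of data-driven risk estimation to its simplest form: a model fitted to historical patients returns a probability for a new patient. All features, data, and coefficients are synthetic; this is a conceptual sketch, not a validated thrombosis risk tool.

```python
# A purely illustrative sketch of AI-derived risk estimation: a model trained
# on historical patients yields a probability of deep vein thrombosis (DVT)
# for a new patient. Everything here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Hypothetical standardized features: age, days of immobility, D-dimer level
X_hist = rng.normal(size=(1000, 3))
y_hist = (0.8 * X_hist[:, 1] + 0.6 * X_hist[:, 2]
          + rng.normal(scale=0.5, size=1000) > 0.7).astype(int)

model = LogisticRegression().fit(X_hist, y_hist)

new_patient = np.array([[0.5, 1.2, 0.9]])
risk = model.predict_proba(new_patient)[0, 1]
print(f"Estimated DVT risk: {risk:.1%}")  # supports, never replaces, clinical judgment
```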
Bias in artificial intelligence: understanding and mitigating risks
Definition and types of bias in artificial intelligence
Bias in artificial intelligence-based systems can negatively affect their performance and, consequently, patient outcomes.(42) Understanding these biases is crucial for improving AI's effectiveness in healthcare.
Bias can be defined as "any systematic error in the design, conduct, or analysis of a study".(43) Healthcare data are particularly susceptible to bias due to historical factors, implicit clinician bias, referral and admission disparities between groups, and diagnostic errors.(44) As AI decision support systems rely on existing literature and experimental results, biases in these data are directly reflected in AI models. Key types of bias include (a) data or information bias, (b) selection bias, and (c) measurement or procedural bias (Table 1).
The choice of algorithms and methods for data analysis can also introduce further biases. In the context of AI, there are three primary sources of bias: (a) knowledge bias, which is further divided into experimental bias, information reliability bias, limited knowledge bias, and shallow information bias; (b) processing bias, which is further divided into bias in the selected algorithm and bias in the knowledge used for feedback;(45) and (c) technology access bias(20) (Table 2).
Impact of biased artificial intelligence on patient care
Biased AI can negatively impact patient care.(46) A primary concern is the issue of generalizability, where AI systems trained on specific datasets perform poorly when applied to different or evolving clinical contexts.(47) This mismatch can lead to incorrect predictions, misdiagnoses, and wrong treatment recommendations. The impact is particularly pronounced for protected groups, such as racial and ethnic minorities, women, and older adults, who may be underrepresented in the training data.(46) When AI systems are not adequately trained on diverse populations, they may exacerbate existing health disparities by offering less accurate diagnoses, predicting worse outcomes, or recommending inappropriate treatments for these groups.(46) For instance, an AI model deployed to respond to patient messages, trained on general ambulatory patient data, was found to suggest weight loss for patients with a cancer diagnosis, as this is a standard recommendation for more prevalent conditions, such as diabetes and hypertension.(48) Similarly, a DL model developed by Han et al. to differentiate between malignant and benign skin lesions using clinical images showed different specificities for basal cell carcinoma, squamous cell carcinoma, and melanoma across two datasets, possibly due to the skin colors around the lesions. These findings highlighted that the diversity of the training dataset influenced the model's performance.(49)
The term "black box" describes the difficulty in understanding or explaining how an AI model processes inputs and generates outputs due to the complexity and opacity of its internal computations. This characteristic of many ML algorithms makes detecting and addressing biases within these systems challenging, raising significant safety concerns. In this context, even blinding data for some features that can exacerbate this gap, such as socioeconomic status and race may not be enough. Gichoya et al. demonstrated that AI can recognize the race of patients only using nothing more than their chest X-rays (0.981 - 0.983 AUROC for identifying black patients).(50) Biased AI can perpetuate existing healthcare disparities, as these systems might not perform equally well across different demographic groups.(42) For example, an algorithm was developed to predict hospital length of stay to help case managers increase patient throughput. However, the algorithm identified that patients from less affluent zip codes were more likely to have more extended hospital stays and, therefore, suggested they might not benefit from early discharge planning based on their address. The algorithm was never deployed because of this bias.(51)
Mitigating biases in artificial intelligence
Addressing bias in AI systems is essential to ensure equitable and accurate patient care. Key strategies include diverse and representative datasets, auditing and monitoring practices, transparency practices, diverse perspectives in AI development, and AI critical thinking.
Diverse and representative datasets
Ensuring that data collection encompasses diverse populations, especially those that are underrepresented, is critical for reducing bias.(46) However, curating large, representative datasets is expensive, making them scarce and often inaccessible.(46,52) Federated learning, a technique in which multiple decentralized institutions collaboratively train AI models on their local healthcare data, has been proposed to adapt AI to specific settings.(53) Still, the evidence of its effectiveness is mixed.(54)
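The sketch below outlines the core idea of federated averaging under strong simplifying assumptions: each "hospital" trains a model on its own synthetic data, and only the model weights, never patient records, are pooled centrally. Real deployments add secure aggregation, repeated communication rounds, and formal privacy safeguards.

```python
# A minimal sketch of federated averaging (FedAvg) across three hospitals.
# Only model parameters leave each institution; the data stay local.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
hospitals = []
for mu in (0.0, 0.3, -0.2):          # each site has a shifted patient population
    X = rng.normal(loc=mu, size=(200, 4))
    y = (X[:, 0] > mu).astype(int)   # synthetic local outcome
    hospitals.append((X, y))

local_models = []
for X_local, y_local in hospitals:
    m = SGDClassifier(loss="log_loss", random_state=0)
    m.fit(X_local, y_local)          # training happens inside the institution
    local_models.append(m)

# The central server averages the coefficients into one shared global model.
global_coef = np.mean([m.coef_ for m in local_models], axis=0)
global_intercept = np.mean([m.intercept_ for m in local_models], axis=0)
print(global_coef.shape, global_intercept)
```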
Nevertheless, when datasets are accessible to a broader community of researchers and developers, biases that might otherwise have been overlooked can be identified and potentially corrected. Open data initiatives also promote the development of more diverse and comprehensive AI models, benefiting the broader healthcare community.(55) Sharing datasets and algorithms openly encourages collaboration.(55)
Auditing and monitoring practices
Continuous monitoring of AI performance across different demographic groups allows a comprehensive understanding and measurement of the cumulative effects of potential biases in real-world scenarios.(56) Regular audits and quality assessments could enable timely interventions to correct biases and improve fairness. However, relying only on already established metrics, such as the area under the curve and accuracy, may be insufficient. Developing comprehensive evaluation frameworks that judge AI systems against diverse, patient-outcome-focused measures will be paramount.(57)
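A minimal sketch of such an audit, assuming synthetic predictions and demographic labels, is shown below: performance is computed per subgroup rather than only in aggregate, so that gaps between groups become visible.

```python
# A minimal sketch of a subgroup performance audit. Predictions, labels,
# and group membership are synthetic placeholders.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
df = pd.DataFrame({
    "y_true": rng.integers(0, 2, size=1000),         # observed outcomes
    "y_score": rng.uniform(size=1000),               # model risk scores
    "group": rng.choice(["A", "B", "C"], size=1000), # e.g., self-reported race
})

print(f"Overall AUROC: {roc_auc_score(df['y_true'], df['y_score']):.3f}")
for name, sub in df.groupby("group"):
    auc = roc_auc_score(sub["y_true"], sub["y_score"])
    print(f"Group {name}: AUROC {auc:.3f} (n={len(sub)})")
# Large gaps between groups should trigger investigation and, if needed,
# retraining or recalibration before continued deployment.
```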
Transparency practices
Transparency can be a crucial factor in mitigating biases in AI systems.(58) By making AI models’ development and training data more transparent, stakeholders can better evaluate these algorithms.(59) Transparency enables the investigation of biases that may be embedded in the training data or the model's design. This allows for continuous monitoring, evaluation, and improvement, helping to ensure that AI-driven decisions are fair across diverse patient populations.(60) However, transparency should not be confused with explainability. The former pertains to the ability to assess the development process of an algorithm, while the latter involves comprehending how the AI generates its outputs. Although explainable AI is an evolving and highly active area of research, it is important to note that increased explainability does not necessarily translate to improved outcomes in clinical practice. Research suggests that, in some cases, greater explainability might hinder the effectiveness of AI applications in healthcare.(61)
Diverse perspectives in artificial intelligence development
Incorporating diverse perspectives in AI development is crucial for creating fair and equitable systems.(62) One of the primary strategies for achieving this is to build interdisciplinary teams. These teams should include stakeholders from diverse fields, such as medicine, ethics, social sciences, and technology. The collaboration of professionals from different backgrounds ensures that the AI systems are designed with a comprehensive understanding of the ethical, social, and clinical implications, leading to more holistic and unbiased outcomes.(63)
Another critical component is engaging with all stakeholders, including patients, advocacy groups, and community representatives. By involving these groups in the process, developers can gain valuable insights into the needs and concerns of different populations. This engagement helps tailor AI systems to address real-world issues effectively and ensures that the voices of those directly impacted by these technologies are heard and considered.(64)
Artificial intelligence critical thinking
In the rapidly advancing field of healthcare, the integration of AI tools is becoming increasingly prevalent, but their adoption is not without significant risks. While critical thinking, including bias awareness,(65) is indispensable for healthcare practitioners to evaluate AI recommendations and apply them judiciously, it is far from a complete solution.(61) Relying on critical thinking alone risks oversimplifying the complex challenges that AI introduces. The potential for AI to perpetuate biases or make flawed recommendations necessitates a broader, more cautious approach. This includes the development of comprehensive safeguards, ongoing performance audits, and the establishment of ethical standards. Without these additional layers of protection, the use of AI in healthcare could do more harm than good, making it imperative that we remain vigilant as we integrate these technologies into clinical practice.
Other risks and challenges of artificial intelligence in healthcare
As AI becomes more integrated into healthcare systems, it brings a range of potential risks and challenges that must be monitored and managed to ensure patient safety, ethical integrity, and the efficacy of healthcare delivery.
Privacy
One concern is data privacy and security. Building effective AI systems requires the collection and processing of vast amounts of sensitive health data, followed by model training, building, and implementation.(66) All of these steps raise significant concerns about how data are stored, shared, and processed. The risks of data breaches, unauthorized access, and potential misuse of patient information are critical issues that must be addressed to maintain trust and safety.(67)
Transparency
The lack of transparency and explainability in AI models also poses a considerable challenge. Many AI algorithms, particularly deep learning models, operate as "black boxes," making it difficult to explain how certain decisions are made.(68) AI systems also rely on high-quality, accurate, comprehensive, and up-to-date data to deliver reliable outcomes. However, data in healthcare can be fragmented, inconsistent, and incomplete, leading to potential errors in AI-driven analysis and decision-making.(46) Ensuring that the data used for AI are of the highest quality is essential for successful implementation; meanwhile, ensuring that the data's source and composition are transparent is crucial for correctly applying the algorithms.
Fairness
Another significant challenge in AI is algorithmic bias and fairness. Artificial intelligence systems are only as good as the data they are trained on; if these data carry biases, AI can perpetuate and even exacerbate them.(69) For example, a study has shown that GPT-4 tends to stereotype demographic presentations in clinical vignettes, often generating differential diagnoses that reflect biased assumptions based on race, ethnicity, and gender.(70) This can lead to unequal treatment outcomes, in which certain demographic groups may receive inferior care due to skewed algorithms.(70)
Regulations
Regulating AI in healthcare is becoming increasingly critical. As AI technologies rapidly advance, their pace outruns existing regulatory frameworks, creating uncertainty about how these technologies should be governed.(71) In the United States, the FDA is responsible for overseeing AI-based medical devices, ensuring they meet safety and efficacy standards before they can be marketed.(31) In addition, the White House Executive Order on AI was created to provide a coordinated federal approach to AI governance.(72) In Europe, the EU AI Act represents a regulatory approach that classifies AI systems based on their risk level and sets requirements for high-risk applications, such as those in healthcare.(73) Still, many researchers worry that AI development is outpacing effective regulation, which might lead to loopholes and regulatory gaps.(74)
Automation complacency
Besides systemic bias, healthcare workers are susceptible to cognitive biases like automation complacency, which occurs when users become less vigilant in detecting errors due to the system's perceived reliability.(42) A study on the effect of integrating AI to assist radiologists found that less-experienced radiologists (based on years of experience, subspecialty in thoracic radiology, and experience with AI tools) do not consistently benefit more from AI assistance for chest X-ray interpretation. Moreover, AI errors significantly influenced radiologist performance, with inaccurate AI predictions negatively affecting diagnostic accuracy.(75)
Environmental impact
The development and deployment of AI, particularly of large models, have substantial environmental costs due to their reliance on energy-intensive data centers. These centers, which power AI computations, significantly contribute to greenhouse gas emissions, water usage, and pollution.(76,77) Notably, 77% of these data centers are located in high-income countries,(78,79) allowing these nations to benefit from advanced AI capabilities while offloading the environmental impact to other regions. This imbalance threatens global efforts to combat climate change and risks widening inequalities by disproportionately affecting low-income countries that are less equipped to handle the resulting environmental damage.(80)
Many AI models in healthcare are developed with immense energy consumption but do not provide commensurate benefits in clinical outcomes.(81)
While corporate and government actions are crucial, healthcare providers, researchers, and academic organizations also have roles to play in fostering sustainable AI practices.(82) They can use tools like carbon footprint calculators and environmental assessments to measure and mitigate the ecological impact of AI.(83)
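As one concrete example, the open-source CodeCarbon package can estimate the emissions of a model-training run; the sketch below assumes its EmissionsTracker interface and uses a toy workload as a stand-in for a real training pipeline.

```python
# A minimal sketch of estimating the carbon footprint of a training run with
# the open-source CodeCarbon package (pip install codecarbon). The random
# forest below is only a stand-in for a real healthcare AI pipeline.
from codecarbon import EmissionsTracker
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

tracker = EmissionsTracker(project_name="icu_model_training")
tracker.start()

X, y = make_classification(n_samples=20000, n_features=50, random_state=0)
RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

emissions_kg = tracker.stop()  # estimated kilograms of CO2-equivalent
print(f"Estimated emissions: {emissions_kg:.6f} kg CO2eq")
```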
CONCLUSION
Artificial intelligence holds transformative potential for critical care medicine, in which the complexity of patient data and the need for rapid, precise decision-making create an ideal environment for artificial intelligence applications. With its wealth of continuous monitoring data and the need for real-time clinical decisions, the intensive care unit represents a setting in which artificial intelligence could significantly enhance patient care through early warning systems, treatment optimization, and resource allocation. Recent implementations of artificial intelligence in critical care demonstrate the promise and challenges of translating these technologies into clinical practice.
However, there are numerous ways bias can be introduced in the artificial intelligence lifecycle, from task definition to implementation, particularly in critical care's complex and time-sensitive environment. The path forward requires diverse teams and careful planning at each step of the artificial intelligence lifecycle, including ensuring data quality and representativeness, rigorous local validation, and continuous monitoring of implemented models. In critical care specifically, in which decisions must often be made rapidly and with incomplete information, artificial intelligence tools must enhance rather than hinder clinical decision-making while maintaining equity across all patient populations. Only through such careful attention to bias and equity can we ensure that artificial intelligence fulfills its promise of improving patient outcomes.
REFERENCES
- 1 Muehlematter UJ, Daniore P, Vokinger KN. Approval of artificial intelligence and machine learning-based medical devices in the USA and Europe (2015-20): a comparative analysis. Lancet Digit Health. 2021;3(3):e195-203.
- 2 Krishnan G, Singh S, Pathania M, Gosavi S, Abhishek S, Parchani A, et al. Artificial intelligence in clinical medicine: catalyzing a sustainable global healthcare paradigm. Front Artif Intell. 2023;6:1227091.
- 3 Balagurunathan Y, Mitchell R, El Naqa I. Requirements and reliability of AI in the medical context. Phys Med. 2021;83:72-8.
- 4 Hinton G. Deep learning-A technology with the potential to transform health care. JAMA. 2018;320(11):1101-2.
- 5 Ching T, Himmelstein DS, Beaulieu-Jones BK, Kalinin AA, Do BT, Way GP, et al. Opportunities and obstacles for deep learning in biology and medicine. J R Soc Interface. 2018;15(141):20170387.
- 6 Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. 2019;25(1):44-56.
- 7 Benjamens S, Dhunnoo P, Meskó B. The state of artificial intelligence-based FDA-approved medical devices and algorithms: an online database. NPJ Digit Med. 2020;3(1):118.
- 8 Nagendran M, Chen Y, Lovejoy CA, Gordon AC, Komorowski M, Harvey H, et al. Artificial intelligence versus clinicians: systematic review of design, reporting standards, and claims of deep learning studies. BMJ. 2020;368:m689.
- 9 Gulshan V, Rajan RP, Widner K, Wu D, Wubbels P, Rhodes T, et al. Performance of a deep-learning algorithm vs manual grading for detecting diabetic retinopathy in India. JAMA Ophthalmol. 2019;137(9):987-93.
- 10 Xu Y, Liu X, Cao X, Huang C, Liu E, Qian S, et al. Artificial intelligence: a powerful paradigm for scientific research. Innovation (Camb). 2021;2(4):100179.
- 11 Basu K, Sinha R, Ong A, Basu T. Artificial intelligence: how is it changing medical sciences and its future? Indian J Dermatol. 2020;65(5):365-70.
- 12 Alowais SA, Alghamdi SS, Alsuhebany N, Alqahtani T, Alshaya AI, Almohareb SN, et al. Revolutionizing healthcare: the role of artificial intelligence in clinical practice. BMC Med Educ. 2023;23(1):689.
- 13 Rajkomar A, Hardt M, Howell MD, Corrado G, Chin MH. Ensuring fairness in machine learning to advance health equity. Ann Intern Med. 2018;169(12):866-72.
- 14 Velagapudi L, Mouchtouris N, Baldassari MP, Nauheim D, Khanna O, Saiegh FA, et al. Discrepancies in stroke distribution and dataset origin in machine learning for stroke. J Stroke Cerebrovasc Dis. 2021;30(7):105832.
- 15 Muzammil MA, Javid S, Afridi AK, Siddineni R, Shahabi M, Haseeb M, et al. Artificial intelligence-enhanced electrocardiography for accurate diagnosis and management of cardiovascular diseases. J Electrocardiol. 2024;83:30-40.
- 16 Adamson AS, Smith A. Machine learning and health care disparities in dermatology. JAMA Dermatol. 2018;154(11):1247-8.
- 17 Kamulegeya L, Bwanika J, Okello M, Rusoke D, Nassiwa F, Lubega W, et al. Using artificial intelligence on dermatology conditions in Uganda: a case for diversity in training data sets for machine learning. Afr Health Sci. 2023;23(2):753-63.
- 18 Charpignon ML, Byers J, Cabral S, Celi LA, Fernandes C, Gallifant J, et al. Critical bias in critical care devices. Crit Care Clin. 2023;39(4):795-813.
- 19 Wong A, Otles E, Donnelly JP, Krumm A, McCullough J, DeTroyer-Cooley O, et al. External validation of a widely implemented proprietary sepsis prediction model in hospitalized patients. JAMA Intern Med. 2021;181(8):1065-70.
- 20 Celi LA, Cellini J, Charpignon ML, Dee EC, Dernoncourt F, Eber R, et al.; for MIT Critical Data. Sources of bias in artificial intelligence that perpetuate healthcare disparities-A global review. PLOS Digit Health. 2022;1(3):e0000022.
- 21 Bi Q, Goodman KE, Kaminsky J, Lessler J. What is machine learning? A primer for the epidemiologist. Am J Epidemiol. 2019;188(12):2222-39.
- 22 Choi RY, Coyner AS, Kalpathy-Cramer J, Chiang MF, Campbell JP. Introduction to machine learning, neural networks, and deep learning. Transl Vis Sci Technol. 2020;9(2):14.
- 23 Yadav SS, Jadhav SM. Deep convolutional neural network based medical image classification for disease diagnosis. J Big Data. 2019;6(1):113.
- 24 Montesinos-López OA, Montesinos-López A, Pérez-Rodríguez P, Barrón-López JA, Martini JW, Fajardo-Flores SB, et al. A review of deep learning applications for genomic selection. BMC Genomics. 2021;22(1):19.
- 25 Koumakis L. Deep learning models in genomics; are we there yet? Comput Struct Biotechnol J. 2020;18:1466-73.
- 26 Yamashita R, Nishio M, Do RK, Togashi K. Convolutional neural networks: an overview and application in radiology. Insights Imaging. 2018;9(4):611-29.
- 27 Gao Y, Mahajan D, Uzuner Ö, Yetisgen M. Clinical natural language processing for secondary uses. J Biomed Inform. 2024;150:104596.
- 28 Holm S. Handle with care: assessing performance measures of medical AI for shared clinical decision-making. Bioethics. 2022;36(2):178-86.
- 29 Undey C. AI in process automation. SLAS Technol. 2021;26(1):1-2.
- 30 Lee JT, Moffett AT, Maliha G, Faraji Z, Kanter GP, Weissman GE. Analysis of devices authorized by the FDA for clinical decision support in critical care. JAMA Intern Med. 2023;183(12):1399-401.
- 31 Food and Drug Administration (FDA). Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices. U.S. Food and Drug Administration. [cited 2024 Jul 24]. Available from: https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-aiml-enabled-medical-devices
- 32 Food and Drug Administration (FDA). Premarket Notification 510(k). U.S. Food and Drug Administration. [cited 2024 Jul 24]. Available from: https://www.fda.gov/medical-devices/premarket-submissions-selecting-and-preparing-correct-submission/premarket-notification-510k
- 33 Food and Drug Administration (FDA). De Novo Classification Request. U.S. Food and Drug Administration. [cited 2024 Jul 24]. Available from: https://www.fda.gov/medical-devices/premarket-submissions-selecting-and-preparing-correct-submission/de-novo-classification-request
- 34 Food and Drug Administration (FDA). Device Classification Under Section 513(f)(2) (De Novo). Analytic For Hemodynamic Instability (AHI). U.S. Food and Drug Administration. [cited 2024 Nov 22]. Available from: https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfpmn/denovo.cfm?id=DEN200022
- 35 Food and Drug Administration (FDA). CLEWICU System (ClewICUServer and ClewICUnitor). 510(k) Premarket Notification. U.S. Food and Drug Administration. [cited 2024 Nov 22]. Available from: https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfpmn/pmn.cfm?ID=K200717
- 36 Food and Drug Administration (FDA). Tyto Stethoscope. 510(k) Premarket Notification. U.S. Food and Drug Administration. [cited 2024 Nov 22]. Available from: https://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfpmn/pmn.cfm?ID=K160401
- 37 Rajpurkar P, Chen E, Banerjee O, Topol EJ. AI in health and medicine. Nat Med. 2022;28(1):31-8.
- 38 McKinney SM, Sieniek M, Godbole V, Godwin J, Antropova N, Ashrafian H, et al. International evaluation of an AI system for breast cancer screening. Nature. 2020;577(7788):89-94.
- 39 Elendu C, Amaechi DC, Elendu TC, Jingwa KA, Okoye OK, John Okah M, et al. Ethical implications of AI and robotics in healthcare: a review. Medicine (Baltimore). 2023;102(50):e36671.
- 40 Vozna A. AI Reducing Healthcare Costs: Actual Numbers from 7 Startups. Glorium Technologies. [cited 2024 Aug 28]. Available from: https://gloriumtech.com/ai-reducing-healthcare-costs/
- 41 Capecchi M, Abbattista M, Ciavarella A, Uhr M, Novembrino C, Martinelli I. Anticoagulant therapy in patients with antiphospholipid syndrome. J Clin Med. 2022;11(23):6984.
- 42 Challen R, Denny J, Pitt M, Gompels L, Edwards T, Tsaneva-Atanasova K. Artificial intelligence, bias and clinical safety. BMJ Qual Saf. 2019;28(3):231-7.
- 43 Althubaiti A. Information bias in health research: definition, pitfalls, and adjustment methods. J Multidiscip Healthc. 2016;9:211-7.
- 44 Landon BE, Onnela JP, Meneades L, O’Malley AJ, Keating NL. Assessment of racial disparities in primary care physician specialty referrals. JAMA Netw Open. 2021;4(1):e2029238.
- 45 Gurupur V, Wan TT. Inherent bias in artificial intelligence-based decision support systems for healthcare. Medicina (Kaunas). 2020;56(3):141.
- 46 Gichoya JW, Thomas K, Celi LA, Safdar N, Banerjee I, Banja JD, et al. AI pitfalls and what not to do: mitigating bias in AI. Br J Radiol. 2023;96(1150):20230023.
- 47 Rajpurkar P, Lungren MP. the current and future state of AI interpretation of medical images. N Engl J Med. 2023;388(21):1981-90.
- 48 Tai-Seale M, Baxter SL, Vaida F, Walker A, Sitapati AM, Osborne C, et al. AI-generated draft replies integrated into health records and physicians’ electronic communication. JAMA Netw Open. 2024;7(4):e246565.
- 49 Han SS, Kim MS, Lim W, Park GH, Park I, Chang SE. Classification of the clinical images for benign and malignant cutaneous tumors using a deep learning algorithm. J Invest Dermatol. 2018;138(7):1529-38.
- 50 Gichoya JW, Banerjee I, Bhimireddy AR, Burns JL, Celi LA, Chen LC, et al. AI recognition of patient race in medical imaging: a modelling study. Lancet Digit Health. 2022;4(6):e406-14.
- 51 Nordling L. A fairer way forward for AI in health care. Nature. 2019;573(7775):S103-5.
- 52 Futoma J, Simons M, Panch T, Doshi-Velez F, Celi LA. The myth of generalisability in clinical research and machine learning in health care. Lancet Digit Health. 2020;2(9):e489-92.
- 53 van Genderen ME, Cecconi M, Jung C. Federated data access and federated learning: improved data sharing, AI model development, and learning in intensive care. Intensive Care Med. 2024;50(6):974-7.
- 54 Sauer CM, Pucher G, Celi LA. Why federated learning will do little to overcome the deeply embedded biases in clinical medicine. Intensive Care Med. 2024;50(8):1390-2.
- 55 Charpignon ML, Celi LA, Cobanaj M, Eber R, Fiske A, Gallifant J, et al. Diversity and inclusion: A hidden additional benefit of Open Data. PLOS Digit Health. 2024;3(7):e0000486.
- 56 Abràmoff MD, Tarver ME, Loyo-Berrios N, Trujillo S, Char D, Obermeyer Z, et al.; Foundational Principles of Ophthalmic Imaging and Algorithmic Interpretation Working Group of the Collaborative Community for Ophthalmic Imaging Foundation, Washington, D.C. Considerations for addressing bias in artificial intelligence for health equity. NPJ Digit Med. 2023;6(1):170.
- 57 Goodman KE, Yi PH, Morgan DJ. AI-generated clinical summaries require more than accuracy. JAMA. 2024;331(8):637-8.
- 58 Daneshjou R, Smith MP, Sun MD, Rotemberg V, Zou J. Lack of transparency and potential bias in artificial intelligence data sets and algorithms: a scoping review. JAMA Dermatol. 2021;157(11):1362-9.
- 59 Beam AL, Manrai AK, Ghassemi M. Challenges to the reproducibility of machine learning models in health care. JAMA. 2020;323(4):305-6.
- 60 Feng J, Phillips RV, Malenica I, Bishara A, Hubbard AE, Celi LA, et al. Clinical artificial intelligence quality improvement: towards continual monitoring and updating of AI algorithms in healthcare. NPJ Digit Med. 2022;5(1):66.
- 61 Ghassemi M, Oakden-Rayner L, Beam AL. The false hope of current approaches to explainable artificial intelligence in health care. Lancet Digit Health. 2021;3(11):e745-50.
- 62 Baumgartner R, Arora P, Bath C, Burljaev D, Ciereszko K, Custers B, et al. Fair and equitable AI in biomedical research and healthcare: social science perspectives. Artif Intell Med. 2023;144:102658.
- 63 Kontiainen L, Koulu R, Sankari S. Research agenda for algorithmic fairness studies: access to justice lessons for interdisciplinary research. Front Artif Intell. 2022;5:882134.
- 64 Morley J, Machado CC, Burr C, Cowls J, Joshi I, Taddeo M, et al. The ethics of AI in health care: a mapping review. Soc Sci Med. 2020;260:113172.
- 65 Charow R, Jeyakumar T, Younus S, Dolatabadi E, Salhia M, Al-Mouaswas D, et al. Artificial intelligence education programs for health care professionals: scoping review. JMIR Med Educ. 2021;7(4):e31043.
- 66 Chen Y, Esmaeilzadeh P. Generative AI in medical practice: in-depth exploration of privacy and security challenges. J Med Internet Res. 2024;26:e53008.
- 67 Li J. Security implications of AI chatbots in health care. J Med Internet Res. 2023;25:e47551.
- 68 Adadi A, Berrada M. Peeking inside the black-box: a survey on explainable artificial intelligence (XAI). IEEE Access. 2018;6:52138-60.
- 69 Yu AC, Eng J. One algorithm may not fit all: how selection bias affects machine learning performance. Radiographics. 2020;40(7):1932-7.
- 70 Zack T, Lehman E, Suzgun M, Rodriguez JA, Celi LA, Gichoya J, et al. Assessing the potential of GPT-4 to perpetuate racial and gender biases in health care: a model evaluation study. Lancet Digit Health. 2024;6(1):e12-22.
- 71 Terry N. Of regulating healthcare AI and robots. Yale J L Tech. 2019;21(Special issue):133-90.
- 72 Biden JR. Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The White House. [cited 2024 Aug 25]. Available from: https://www.presidency.ucsb.edu/documents/executive-order-14110-safe-secure-and-trustworthy-development-and-use-artificial
- 73 European Parliament. EU AI Act: first regulation on artificial intelligence. [cited 2024 Aug 25]. Available from: https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence
- 74 Ebrahimian S, Kalra MK, Agarwal S, Bizzo BC, Elkholy M, Wald C, et al. FDA-regulated AI algorithms: trends, strengths, and gaps of validation studies. Acad Radiol. 2022;29(4):559-66.
- 75 Yu F, Moehring A, Banerjee O, Salz T, Agarwal N, Rajpurkar P. Heterogeneity and predictors of the effects of AI assistance on radiologists. Nat Med. 2024;30(3):837-49.
- 76 World Health Organization (WHO). Drinking-water. Geneva: WHO; c2025 [cited 2024 Sep 2]. Available from: https://www.who.int/news-room/fact-sheets/detail/drinking-water
- 77 Lenzen M, Malik A, Li M, Fry J, Weisz H, Pichler PP, et al. The environmental footprint of health care: a global assessment. Lancet Planet Health. 2020;4(7):e271-9.
- 78 Daigle BR. Data Centers Around the World: A Quick Look. Executive Briefings on Trade, May 2021 [cited 2024 Sep 2]. Available from: https://www.usitc.gov/publications/332/executive_briefings/ebot_data_centers_around_the_world.pdf
- 79 WorldData.info. Member states of the OECD. [cited 2024 Sep 2]. Available from: https://www.worlddata.info/alliances/oecd.php
- 80 Leslie D, Mazumder A, Peppin A, Wolters MK, Hagerty A. Does "AI" stand for augmenting inequality in the era of covid-19 healthcare? BMJ. 2021;372:n304.
- 81 Goetz L, Seedat N, Vandersluis R, van der Schaar M. Generalization-a key challenge for responsible AI in patient-facing clinical applications. NPJ Digit Med. 2024;7(1):126.
- 82 Alami H, Rivard L, Lehoux P, Hoffman SJ, Cadeddu SB, Savoldelli M, et al. Artificial intelligence in health care: laying the Foundation for Responsible, sustainable, and inclusive innovation in low- and middle-income countries. Global Health. 2020;16(1):52.
- 83 Gibney E. How to shrink AI's ballooning carbon footprint. Nature. 2022;607(7920):648.
Edited by
Responsible Editor: Bruno Adler Maccagnan Pinheiro Besen https://orcid.org/0000-0002-3516-9696
Publication Dates
Publication in this collection: 18 Aug 2025
Date of issue: 2025
History
Received: 27 Nov 2024
Accepted: 26 Jan 2025