Application of Artificial Intelligence in the Health Care Safety Context: Opportunities and Challenges

Samer Ellahham, MD, Nour Ellahham, and Mecit Can Emre Simsekler, PhD

There is a growing awareness that artificial intelligence (AI) has been used to analyze complicated and big data and provide outputs without human input in various health care contexts, such as bioinformatics, genomics, and image analysis. Although this technology offers opportunities in diagnosis and treatment processes, there may still be challenges and pitfalls related to various safety concerns. To shed light on such opportunities and challenges, this article reviews AI in health care along with its implications for safety. To provide safer technology through AI, this study shows that safe design, safety reserves, safe fail, and procedural safeguards are key strategies, whereas cost, risk, and uncertainty should be identified for all potential technical systems. It is also suggested that clear guidance and protocols should be defined and shared with all stakeholders to develop and adopt safer AI applications in the health care context.

Artificial intelligence (AI) is revolutionizing health care. The primary aim of AI applications in health care is to analyze links between prevention or treatment approaches and patient outcomes. AI applications can save cost and time in the diagnosis and management of disease states, thus making health care more effective and efficient. AI enables fast and comprehensive analysis of huge data sets, supporting decision making with speed and accuracy. AI is largely described as being of 2 types: virtual and physical.

Virtual AI includes informatics from deep learning applications, such as electronic health records (EHRs) and image processing, to assist physicians with diagnosis and management of disease states. Physical AI includes mechanical advances, such as robotics in surgery and physical rehabilitation.1

Algorithms have been developed that are trained on data sets for statistical applications, enabling accurate data processing. These principles underlie machine learning (ML), which enables computers to make successful predictions using past experiences.2,3
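As a purely illustrative sketch (not drawn from this article), the following Python fragment shows the ML idea just described: a model is fitted to "past experience" and then used to predict outcomes for unseen cases. A scikit-learn workflow is assumed, and the patient data are entirely synthetic and hypothetical.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical past experience: 500 patients, 3 numeric features
# (for example, age, a laboratory value, a risk score) and a binary outcome.
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression()
model.fit(X_train, y_train)                                # learn from past cases
print("held-out accuracy:", model.score(X_test, y_test))   # predict unseen cases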

Although both AI and ML can provide these advances, such technology may also raise safety concerns, which can cause serious issues for patients and all other health care stakeholders. Data privacy and security is one such concern, because most AI applications rely on a huge volume of data to make better decisions. Furthermore, ML systems usually learn from and improve with data that are often personal and sensitive, which puts them at greater risk of serious issues such as identity theft and data breaches. AI may also be associated with low prediction accuracy, which raises safety concerns. For instance, convolutional neural networks (CNNs) are trained and validated on data sets from clinical settings that may not translate well to a larger population; in one study of skin lesion surveillance for the detection of skin cancer, for example, lesions in the general population may be more diverse than those in the training data.4 Therefore, such an AI system may make false or inaccurate predictions. To address these potential issues, the research team presents an overview of the implications of AI and ML for the health care safety context. Furthermore, the team discusses the opportunities and challenges for the development and safe deployment of AI in health care.
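The generalization concern noted above can be illustrated with a small, hypothetical experiment (this is not the cited study, and all data below are synthetic): a classifier trained on a narrow clinical cohort performs well on similar cases but loses accuracy on a broader, more diverse population whose cases fall outside the patterns it learned.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def cohort(n, loc, scale):
    # Hypothetical single-feature cohort; the "true" rule is nonlinear (|x| > 1.5).
    X = rng.normal(loc=loc, scale=scale, size=(n, 1))
    y = (np.abs(X[:, 0]) > 1.5).astype(int)
    return X, y

# Narrow clinical cohort: feature values are mostly positive, so the true rule
# looks like a simple threshold and a linear model learns it well.
X_clinic, y_clinic = cohort(2000, loc=1.5, scale=1.0)
model = LogisticRegression(max_iter=1000).fit(X_clinic, y_clinic)
print("internal validation accuracy:", model.score(*cohort(2000, loc=1.5, scale=1.0)))

# Broader, more diverse population: strongly negative values also occur, where the
# learned threshold gives the wrong answer, so measured accuracy drops.
print("external population accuracy:", model.score(*cohort(2000, loc=0.0, scale=2.0)))

A model that appears accurate on internal validation may therefore still require evaluation on data representative of the intended population before deployment.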