Integrating AI Into Healthcare

Responsible AI can automate routine tasks that burden healthcare professionals and assist in analyzing large datasets.


Artificial intelligence (AI) has transcended science fiction to become a fundamental part of our daily lives. In fields like medicine, we've witnessed AI learning in ways that mimic human cognition, particularly in processing immense datasets at remarkable speeds.

While AI algorithms often surpass humans in data processing capabilities, they lack essential human qualities such as empathy and creativity, which are integral to nuanced decision-making in fields like healthcare. The ultimate aim of AI is to replicate human behavior and perform tasks traditionally executed by humans, but there are significant questions about the feasibility and ethical implications of that aim, especially in domains where responsible AI practices are crucial.

A report published by McKinsey in early 2023 suggests AI could automate up to 30% of work hours for U.S. employees by 2030, with a more moderate impact expected in healthcare. The report indicates that allied health professionals may see a 4% to 20% increase in automated tasks, while other healthcare professionals could see up to 18% of their work automated by 2030. Rather than view AI as a replacement for human expertise, we should see it as a complement: a synergy between humans and computers that leverages the strengths of both.

Many, myself included, believe that the optimal interaction between humans and AI in healthcare involves a blend of human expertise and AI augmentation. This balance can automate routine tasks that often burden healthcare professionals, such as electronic medical record (EMR) documentation, administrative reporting and even triaging radiology scans. Additionally, AI can assist in analyzing large datasets, providing valuable insights for physician oversight and decision-making.
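
To make that division of labor concrete, the sketch below shows, in simplified and hypothetical terms, how an AI urgency score might reorder a radiology worklist while every study still goes to a clinician for a read. The scores, threshold and field names are illustrative assumptions, not a description of any particular product.

```python
# Minimal sketch of a human-in-the-loop triage queue (illustrative only).
# The urgency scores, threshold and field names are hypothetical assumptions,
# not part of any specific clinical system.

from dataclasses import dataclass

@dataclass
class Study:
    study_id: str
    ai_urgency_score: float  # 0.0 (routine) to 1.0 (critical), from a hypothetical model

def triage(studies: list[Study], review_threshold: float = 0.7) -> list[Study]:
    """Order the radiologist's worklist so AI-flagged studies surface first.

    The AI only reorders the queue; every study is still read by a clinician.
    """
    flagged = [s for s in studies if s.ai_urgency_score >= review_threshold]
    routine = [s for s in studies if s.ai_urgency_score < review_threshold]
    # Highest-scoring flagged studies first, then the routine backlog.
    return sorted(flagged, key=lambda s: s.ai_urgency_score, reverse=True) + routine

if __name__ == "__main__":
    worklist = [Study("CT-001", 0.92), Study("CT-002", 0.15), Study("CT-003", 0.78)]
    for study in triage(worklist):
        print(study.study_id, study.ai_urgency_score)
```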

See also: Streamlining Medical Record Reviews Via AI

AI stands as a transformative force, offering significant advancements in the operation of medical devices and diagnostic capabilities. Deep learning algorithms, for instance, have demonstrated the ability to interpret CT scans at a pace far exceeding human capacity. 

Despite its immense potential, AI in medicine encounters several hurdles that warrant careful consideration. Privacy concerns loom large, as the use of patient data to train and run AI algorithms raises questions regarding data security and patient confidentiality. Moreover, biases embedded within AI algorithms pose a significant ethical challenge, as they can perpetuate or exacerbate existing disparities in healthcare delivery.

One of the most pressing issues surrounding AI in healthcare is the lack of comprehensive regulatory oversight. Unlike medical devices or pharmaceuticals, AI software is dynamic and continuously evolving, making it challenging for regulatory bodies such as the FDA to monitor and oversee effectively. As AI technology advances rapidly, regulatory frameworks struggle to keep pace, resulting in a regulatory landscape that is fragmented and often inadequate. 

In response to these challenges, proposals for public-private assurance labs have emerged. These labs would serve as independent entities tasked with assessing the safety, efficacy and ethical implications of AI applications in healthcare.

My journey into the realm of AI was marked by collaboration and a deep dive into the complexities of medical diagnostics. Teaming up with experts from Harvard Medical School, we embarked on an ambitious project to integrate AI with surface electromyography (EMG) readings, aiming to enhance diagnostic accuracy and efficiency. Initially, the allure of AI lay in its potential to streamline the interpretation of surface EMG data, a task traditionally requiring specialized expertise. However, as our endeavor progressed, we encountered the intricate nature of AI and its application in medical diagnostics. 

Surface EMG interpretation involves a multitude of variables and considerations, which we needed to break down into discrete steps. From analyzing muscle activity to interpreting data points, each step in the process presented unique challenges that AI alone struggled to overcome. Human guidance of the AI process proved necessary for better long-term outcomes.

The complexity inherent in muscle activity analysis necessitated a comprehensive understanding of various factors, including the muscle groups evaluated, movement expectations and spatial-temporal and functional integration. It became evident that AI alone, while powerful, lacked the nuanced insight and contextual understanding inherent in human decision-making.
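
As a rough illustration of what those discrete steps might look like, the hypothetical sketch below separates feature extraction from a preliminary automated read and routes every result to clinician review. The feature choices, threshold and simulated signal are assumptions made for illustration; they do not reflect the actual algorithms we developed.

```python
# Illustrative sketch of breaking surface EMG analysis into discrete steps,
# with an explicit clinician-review gate. Feature choices and thresholds are
# hypothetical and greatly simplified.

import numpy as np

def extract_features(emg_signal: np.ndarray, sampling_rate_hz: int) -> dict:
    """Step 1: reduce a raw surface EMG trace to simple summary features."""
    rectified = np.abs(emg_signal)
    return {
        "mean_amplitude": float(rectified.mean()),
        "peak_amplitude": float(rectified.max()),
        "duration_s": len(emg_signal) / sampling_rate_hz,
    }

def preliminary_interpretation(features: dict, amplitude_threshold: float = 0.5) -> dict:
    """Step 2: a crude automated read that only proposes a finding."""
    elevated = features["mean_amplitude"] > amplitude_threshold
    return {
        "proposed_finding": "elevated activity" if elevated else "within expected range",
        "needs_clinician_review": True,  # Step 3: every result is routed to a human
    }

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    simulated_trace = rng.normal(0.0, 0.4, size=2000)  # placeholder signal, not real data
    features = extract_features(simulated_trace, sampling_rate_hz=1000)
    print(preliminary_interpretation(features))
```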

As we delved deeper into the nuances of AI integration, we recognized the critical role of human guidance in the process. Unlike conventional algorithms, which may operate within predefined parameters, AI in the medical domain demands monitoring and refinement. Factors such as medical history, demographics and individual characteristics must be meticulously accounted for to ensure accurate diagnosis and treatment recommendations. Achieving a seamless integration of AI into medical practice requires not only technological prowess but also human expertise to navigate the intricacies of patient care. 
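
The monitoring and refinement mentioned above can start with something as simple as tracking how often clinicians agree with the AI's proposed findings and flagging sustained drops. The sketch below is a hypothetical illustration; the window size, threshold and example data are invented.

```python
# Hypothetical sketch of post-deployment monitoring: track how often clinicians
# agree with the AI's proposed findings and flag the model for refinement when
# agreement drops. Window size, threshold and data are invented.

from collections import deque

class AgreementMonitor:
    def __init__(self, window: int = 100, alert_threshold: float = 0.85):
        self.recent = deque(maxlen=window)      # rolling window of recent cases
        self.alert_threshold = alert_threshold  # minimum acceptable agreement rate

    def record(self, ai_finding: str, clinician_finding: str) -> None:
        self.recent.append(ai_finding == clinician_finding)

    def agreement_rate(self) -> float:
        return sum(self.recent) / len(self.recent) if self.recent else 1.0

    def needs_refinement(self) -> bool:
        # A sustained drop in agreement suggests the model needs human review and retraining.
        return len(self.recent) == self.recent.maxlen and self.agreement_rate() < self.alert_threshold

if __name__ == "__main__":
    monitor = AgreementMonitor(window=5, alert_threshold=0.8)
    for ai, clinician in [("normal", "normal"), ("abnormal", "normal"), ("normal", "normal"),
                          ("abnormal", "abnormal"), ("normal", "abnormal")]:
        monitor.record(ai, clinician)
    print(monitor.agreement_rate(), monitor.needs_refinement())
```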

Our journey with AI underscored the importance of recognizing its limitations and the indispensable role of human involvement in shaping its evolution. This early exposure to AI in the context of musculoskeletal (MSK) conditions underscored the intricate nature of developing AI solutions in medicine. While AI holds immense promise in revolutionizing medical diagnostics, its efficacy ultimately hinges on the quality of data inputted and the oversight provided by human experts. 

Our AI integration road map is grounded in a comprehensive understanding of the complexities inherent in EMG integration and medical indications. Collaborating with experts in these fields, we prioritize a phased approach to algorithm development, recognizing the need for iterative refinement. 

See also: Data Science Is Transforming Public Health

In my experience, the choice between deploying AI and relying on human expertise necessitates a thorough consideration of unintended consequences. While AI presents vast potential, its reliance on training data introduces the risk of bias, a factor that often goes unnoticed in decision-making. Moreover, AI adapts more slowly to unanticipated change and lacks the imaginative and innovative capacities intrinsic to human cognition. Humans, on the other hand, possess a distinctive aptitude for exercising discernment, multitasking proficiently and comprehending information in nuanced ways beyond the capabilities of machines.

The integration of AI into healthcare holds immense potential to improve efficiency and outcomes. However, it must be approached with caution and a keen awareness of the ethical considerations involved. By embracing a collaborative approach that combines human expertise with AI augmentation, we can harness the full potential of technology while prioritizing patient care and safety.


MaryRose Reaston


Dr. MaryRose Reaston is the co-founder and CEO of Segen-Health. She is an expert in diagnostic techniques for the evaluation and management of soft tissue injuries.
