Since ChatGPT burst into our collective mind a few months ago, it has generated incredible hype about AI. This article in Fast Company imagines a future where Artificial Intelligence will solve all of healthcare’s problems. It will make medical errors, health inequities, and workforce shortages bad memories of an antiquated past. The article does not bother to pay even lip service to the potential unintended consequences of AI in healthcare.
Everything has unintended consequences. At the risk of hearing, “Okay, boomer,” even though I am not one, I want to hear about the unintended consequences of AI in healthcare. And I want to hear how to mitigate them. Otherwise, we will end up with another technology like the Electronic Health Record (EHR). The EHR is an unqualified boon for data collectors – whether in the insurance industry or the scientific research community – but it has caused endless frustration for clinicians.
Doctors either work on their notes while the patient is still in the exam room or spend hours on them afterward. Either way, they end up looking at a screen more than at the patient. Physicians working in both outpatient and hospital settings know this worsens the quality of their interactions with patients. With almost all clinicians now working in EHRs, in a few years no one will remember what it was like when doctors and nurses gave patients more than a cursory look.
EHRs are not the only example of unintended consequences in healthcare. Our current opioid epidemic results from the entire medical system trying to improve pain management without considering the potential harms of the tool it reached for. The fentanyl epidemic results from that same system abruptly stopping opioid prescriptions for a generation of patients addicted to them without simultaneously building a robust system of addiction care. The liberal use of antibiotics – a revolutionary tool – is leading us toward a future where superbugs could be the rule rather than the exception.
Three things concern me the most about AI in healthcare:
- Loss of human touch
- Amplifying bias
- Ethical muddying
Losing The Human Touch
Every article about AI in healthcare salivates at potential efficiency gains. Healthcare functions between mission and margin – the patient’s need vs. the business’s profitability. We must forever hold this tension taut without resolving it in favor of either margin or mission. Our failure to realize this is at the root of our healthcare challenges.
Efficiency is a business need, not a patient need. Scanning the healthcare landscape, I don’t see much reason to be hopeful that the corporate entities that already employ most doctors will soon say, “AI has made us incredibly efficient at repetitive tasks. You don’t need to see more patients. Just spend more time with the patients you currently have.”
AI could help by responding to patient messages sent to their care team through the patient portal. But as AI integrates into the EHR, most patients will never know whether a real person even saw their message – or, if someone did, whether that person cares about them as much as the AI-generated, empathetically worded response suggests.
Amplifying Bias
AI algorithms are only as good as the data they are trained on. If the data used to train AI models is biased, algorithms will perpetuate and amplify existing healthcare disparities. This could lead to biased diagnoses and unequal treatment recommendations. Existing inequalities in care would worsen.
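To make the mechanism concrete, here is a minimal, purely illustrative sketch in Python using synthetic data – none of the numbers, features, or groups come from any real clinical source. It assumes a common real-world pattern: an under-served group is both smaller in the training data and more likely to have its disease go undiagnosed, so its training labels are noisier. A model trained on that data learns to call the disease less often in that group, and its false-negative rate for that group rises even though the underlying disease process is identical.

```python
# Purely illustrative sketch with synthetic data -- not a real clinical model.
# It shows how noisy, under-diagnosed labels in a smaller patient group can
# teach a model to miss disease more often in that group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, group_flag, label_noise):
    """Simulate one group: a single risk score drives true disease status."""
    risk = rng.normal(size=n)
    disease = (risk + rng.normal(scale=0.5, size=n)) > 0.5
    # Under-diagnosis in the training data: some true cases are recorded healthy.
    missed = rng.random(n) < label_noise
    recorded = disease & ~missed
    features = np.column_stack([risk, np.full(n, group_flag)])
    return features, disease, recorded

# Group A: large and well documented. Group B: small, with many missed diagnoses.
Xa, true_a, recorded_a = make_group(5000, group_flag=0, label_noise=0.05)
Xb, true_b, recorded_b = make_group(500, group_flag=1, label_noise=0.30)

model = LogisticRegression()
model.fit(np.vstack([Xa, Xb]), np.concatenate([recorded_a, recorded_b]))

def false_negative_rate(X, truth):
    predicted = model.predict(X).astype(bool)
    return ((~predicted) & truth).sum() / truth.sum()

# The model misses real disease more often in the under-represented group,
# even though both groups share the exact same underlying disease process.
print("False-negative rate, group A:", round(false_negative_rate(Xa, true_a), 3))
print("False-negative rate, group B:", round(false_negative_rate(Xb, true_b), 3))
```

The point of the toy example is that the model never sees anything labeled “bias”; it simply learns the gaps already baked into the records, and then applies them at scale.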
Ethical Muddying
Who should be held accountable if an AI algorithm makes a medical error? Are the builders of the algorithm at fault? The health system that purchased the model and implemented it throughout its hospitals and clinics? The doctor who used the model to help deliver care more efficiently?
AI models improve by continuously training on vast amounts of data. What are the ethical implications of using patient data for things that benefit the margin more than the mission?
I don’t know how to solve these challenges, but I want to see as many articles exploring them and ways to manage them as there are articles touting the fantastic potential of AI in healthcare.