How AI Algorithms Are Influencing Healthcare Treatment
July 2, 2020

Artificial intelligence (AI) algorithms show great promise in many industries. Their performance in healthcare, however, is uneven: while some image-recognition algorithms excel at identifying problem areas in radiological scans, other AI systems have been found to harbor racial or socioeconomic biases.
Experts say that, although the intent is ultimately to help patients, these AI-powered medical applications can entrench bias and amplify the damage of their errors. The consequences for individual patients can be severe and long-lasting, and when a flawed system is deployed across an entire community, the harm compounds.
The Error of Our Ways
Numerous hospitals and government agencies have invested in AI technology to improve the medical experience for both providers and patients. These algorithms are applied in a variety of ways, such as determining which patients need prioritized care or figuring out how to allocate benefits.
In Arkansas, the state health department started using AI in 2016 to manage healthcare benefits for patients on a state disability program. The program helped some of the state's neediest residents stay afloat through monetary benefits and reduced payments. Through it, a patient could have Arkansas pay for caregiver visits so that they could remain at home rather than move to a full-time care facility.
Before the AI was introduced, a nurse would visit the patient at home and determine case severity by calculating how many hours of weekly caregiver support the patient needed. The algorithm, by contrast, calculated the weekly hours from a variety of inputs about the patient's health problems, medical history, and functional abilities. Many people on the program saw their assigned hours drastically cut, and it later turned out that the algorithm had made errors in several of the cases where hours were reduced.
Legal Aid of Arkansas filed a lawsuit against the state for using the AI to determine hours without human oversight. The investigation revealed that the tool had failed to account for major conditions such as cerebral palsy and diabetes, conditions that together justified hundreds of care hours for affected patients, as the sketch below illustrates.
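To see how an omission like that can silently shrink a benefit calculation, consider this minimal sketch. It is not Arkansas's actual assessment algorithm; the condition names, per-condition weights, and base-hours figure are all invented for illustration.

```python
# Hypothetical sketch of the failure mode -- NOT Arkansas's actual
# assessment algorithm. All condition names and weights are invented.
WEEKLY_HOURS_PER_CONDITION = {
    "mobility_impairment": 8,
    "requires_feeding_assistance": 10,
    "cognitive_decline": 6,
    # "cerebral_palsy" and "diabetes" are missing from the table,
    # so patients with those conditions silently score zero hours
    # for them -- the kind of omission the lawsuit uncovered.
}

def assess_weekly_hours(conditions: list[str], base_hours: int = 4) -> int:
    """Sum per-condition care hours on top of a base allotment."""
    return base_hours + sum(
        WEEKLY_HOURS_PER_CONDITION.get(c, 0)  # unknown conditions add 0
        for c in conditions
    )

# A patient whose main diagnosis is absent from the table is assessed
# as needing far fewer hours than a nurse would likely assign.
print(assess_weekly_hours(["cerebral_palsy", "mobility_impairment"]))    # 12
print(assess_weekly_hours(["mobility_impairment", "cognitive_decline"]))  # 18
```

Note that the undercount produces no error or warning; the only way to catch it is a human who knows what the output should roughly look like.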
Black-Box Algorithms
For patients, it was impossible to challenge the algorithm's calculations because the decisions weren't transparent. Arkansas moved to a new algorithm in 2019, but the change cut roughly one-third of patients from eligibility entirely.
Arkansas isn't alone. Several other states have been burned by AI algorithms they thought would save them money and time, and some have likewise been hit with lawsuits from affected patients.
In 2011, Idaho adopted an algorithm similar to Arkansas's to help the state calculate budgets for in-home care. Some patients saw their benefits drop by as much as 42%.
Because the state kept the formula hidden as a "trade secret," due process was effectively impossible for patients. Whenever a patient challenged the algorithm's output, an official would investigate and conclude that it was beyond their "authority and expertise to question the quality of this result."
An investigation by the ACLU of Idaho found that the state's data was so flawed that the state itself had immediately thrown it out, yet it proceeded with the program anyway. According to the ACLU, the result was a program built on useless data that produced unfounded decisions about care. Eventually, the state changed the system.
Deeply Hidden Biases
Even when an algorithm works as designed, bias can still infiltrate its results. In 2019, a study by Ziad Obermeyer and Sendhil Mullainathan, co-founders of the Chicago-based Nightingale Project (not related to Google's Project Nightingale), showed that an algorithm used widely in healthcare produced racially biased results. It affected millions of patients, and black patients were disproportionately underserved.
The algorithm estimated which patients had the most complex healthcare needs and would therefore benefit most from increased medical intervention and attention. When the researchers reported their findings to Optum, the company that developed the tool, Optum ran its own tests and reached the same conclusion.
The model used inputs like the historical cost of caring for a patient, but it did not include race as a factor (perhaps in an effort to be color-blind). The problem: black patients have historically had less money spent on their medical needs than white patients with the same illnesses, whether because they contend with discrimination from their doctors, have had less access to care facilities, or disproportionately live in poverty. An algorithm trained to predict cost therefore systematically underestimates how sick black patients are.
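The mechanism is easy to reproduce. The following minimal sketch is not Optum's actual model; the group labels, the 0.7 spending ratio, and the top-10% cutoff are illustrative assumptions. It simulates two groups with identical health needs but unequal historical spending, then flags patients for extra care by predicted cost.

```python
# Minimal sketch of cost-as-proxy bias -- NOT Optum's actual model.
# The 0.7 spending ratio and top-10% cutoff are invented assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# True health need is identically distributed in both groups.
group = rng.integers(0, 2, n)                  # 0 = group A, 1 = group B
need = rng.gamma(shape=2.0, scale=1.0, size=n)

# Historical spending: group B receives less care for the same need.
spend = need * np.where(group == 1, 0.7, 1.0)

# A "race-blind" program flags the top 10% for extra care.
flag_by_cost = spend >= np.quantile(spend, 0.90)   # what the proxy does
flag_by_need = need >= np.quantile(need, 0.90)     # what was intended

# Because group is coded 0/1, the mean over flagged patients is
# group B's share of everyone selected for extra care.
print(f"group B share, ranked by cost proxy: {group[flag_by_cost].mean():.2f}")
print(f"group B share, ranked by true need:  {group[flag_by_need].mean():.2f}")
```

In this toy setup, the underspent group's share of flagged patients falls well below its roughly 50% share of the population, even though race never appears as an input: the disparity rides in on the cost label. That is the same qualitative pattern the researchers measured at far larger scale.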
With the cost-based algorithm, black patients made up only 17.7% of those flagged for extra care. The researchers found that removing the bias would have raised that figure to 46.5%.
A Non-Fully-Automated Future
These numbers are staggering, and they point to a deeper problem: AI can't be trusted to determine outcomes without human oversight and continual auditing.
Imagine an AI that identified only one-third of the patients who actually have cancer. That's the risk right now: researchers are using AI to detect cancer, eye disease, and heart attacks, and even to predict the likelihood of death. With stakes that high, we must not put AI in a position where its bias or error could make the difference between life and death.
Would you let an AI diagnose you without the doctor doing a quick check into the results? Why or why not? Let us know in the comments!