AI in Healthcare – Danger, Not Panacea
Despite dozens of federal fixes and trillions of dollars expended, healthcare in the U.S. is both unaffordable and inaccessible. First came Medicare and Medicaid, both in 1965, then EMTALA (1986), HIPAA (1996), and in 2010 the ultimate promised panacea, the ACA. Each made healthcare worse: more expensive and less accessible. The latest magic bullet is artificial intelligence (AI). AI diverts public attention from discussing what’s wrong with U.S. healthcare.
Headlines tout extravagant promises about AI in healthcare such as: “healthcare to all”; “revolutionize health care”; “global wellness”; “gigantic potential”; “outperform nurses”; “as easy as abc”; and the “biggest winner” in the stock market. Heady predictions for this new technology.
Artificial intelligence describes computer systems that can analyze vast amounts of structured data, recognize patterns, learn, and make decisions based on that information.
Advocates predict great benefits from using AI in healthcare, many of which are real.
AI can streamline a host of processes from taking histories and prescribing drugs to patient scheduling, optimizing resource use, and financial transactions. AI can certainly reduce operating costs, facilitate transfer of information, and probably reduce wait times for care.
For imaging modalities, AI will likely deliver better, more detailed, and more consistent edge resolution, leading to less diagnostic variability than comparative studies have shown among human radiologists reading films.
AI advocates also see “more accurate” clinical decision making with clinical algorithms based on thousands of comparable patient records. Here is where recent experience with COVID raises grave concerns.
Americans experienced the devastating medical, not to mention financial and educational, consequences when medical decision-making was taken out of the hands of the patient’s personal human physician and given to anyone else, whether MD (Fauci), politician (Biden), bureaucrat (Fauci, Walensky, Collins), or an AI.
While advocates claim AI can learn “just like humans,” that would be true only if the AI were, just like humans, self-aware and capable of intuitive thinking and unstructured, illogical reasoning. Only a human could take a formula for a glue that didn’t work and turn it into Post-it Notes.
There are three dangers with the expansion of AI into healthcare: theoretical, likely, and certain.
The theoretical danger is that a self-aware AI would want to assure its survival and become malevolent toward those who could pull its plug: humans. Popularized in the Terminator movies, this danger of AI was first described in Isaac Asimov’s 1950 story collection, “I, Robot,” made into the 2004 movie of the same name with Will Smith.
If Asimov, a well-respected biochemist and prolific science writer, could envision and warn about this danger of AI, we too should take it seriously and not simply discard it as the ravings of a conspiracy lunatic.
The likely danger is dehumanizing the healing aspect of medical care. Patients and older physicians often decry the loss of direct human-to-human, patient-to-healer contact. Former generations of physicians and their patients believed in “healing hands”: the benefit to the patient of the doctor’s laying on of hands, the physical contact.
Modern technology has increasingly disconnected patient from caregiver. I recall a resident watching a monitor attached to my wife telling me she was not in labor even as I heard my wife complain loudly every two minutes of her severe abdominal pains. I kicked the resident out of the room. Eight hours later our daughter was born, delivered by the healing hands of her personal physician (not me!).
The certain danger of AI as healthcare panacea is diverting attention from what is truly wrong with healthcare: who controls both the healthcare system and the provision of health ... care. All the ills of U.S. healthcare – unsustainable spending growth, inaccessible care, unaffordable medical care, and death-by-queue – are directly related to who is making the decisions, both medical and financial. It is not the patient. It is the third party, ultimately Washington.
The expansion of AI will divert attention from dealing with the root cause of healthcare system dysfunction: the disconnection of patients from their money and their physicians by third-party decision making.
Healthcare AI has the potential to be helpful to care providers, middlemen, and patients. However, it is not a panacea. In fact, it poses serious dangers to the public. These dangers should be carefully considered, and safeguards must be incorporated to prevent disaster from the well-intentioned incorporation of fancy new computer algorithms into a collapsing healthcare system.
Deane Waldman, M.D., MBA is Professor Emeritus of Pediatrics, Pathology, and Decision Science; former Director of the Center for Healthcare Policy at Texas Public Policy Foundation; former Director, New Mexico Health Insurance Exchange; and author of the multi-award winning book Curing the Cancer in U.S. Healthcare: StatesCare and Market-Based Medicine.