Doctor AI

A look inside the ways our medical industry is using AI—and how things could change in the future

Those of you who know me are well aware of my love-hate relationship with computers, which borders on disdain. I think the greatest advance we could make in computers would be building one that could take a punch. Having said that, I learned a great deal researching and interviewing for this topic. Realizing my limited knowledge, I called a close friend of mine, Mike Kramer, MD, MBA. He is a physician and served as the Chief Health Informatics Officer leading the implementation of AI at OhioHealth and at Inova in Northern Virginia. This is his wheelhouse, and he allowed me to pepper him with questions.

Many of us, including me, have preconceived notions when talking about AI. Images of Skynet in Terminator and other sci-fi movies, where AI becomes self-aware and malevolent, cause us to fear it. Students from high school to college, meanwhile, are well aware of how useful AI can be for composing papers or even answering complex questions.

Most believe AI is relatively novel, since GPT (generative pre-trained transformer) was introduced in 2018 and the most recent iteration, GPT-4, in 2023. However, forms of AI have been around since the 1950s. IBM's rules-based Deep Blue famously defeated world chess champion Garry Kasparov in their 1997 rematch. The definition of AI changes as the field advances, from rules-based systems to computational models to large language models that can generate predictions, which can then be tested for reliability and veracity. Conceptually, health care AI falls into patient-facing, provider-facing, and administrative applications. In addition, AI can be used to mine data for research. A model called AlphaFold is being used to model human proteins and develop new drugs; what used to take months now takes minutes, as researchers can build and simulate thousands of new drug candidates.

AI has been used in medicine for years. Even back in my medical school and residency days, EKGs had interpretations printed at the top of the readout. Although these interpretations were not exceedingly detailed, they were an example of how AI could augment medical care. This is one of the essences of AI in medicine: augmenting current care to make it faster at the point of care, more accurate, and deliverable with fewer providers. In this case, providers were, and still are, able to get an accurate EKG interpretation without a cardiologist present, which facilitates care, especially in urgent situations.

Multiple types of AI exist. Basic logical models use mathematical and statistical methods to find patterns. Some models can learn and improve without being explicitly programmed; these are called machine learning (ML) models. Advanced models can predict diagnoses and conditions. For example, one common model predicts the likelihood that a patient will fall while in the hospital: using ML methods, data scientists identified variables from a large data set that predict fall risk and the need for subsequent safety measures. Another very common model detects sepsis. Sepsis is a systemic infection that can quickly lead to multi-organ failure and death; if treated early, this demise can be mitigated. AI can use data from the electronic health record, including textual documentation, lab results, and other information, to estimate the chance that a patient is septic. In one such model, over 900 variables were analyzed, and the 66 that correlated with sepsis were ultimately used to predict it. Using powerful cloud computing, the model can scan every patient in the hospital continuously, looking for any sign of change. These predictions can then be validated and acted upon by providers to render lifesaving care more quickly. These models are integral to care in the ED and hospital, and it is now considered a standard of care to have them running in US hospitals.
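For readers curious what such a prediction looks like under the hood, here is a deliberately simplified sketch. The variables, weights, and numbers below are all invented for illustration; a real sepsis model is trained on thousands of patient records and, as noted above, dozens of variables.

```python
import math

# Hypothetical, simplified illustration only: a few vital-sign and lab
# variables, each with a weight that would normally be learned from
# historical patient data. All values here are invented.
WEIGHTS = {
    "heart_rate": 0.03,   # beats/min above the patient's baseline
    "temperature": 0.8,   # degrees C above 37.0
    "wbc_count": 0.15,    # white cells, thousands/uL above 10
    "lactate": 0.9,       # mmol/L above 2.0
}
BIAS = -4.0  # keeps the baseline probability low for a stable patient

def sepsis_risk(obs: dict) -> float:
    """Return a 0-1 risk score: a weighted sum passed through a sigmoid."""
    score = BIAS + sum(w * obs.get(name, 0.0) for name, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-score))

stable = {"heart_rate": 0, "temperature": 0.0, "wbc_count": 0, "lactate": 0.0}
septic = {"heart_rate": 40, "temperature": 2.5, "wbc_count": 8, "lactate": 4.0}

print(f"stable patient:  {sepsis_risk(stable):.2f}")   # low probability
print(f"possible sepsis: {sepsis_risk(septic):.2f}")   # high probability
```

A hospital system runs a score like this continuously against live EHR feeds and pages a rapid-response team when the number crosses a validated threshold.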

Data science has become a very exciting space, with many different programs being designed around AI. Some of these programs mine correlations from large EHR databases. The more data that can be stored and mined, the more correlations can be made. For example, an EHR database can be queried to see whether there is a correlation between inflammatory bowel disease and Alzheimer's dementia. If a correlation is found, the question can be refined by drilling down on specific symptoms or objective findings. These data are out there to be sifted, but it takes AI to sort through so many data points. This is how two separate fields of medicine might be related in ways not yet known. It is a common problem in medicine: with more and more information developing in each field, we have become siloed experts in our respective specialties. I know very little about inflammatory bowel disease, but it could be related to osteoporosis and fractures, which are in my field. We can use AI to bridge medical specialties by finding correlations that can then be tested for causation, allowing cross-pollination and new treatments.
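The kernel of such a query is a simple comparison of rates between two groups. The sketch below uses a tiny invented cohort and the two diagnoses mentioned above purely as placeholders; a real study would run over millions of de-identified records and, as the article stresses, a correlation found this way still has to be tested for causation.

```python
# Invented cohort counts, keyed by (has_ibd, has_dementia).
# These numbers are fabricated for illustration only.
counts = {
    (True, True): 30,      # IBD and dementia
    (True, False): 170,    # IBD only
    (False, True): 150,    # dementia only
    (False, False): 9650,  # neither
}

def dementia_rate(has_ibd: bool) -> float:
    """Dementia rate among patients with or without IBD."""
    yes = counts[(has_ibd, True)]
    no = counts[(has_ibd, False)]
    return yes / (yes + no)

relative_risk = dementia_rate(True) / dementia_rate(False)
print(f"dementia rate with IBD:    {dementia_rate(True):.3f}")
print(f"dementia rate without IBD: {dementia_rate(False):.3f}")
print(f"relative risk: {relative_risk:.1f}")
```

A large relative risk in output like this would flag the pair of diagnoses for a proper epidemiological study, nothing more.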

Another provider-facing example is the use of deep learning neural networks to analyze visual data, such as mammograms or brain CTs, looking for cancer or stroke. Visually intensive specialties like radiology, pathology, and dermatology depend on detailed review of many images and so lend themselves well to AI. Software can be tuned to analyze the visual data and, with reinforcement learning, can enhance image interpretation, flagging studies for quicker human review or highlighting areas of an image to scrutinize more closely. In gastroenterology, AI is being used during colonoscopies to spot cancerous polyps and other findings, making the procedure more sensitive and perhaps more accurate.
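The "flagging for quicker human review" step is the simplest part of that pipeline, and a sketch makes the idea concrete. The study names, scores, and threshold below are all invented; the trained image model itself is assumed and not shown.

```python
# Hypothetical triage step: a trained image model (assumed, not shown)
# returns a suspicion score for each study; studies above a threshold
# jump the queue for human review. All values here are invented.
from dataclasses import dataclass

@dataclass
class Study:
    accession: str     # placeholder study identifier
    suspicion: float   # model output in [0, 1]

worklist = [
    Study("MAMMO-001", 0.08),
    Study("MAMMO-002", 0.91),
    Study("HEAD-CT-003", 0.64),
    Study("HEAD-CT-004", 0.97),
]

FLAG_THRESHOLD = 0.6  # assumed operating point, set from validation data

# Flagged studies are sorted so the most suspicious is read first.
flagged = sorted(
    (s for s in worklist if s.suspicion >= FLAG_THRESHOLD),
    key=lambda s: s.suspicion,
    reverse=True,
)
for s in flagged:
    print(s.accession, f"{s.suspicion:.2f}")
```

The radiologist still reads every study; the model only reorders the queue so the likely stroke is not sitting behind routine exams.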

There is a real existential fear that providers will be marginalized by such AI capabilities. Health care providers still need to confirm the differential diagnosis by combining AI-generated information with textual information from the patient and the treatment rendered. However, software already exists that can combine all these data, and do it well. Is this good or bad? That depends on your point of view. As a provider, it raises concerns about being displaced by AI. On the other hand, there is a dearth of primary care physicians, especially in rural areas. Could AI be used to make diagnoses that are then implemented by mid-level providers who cost less?

We have mentioned ways in which AI can augment current care, but are there ways to use AI to solve or diagnose problems with which we currently struggle? Concussions come to mind, especially on the sideline. What if we could use visual data from eye movements and gait, combined with symptoms, to help determine whether a player is concussed and when it is safe to return to play?

Large language models like GPT are now being used in patient-facing ways in the clinic. The physician records the conversation and exam with the patient in real time, and the information is immediately merged with data from the EHR to create an accurate chart note, saving time and improving accuracy. I currently have to dictate clinic notes, which I abhor, as it takes time away from the patient. Other physicians must not only see patients but manually input data into an EHR. Think of the time they could save by ditching their data entry job in favor of more patient interaction. In addition, prompt engineering could be used to design notes that facilitate preauthorization for tests and surgeries. Insurance companies are already using AI to evaluate providers' notes and deny services. It would save time and improve patient care if we could use software trained on previous denials to create an AI-generated note that never gets a needed service denied.

Dr. Kramer mentioned another patient-facing application in which bots check on patients after discharge. A call comes from a bot that can respond with empathy and discuss issues like medication usage. In fact, Dr. Kramer cited a study that asked patients about their experience with a bot compared with a busy provider; the patients felt the bot interaction was actually more empathetic than the human response. I was shocked at first, but then reality set in. Providers are increasingly burdened with administrative responsibilities, not the least of which is arguing with insurers on behalf of patients for needed services. The actual time spent with patients is being eroded by these nonclinical duties. Bots might be able to respond to patients in an efficient, empathetic manner at all hours of the day, freeing providers for other responsibilities. I can assure you that a bot will be more empathetic at 3 a.m. than a surgeon with eight surgeries the next day.

There are also back-office uses for AI. It can draft authorizations for services, picking out the buzzwords required by insurance companies, and billing can be made more efficient and accurate in the same manner.

Up to this point we have discussed AI in clinical medicine, but what about procedural medicine like surgery? Although robotic surgery is not AI, it has become the standard of care for prostatectomies, and the use of robots and AR (augmented reality) is expanding into other procedures such as joint replacement. An AR platform allows data such as anatomy and movements to be recorded and compared with outcomes in learning software to predict efficient moves. It is hard to imagine AI-driven robots replacing surgeons, as the tactile feel of the human hand is hard to imitate. Right now, the real limitations are the amount of data needed to refine these programs and the robots to execute them. I know this is a crazy thought, but who thought we would be driving autonomous cars and using autonomous drones to deliver payloads?

There are risks with AI beyond the fear of self-awareness and malevolence. AI is only as good as the data it can access: the more complete and accurate the data, the less variance in the predictions. However, patients have to be willing to give access to their personal data, and I am personally reticent to allow unrestricted access, as we don't know where all the data go or who has access. Furthermore, AI-generated predictions need to be continually tested for reliability and veracity. Finally, there needs to be transparency in AI algorithms, along with regulation.

In the world of social media, it is estimated that 75 percent or more of content is generated by bots. Determining what is human-generated versus bot-generated is difficult if not impossible. Interestingly, that very test, telling machine from human, was one of the first definitions of AI. This concerns me, as it does others. What is real and what is bot? I hope that AI in medicine does not devolve to that.

There is no question that AI will be a powerful influence on the future of health care in many ways. Its ability to quickly assimilate textual and visual data exceeds what the human mind can do. We can harness this power and use it for good. But as providers, are we digging our own graves? Responsible clinicians and health systems will need to manage this new technology like any other. Any device or test has its limitations, error rates, and requirements for interpretation. Responsible organizations will be judicious in selecting tools and will monitor their ongoing performance. As with any technology, clinicians and patients play a vital role in asking questions, critically reviewing results, and working to ensure the safest, best care. The good news is that AI might increase efficiency and reliability in health care, addressing some of its biggest challenges: rising costs, varying quality, and an unreliable experience. I remain cautiously optimistic. Now I'm sure you are asking... did I write this, or did ChatGPT?
