The Prompt

October 13, 2023

The Promise and Risk of Generative AI in Healthcare

A sensational article about ChatGPT diagnosing a rare disease reminds us that we need to stay grounded about generative AI (and AI in general).


by Gilad Barash

Senior Data/AI Strategist at Who's Your Data?

A recent Today show article told the story of a boy who saw 17 doctors over 3 years for chronic pain, with no remedy for his condition. Eventually it was ChatGPT that found the diagnosis. This case illustrates how AI can assist in complex medical diagnoses, particularly when traditional avenues fail to provide answers, but it also highlights the need for caution and validation when using AI in healthcare.

While this story may seem thrilling, it deserves to be examined closely for what it means for generative AI, and what it doesn't.

Evaluating Generative AI as a New Technology

When new technologies become available, it is important to distinguish between those that are overhyped and those that are commercially viable and deliver actual, tested value.
The Gartner Hype Cycle provides a graphic representation of the maturity and adoption of technologies and how they evolve over time, offering an expectation of when they will become commercially meaningful.

In 2023, Gartner placed generative AI at the "peak of inflated expectations". In this phase, early publicity produces a number of well-publicized success stories along with many less-publicized failures. People can't stop talking about it. It seems like it's the answer to everything. Overhyping the technology's limited achievements leads to eventual disappointment and waning interest. That waning interest marks the next stage in the hype cycle - the "trough of disillusionment".

Stories like this one are what push technologies into that phase: they raise expectations for the technology without providing the proper context, in this case about AI in healthcare.

To be sure, the happy ending in this story is amazing. The fact that ChatGPT successfully diagnosed a rare disease that 17 doctors were unable to is noteworthy. But what conclusions can we draw from this story and what conclusions should we be wary of?

Does this mean that ChatGPT outperforms specialists in their fields? Not necessarily. This is one successful case that happened to be reported. And while it changes the young boy's life, we don't know statistically how many wrong diagnoses ChatGPT has provided for similar rare diseases, or what its rate of true positives is (actual diseases that ChatGPT diagnosed correctly).

Humans have a tendency to think that if an algorithm predicts the correct outcome in a case where an expert was wrong, then the algorithm must make more accurate predictions than the human on average. But this is not necessarily true; it's a fallacy. What's more, humans and machines can make different kinds of mistakes, so we can't draw any conclusions about their relative performance from a single example, as the toy calculation below illustrates.
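To make the fallacy concrete, here is a minimal sketch in Python with entirely hypothetical numbers: a model that catches rare cases a clinician missed can still be less accurate overall. None of these figures come from the article; they exist only to show the arithmetic.

```python
# Toy illustration with entirely hypothetical numbers: catching rare cases
# a clinician missed does not imply higher accuracy on average.

cases = 1000  # hypothetical patient cases, 10 of which involve a rare disease

# Hypothetical performance profiles:
# - The clinician misses most rare cases but is very reliable on common ones.
# - The model catches more rare cases but errs more often on common ones.
clinician_correct = 2 + 960   # 2/10 rare + 960/990 common diagnosed correctly
model_correct = 8 + 900       # 8/10 rare + 900/990 common diagnosed correctly

print(f"Clinician accuracy: {clinician_correct / cases:.1%}")  # 96.2%
print(f"Model accuracy:     {model_correct / cases:.1%}")      # 90.8%

# The model solved rare cases the clinician missed, yet it makes more
# mistakes overall. A single success story reveals neither number.
```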

What we should conclude from this story is that we need to think about the role that generative AI may have in medicine and how to support it so that its contribution is meaningful. There are some wonderful advantages, but we should also be aware of the risks.

Patient Engagement Decision-Making Using Generative AI

One of the main challenges outlined in the article is that "No matter how many doctors the family saw, the specialists would only address their individual areas of expertise". This story highlights how siloed American healthcare is and how disconnected doctors are from one another. Patient data doesn't get shared between providers.

Perhaps one of the big lessons in this story is that the healthcare industry needs to get better at sharing and synthesizing information from disparate sources - genetic information, patient history, lab results and much more. Feeding everything into ChatGPT makes it possible to synthesize all the available data, not only for diagnostic purposes (as was done in the article) but also for cohesive pre- and post-treatment documentation and personalized patient engagement. Speech-recognition AI can complement this by transcribing patient visit notes, saving caregivers time so they can focus on diagnosis and treatment.
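As a rough illustration of the synthesis idea, here is a minimal sketch assuming the OpenAI Python client. The record fields, prompt wording, and model name are hypothetical placeholders, and any real deployment would need to address patient privacy (e.g., HIPAA) and keep a clinician reviewing every output.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable

# Entirely hypothetical record fragments from disparate, siloed sources.
records = {
    "patient history": "3 years of chronic pain; 17 specialist visits, no diagnosis.",
    "lab results": "Imaging and bloodwork summaries would go here.",
    "genetics": "Relevant genetic findings would go here.",
}

# Combine the siloed sources into a single prompt for synthesis.
prompt = (
    "Synthesize this patient's records into one cohesive summary and "
    "suggest possible diagnoses for a physician to review:\n\n"
    + "\n".join(f"{source.upper()}: {text}" for source, text in records.items())
)

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)
# The output is a suggestion for a clinician to weigh, never a final answer.
print(response.choices[0].message.content)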

The merging of disparate data sources is especially compelling for healthcare-based engagement decision-making. Having full visibility into patient behavior throughout their entire health journey would be very valuable for recognizing patterns and personalizing communications such as treatment instructions, follow-up surveys, etc. ChatGPT's understanding of the synthesized patient journey could provide decision support to healthcare professionals at every stage of the patient's medical journey.

The Risks

Of course, ChatGPT is not likely to replace human medical experts anytime soon. One of the big risks in using ChatGPT is that sometimes, when it cannot find answers, it makes them up. This is known as "hallucination". Hallucinations can include made-up facts, citations of papers that don't exist, and more. This misinformation could potentially lead to incorrect diagnoses, improper treatment instructions, wrong personalization, and other issues.

We must remember that ChatGPT and other AI models are not infallible and are only as good as the data they were trained on. If the training data contains errors or biases, those may be reflected in the answers. In the article's case a rare condition was rightly suggested, but there could be cases where a more common condition would have been the correct diagnosis.

Although ChatGPT can provide a crucial data point in the journey to find a diagnosis, it's important to view it as a tool at the disposal of healthcare professionals, not a replacement for them. One can think of it as another expert opinion that doctors can weigh in their diagnostic effort.

The different view that ChatGPT has of the data is very beneficial and may cover human blind spots, but it also gets things wrong, and that needs to be taken into account. As in other industries, ChatGPT will not replace medical professionals anytime soon, but those who know how to harness it and take advantage of its strengths (while being mindful of its weaknesses) will be more successful.

The really exciting news is that the rate of change in tech today is mind-boggling - every week there is news of meaningful advances in what AI can do. This accelerated rate of change may keep generative AI out of the “trough of disillusionment” stage, but it will still face challenges ahead as practical limitations get in the way. New ways to use it will emerge constantly.

As long as we stay grounded about what it can and can't do, we will see meaningful advances in various tasks, such as research, diagnosis, patient monitoring and personalized engagement/outreach. Exciting times are ahead!

Tags: healthcare research, healthcare industry, generative AI, artificial intelligence



