Generative AI in Healthcare & Life Sciences: Balancing Risk & Reward
In no other industry is the potential of generative AI (GenAI) more promising than in healthcare and life sciences. With GenAI, we may be able to discover new drugs faster, better understand patient and provider journeys, provide self-service access to medical insights and clinical trial registries, and automate processes that drain resources and money, all in the service of improving care and lowering costs.
But it’s also true that no other industry faces as much risk. The laws and ethical obligations around patient data privacy, and the potential for harm via mistakes, render GenAI in healthcare and life sciences (HCLS) a prospect that’s as daunting as it is exciting.
GenAI technology is evolving at a rapid pace, while recommendations and regulations guiding the best use of GenAI in HCLS are under pressure to keep up. The speed of change, coupled with the risks, will deter some companies from experimenting.
That would be a mistake. The smart move is to proceed with caution. Sitting on the sidelines would jeopardize opportunities to take advantage of this extraordinary technical development. At the same time, expecting GenAI to instantly transform business also comes with enormous risk. How do HCLS companies strike the right balance to continue innovating, minimize risk and drive measurable outcomes that justify the investment? In short: Empower your people.
Application of GenAI without humans in the loop could jeopardize patient safety. Employing GenAI as an agent that supports tasks performed by humans unleashes the full potential of GenAI, while strengthening safeguards.
GenAI in Healthcare & Life Sciences: A Primer
Generative AI is the branch of AI that is trained on enormous amounts of data to create new content, whether written, visual or auditory. GenAI learns from the patterns and structure of existing artifacts, such as natural language text or images, to create new artifacts with similar characteristics.
Large language models (LLMs) are a form of GenAI trained on massive volumes of public data to learn the contextual meaning of words. This learned context enables models to generate relevant, convincing, human-like responses.
But until humanity feels confident that AI won’t misdiagnose a heart attack or hallucinate something that looks like medical advice, we shouldn’t use GenAI without human oversight. Legally, no one can practice medicine without a license, and we don’t license robots (yet). Even as testing shows that GenAI demonstrates improved understanding of and responses to medical content, it’s still too soon to incorporate the technology into healthcare without close supervision.
So, what can we do with this groundbreaking, mind-shifting resource? Quite a bit, actually.
The Dos & Don’ts of GenAI in HCLS
GenAI isn’t a cure-all. It’s important to take a design-thinking approach to any new technical intervention. That means we first need to define the problem or challenge, understand the status quo and, only then, evaluate which technologies – including GenAI – might provide the greatest return on investment.
If GenAI does seem like the right tool for the problem, we must understand the risks and effort before determining if the potential reward is worth it. If GenAI produces hallucinations (false information) or offers answers that lack context, how will that impact patients and stakeholders?
That said, there are areas where GenAI can accelerate access to insights without undue risk. Right now, the best approach is to think of GenAI not as a replacement for humans but as an enhancement, helping us do what we do faster and more efficiently.
GenAI can be used with minimal risk to support work where subject matter experts are part of the value chain. Example use cases include:
- Summarization of text extracted from proprietary content to be shared with internal users only
- Generation of draft content to be subsequently reviewed and approved through appropriate channels
- Extraction of prompts from forms and/or text mining to enhance semantic search programs and to develop curated datasets
- On-call support to internal employees to disseminate best practices and standard operating procedures and improve worker productivity
- Metadata extraction and relational mapping, enabling the ability to use natural language to query proprietary data assets
What these use cases have in common, of course, is human input to ensure the outputs are validated and contained. People are aware that the outputs are computer-generated, and the business process is designed to include review.
On the flipside, there are several areas that might seem tempting for GenAI but require more thorough testing and validation. We advise the following cautions:
- Limit the provision of medical information or education to patients and healthcare providers due to the risk of hallucination
- Do not use LLMs as an unconstrained Q&A interface for patients or healthcare providers; instead, ground responses in vetted sources using methods such as Retrieval-Augmented Generation (RAG)
- Do not employ LLMs to dynamically generate content without following processes to review and approve it first
- Do not share personal information with an LLM without gaining permission to do so. Data privacy laws must still be considered as we decide which datasets to expose to GenAI applications
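To make the RAG caution above concrete, here is a minimal sketch of the retrieval-and-constraint half of such a pipeline. The function names and the toy overlap-based scoring are illustrative assumptions, not a production design: a real system would use embedding-based retrieval over an approved corpus, and the assembled prompt would only then be sent to an LLM.

```python
def _tokens(text):
    # Naive whitespace tokenizer; a stand-in for real embeddings.
    return set(text.lower().split())

def retrieve(query, approved_chunks, k=2):
    """Rank approved source chunks by token overlap with the query."""
    scored = sorted(
        approved_chunks,
        key=lambda chunk: len(_tokens(query) & _tokens(chunk)),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, context_chunks):
    """Constrain the model to answer only from retrieved, vetted context."""
    context = "\n".join(f"- {c}" for c in context_chunks)
    return (
        "Answer using ONLY the context below. If the answer is not in the "
        "context, say you do not know.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
```

The key design point is the instruction to refuse when the answer is absent from the vetted context; without it, the model falls back on its training data and the hallucination risk returns.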
Making It Real
So, how does all of this translate to the real world? We are currently developing resources for clients, while simultaneously building our own ideas into tools that are future-ready. Here’s a snapshot of some of our use cases:
Semantic Search System for Enhanced Evidence Generation
Our solution uses natural language processing to extract and chunk text from unstructured data assets, making those chunks available for search and retrieval. Applying LLMs lets us summarize the extracted insights while supporting a longitudinal view of trends over time.
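The chunk-search-aggregate flow can be sketched as follows. This is a simplified illustration with hypothetical names; keyword overlap stands in for semantic (embedding-based) matching, and grouping hits by year approximates the longitudinal trend view described above.

```python
from collections import Counter

def chunk_text(text, size=40):
    """Split a document into fixed-size word chunks for retrieval."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def search(query, dated_docs):
    """dated_docs: list of (year, text). Return (year, chunk, score) hits."""
    query_tokens = set(query.lower().split())
    hits = []
    for year, text in dated_docs:
        for chunk in chunk_text(text):
            overlap = len(query_tokens & set(chunk.lower().split()))
            if overlap:
                hits.append((year, chunk, overlap))
    return sorted(hits, key=lambda h: -h[2])

def trend_by_year(hits):
    """Count matching chunks per year: a simple longitudinal view."""
    return Counter(year for year, _, _ in hits)
```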
Training Data Generation
In collaboration with several multinational pharmaceutical and biotech companies, we are developing self-service search or chatbot interfaces and experimenting with the use of GenAI to extract question-and-answer pairings from client-owned data. These additional golden datasets are then used to support intent recognition training.
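One hypothetical way to implement that extraction step: prompt the model to emit pairs in a fixed "Q:/A:" format, then parse and filter them before they enter a candidate training set. The format and function name below are illustrative assumptions, not our actual interface.

```python
def parse_qa_pairs(model_output):
    """Parse alternating 'Q: ...' / 'A: ...' lines from a model response.

    Pairs with a missing question or answer are dropped, so only
    well-formed pairs reach the candidate training set.
    """
    pairs, question = [], None
    for line in model_output.splitlines():
        line = line.strip()
        if line.startswith("Q:"):
            question = line[2:].strip()
        elif line.startswith("A:") and question:
            pairs.append((question, line[2:].strip()))
            question = None
    return pairs
```

Even after parsing, the pairs would still pass through human review before being treated as golden data.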
Generative AI Semantic Document Ingestion
The current process to extract information from clinical documents is to have humans review each document, create data fields and manually enter the corresponding data into these defined fields, all of which is expensive and time-consuming. We developed a proof-of-concept using a hybrid approach.
We used an LLM to extract text from clinical documents and then converted the text into a structured format by auto-assigning fields. We worked with a human in the loop to review what the technology had identified as key findings.
Features of our approach include:
- Optical character recognition to parse text-in-image files (when applicable)
- LLM to split documents into distinct cases (when applicable)
- LLM to extract semantically relevant fields into a predefined JSON format
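The last step, extracting fields into a predefined JSON format, pairs naturally with a validation pass that flags anything the human reviewer must check. The schema below is hypothetical (the real field set depends on the document type), but it sketches how structured output can be verified before review.

```python
import json

# Hypothetical target schema; the real field set depends on the document type.
EXPECTED_FIELDS = {"case_id": str, "patient_age": int, "adverse_event": str}

def validate_extraction(raw_json):
    """Check LLM output against the predefined schema; flag issues for review."""
    record = json.loads(raw_json)
    issues = []
    for field, ftype in EXPECTED_FIELDS.items():
        if field not in record:
            issues.append(f"missing field: {field}")
        elif not isinstance(record[field], ftype):
            issues.append(f"wrong type: {field}")
    for extra in sorted(set(record) - set(EXPECTED_FIELDS)):
        issues.append(f"unexpected field: {extra}")
    return record, issues
```

Records with an empty issue list can move quickly through review; flagged records get the reviewer's full attention.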
Accelerator to Apply LLM to Visualize & Classify Study Termination Trends
We created an accelerator to help researchers more efficiently understand signals of clinical trial safety by identifying reasons for early study termination with a taxonomy of problem areas (e.g., recruitment issues, efficacy issues, sponsor decision, business decision or funding issues). These termination reasons were grouped and exposed in a front-end interface with a confidence score for each classification. Along with the classification, we developed a dashboard that allows users to visualize data and drill down to specific studies.
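The classify-with-confidence step can be sketched as below. The taxonomy labels come from the accelerator described above, but the keyword lists and the scoring are illustrative stand-ins for the LLM-based classification; the point is the output shape, a (label, confidence) pair that the front end can expose.

```python
# Taxonomy labels from the accelerator; keyword lists are illustrative only.
TAXONOMY = {
    "recruitment issues": {"enrollment", "recruitment", "accrual"},
    "efficacy issues": {"efficacy", "futility", "endpoint"},
    "funding issues": {"funding", "budget", "financial"},
    "sponsor decision": {"sponsor", "strategic", "portfolio"},
}

def classify_termination(reason_text):
    """Return (label, confidence) for a free-text termination reason."""
    tokens = set(reason_text.lower().split())
    scores = {label: len(tokens & kws) for label, kws in TAXONOMY.items()}
    total = sum(scores.values())
    if total == 0:
        return "unclassified", 0.0
    best = max(scores, key=scores.get)
    return best, scores[best] / total
```

Exposing the confidence score alongside the label lets users discount low-confidence classifications when they drill down into specific studies.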
The Smart Use of GenAI
In the HCLS industry, data exists that will enable us to improve lives — if we can harness it. GenAI, used wisely, will drive innovation farther and faster than ever before. Get ready.
Special thanks, for their feedback and collaboration, to Jonathan Rioux, Managing Principal, Data Analytics Consulting, EPAM, and Eric McVittie, Manager, Data Analytics Consulting, EPAM.