- ChatGPT began taking the world by storm in 2022, wowing users with its humanlike replies to prompts.
- Healthcare companies are angling to use the AI breakthrough despite its many mysteries.
- They’re trying to strike the right balance between speed and safety for patients.
CHICAGO — Industries are grappling with the potential of ChatGPT, a breakthrough app that generates humanlike responses to prompts.
Healthcare is no exception. Hospitals are desperate to use “generative artificial intelligence,” ChatGPT’s underlying technology, to fix key problems in the business and science of medicine. The companies inventing the next generation of AI products can barely keep up with hospitals’ appetites.
The behemoth health-records company Epic, which touches all corners of the healthcare ecosystem, is embracing generative AI as a necessary priority. The pace at which Epic is making new tools and the interest it’s getting from customers are unprecedented, Seth Hain, Epic’s head of research and development, told Insider.
“I’ve been working in healthcare software for 19 years, and I’ve been part of building out our machine-learning and AI capabilities throughout that time, and I can’t think of something comparable,” he said.
Since ChatGPT’s debut, Microsoft, which has a deep partnership with the chatbot’s creator, OpenAI, has been thrust into the healthtech spotlight.
Microsoft and Nuance, a healthtech company it bought that’s making tools with OpenAI models, have been “inundated” with interest from providers, health plans, and life-science companies looking to learn about and use generative AI in different ways, Peter Durlach, Nuance’s chief strategy officer, told Insider.
“There’s pain points that people want to solve everywhere,” Durlach said.
Already, the large language models powering ChatGPT are automating medical notes, speeding up research, and assessing patient populations for signs of disease, Insider has learned. Many of the projects aren’t public.
It’s a stunning moment for an industry that’s long been resistant to change, and all signs suggest this new chapter of AI could help patients and doctors.
But providers are pressing into uncharted territory. Unwilling to risk getting left behind, hospitals are cautiously moving forward with their own experiments, trying to strike the right balance between speed and safety. They’ll have to work out the scientific, ethical, practical, and legal mysteries surrounding the leading generative-AI models as they go.
Perhaps nobody understands this better than Peter Lee, Microsoft’s head of research, who recently offered what looked like a warning to healthcare executives.
For eight months, he and a team at Microsoft have been working with OpenAI to explore how generative AI could work in healthcare. For instance, it could offer guidance for doctors on patients’ treatment plans, he said during an April panel at the prominent healthcare conference put on by the Healthcare Information and Management Systems Society.
“What we found is: Things are complicated,” Lee said onstage in Chicago. “There are very significant, awe-inspiring benefits but also some scary risks.”
Scientific, ethical, and legal mysteries behind ChatGPT
Lee’s panel drew top leaders from tech and healthcare, and their questions on generative AI were sweeping.
Among them: Who do you sue when something goes wrong? How quickly should healthcare organizations move, or shouldn’t they? What if the AI models perpetuate the biases that make healthcare inequitable?
For the ethicist and consultant Reid Blackman, one ethical sticking point is that ChatGPT’s rationale behind its answers is essentially unknowable, a problem for prospective patients.
“Look, if you’re making a cancer diagnosis, I need to understand exactly the reasons why,” he said.
Lee seemed to be grappling with this issue in real time. He said the latest GPT model was so sophisticated that his team had been unable to conclude whether it could identify its own biases and explain itself, calling it a “frustrating research roadblock.”
“There are still deep scientific mysteries that we’re just trying to come to grips with, in addition to the ethical and legal mysteries,” Lee said.
Providers are starting in less-risky areas
Ambitious projects using generative AI are underway regardless.
At the HIMSS conference, executives spoke with Insider with wide eyes about how the models could be used to prescribe medicine, a regulatory no man’s land. People are already turning to ChatGPT for free therapy, and mental-health apps are experimenting with it.
Companies are pitching Microsoft on training its large language models on vast stores of de-identified patient data, which would make them more useful in clinical settings, BJ Moore, the health system Providence’s chief information officer, said.
Health systems are using generative AI to identify patients at risk for conditions like sepsis, Julius Bogdan, the head of HIMSS’s advisory service, said. Kaiser Permanente, one of the country’s largest health systems, has a project like this underway for heart disease but wouldn’t discuss it with Insider, citing its early stages.
For now, a lot of the industry’s early work in generative AI is centered on unsexy, back-office paperwork and mundane tasks.
The Mayo Clinic’s initial generative-AI goals are modest but could save providers hours each day, Cris Ross, Mayo’s chief information officer, said. The low-stakes practice could leave Mayo more equipped to take the tools further.
“I want to put a chatbot in front of my help desk immediately for my doctors whose computer isn’t working,” Ross said. “Simple stuff, right?”
Epic’s first priority in generative AI is helping clinicians be more efficient, Hain said. In April, it launched a tool that drafts messages for providers responding to patients.
A futuristic presentation shows the promise of generative AI
Nuance, which started off making dictation tools for doctors in the ’90s, recently combined its own AI models with GPT-4, the latest model powering ChatGPT.
At the HIMSS conference, the company held invite-only presentations, giving a glimpse into how the combination could transform aspects of care.
In a presentation seen by Insider, Dr. Julie OConnor, a Nuance consultant, asked a “patient,” a volunteer from the audience, what brought him into the urgent-care clinic, the first venue showing Nuance’s tools. She pretended to do a physical exam.
Behind her, a screen showed Nuance’s DAX Express, its newest product, furiously recording a word-for-word transcript of their conversation. Seconds after the visit ended, it used the transcript to write a medical note.
The note recounted the patient’s narrative — he was struggling to breathe, especially in the past few days — as well as OConnor’s findings, such as evidence of fluid in the lungs.
OConnor edited the note to fix a mistake and clicked a button to transfer it directly into the patient’s health record.
The next portion of the presentation took place in a hospital, where OConnor dictated to an app that Nuance is making for nurses. It automatically filled out more paperwork, recording things like the patient’s urine output.
After the patient’s discharge, for what turned out to be an exacerbation of congestive heart failure, Nuance imagined a generic health app that could help him from home.
“I’m on a keto diet. Is that a problem?” the patient, now played by the Nuance employee Jon Dreyer, asked the app.
It responded in the affirmative, writing that keto diets could raise blood pressure, a problem for folks with congestive heart failure.
There’s a graveyard of tech bets in healthcare
Nuance has been working closely with Epic. Because of that work, its medical-note generator, for instance, will be set up to work within the records clinicians already use. That gives Nuance and Microsoft a clear path for putting their generative-AI tools to work.
Other companies and app makers may not enjoy that same degree of partnership.
Health systems are still confused about how the generative-AI models can be safely deployed within their own data ecosystems, according to Insider’s discussions. Some fear less technologically savvy providers will fall behind, creating a backlog of requests for tool makers like Epic.
Generative AI is far from a slam dunk, HIMSS’s Bogdan said, citing adoption and privacy concerns. As a former chief data officer of a health system, he’s seen phenomenal AI algorithms sit on the shelf.
“If the clinicians don’t understand and adopt it, you can build the best things and nothing happens,” he said.