The race to implement artificial intelligence (AI) in meaningful ways has never been fiercer. Generative AI in particular has recently taken the world by storm, creating an entire domain of applications, technology, and potential value.
J.P. Morgan Insights recently published an article titled “Is Generative AI a Game Changer?” explaining that “Generative AI — a category of artificial intelligence algorithms that can generate new content based on existing data — has been hailed as the next frontier for various industries, from tech to banking and media.” Gokul Hariharan, Co-Head of Asia Pacific Technology, Media and Telecom Research at J.P. Morgan, added that “Fundamentally, generative AI reduces the money and time needed for content creation — across text, code, audio, images, video and combinations thereof,” paving the way for disruptive innovation.
Undeniably, technology companies want to be at the forefront of this innovation.
Last week, Google announced its much-anticipated next step in generative AI. In The Keyword, Google’s official blog, Sissie Hsiao, Vice President of Product, and Eli Collins, Vice President of Research, introduced open access to Bard, an experiment that allows users to interact directly with Google’s generative AI platform and share feedback.
The authors explained: “Today we’re starting to open access to Bard, an early experiment that lets you collaborate with generative AI […] You can use Bard to boost your productivity, accelerate your ideas and fuel your curiosity. You might ask Bard to give you tips to reach your goal of reading more books this year, explain quantum physics in simple terms or spark your creativity by outlining a blog post. We’ve learned a lot so far by testing Bard, and the next critical step in improving it is to get feedback from more people.”
The article also explains the concept behind a large language model (LLM), the technology that powers the system: “Bard is powered by a research large language model (LLM), specifically a lightweight and optimized version of LaMDA, and will be updated with newer, more capable models over time. It’s grounded in Google’s understanding of quality information. You can think of an LLM as a prediction engine. When given a prompt, it generates a response by selecting, one word at a time, from words that are likely to come next. Picking the most probable choice every time wouldn’t lead to very creative responses, so there’s some flexibility factored in. We continue to see that the more people use them, the better LLMs get at predicting what responses might be helpful.”
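The “prediction engine” described above — picking each next word from a probability distribution rather than always taking the single most likely word — is commonly implemented with temperature-based sampling. The toy sketch below is not Bard’s actual implementation; the word list, probabilities, and function name are illustrative assumptions, meant only to show how a temperature parameter trades predictability for variety.

```python
import math
import random

def sample_next_word(word_probs, temperature=1.0):
    """Pick the next word from a probability distribution.

    temperature < 1 sharpens the distribution (more predictable);
    temperature > 1 flattens it (more varied, "creative" output).
    Hypothetical helper for illustration only.
    """
    words = list(word_probs)
    # Rescale log-probabilities by temperature, then renormalize (softmax).
    logits = [math.log(word_probs[w]) / temperature for w in words]
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    weights = [e / total for e in exps]
    return random.choices(words, weights=weights, k=1)[0]

# Toy distribution over possible next words after "The cat sat on the".
next_word_probs = {"mat": 0.6, "sofa": 0.25, "roof": 0.1, "moon": 0.05}

# Greedy decoding (temperature near 0) would always pick "mat";
# sampling at a higher temperature factors in some flexibility.
print(sample_next_word(next_word_probs, temperature=0.8))
```

At a very low temperature the function behaves almost greedily, always returning the most probable word; raising the temperature lets less likely words through, which is the “flexibility” the Google authors describe.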
LaMDA, short for “Language Model for Dialogue Applications,” is Google’s breakthrough in building an adaptive conversational language model, trained on dialogue and the nuances of human language. Google is now employing an iteration of this breakthrough in Bard, hoping to shape the technology into something useful and valuable for users.
Without a doubt, this technology has significant potential implications for healthcare. The most obvious application is that, with appropriately trained and tested models, patients may begin seeking medical advice and recommendations from such systems, especially if the conversational interface is robust. Of course, this must be approached with caution: the models are only as good as the data they are trained on, and even then they can make mistakes.
The authors of the article acknowledge that “While LLMs are an exciting technology, they’re not without their faults. For instance, because they learn from a wide range of information that reflects real-world biases and stereotypes, those sometimes show up in their outputs. And they can provide inaccurate, misleading or false information while presenting it confidently. For example, when asked to share a couple suggestions for easy indoor plants, Bard convincingly presented ideas…but it got some things wrong, like the scientific name for the ZZ plant.” In that example, the system had confidently offered an incorrect scientific name for Zamioculcas zamiifolia, the ZZ plant.
If developed responsibly, however, the technology holds considerable potential for enabling medically literate conversation, perhaps even aiding physicians and specialists in creating diagnostic plans or bridging care for their patients.
On a larger scale, the ability to train intuitive models such as these presents a great opportunity to derive robust insights from data. Healthcare is a trillion-dollar industry with terabytes of data produced annually. Applying advanced artificial intelligence and machine learning models to this data may create significant opportunities to better understand and use this information for the greater good.
Assuredly, there are many ethical and safety challenges to consider with AI generally, and with generative AI specifically. In products like these, technology companies must address numerous risks, from the production of hate speech and language ripe for misuse to the generation of misleading information, which can be especially dangerous in a healthcare setting. Without a doubt, patients should seek medical care only from trained and licensed medical professionals.
Nonetheless, Google and the other companies building such advanced tools have great potential to help solve some of the world’s toughest problems. With that potential comes significant responsibility to create these products in a safe, ethical, and consumer-focused manner. Built responsibly, the technology may change healthcare for generations to come.
The content of this article is not intended as, and should not be relied on or substituted for, professional medical advice, diagnosis, or treatment. This content is for informational purposes only. Consult a trained medical professional for medical advice.