Google Cloud Partners With Mayo Clinic To Use AI In Health Care

Google’s cloud business is expanding its use of new artificial intelligence technologies in health care, giving medical professionals at Mayo Clinic the ability to quickly find patient information using the types of tools powering the latest chatbots, CNBC reported.

On Wednesday, Google Cloud said Mayo Clinic is testing a new service called Enterprise Search on Generative AI App Builder, which was introduced Tuesday. The tool effectively lets clients create their own chatbots using Google’s technology to scour mounds of disparate internal data.

In health care, CNBC reported, that means workers can interpret data such as a patient’s medical history, imaging records, genomics, or labs more quickly, using a simple query, even if the information is stored across different formats and locations. Mayo Clinic, one of the top hospital systems in the U.S. with dozens of locations, is an early adopter of Google’s technology as the company tries to bolster the use of generative AI in medicine.

Mayo Clinic will test different use cases for the search tool in the coming months. Vish Anantraman, chief technology officer at Mayo Clinic, said the tool has already been “very fulfilling” for helping clinicians with the administrative tasks that often contribute to burnout.

According to CNBC, generative AI has been the hottest topic in tech since late 2022, when Microsoft-backed OpenAI released the chatbot ChatGPT to the public. Google raced to catch up, rolling out its Bard AI chat service earlier this year and pushing to embed the underlying technology into as many products as possible. Health care is a particularly challenging industry because there is little room for incorrect answers or hallucinations, which occur when AI models fabricate information entirely.

Recently, Google published a post on The Prompt titled “Let’s talk about recent AI missteps.” From the article:

…By now, most of us have heard about “hallucinations,” which are when a generative AI model outputs nonsense or invented information in response to a prompt. You’ve probably also heard about companies accidentally exposing proprietary information to AI assistants without first verifying that interactions won’t be used to further train models. This oversight could potentially expose private information to anyone in the world using the assistant, as we discussed in earlier editions of “The Prompt”…

Google also wrote a blog post titled “Bringing Generative AI to search experiences.” From the article:

…For example, building search by breaking long documents into chunks and feeding each segment into an AI assistant typically isn’t scalable and doesn’t effectively provide insights across multiple sources. Likewise, many solutions are limited in the data types they can handle, prone to errors, and susceptible to data leakage…. Even when organizations make these efforts, the resulting solutions tend to lack feature completeness and reliability, with significant investments of time and resources required to achieve high quality results…
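To make the scaling problem concrete, here is a minimal Python sketch of the “chunk and ask” pattern the post describes. Everything in it is illustrative: `ask_model` is a hypothetical placeholder for a call to a generative model, and the chunk size and overlap are arbitrary values. The point is that every question costs one model call per chunk, and no single call ever sees the whole corpus, let alone multiple sources.

```python
# Minimal sketch of the naive "chunk and ask" pattern described above.
# ask_model() is a hypothetical placeholder, not a real API; chunk size
# and overlap are arbitrary illustrative values.

def chunk_text(text: str, size: int = 1000, overlap: int = 100) -> list[str]:
    """Split a long document into overlapping fixed-size chunks."""
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks


def ask_model(prompt: str) -> str:
    """Hypothetical model call; replace with a real client. Returns a stub."""
    return f"[model answer to a {len(prompt)}-character prompt]"


def answer_from_document(question: str, document: str) -> list[str]:
    # One model call per chunk: cost and latency grow linearly with
    # document length, and no call can reason across chunks or across
    # multiple sources -- the limitation the quoted post points out.
    answers = []
    for chunk in chunk_text(document):
        prompt = f"Context:\n{chunk}\n\nQuestion: {question}\nAnswer:"
        answers.append(ask_model(prompt))
    return answers


if __name__ == "__main__":
    sample = "Patient history text... " * 200  # stand-in for a long record
    # A single question against one long record already needs several calls.
    print(len(answer_from_document("When was the last MRI?", sample)))
```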

Google also points out that Gen App Builder lets developers create search engines that help ground outputs in specific data sources for accuracy and relevance, can handle multimodal data such as images, and include controls over how answer summaries are generated. Google also indicates that multi-turn conversations are supported, so users can ask follow-up questions as they peruse outputs, and that customers retain control over their data, including the ability to support HIPAA compliance for health care use cases.
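For readers curious what this looks like in practice, below is a hedged sketch of querying such a search app with Google’s Python client for the Discovery Engine API, the service underlying Enterprise Search. The project, location, and data-store identifiers are placeholders, the data store is assumed to already exist and be populated, and exact field names can vary across client library versions, so treat this as an outline rather than a definitive implementation.

```python
# Hedged sketch: query an Enterprise Search data store and request a
# grounded answer summary via the google-cloud-discoveryengine client.
# All identifiers below are hypothetical placeholders.
from google.cloud import discoveryengine_v1 as discoveryengine


def search_records(query: str) -> None:
    client = discoveryengine.SearchServiceClient()

    # Serving config of a data store assumed to be already created
    # and populated with documents.
    serving_config = client.serving_config_path(
        project="my-project",
        location="global",
        data_store="clinical-notes-store",
        serving_config="default_config",
    )

    request = discoveryengine.SearchRequest(
        serving_config=serving_config,
        query=query,
        page_size=5,
        # Ask the service to generate a summary grounded in the top results.
        content_search_spec=discoveryengine.SearchRequest.ContentSearchSpec(
            summary_spec=discoveryengine.SearchRequest.ContentSearchSpec.SummarySpec(
                summary_result_count=5,
            ),
        ),
    )

    response = client.search(request)
    print(response.summary.summary_text)  # generated answer summary
    for result in response.results:       # documents the summary is grounded in
        print(result.document.name)


search_records("most recent imaging reports and lab results for this patient")
```

Note that the grounding Google describes comes from the service answering only over documents in the indexed data store, rather than from the model’s open-ended training data.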

Personally, I would prefer to talk to an actual human being about whatever questions I might have about my health care needs. Handing this over to a generative AI that could easily make mistakes or have “hallucinations” sounds like a gimmick that could potentially cause harm to patients.