Google recently removed several of its AI-generated health summaries after reports found they contained inaccurate or misleading information. The move follows scrutiny of the company’s AI Overview feature, which uses generative AI to produce concise summaries of search results at the top of the page, particularly for health-related queries.
Earlier this month, The Guardian reported issues with multiple health-related AI Overview snippets. These summaries, intended to provide quick and accessible information, were found to contain omissions or misleading statements that could affect patient understanding. In one notable example, the AI response to the query “What is the normal range for liver blood tests?” failed to account for critical factors such as the patient’s nationality, sex, age, and ethnicity. According to the report, the absence of these contextual details could lead to misunderstandings about what constitutes a normal result.
Experts have warned that inaccurate AI health summaries pose real risks to users. Seriously ill individuals could mistakenly interpret their results as normal and delay or skip important follow-up care. Health professionals caution that such misunderstandings could have serious consequences, underscoring the importance of ensuring that AI-driven health information is both accurate and contextually appropriate.
In response to these concerns, Google has removed AI Overviews for searches such as “what is the normal range for liver blood tests” and “what is the normal range for liver function tests,” according to The Guardian. The company, which holds more than 90% of the global search engine market, emphasized that AI-generated results are continuously reviewed and updated when gaps, inaccuracies, or misleading information are identified.
Despite these removals, AI Overviews remain available for other medical queries, including topics related to cancer and mental health, areas that have also drawn criticism for providing potentially misleading or dangerous information. Google, however, said it left these summaries in place because, in its assessment, the AI results were based on well-established and reputable sources.
“Our internal team of clinicians reviewed what’s been shared with us and found that in many instances, the information was accurate and supported by high-quality websites,” the company stated. This highlights the careful balancing act Google faces in providing AI-generated summaries while maintaining the accuracy and reliability of critical health information.
Last year, Google announced new features aimed at improving Search for health-related queries. These included enhancements to AI Overviews and the introduction of health-focused AI models designed to improve the quality and reliability of medical search results. The company has maintained that these tools are intended to help users quickly access relevant health information while still relying on established medical sources.
Vanessa Hebditch, Director of Communications and Policy at the British Liver Trust, described the removal of the liver test summaries as “excellent news.” However, she cautioned that this action only addresses part of the broader problem. In an interview with The Guardian, she said:
“Our bigger concern is that this is just nitpicking a single search result. Google can switch off AI Overviews for one query, but it doesn’t address the broader issues with AI Overviews for health.”
Hebditch’s concerns underscore the ongoing debate over the reliability and safety of AI-generated health information. While individual corrections and removals may prevent immediate harm, they do not necessarily resolve systemic issues with how AI generates medical content or accounts for critical context that can vary between patients.
Google’s AI Overview Controversies
This is not the first time AI Overviews have landed Google in hot water. Shortly after the feature’s launch in May last year, it faced widespread criticism for bizarre and unsafe recommendations, such as suggesting that users add glue to pizza to keep the cheese from sliding off and eat a small rock daily for vitamins. These responses prompted Google to scale back the feature while it made adjustments to its underlying AI systems.
Even after the relaunch, AI Overviews have occasionally produced incorrect or misleading responses, raising concerns about the reliability of generative AI in search results. In a recent example, when asked, “Is 2027 next year?” the AI incorrectly replied:
“2026 is next year and 2027 will be the year after that.”
While this example may seem minor, it highlights the broader challenges Google faces in ensuring accuracy across a wide range of queries.
The liver test removal illustrates both the potential benefits and pitfalls of AI-generated summaries. On one hand, AI can provide users with fast, concise access to information. On the other hand, even small inaccuracies in health-related content can carry serious consequences. As AI continues to play a larger role in search engines and information delivery, the debate over its safety, reliability, and oversight is likely to intensify.
For now, Google’s actions reflect an attempt to respond to immediate concerns while maintaining AI Overview functionality in areas where the company believes it is reliable. Experts and advocacy groups, however, continue to push for broader safeguards and more robust oversight to ensure that AI does not inadvertently harm users seeking health information.
