AI & ML

Google Shuts Down AI Health Feature Following Safety Concerns

Google's decision to discontinue its "What People Suggest" feature marks a significant moment in the company's struggle to balance AI innovation with medical information accuracy. The experimental tool, which used artificial intelligence to curate and organize health perspectives from online forums and social media discussions, has been quietly removed from Search results after less than a year in operation.

The feature's premise seemed helpful on the surface: aggregate real-world experiences from people managing chronic conditions, then present them in digestible themes. Someone searching for arthritis management tips, for instance, might see organized insights from patients discussing their exercise routines or pain management strategies. Google launched the capability at its March 2025 Check Up event, positioning it as a way to surface lived experiences that medical literature might not capture.

The Fundamental Design Flaw

The problem wasn't the concept of peer support—online health communities have provided valuable emotional support and practical tips for decades. The issue was presentation. By packaging anecdotal advice from unverified internet users within Google's Search interface, the feature gave casual forum posts an implicit stamp of authority they hadn't earned.

This matters because of how people interact with search engines. Research consistently shows that users trust information differently depending on its source and presentation. A Reddit comment about managing diabetes carries one weight when viewed in its original thread, complete with upvotes, skeptical replies, and visible context. That same comment takes on different authority when extracted, cleaned up, and presented as part of an AI-organized summary within Google Search—a platform billions of people already use as their first stop for medical questions.

Google told The Guardian the removal was part of broader efforts to simplify Search results, not a direct response to safety concerns. That framing is worth examining: the timing coincides with mounting criticism of how AI systems handle medical information across Google's products.

A Pattern of Medical AI Missteps

The "What People Suggest" removal didn't happen in isolation. In January, The Verge reported that Google's AI Overviews—the AI-generated summaries that appear at the top of some search results—had delivered dangerous medical guidance. One particularly alarming example advised pancreatic cancer patients to avoid high-fat foods, directly contradicting clinical recommendations. Patients with pancreatic cancer often struggle with malnutrition and weight loss; dietary fat is typically encouraged, not restricted.

The same investigation found errors in information about liver function tests. Google responded by emphasizing its investment in quality controls for health-related AI Overviews and promised updates when additional context was needed. But these incidents reveal a structural challenge: AI systems trained on internet content will inevitably absorb the internet's mix of accurate information, outdated advice, and outright misinformation.

Medical knowledge also evolves in ways that make static training data problematic. Treatment protocols change, new research overturns old assumptions, and clinical guidelines get updated. An AI model trained on discussions from 2023 might surface advice that was reasonable then but has since been superseded by better evidence.

Why Health Search Demands Different Standards

Google has successfully deployed AI features across numerous search categories. AI-organized shopping results help users compare products. Travel summaries pull together itinerary ideas. Recipe searches benefit from AI that can adjust ingredient quantities or suggest substitutions. These applications carry low stakes—a mediocre restaurant recommendation or a failed recipe is annoying, not dangerous.

Health information operates under entirely different risk parameters. Someone searching for cancer treatment options, mental health resources, or medication interactions is often vulnerable, scared, and desperate for reliable guidance. They may lack the medical literacy to distinguish between sound advice and harmful suggestions. They're also more likely to act quickly on information they find, especially if it appears authoritative.

This dynamic feeds what safety researchers call automation bias: the tendency to trust computer-generated information more than is warranted, simply because it comes from a machine. When that machine is Google, which has spent decades building trust as an information gateway, the bias intensifies.

The Crowdsourced Medicine Dilemma

There's genuine value in patient communities sharing experiences. Someone newly diagnosed with rheumatoid arthritis can benefit enormously from hearing how others manage flare-ups, navigate medication side effects, or communicate with employers about their condition. These insights complement, rather than replace, professional medical care.

The challenge is scale and curation. In a dedicated health forum, community norms develop. Regular members learn to recognize credible contributors, spot questionable advice, and understand that personal anecdotes aren't universal prescriptions. Moderators can remove dangerous misinformation. Context remains visible.

AI aggregation strips away those safeguards. It flattens the social dynamics that help communities self-regulate. It removes the visible markers—like a user's post history or community reputation—that help readers evaluate credibility. And it presents information with a polish that suggests editorial oversight, even when none exists.

What This Means for AI Health Tools

Google's retreat on "What People Suggest" doesn't signal an end to AI in health search—the company has too much invested in AI capabilities to abandon the space. But it does suggest a recalibration. The easy wins in AI search have already been captured. The remaining opportunities involve higher-stakes information where the cost of errors is measured in patient harm, not just user frustration.

Other tech companies face the same calculus. As AI tools become more sophisticated at synthesizing information, the temptation to apply them to medical questions will only grow. The technical capability to generate plausible-sounding health advice is advancing faster than the safety infrastructure needed to ensure that advice is actually sound.

For users, this episode reinforces an uncomfortable reality: the polish and confidence of AI-generated answers don't correlate with accuracy. A well-formatted summary can be just as wrong as a rambling forum post—it's just more dangerous because it looks more trustworthy.

The removal of "What People Suggest" eliminates one source of potentially misleading health information, but the broader challenge remains. As long as AI systems learn from internet content, they'll absorb both the wisdom and the nonsense that content contains. The question isn't whether AI will make mistakes in health information—it's how companies will respond when those mistakes cause harm.