Google’s AI Medical Advice Feature Ends After Questions About Safety and Accuracy

by admin477351

Google has confirmed it is ending a search feature that used AI to organize health advice from anonymous online users, amid growing questions about the safety of AI-generated health content on its platform. "What People Suggest" gathered community health perspectives from internet forums and displayed them to users making health-related queries. Three sources confirmed the removal, which Google acknowledged while offering few details.

The tool was introduced by then-chief health officer Karen DeSalvo at Google's "The Check Up" event in New York. In a blog post, she described the feature as a way to give users access to the experiences of others managing similar health conditions. The AI-organized content rolled out first to mobile users in the United States.

Google denied that safety concerns played any role in the removal, attributing it instead to search page simplification. However, when the company pointed to a blog post as evidence of public disclosure, that post contained no reference to the discontinued feature. The failure to communicate consistently and transparently has been cited as a significant shortcoming.

The removal adds to an already challenging year for Google’s health AI products. An investigation found that Google’s AI Overviews were distributing false health information to two billion users monthly. Google removed AI Overviews from some health searches following the investigation, though the response was widely considered insufficient.

As Google prepares for its next health event, the challenge of demonstrating responsible AI health practices remains significant. The removal of “What People Suggest” — handled quietly and without adequate transparency — is a reminder that responsible health AI requires honest communication as much as technical capability. The coming months will reveal whether Google is ready to meet that standard.
