Artificial intelligence companies are making increasingly bold moves into healthcare, positioning their chatbots as tools that could reshape how patients and doctors interact with medical information. OpenAI has unveiled plans for a consumer-oriented product dubbed ChatGPT Health, while rival Anthropic is promoting a version of its Claude chatbot designed to assist clinicians with diagnosis support and medical documentation.
What stands out in this sudden wave of announcements is the absence of Google. Despite operating one of the most advanced and widely used AI models in the world through Gemini, the tech giant has so far refrained from launching a similar healthcare-focused chatbot. That restraint may not be accidental. Google has already learned, at great cost, how quickly experimental health technology can spark backlash when it falls short of public expectations.
There is no doubt that medicine represents one of the most promising frontiers for generative AI. From summarizing patient histories to flagging potential diagnoses, the technology could ease workloads and expand access to information. Yet it is also one of the most unforgiving environments for error. Unlike mistakes in casual search queries or creative writing, flawed medical advice can carry life-or-death consequences.
This is where newer AI firms may be overreaching. Generative models are still prone to so-called “hallucinations,” producing answers that sound authoritative but are factually wrong or unsupported. In healthcare, that weakness becomes a critical liability. Without clear disclosures about how often such errors occur — and strong guardrails to prevent misuse — these tools risk misleading patients and overburdened clinicians alike.
Google’s earlier attempts to apply AI and data analytics to healthcare drew intense scrutiny from regulators and the public, particularly over issues of accuracy, transparency and data handling. Those experiences appear to have instilled a degree of caution that contrasts sharply with the confidence now displayed by OpenAI and Anthropic.
If AI companies hope to play a meaningful role in medicine, enthusiasm alone will not be enough. They will need to openly acknowledge the limits of their systems, rigorously test them in real-world clinical settings and clearly define how responsibility is shared when things go wrong. Without that level of honesty and accountability, today’s ambitious healthcare push could quickly become tomorrow’s cautionary tale.
