The AUDIT: trust vs. utility in healthcare AI
40 million people use ChatGPT daily, but 75% fear AI in the health clinic
By J. Steven Bromwich
Safety, guardrails, and AI experimentation
Although I am enthusiastic about the successes and potential of AI in healthcare, I am also a realist. In medicine, we don’t judge a new intervention by its potential; we judge it by its performance at the bedside, its adherence to the Standard of Care, and its compliance with ethical norms.
We are currently being inundated not only by new AI applications but also by their enthusiasts. Are these tools improving care and closing healthcare gaps? At what cost? Are they creating new gaps? Are they worsening care for vulnerable people? How are we measuring these results and building the guardrails needed to protect patients from harm? And are clinicians and systems being left in a moment of ambiguous liability?
To trust or not to trust?
People now query AI agents about symptoms, diagnoses, treatments, and pharmaceuticals as a matter of daily habit. Earlier this month, OpenAI reported that 40 million people worldwide use ChatGPT daily for health information. Many of these searches come from patients trying to navigate the complex labyrinth of insurance coverage and billing; others are researching health conditions and treatments.
There is an interesting and instructive paradox here. According to a survey by the Annenberg Public Policy Center (APPC), 63% of Americans find AI-generated health information reliable, yet 49% are uncomfortable with their own providers using AI to make care decisions for them. People are comfortable searching for information and options privately with an AI but grow distrustful when a provider uses it in real-time medical decisions and care. (Physicians and other clinicians are, in fact, using AI at the bedside; that topic will be covered in another article.)
Perhaps these statistics point to a different challenge in healthcare: patients who feel disconnected and uninformed. Patients want to be involved in their healthcare, yet healthcare has become less personal and less educational. “Big health” has left many patients feeling like metrics and “throughputs.”
The KPI failure
Viewed through the lens of a clinical audit, these survey results, covering both people’s experience with AI search results and their feelings about its use in the clinical setting, reveal a performance crisis masquerading as a technology revolution in healthcare. In any other clinical setting, the following metrics would be considered abysmal key performance indicators (KPIs):
Functional success rate (31%): Only about three in ten searchers “often or always” receive a satisfactory answer from AI.
The reliability gap (69%): Nearly seven out of ten users walk away without the reliable guidance they need.
Consumer sentiment: 75% of the public believes AI is being integrated into medicine too quickly, without accounting for the risks.
Auditing the results
These numbers reveal a trust gap of 32 percentage points: the distance between the 63% of Americans who believe the tool is reliable and the 31% for whom it consistently works. There is a disturbing “false positive” in trust, a gap between aspirational hope and functional utility.
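To make the audit arithmetic explicit, here is a minimal sketch in Python; the variable names are mine, and the figures are simply the survey numbers cited above:

```python
# Survey figures cited in this article (APPC survey and OpenAI's reporting).
perceived_reliability = 0.63  # Americans who find AI health information reliable
functional_success = 0.31     # users who "often or always" get a satisfactory answer

# Reliability gap: users who walk away without the guidance they need.
reliability_gap = 1.0 - functional_success              # 0.69 -> 69%

# Trust gap: belief in the tool minus how often it actually delivers.
trust_gap = perceived_reliability - functional_success  # 0.32 -> 32 points

print(f"Reliability gap: {reliability_gap:.0%}")  # Reliability gap: 69%
print(f"Trust gap: {trust_gap:.0%}")              # Trust gap: 32%
```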
This gap is where the danger resides. If the functional failure rate is not substantially reduced, aspirational trust could eventually turn into systemic liability, especially if consumer expectations are conflated with the standard of care used by providers and healthcare systems.
Success tempered by reality
AI in healthcare is already improving care, and it promises further advances in diagnostics and treatment and more clinical breakthroughs. Trust, however, is not built through marketing or hype; it is built through transparency, accountability, and the rigorous training of the clinicians who act as the final safety net.
AI needs guardrails. I know, everyone is saying this, but guardrails need more than the hype of a headline; they need teeth. We must treat algorithmic logic with the same “chain of custody” standards we apply to medical evidence or to evidence in a criminal case, and the standard must be rigorous. Patients and families need to know these safeguards exist.
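To show what such a chain of custody might look like in practice, here is a minimal sketch in Python under my own assumptions; the record fields, the model name, and the hashing scheme are illustrative, not an existing regulatory standard. Each AI-assisted decision becomes a tamper-evident log entry chained to the one before it, so the history of an algorithmic recommendation can be verified the way physical evidence is:

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AuditRecord:
    """One tamper-evident entry in an AI 'chain of custody' log (illustrative)."""
    timestamp: str       # when the AI output was generated
    model_version: str   # exactly which model produced the output
    input_digest: str    # hash of the clinical input, so PHI stays out of the log
    output_summary: str  # the recommendation shown to the clinician
    reviewer: str        # the human clinician accountable for the decision
    prev_hash: str       # hash of the previous record, chaining entries together

def record_hash(record: AuditRecord) -> str:
    """Hash the record's canonical JSON so any later edit is detectable."""
    payload = json.dumps(asdict(record), sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

# Example: chaining two entries.
first = AuditRecord(
    timestamp=datetime.now(timezone.utc).isoformat(),
    model_version="triage-model-2.1",  # hypothetical model name
    input_digest=hashlib.sha256(b"<de-identified input>").hexdigest(),
    output_summary="Flagged for sepsis screening",
    reviewer="RN J. Doe",
    prev_hash="0" * 64,                # genesis entry
)
second = AuditRecord(
    timestamp=datetime.now(timezone.utc).isoformat(),
    model_version="triage-model-2.1",
    input_digest=hashlib.sha256(b"<de-identified input 2>").hexdigest(),
    output_summary="No flag raised",
    reviewer="RN J. Doe",
    prev_hash=record_hash(first),      # links back to the first entry
)
```

The hashing itself is incidental; the point is that every algorithmic recommendation carries an accountable human name and a verifiable history. That is what “teeth” looks like in practice.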
Patients want to be informed and involved in their healthcare. It is the responsibility of healthcare systems and providers to properly engage and educate patients and their families about the reality of artificial intelligence in their care, both its advantages and its limitations. Systems should not limit implementation to technology experts: clinical-AI translators are needed to explain the tools and build trust with patients, families, and staff.
About the Author
J. Steven Bromwich is an RN, clinical ethicist, and investigator. With a background spanning bedside care, bioethics, and criminal investigation, he founded The Standard of Care Report to bring the dignity of patients and the vocation of caregiving to the foreground amid AI advances and the systemic challenges in healthcare.


