AI on the Edge: Outliers or Algorithmic Erasure?
Are outliers at the heart of healthcare, or mere noise?
By J. Steven Bromwich
I started my nursing career in bone marrow transplant and oncology. In such high-acuity, often high-stakes practice areas, it is common to encounter patients who do not fit neatly into an average. Skilled nurses and physicians in these units develop a clinical intuition for medicine that doesn't fit a recognizable pattern until hindsight illuminates it.
In my previous article, I looked at the tension between the utility of AI and the trust (or lack of trust) patients place in it. Today, I will look at the mechanics behind that trust: specifically, what happens when the math decides a patient is merely "noise."
The tyranny of averages
Artificial Intelligence loves averages. It lives in the "mean." In data science, outliers, or "edge cases" (sometimes also called "tails"), are the data points that don't make the cut. They are discarded to make the model cleaner.
But in healthcare, an “edge case” is a human being. It is the patient with the rare drug reaction, the one who survives a life-threatening diagnosis, or the person whose physiological markers are ignored because they don’t match the majority dataset.
Marketing brochures often boast 95% accuracy for healthcare AI. To an administrator, that sounds like a miracle. To a clinician, 95% accuracy can mean a 100% failure for the patient who doesn’t fit the mold. If an AI predicts “no complications” for every patient because complications only happen 5% of the time, the algorithm is statistically accurate but clinically unhelpful. This is called the accuracy paradox. It describes a model that has high accuracy but low sensitivity (the ability to detect the positive case). It misses the very thing it was hired to find.
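To make that arithmetic concrete, here is a minimal sketch in Python. The cohort size, the 5% complication rate, and the always-"no complications" model are illustrative assumptions, not figures from any real system:

    # Illustrative numbers only: 1,000 patients, complications in 5% of them,
    # and a "model" that predicts "no complications" for everyone.
    labels = [1] * 50 + [0] * 950        # 1 = complication, 0 = no complication
    predictions = [0] * len(labels)      # the model always bets on the majority

    correct = sum(1 for y, p in zip(labels, predictions) if y == p)
    true_positives = sum(1 for y, p in zip(labels, predictions) if y == 1 and p == 1)

    accuracy = correct / len(labels)            # 950 / 1000 = 95%
    sensitivity = true_positives / sum(labels)  # 0 / 50 = 0%

    print(f"Accuracy:    {accuracy:.0%}")     # 95% -- looks like a miracle
    print(f"Sensitivity: {sensitivity:.0%}")  # 0%  -- misses every complication

The model never has to learn anything about complications to post an impressive score; it only has to learn that most patients don't have them.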
This isn’t a theoretical concern; it is a documented clinical failure. An audit published by Wong et al. in JAMA Internal Medicine (2021) examined a widely used AI tool designed to predict sepsis, a leading cause of death in hospitals.
The tool claimed high accuracy. However, when independent researchers examined the patients who fell outside the dominant pattern, they found the AI failed to flag approximately two-thirds of those who went on to develop sepsis.
The model was considered “accurate” because it correctly identified the thousands of patients who remained healthy, but it was dangerously blind to the ones it missed. This is the accuracy paradox in action.
(Wong A, Otles E, Donnelly JP, et al. External Validation of a Widely Implemented Proprietary Sepsis Prediction Model in Hospitalized Patients. JAMA Intern Med. 2021;181(8):1065–1070. doi:10.1001/jamainternmed.2021.2626)
AI models are based on recognizable patterns
Machine learning models are trained to minimize “loss.” In plain terms, they are built to make the fewest mistakes across all patients.
When an algorithm must choose between a prediction that fits the majority and one that identifies a rare outlier, it usually favors the majority. That choice keeps the model’s overall accuracy score high. Statistically, betting on what happens most often is the safest move for the software, even when it means missing the patient who is different. In medicine, that can be a dangerous trade-off.
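A toy calculation shows the incentive at work. Assume a simple 0-1 loss (a count of mistakes) and a model reduced to giving every patient the same answer; the numbers are hypothetical, but the logic generalizes:

    # Illustrative only: 1,000 patients, a rare event in 5% of them.
    labels = [1] * 50 + [0] * 950   # 1 = rare event, 0 = typical course

    def total_loss(constant_prediction):
        # 0-1 loss: one point for every patient the model gets wrong.
        return sum(1 for y in labels if y != constant_prediction)

    print(total_loss(0))  # 50 mistakes  -- lowest loss, but misses every rare case
    print(total_loss(1))  # 950 mistakes -- catches every rare case at a huge penalty

From the loss function's point of view, missing all fifty rare cases is the winning strategy; nothing in the arithmetic distinguishes a harmless miss from a catastrophic one.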
AI systems are essentially prediction machines trained on past data. They learn what “normal” looks like by finding repeating patterns in large groups of people. When a patient falls outside those patterns, perhaps because of a rare reaction, an unusual presentation, or a combination of factors the model has rarely seen, the AI tends to pull the prediction back toward what is most common. This isn’t because the patient is average, but because the model has learned far more about the middle than the margins.
The result is subtle but consequential: the further a patient is from the statistical norm, the less confident the system becomes, and the more likely their signal is treated as noise.
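One way to watch this pull toward the middle is with a deliberately simple stand-in: a k-nearest-neighbors predictor that estimates a patient's risk by averaging the most similar patients it has seen. This is a hypothetical sketch, not any deployed clinical model, and the feature values and risks are invented for illustration:

    # Hypothetical sketch, not a real clinical model: risk estimated as the
    # average risk of the k most similar training patients.
    train = [
        (50, 0.04), (55, 0.05), (60, 0.06), (52, 0.05), (58, 0.05),  # typical patients
        (95, 0.60),                                                  # one rare outlier
    ]

    def knn_risk(x, k=3):
        # Average the risks of the k training patients closest to x.
        nearest = sorted(train, key=lambda pair: abs(pair[0] - x))[:k]
        return sum(risk for _, risk in nearest) / k

    print(f"{knn_risk(55):.2f}")  # ~0.05: typical patient, well supported by the data
    print(f"{knn_risk(95):.2f}")  # ~0.24: the outlier's true 0.60, diluted by the majority

The outlier's genuinely elevated risk is averaged down by its typical neighbors. The model returns a confident-looking number either way; it just quietly borrows that number from the middle of the data.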
The cost of algorithmic erasure
AI needs thousands of data points to "learn" a pattern. While the majority of patients provide a mountain of data, the edge cases may contribute only a negligible amount.
Imagine an AI trained to paint portraits. Asked to paint a family whose faces are scarred by a house fire, the model might fail. Because it was trained on "average" faces, it may not recognize the scarred faces and may be unable to reproduce them faithfully.
In healthcare, this isn't just a false portrait; it is algorithmic erasure. It erases essential realities about outlier patients. If the AI cannot see the exception, the healthcare system may fail to treat the exception. When the edges are cut away, the patients who live there can suffer.
Forensic reasoning for AI in healthcare: what clinicians must ask
This work is not an argument against artificial intelligence, nor is it driven by fear of technology. It is an exercise in clinical and forensic reasoning. When an algorithm enters patient care, the questions are the same ones clinicians ask at the bedside and investigators ask after harm occurs. What did the system see? What did it miss? Why did it miss it? And who bears responsibility when it does? AI holds real promise in medicine, but only if it is examined with the same rigor we expect of any other high-stakes clinical tool.
Reference: Wong et al., 2021
Subscribe for more clinical and forensic analysis of AI in healthcare.
About the Author
J. Steven Bromwich is an RN, clinical ethicist, and investigator. With a background spanning bedside care, bioethics, and criminal investigation, he founded The Standard of Care Report to bring the dignity of patients and the vocation of caregiving to the foreground amid AI advances and the systemic challenges in healthcare.