In earlier posts, I argued that trust in clinical AI is not just a technical property. It is an epistemic one. Systems do not merely process data. They shape what counts as knowledge, whose voice is heard, and which realities become clinically actionable.

This post takes that idea further by grounding it in the work of philosopher Miranda Fricker. She distinguishes two forms of epistemic injustice: testimonial injustice and hermeneutical injustice. When we look at clinical AI through this lens, especially in global health contexts, a troubling pattern emerges. Our models are not just biased. They are systematically silencing certain ways of knowing.

And I have seen this tension firsthand in my work on dengue triage systems in Bangladesh and fairness-aware ECG models.

When Data Speaks, Who Is Allowed to Be Heard?

The Myth of Neutral Data:

Clinical AI systems are often trained on datasets from high income countries. These datasets are treated as neutral, objective, and generalizable. But they are anything but.

They encode specific healthcare infrastructures, culturally shaped symptom reporting, and demographic distributions that rarely reflect the Global South. When such data becomes the foundation of intelligent systems deployed globally, we are not just exporting models. We are exporting epistemologies.

Testimonial Injustice in Clinical Data

Fricker describes testimonial injustice as a credibility deficit assigned to someone's word due to prejudice. In clinical AI, this happens in a subtle but powerful way.

From My Dengue Triage Work in Bangladesh:

In my dengue triage chatbot work, patients in Bangladesh often describe symptoms in ways that structured datasets do not capture. Expressions of fatigue, pain, or warning signs of dengue are shaped by language, access to care, and lived experience. But training data does not treat all expressions equally. Symptoms common in Western datasets are weighted as reliable signals. Locally expressed or less formally documented symptoms are treated as noise. Informal care-seeking patterns are often excluded entirely.

The result is a system that implicitly says some patients are more credible than others. Not because of malicious intent, but because their experiences were never meaningfully encoded. This is testimonial injustice at scale.
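The mechanism is easy to make concrete. A minimal sketch, using a hypothetical fixed-vocabulary symptom encoder (the vocabulary and phrases below are illustrative, not from any real triage system), shows how expressions outside the encoded vocabulary simply never reach the model:

```python
# Hypothetical sketch: a structured symptom encoder with a fixed,
# Western-derived vocabulary. Expressions outside it are silently
# dropped -- treated as noise rather than as testimony.

SYMPTOM_VOCAB = {"fever", "headache", "joint pain", "rash"}  # illustrative only

def encode_symptoms(reported: list[str]) -> tuple[dict[str, int], list[str]]:
    """One-hot encode vocabulary symptoms; collect everything else as 'dropped'."""
    features = {s: 0 for s in SYMPTOM_VOCAB}
    dropped = []
    for phrase in reported:
        if phrase in SYMPTOM_VOCAB:
            features[phrase] = 1
        else:
            dropped.append(phrase)  # local idiom, informal description, etc.
    return features, dropped

features, dropped = encode_symptoms(["fever", "body is burning inside", "rash"])
# "body is burning inside" never becomes a feature: the patient's own
# phrasing is discarded before the model ever sees it.
```

Nothing in this pipeline is malicious; the exclusion happens at the feature-engineering step, long before any model is trained, which is exactly why it is hard to audit downstream.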

Hermeneutical Injustice: When Entire Realities Are Missing

If testimonial injustice is about not being believed, hermeneutical injustice is about not being understood at all. This is even more dangerous.

From My Fairness-Aware ECG Research:

In fairness-aware ECG modeling, we often focus on demographic parity or representation learning. But what if the problem is deeper? What if certain cardiac patterns common in underrepresented populations are not well studied? What if wearable device data reflects usage patterns tied to socioeconomic status? What if clinical labels themselves are biased or incomplete?

In such cases, the issue is not just imbalance. It is absence. The system lacks the conceptual resources to interpret certain signals correctly. Entire physiological or experiential patterns remain invisible.

Beyond Bias to Structural Gaps:

This aligns with recent work in medical AI ethics that argues epistemic harm is not only about biased outputs but about structural gaps in meaning-making. When AI systems learn from incomplete clinical records or historically skewed datasets, they inherit not just bias but ignorance. And unlike human clinicians, they cannot question what they do not know.

Scaling Injustice Through Automation

From Individual Harm to Systemic Infrastructure:

One of the most unsettling insights from recent literature is that AI does not just replicate epistemic injustice. It scales it.

A misinterpretation in a single clinical encounter is harmful. The same misinterpretation embedded in an AI system becomes infrastructure. It affects thousands of patients. It standardizes exclusion. It becomes harder to detect because it appears consistent. In this sense, clinical AI transforms localized epistemic failures into global ones.

Why Fairness Metrics Are Not Enough

In my ECG work, fairness-aware representation learning improved performance across demographic groups. But even then, something felt incomplete.

Fairness metrics answer questions like: Are predictions equally accurate across groups? Is error distributed evenly? They do not answer: Whose knowledge shaped the model? Which experiences are missing? What kinds of uncertainty remain invisible?
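To make the limit of these metrics concrete, here is a minimal sketch (with made-up labels and group assignments) of the kind of group-wise accuracy check that fairness audits typically perform:

```python
# Minimal sketch with hypothetical data: group-wise accuracy,
# the kind of parity check fairness metrics provide.

from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Accuracy per demographic group -- a standard parity check."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        total[g] += 1
        correct[g] += int(t == p)
    return {g: correct[g] / total[g] for g in total}

y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 0, 0, 1, 0, 1]
groups = ["A", "A", "B", "B", "B", "B"]
print(accuracy_by_group(y_true, y_pred, groups))
# -> {'A': 1.0, 'B': 0.75}
```

A model could close this gap entirely and still fail the deeper test: the metric compares error rates on the features it was given, and says nothing about whose symptom vocabulary those features encode or which experiences were never represented at all.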

This is where epistemic injustice frameworks push us beyond technical fixes. They force us to ask whether the system is capable of understanding all the populations it serves.

Toward Epistemically Just Clinical AI

If epistemic injustice is built into data and design, then addressing it requires more than post hoc correction. It requires rethinking how we build systems from the ground up. Here are some directions that emerge from both research and practice.

Five Directions for Epistemically Just AI

Closing Thought

In one of my earlier reflections, I wrote that building a dengue triage chatbot changed how I think about AI. Not because of its performance, but because of what it revealed.

It showed me that every dataset carries voices. And every model decides which of those voices matter.

Epistemic injustice reminds us that silence in data is not absence. It is often exclusion. If we want clinical AI to be truly global, it cannot just scale models. It must learn to listen.

References

Fricker, M. (2007). Epistemic Injustice: Power and the Ethics of Knowing. Oxford University Press.
Akinlade, D. (2026). Structural Epistemic Injustice in Medical AI. Journal of Medical Ethics.
Emah, I., & Bennett, M. (2025). Relational Ethics and Algorithmic Harm. AI & Society.
This post builds on my dengue triage work in Bangladesh and fairness-aware ECG research, as well as insights from the relational ethics literature on structural epistemic injustice in medical AI.