I'll admit it: I started my Computer Science journey in love with the pure logic of it all. I was chasing the elegant solution, the fastest algorithm, the perfect if/then statement. It was exhilarating—like solving a puzzle where every piece always, logically, fit.

But the longer I worked in Machine Learning, the more I realized that building models that work is only half the fun. The really good stuff—the stuff that makes you stay up late—is when you realize you're building systems that are deeply entwined with messy, beautiful, human life. This is where my love for code suddenly met my deep interest in philosophy. It's a bit like learning to cook with an Instant Pot, then realizing the joy is in the slow, unpredictable fermentation process.

This isn't an academic paper—think of it more as a quiet moment of reflection, like watching the sunset from Camel's Back Park.

The Turning Point: When a Chatbot Needed a Conscience

The Anecdote That Changed Everything:

My "lightbulb moment" happened while developing an AI Chatbot for Dengue Symptom Triage in Bangladesh. On the surface, the project was straightforward: use a Decision Tree Classifier to help people quickly assess symptoms and get advice.

But as I worked on feature prioritization—deciding which symptoms mattered most and how to handle contradictory user inputs—the technical task morphed into an ethical one. A low-income person might downplay a symptom to avoid clinic costs, or a non-native speaker might struggle to articulate their discomfort. If the chatbot, in its binary logic, dismissed their input as "low risk" based purely on statistical probability, the cost of that error was immense.

The technical problem became: How do I design this system to listen better?

It wasn't just about minimizing False Negatives; it was about ensuring the system didn't dismiss the testimony of someone whose data profile was statistically unusual or whose language was imperfect. The black box wasn't just opaque; it risked being deaf to human experience.
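To make that concrete, here is a minimal sketch of the kind of adjustment I mean. This is not the project's actual code: the symptom features, the class weights, and the 0.3 escalation threshold are illustrative assumptions. The idea is simply that the classifier and its decision threshold can be tuned so that dismissing a genuinely sick person costs more than a cautious referral.

```python
# A minimal sketch, assuming scikit-learn; not the triage system's real code.
# Features, class weights, and the 0.3 threshold are illustrative assumptions.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical features: [fever_days, severe_headache, bleeding, joint_pain]
X_train = np.array([
    [1, 0, 0, 0],
    [3, 1, 0, 1],
    [5, 1, 1, 1],
    [2, 0, 0, 1],
])
y_train = np.array([0, 1, 1, 0])  # 1 = needs clinical follow-up

# class_weight makes a missed high-risk case cost more than a cautious referral.
clf = DecisionTreeClassifier(class_weight={0: 1.0, 1: 5.0}, random_state=0)
clf.fit(X_train, y_train)

def triage(symptoms, escalation_threshold=0.3):
    """Escalate unless the model is quite confident the risk is low."""
    p_high_risk = clf.predict_proba([symptoms])[0, 1]
    return "seek clinical advice" if p_high_risk >= escalation_threshold else "monitor at home"

print(triage([2, 1, 0, 0]))  # a report with a severe headache is escalated, not dismissed
```

A lowered threshold is a crude lever, but it encodes a value judgment directly in code: when the model is uncertain, listen harder.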

Finding Philosophical Anchors

This is exactly why I ran straight toward philosophy. The formal tools of ethics gave me the vocabulary to define the harm I was trying to prevent.

Epistemic Injustice: Recognizing the Knower

I found incredible clarity in the concept of epistemic injustice, introduced by philosopher Miranda Fricker. She describes it as harm done to someone specifically in their capacity as a knower: their credibility is unfairly discounted because of prejudice or systemic bias.

Suddenly, the chatbot's failure wasn't just a misclassification error. It was a potential act of testimonial injustice. If the model's internal biases led it to systematically ignore the reported symptoms of marginalized groups, it was denying them recognition as credible informants about their own health.

"When a model confidently classifies or reroutes based on demographic features, isn't it making epistemic judgments? Some voices are 'heard' more clearly by the model; others are marginalized."

— Reflection from my research notes

This insight has been humbling. It forces me to think beyond simply minimizing error; I now ask myself: Am I honoring the dignity of different knowers? Am I designing systems that respect not just statistical truth, but lived experience?

Why This Intersection Sparks Me

Here's why I find this crossroads of AI and philosophy so compelling:

My work in Fairness-Aware Representation Learning and Trustworthy AI suddenly clicked into place. These aren't just advanced research topics; they are the technical implementation of core philosophical principles. The goal is to build models that not only predict accurately, but also treat the people behind the data, whatever group they belong to, as equally credible sources of information about themselves.
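One small illustration of what that looks like in practice (a sketch, not code from my research; the labels, predictions, and group assignments are made up): even a basic audit that compares false negative rates across groups turns "some voices are heard less clearly" into a number you can inspect.

```python
# A minimal fairness-audit sketch, assuming NumPy; all data below is invented
# purely for illustration.
import numpy as np

def false_negative_rate(y_true, y_pred):
    """Share of genuinely high-risk cases the model dismissed as low risk."""
    positives = y_true == 1
    if positives.sum() == 0:
        return float("nan")
    return float(np.mean(y_pred[positives] == 0))

def audit_by_group(y_true, y_pred, groups):
    """Compare FNR across groups; a large gap means some reports are being dismissed."""
    return {str(g): false_negative_rate(y_true[groups == g], y_pred[groups == g])
            for g in np.unique(groups)}

# Toy data: group B's high-risk reports are missed more often than group A's.
y_true = np.array([1, 1, 0, 1, 1, 1, 0, 1])
y_pred = np.array([1, 1, 0, 1, 0, 1, 0, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(audit_by_group(y_true, y_pred, groups))  # in this toy example: A = 0.0, B ≈ 0.67
```

The audit doesn't resolve the philosophical question, but it makes the epistemic one visible: whose testimony is this system quietly discounting?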

Looking Forward: Building With Conscience

The Path Ahead:

As I move forward, I feel more grounded in my identity: not just as a researcher or engineer, but as a philosophically informed technologist. I don't pretend to have all the answers. But I care deeply about asking the right questions—and making sure that, when AI touches people's lives, it does so with humility, fairness, and respect.

I love the high-level, human-focused problem-solving this intersection offers. It gives the cold logic of my CS background warmth and purpose. When I work on projects involving Privacy-Preserving Models or Explainable AI, I'm not just meeting technical requirements—I'm actively designing for trust, accountability, and ethical recognition.

I'm excited to continue this work, to keep using data science not just to optimize outcomes, but to elevate dignity. It's the most rewarding kind of programming—building a future where our most powerful tools are guided by a conscience, and where every line of code is written with a deeper understanding of human flourishing.

And if I can use both my coding skills and my love for deep, reflective thought to help build that bridge, I know I'm on the right path.