“Like all technologies before it, artificial intelligence will reflect the values of its creators. So inclusivity matters — from who designs it to who sits on the company boards and which ethical perspectives are included.”

— Kate Crawford

How Do You Work Ethically With AI When the Whole Thing Feels… Compromised?

I’ve been circling this question for months: how the hell do you work for a company exploring AI when the technology itself carries so much ethical drag? Not the sci-fi extinction hypotheticals—the real, documented issues that show up in utility bills, court filings, creative guild lawsuits, and conversations with the people doing the day-to-day work.

Things like:

  • Models trained on creative labor that was never paid for.
  • Water and power footprints that turn datacenters into industrial evaporative coolers.
  • Organizations racing ahead because “everyone else is doing it,” while the ethics are still in wet cement.
  • And the ever-present pressure to “move fast” without fully examining the human consequences.

Recently, after an internal leadership conversation about AI strategy, I felt that familiar tightness in my chest—the one that hits when I’m trying to balance responsibility with unease. The one that comes when you’re in the room where decisions gather momentum, and you’re trying not to feel complicit just for nodding along.

So I’ve been wrestling with the harder question: What does an ethical stance even look like inside a flawed system?
This isn’t a manifesto. It’s me thinking through it in real time.


I admit the facts bother me, but the fatalism bothers me more

Every major criticism of modern AI is valid. None of it is hyperbole. But fatalism disguises itself as moral clarity. It whispers, “If the whole thing is compromised, nothing you do matters.”

I’ve fallen into that trap more times than I’d like to admit. And when I get stuck there, I’m lucky my partner reminds me that resignation isn’t integrity—it’s surrender wearing a principled mask.

We don’t get to choose the era we’re in.
We only choose how we behave inside it.


I need to stop asking “How do I justify this?” and start asking “How do I shape this?”

I can’t stop the industry from pushing toward AI. I can’t halt the tide inside my own org either. But I absolutely can shape and influence how it’s used. People in infrastructure forget this: we aren’t just passengers. We’re part conductor, part mechanic, part person standing on the tracks saying “Hold up—what’s underneath this thing?”

So I can:

  • Push for models with documented data provenance.
  • Draw hard lines around sensitive data and reject risky integrations.
  • Advocate for human-in-the-loop workflows instead of automated replacement.
  • Ensure the goal is augmentation, not displacement.
  • Reduce inference load and choose greener regions.
  • Insist on transparency over buzzword alchemy.

None of these fix the entire field. But they meaningfully alter the trajectory inside my sphere of influence. That’s stewardship—the space between complicity and nihilism.


I accept I can’t fix the field, but I can work to fix my corner of it

I can’t solve hyperscale cooling systems or global supply chain ethics. But I can influence the scope, footprint, and operational reality of AI inside my own org.

Systems thinking matters most when it shrinks from the grand to the local. When it stops trying to “solve AI” and starts asking, “What’s the next responsible decision I can make today?”

That looks like:

  • Choosing smaller, domain-specific models over the biggest shiny thing.
  • Intelligent caching and batching to reduce constant GPU churn.
  • Routine audits of usage and access—not just building the thing but tending it.
  • Sustainability analysis for high-impact workloads.
  • Workflows that don’t require a heavyweight model for rewriting a damn sentence.

It’s not heroism. It’s scale-appropriate responsibility.


But what about the uncomfortable counterfactual?

If I step back because the terrain is messy, who fills the space I leave?

It’s almost never going to be the person who loses sleep over ethics, patient safety, or environmental impact. Systems don’t improve when conscientious people opt out—they get optimized for convenience and efficiency, usually at the expense of everything else.

This has been true in every technological era. It’s true now.


I have to accept that perfection is not the benchmark

If ethical purity were the requirement, we’d have to abandon:

  • cloud infrastructure
  • smartphones
  • rare earth minerals
  • modern medicine
  • logistics
  • most of what the modern world runs on

Purity isn’t the goal. Directionality is.

The real question is:
Am I helping bend this thing—however slightly—toward something more humane, transparent, and sustainable?
If yes, then the work is meaningful. Even if the larger system is still a mess.


The three questions I’m using to keep myself honest

I don’t have a unified theory of ethical AI. I just have a few anchors:

  1. Am I reducing harm where I actually have power?
    Not in the abstract—right here, in this org, with these choices.
  2. Am I advocating for transparency, safety, and human-centered design?
    Even when it slows things down or makes stakeholders squirm.
  3. Are people—patients, clinicians, staff—better off because of my decisions?
    If not, something needs to change. If yes, this isn’t hypocrisy; it’s responsibility.

I admit I’m still uneasy

I don’t think the discomfort ever fully goes away.
And I don’t think it should.

The tension is a sign that your values are still online—that your integrity hasn’t been numbed by the churn of technological inevitability. The world doesn’t need more leaders who feel nothing. It needs leaders who can sit in the mess, stay awake in it, and still choose the most humane path available.

This is me trying.
I hope you’re trying too.
