Subject: Termination of My OpenAI Subscription Due to Alignment with Authoritarian Power

To the leadership team at OpenAI,

I am writing to formally inform you that I have cancelled my paid ChatGPT subscription.

This decision is final. It is driven not by dissatisfaction with the product’s technical quality, but by profound concern over the political and ethical choices made by OpenAI’s leadership - in particular CEO Sam Altman - in relation to Donald Trump and his authoritarian political project.

Recent reporting shows that Sam Altman, who once publicly described Trump as “unfit to be President” and a “threat to U.S. national security,” has since reversed course and positioned himself - and by extension OpenAI - as a cooperative partner in Trump’s AI agenda. This shift does not reflect pragmatic neutrality; it reflects a willingness to normalize and enable an administration that has openly signaled contempt for democratic institutions, judicial independence, civil liberties, and the rule of law.

The Choice Companies Still Have - and That OpenAI Did Not Take

What makes this particularly troubling is that this alignment was not inevitable.

Anthropic’s example demonstrates that technology companies do have a choice. Anthropic reportedly refused to comply with demands from the Trump administration that would have compromised its ethical commitments - even under significant political and financial pressure. That refusal, and the consequences Anthropic was willing to accept, show that moral integrity in the tech sector is still possible.

OpenAI and Sam Altman chose a different path.

Rather than drawing a clear boundary against authoritarian power, OpenAI’s leadership appears willing to trade ethical clarity for political access and strategic influence. In doing so, OpenAI forfeits any credible claim to neutrality or to a mission centered on the “benefit of humanity.”

Why This Matters - Especially for AI

The business and tech communities bear a special responsibility at this moment in history. Artificial intelligence is not a neutral commodity. In the hands of authoritarian governments, it becomes an instrument of surveillance, repression, propaganda, and control at unprecedented scale.

History is unequivocal on this point:

  • Authoritarian regimes do not protect innovation - they exploit it.
  • They do not respect contracts or independence - they demand loyalty.
  • They do not tolerate dissent - they weaponize technology against it.

Companies that believe they can cooperate with such regimes while remaining independent are engaging in dangerous self-deception. Whatever short-term access or influence they gain is outweighed by long-term moral, legal, and societal harm.

My Decision

Because OpenAI’s leadership has chosen accommodation over resistance - and because it has done so despite clear evidence that another path was available - I will no longer financially support or use OpenAI products.

This decision reflects my belief that:

  • Democratic values are not optional.
  • Ethical leadership matters more than convenience.
  • The future of AI must not be shaped by those willing to legitimize authoritarian power.

I regret having to make this choice. But values precede tools, and integrity precedes innovation.