Welcome back to the Tech & Democracy Roundup!
If we care about liberty, democracy, and the future of humanity, then the frontier AI company we should root for is Anthropic.
Anthropic was founded in 2021 by a group of OpenAI employees who left over concerns about AI safety. It is the major AI company whose mission is most clearly influenced by philosophical and practical concerns about the impacts of rapid AI progress on existential risk, social upheaval, and what it means to be human. Anthropic’s mission-focused culture attracts researchers who genuinely care about responsible AI development; who enthusiastically discuss nerdy essays in company Slack channels; and who donate huge portions of their earnings to combat wealth inequality and effectively serve altruistic goals. As a testament to this mission-driven culture, Anthropic is the only frontier AI lab to have retained all of its original founders.
These founders have a proven track record of publicly supporting the foundations of a free society, the rule of law, and the values of liberal democracy. For example, see this tweet by cofounder Chris Olah in the wake of ICE’s murder of Alex Pretti in Minneapolis:
Further evidence of Anthropic’s commitment to liberal democracy can be found in CEO Dario Amodei’s hugely influential essay “Machines of Loving Grace,” which is required reading for anyone who wants to understand the perils and possibilities of the most important technological development in the world today, written from the point of view of the only major lab CEO who acts more like a philosopher than a salesman. Amodei writes that “the triumph of liberal democracy and political stability is not guaranteed, perhaps not even likely, and will require great sacrifice and commitment on all of our parts, as it often has in the past.”1 Later in the essay, he says:
“The vision of AI as a guarantor of liberty, individual rights, and equality under the law is too powerful a vision not to fight for. A 21st century, AI-enabled polity could be both a stronger protector of individual freedom, and a beacon of hope that helps make liberal democracy the form of government that the whole world wants to adopt.”
Anthropic is also the only frontier lab training its AI based on a model of “Constitutional AI.” This approach culminated in Anthropic’s recent release of a Constitution for its AI model, Claude. Claude’s Constitution outlines a legible vision of Anthropic’s intentions for Claude’s character, values, and behavior. Interestingly, the primary audience for this document seems to be Claude itself: Anthropic acts like a parent bringing a new entity into the world, one it hopes will be guided by good values as it grows beyond any specific rules its parents could prescribe. While the status of AI consciousness and moral patienthood is still unclear, Anthropic is taking every possibility seriously.
Anthropic has been having a major moment in the sun these last few months: it deployed the best coding model (one that even its competitors use), it closed an unprecedented round of funding that speaks to the solidity of its enterprise-focused business model (as opposed to OpenAI’s focus on mass-market products), and it just gained far more public awareness by airing well-received ads during the Super Bowl that skewered OpenAI’s proposed shift toward advertising in ChatGPT.
AI will enormously impact the world, and given the dangers of a market-driven race to the bottom, Anthropic is the only major player at least trying to steer the outcome in a direction that is unambiguously in favor of liberal democracy. While other AI companies are lobbying for less regulation through David Sacks’ corrupt influence in the Trump Administration, Anthropic is pushing for more transparency, coordination, and safeguards.
If you believe in liberal democracy, then Anthropic and Claude deserve your attention, even if you don’t use AI yourself. Those who don’t use AI can still support Anthropic by using their voices to spread the message that Anthropic is the “good” AI company, the “thinking person’s” AI company, the only big AI company that seems to take liberal democracy (and its many threats) seriously. Claude is the AI for liberal democracy. In a world where hype, funding, narratives, and frontier research are all connected through market dynamics and social media, elite consensus and promotion among pro-democracy voices can have a big impact on the shape of the AI race. Given that every other major AI company flirts much more openly with varying flavors of fascism, supporting Anthropic in any form is helpful.
If you do use AI for any purpose, and especially if you pay money for it, then I encourage you to switch to Claude. This moral good requires no functional sacrifice, since Claude is as good as, or better than, ChatGPT and the other frontier models at most tasks. If you’re attached to how well ChatGPT “knows” you, don’t worry; Claude will get to know you fast, too, and he feels more human to interact with. Money spent on Anthropic goes to alignment and interpretability research; money spent on ChatGPT goes to pro-Trump super PACs: OpenAI president Greg Brockman just gave Trump $25 million.
And, of course, Claude has a Constitution! Think of it this way: if you were hiring labor from a foreign country, wouldn’t you feel better, morally speaking, hiring a worker protected by rights and empowered by a Constitution?
I am not affiliated with Anthropic and do not profit from this suggestion. Like Anthropic, I am motivated more by values than profit.
Further Tech & Democracy Reading:
“The Adolescence of Technology” is Dario Amodei’s more recent follow-up essay to “Machines of Loving Grace” (which is, by the way, named after the fantastic poem “All Watched Over by Machines of Loving Grace” by Richard Brautigan).
From the Allen Lab for Democracy Renovation: “Ethical-Moral Intelligence of AI”
As the AI arms race ramps up, we can’t let big tech control access to information
“AI & Democracy: Mapping the Intersections” from the Carnegie Endowment for International Peace
State-led crackdown on xAI & Grok’s creation of nonconsensual sexual images
Florida’s proposed “AI Bill of Rights”
Trump voters in red states oppose AI acceleration— IFS Survey. (Relates to my last Tech & Democracy piece, linked below)
And, if you missed it, check out the first article in our new series on Rewiring Democracy by Bruce Schneier & Nathan Sanders:
I’d also argue that Dario Amodei, as a philosopher, exhibits the intellectual attitude of American Pragmatism that defines the best American philosophers of liberal democracy, like William James and John Dewey. Amodei writes: “I’ll continue to make the optimistic case, but keep in mind everywhere that success is not guaranteed and depends on our collective efforts.” This mix of contingency, agency, and the “will to believe” is reminiscent of James Baldwin’s attitude: it’s not that the moral arc of history inevitably bends towards justice; rather, “here we are, at the center of the arc… Everything now, we must assume, is in our hands.” I wrote about this in my essay “Contingency and Courage.” Notably, Barack Obama has come to this attitude in recent years, as evidenced in these recent Baldwinian remarks.

If we care about democracy and our future, we ought to critically assess Anthropic's involvement in intellectual property theft (https://www.washingtonpost.com/technology/2026/01/27/anthropic-ai-scan-destroy-books/). There seems to be a glaring moral inconsistency there. More broadly, as it relates to LLMs, "three key features [indicate that they] inflect the workings and logics of authoritarianism: (selective) inhumanisation, the cult of intelligence and scaling," rather than democracy (https://rgs-ibg.onlinelibrary.wiley.com/doi/pdf/10.1111/tran.70048). Also, the well-documented environmental harms of AI and its dependence on precious resources don't bode well for our future. AI that is designed by and to benefit corporate interests, trained on stolen data, and fully under corporate control will not help us with renovating democracy, despite the scaffolding of "Constitutional AI." Lastly, AI is simply code; it's not conscious and should not be anthropomorphized. AI should be built for people, not to be a person.