Discussion about this post

Adele:
If we care about democracy and our future, we ought to critically assess Anthropic's involvement in intellectual property theft (https://www.washingtonpost.com/technology/2026/01/27/anthropic-ai-scan-destroy-books/). There seems to be a glaring moral inconsistency there. More broadly, as it relates to LLMs, "three key features [indicate that they] inflect the workings and logics of authoritarianism: (selective) inhumanisation, the cult of intelligence and scaling," rather than democracy (https://rgs-ibg.onlinelibrary.wiley.com/doi/pdf/10.1111/tran.70048). Also, the well-documented environmental harms of AI and its dependence on precious resources don't bode well for our future. AI that is designed by and to benefit corporate interests, trained on stolen data, and fully under corporate control will not help us renovate democracy, despite the scaffolding of "Constitutional AI." Lastly, AI is simply code: it's not conscious and should not be anthropomorphized. AI should be built for people, not to be a person.
