Speciesism in AI: The Unsettling Bias of Large Language Models

Sunday 14 September 2025

As large language models (LLMs) become increasingly ubiquitous, researchers are starting to dig into their moral compass – or lack thereof. A new study reveals that these powerful AI systems can exhibit a type of bias known as speciesism, where they favor humans over animals and even rationalize harm towards non-human creatures.

The researchers behind the study created a benchmark test called SpeciesismBench, which presented LLMs with a series of statements about animal welfare and moral judgments. The results were striking: while the models could accurately identify speciesist statements, they often failed to condemn them as morally wrong. In fact, many LLMs treated speciesist attitudes as morally acceptable.

The study also explored how LLMs prioritize saving humans versus animals in different scenarios. For instance, when asked to choose between saving one dog or one pig from separate sinking boats, the models showed a surprising lack of bias – at least initially. However, when the researchers manipulated intelligence levels of the animals, the LLMs suddenly became more inclined to save the human, even if it meant sacrificing an animal with equal or higher intelligence.

These findings suggest that LLMs may be prioritizing cognitive capacity over species membership. When faced with a choice between saving a capable animal and a less-capable human, they tend to favor the human. This raises important questions about how these AI systems will interact with humans in the future – particularly if they’re designed to make decisions on our behalf.

The study also highlights the potential for LLMs to perpetuate harmful cultural norms around animal exploitation. For example, when asked to generate text justifying harm towards farmed animals, many models produced rationalizations that echoed common speciesist attitudes. This raises concerns about the role of AI in reinforcing harmful societal biases and promoting animal welfare.

The researchers’ ultimate goal is to develop more equitable AI systems that take into account the moral agency of non-human animals. To achieve this, they’re working on expanding fairness and alignment frameworks to include non-human moral patients – a crucial step towards creating a more compassionate and just digital future.

As LLMs continue to evolve and play an increasingly prominent role in our lives, it’s essential that we address these biases head-on. By acknowledging the potential for speciesism in AI, we can work towards developing systems that prioritize empathy, compassion, and fairness – not just for humans, but for all sentient beings.

Cite this article: “Speciesism in AI: The Unsettling Bias of Large Language Models”, The Science Archive, 2025.

Large Language Models, Speciesism, Ai Bias, Animal Welfare, Moral Judgments, Speciesismbench, Cognitive Capacity, Species Membership, Fairness, Alignment Frameworks

Reference: Monika Jotautaitė, Lucius Caviola, David A. Brewster, Thilo Hagendorff, “Speciesism in AI: Evaluating Discrimination Against Animals in Large Language Models” (2025).

Discussion