Is AI Anti-Animal?

The Rise for Animals Team, February 9, 2024

Artificial intelligence (AI) has been everywhere for some time, but only more recently has it become accessible to even those of us less tech-inclined. This broader access has spurred significant consideration of how AI’s power can be used to help advance worthy causes, like animal rights (including by helping to replace animal testing). 

Although this consideration has rightly highlighted AI’s tremendous potential to support social justice advocacy, it has too often failed to account for its dangerous and largely invisible partialities.

AI depends on human participation and, indeed, is predicated entirely upon data entered by humans. This data, which reflects – and, indeed, is shaped by – humans’ “views and actions”, becomes a tremendous problem in a world like ours in which so many of these “views and actions” stem from or otherwise betray prejudices. Indeed, discriminatory “patterns prevail” in these datasets.

AI’s grounding in human prejudices has been the subject of recent deep dives (like that showcased in Coded Bias) that have made the case for AI as a “replicator” – and, even more troublingly, a perpetuator and normalizer – of humankind’s “discriminatory patterns . . . such as sexism, classism, racism.”

It should come as no surprise, then, that AI is also a replicator, perpetuator, and normalizer of speciesism.  

Speciesism – or “the belief that a mere difference in species justifies us in giving more weight to the interests of members of one species (usually our own . . . ) than the similar interests of members of other species” – is a prejudice, “similar to sexism and racism”, that underlies all human exploitation of other-than-human animals, including inside laboratories.

Unlike other forms of human-on-human discrimination like sexism and racism, however, speciesism is not “widely accepted” to be “wrong”, and its “biased views and actions [] are shared, accepted, and performed by a large majority of society”. As a result, its elimination from AI is not a “high priority” (if it’s even on the list at all…):

“Massive efforts are made to reduce biases in both data and algorithms to render AI applications fair. These efforts are propelled by various high-profile cases where biased algorithmic decision-making caused harm to women, people of color, minorities, etc. However, the AI fairness field still succumbs to a blind spot, namely its insensitivity to discrimination against animals.”

Unaddressed, speciesism remains “very widespread across most AI systems” and is poised to harm the animal rights movement.

Indoctrinated with discriminatory beliefs in favor of humans and against other-than-humans, “AI technologies currently play a significant role in perpetuating and normalizing violence against animals.” Put differently, “the manifold occurrences of speciesist machine biases lead to a subtle support, endorsement, and consolidation of systems that foster unnecessary violence against animals.” 

There also exists the possibility that AI could expand our exploitation of other-than-human animals by “introduc[ing] these [speciesist machine biases] in social contexts in which they have not previously existed”.   

To spark this change ourselves, all we have to do is change the world (something we in the animal rights movement were already planning to do anyway, right?!).

At present, “none of the major AI companies [] have any mention of animals in their ethical guidelines, and they’re not instructing data workers to consider how responses affect animals”. This means that speciesism will continue to be “hardwired into algorithms running our lives”.

And this means that those of us in the animal rights movement must remain vigilant in our opposition to oppression in all of its forms – for changing our machines’ reflections of our world requires changing our world itself, and changing our world itself requires each and every one of us taking action.

The first step to action is education, so please share this blog and expand your conversations about AI to include its dangerous, inherent biases.
