Artificial Worsening

In an article called “AI Ethics: The Case for Including Animals,” Peter Singer and his co-author Yip Fai Tse argue that, given many artificial intelligence systems’ significant impacts on nonhuman animals, with the total number of animals affected annually likely to reach tens or even hundreds of billions, AI ethics needs to broaden its scope to deal with the ethical implications of this very large-scale impact on sentient beings.

Singer and Tse divide AI systems’ impact on animals into three types. The first is AI systems designed to interact with animals, such as those already being used in factory farms, or drones that target and murder animals as part of “population control”.
The second is AI systems that unintentionally interact with animals, such as self-driving cars, which are currently not designed to protect animals on the road (except perhaps dogs, cats, and animals large enough for a collision to cause serious damage to the car and its occupants).
The third is AI systems that impact animals indirectly, without interaction, such as video recommendation algorithms that may ban videos showing cruelty to animals. This, they argue, may reduce the demand for such videos and so change viewers’ behavior towards nonhuman animals.

Although AI has some potential to reduce suffering, for example by using AI systems to screen chemicals for toxicity instead of exploiting nonhumans in painful experiments, the AI systems now commonly used in factory farms are most likely to reduce the production costs of factory farming, and thereby to increase animal consumption and, with it, animal suffering.

The case of self-driving cars is more complex. Regardless of whether a car is driven by a human or an AI, as long as cars drive along roads, animals will get hurt, many of them very severely. Singer and Tse write: “Not all the animals struck or run over by a car die immediately. Some of them might have only their lower bodies crushed, some others will have internal injuries and may even manage to drag themselves to the side of the road, or into nearby bushes, where they may suffer from their injuries over hours or even days before dying or, if very lucky, surviving. A study headed by Fernanda Delborgo Abra estimated that in São Paulo State (Brazil) alone, 39,605 “medium” and “large-sized” mammals were killed on roads by vehicles per year. This study ignores “small mammals,” birds and other animals. Another study by Loss et al. estimated that roughly 89–340 million birds were killed in the US by vehicles on roads each year.”

They think that self-driving cars may be a great opportunity to end the ethical problem of “road kills”. However, they realize that for that to happen, an AI system “needs to be able to identify that it has caused harm, record the data related to the harm, and report it to stakeholders. This is easier said than done. To make an AI system have this ability by design, it seems that the developers of the system have to be able to forecast a certain range of possible harms that the system can cause, and constantly review uncategorized data gathered by the system (e.g. video footage, sounds, signs of harm such as DNA, blood stain, body parts of animals, etc.) to check if there are types of harms that were not identified before. This could involve the participation of people who are experts on animal welfare issues, such as ecologists, conservation biologists, veterinarians, ethologists, animal cognition scientists, and animal activists.”
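To make this requirement concrete, here is a minimal sketch, in Python, of what such a detect-record-report loop might look like. Everything in it is hypothetical: the event fields, the impact-sensor evidence, and the stakeholder callbacks are placeholder names for illustration, not any real vehicle or regulatory API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class HarmEvent:
    """One suspected harm to an animal, recorded for later expert review."""
    timestamp: str
    location: tuple        # (latitude, longitude)
    evidence: dict         # e.g. a video clip reference, an impact-sensor reading
    species_guess: str     # "unknown" when the classifier cannot identify the animal

@dataclass
class HarmReporter:
    """Collects harm events and forwards each one to registered stakeholders."""
    stakeholders: list = field(default_factory=list)  # notification callables
    log: list = field(default_factory=list)

    def record(self, event: HarmEvent) -> None:
        self.log.append(event)        # keep the raw data for later review
        for notify in self.stakeholders:
            notify(event)             # report beyond the developer's own log

def console_stakeholder(event: HarmEvent) -> None:
    print(f"[harm report] {event.timestamp} {event.species_guess} at {event.location}")

reporter = HarmReporter(stakeholders=[console_stakeholder])
reporter.record(HarmEvent(
    timestamp=datetime.now(timezone.utc).isoformat(),
    location=(-23.55, -46.63),
    evidence={"impact_sensor": 0.8, "clip": "cam_front_0142.mp4"},
    species_guess="unknown",
))
```

The step Singer and Tse stress is the last one: the report must reach stakeholders outside the developer’s own organization, such as regulators and animal protection groups, and the uncategorized evidence must be reviewed by experts for harms nobody thought to forecast.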

So they place their hopes in, and call for, AI systems designed in a way that considers their impact on nonhuman animals. But they realize that this is not going to be easy:
The most obvious human stakeholders are the developers (including the staff, teams, companies). But they might not have enough incentive to identify and reduce, and where possible avoid, all harms caused to animals by their AI systems. Hence to make the framework more credible and more ethical from a practical standpoint, we may need to report to further stakeholders, such as government regulators (especially those concerning animal welfare), animal protection organizations, scientists in the fields we mentioned in the last paragraph, the AI product’s owners, and, in the case of companion animals, the owners of the animals, and in the case of farmed animals, both the producers of the animals and the consumers of the animal products.

They therefore express a worry about this hope: “if we establish a norm that it is okay for self-driving cars to simply drive like humans with regard to hitting animals, while potentially having capabilities to better protect animals, the opportunity for an early end to the ethical problem of “road kills” may be missed. Once this norm becomes the status quo, it might be much harder to change than it would be to develop a new norm before AI systems are the dominant way of directing cars.”

Regarding factory farms, they say: “For facilities hidden from public view, such as factory farms, unannounced audits by officials from regulatory bodies should be carried out. Unless the reporting mechanism extends beyond the developers of the AI, we are not optimistic that the moral responsibility for AI systems’ harm to animals will be sheeted home to those who are in a position to alter the systems to reduce this harm.”

And they go even further, arguing that:
Even if accountability is extended in the manner just described, it will likely be difficult to ensure that all the relevant harms, including some indirect ones, are given sufficient weight. Consider the design, manufacture, and sale of AI systems for use in factory-style production of animals. Those making these AI systems could argue that AI will not only make food cheaper and safer for humans but will also bring benefits to the animals themselves. AI may provide early identification of diseases and injuries suffered by the animals, and thereby reduce animal sufferings, and they could reduce or eliminate the sadistic brutality to animals occasionally shown by factory farm workers. Although this is possible, given that industrial animal production is driven by profitability in a competitive marketplace, rather than by consideration of animal welfare, we consider it more likely that if AI can more closely monitor the health of animals, this will also enable producers to respond by crowding even more animals into confined spaces, thus making their enterprises more profitable, even if the increased crowding results in great stress and higher mortality for the animals.

The larger problem is that if AI reduces the costs of factory farming, it thereby strengthens an industry that is morally objectionable. This might help factory farming to remain viable longer, or even to grow further, and therefore to give rise, in the future, to huge numbers of animals being created to lead miserable lives. Companies that contribute to making the factory farming industry more resilient and better able to resist replacement by less cruel and more sustainable alternatives are acting unethically. They are prolonging the existence of a moral catastrophe on a vast global scale.

Another concern they raise is that algorithms, including those used in AI systems, “contain, and therefore propagate, human biases such as racism, sexism, and ageism. These biases were learned from human generated data. Human generated data also contain biases based on species membership, and these speciesist biases will, through AI systems, have consequences (mostly negative, we believe) on huge numbers of animals.

For example, data about humans’ diet carry significant speciesist biases. As the consumption of meat is widely accepted and a common theme of human conversations, a lot of speciesist language data can be learned by AI systems and then propagated through their use. For example, typing the words “chicken” or “shrimp” leads Google, YouTube, and Facebook to give search prompts and search results like “chicken/shrimp recipe”, “chicken/shrimp soup”, “chicken curry”, and “shrimp paste”, indicating that the systems reflect the mainstream human attitude that it is acceptable to regard these animals as food for humans.”
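The mechanism behind such prompts is not mysterious. Below is a deliberately tiny sketch of the statistic that drives autocomplete-style suggestions: counting which words follow a query word in human-generated text. The five-sentence “corpus” is invented for illustration; real systems learn from billions of documents, but the bias transfers in the same way.

```python
from collections import Counter

# A toy corpus standing in for the human-generated text an AI system learns from.
corpus = [
    "easy chicken recipe for dinner",
    "best chicken soup recipe",
    "chicken curry with rice",
    "how to make shrimp paste",
    "quick shrimp recipe",
]

def completions(word: str, texts: list) -> Counter:
    """Count which words follow `word` -- the crude statistic behind autocomplete."""
    counts = Counter()
    for text in texts:
        tokens = text.split()
        for a, b in zip(tokens, tokens[1:]):
            if a == word:
                counts[b] += 1
    return counts

# Food-related completions dominate, because they dominate the source text:
print(completions("chicken", corpus).most_common(3))
# e.g. [('recipe', 1), ('soup', 1), ('curry', 1)]
```

No one programmed the system to treat chickens as food; the association is simply the most frequent one in what humans write.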

They place their hopes in the possibility that “by working with experts in animal behavior and animal cognition, AI developers could learn to associate the sounds, facial expressions, and body movements of animals with the feelings the animals are experiencing, much as humans who live with companion animals are able to do.”
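In machine-learning terms, what they describe is an ordinary supervised labeling task: experts annotate recorded animal signals with welfare states, and a model learns the association. The sketch below illustrates the idea with a nearest-centroid classifier; the features (call pitch and call rate), the labels, and all the numbers are invented placeholders, not real ethological data.

```python
from statistics import mean

# (pitch_hz, calls_per_min) -> welfare label supplied by a hypothetical expert
labeled_calls = [
    ((450.0, 40.0), "distress"),
    ((420.0, 35.0), "distress"),
    ((220.0, 8.0), "calm"),
    ((250.0, 10.0), "calm"),
]

def centroid(label: str) -> tuple:
    """Average feature vector of all calls the experts gave this label."""
    points = [features for features, l in labeled_calls if l == label]
    return tuple(mean(axis) for axis in zip(*points))

centroids = {label: centroid(label) for label in ("distress", "calm")}

def classify(features: tuple) -> str:
    """Assign the welfare state whose centroid is closest to the new call."""
    def sq_dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(centroids, key=lambda label: sq_dist(centroids[label]))

print(classify((430.0, 38.0)))  # distress
print(classify((230.0, 9.0)))   # calm
```
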
But why would something that humans have never bothered to do suddenly happen with the AI systems they are developing?

And indeed, they are not very optimistic that AI systems will be developed to deal with the crucial moral questions that would guide them to address and consider nonhuman animals as well: “leaving these questions to be decided by human designers in a commercially driven field makes it very likely that existing mainstream human values on the treatment of animals will be implemented. If the resulting AI system does not entirely ignore the interests of animals, it is likely to discount those interests in comparison to similar human interests.
If the system is trained to do what humans currently approve of, and to avoid what humans currently disapprove of, then since mainstream human values are speciesist, the AI system will learn to be speciesist, even if we do not explicitly program a speciesist ethic into it.”
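Their point about training on approval can be made concrete. Below is a toy “reward model” of the kind used, in vastly more sophisticated form, to align systems with human preferences. The feedback examples are invented to mirror the speciesist pattern they describe: the same treatment is disapproved for a dog but approved for a chicken. Nothing speciesist is programmed into the model; it learns the bias because the bias is in the labels.

```python
from collections import Counter

# Hypothetical human feedback: actions labeled by what mainstream raters
# currently approve of. The speciesism sits in the labels, not the code.
feedback = [
    ("confine a dog in a tiny cage", "disapprove"),
    ("confine a chicken in a tiny cage", "approve"),
    ("kill a dog for food", "disapprove"),
    ("kill a chicken for food", "approve"),
]

# Count how often each word appears under each label.
word_label_counts = Counter()
for action, label in feedback:
    for word in action.split():
        word_label_counts[(word, label)] += 1

def predicted_label(action: str) -> str:
    """Score a new action by which label its words co-occurred with most."""
    approve = sum(word_label_counts[(w, "approve")] for w in action.split())
    disapprove = sum(word_label_counts[(w, "disapprove")] for w in action.split())
    return "approve" if approve > disapprove else "disapprove"

# The model has learned the raters' speciesism without being told about species:
print(predicted_label("confine a chicken for food"))  # approve
print(predicted_label("confine a dog for food"))      # disapprove
```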

As if humans were not a big enough problem as it is, AI may create an even greater one for nonhumans. AI is a very powerful tool placed in the hands of a very powerful species, one which is also extremely cruel. And that is another reason to hurry up and get rid of humanity as soon as possible, before it develops yet more means of causing nonhumans even more suffering.