The era of artificial intelligence has begun, and it brings many new concerns. Enormous effort and money are being devoted to ensuring that AI will do only what humans want it to do. But perhaps what we should fear most is AI that does exactly what humans want. The real danger is us.
This is not the risk the industry is striving to address. In February, an entire company, Synth Labs, was founded for the express purpose of "AI alignment," making AI behave exactly as humans intend. Its investors include Microsoft-owned M12 and First Spark Ventures, founded by former Google CEO Eric Schmidt. OpenAI, the creator of ChatGPT, has promised to dedicate 20% of its processing power to "superalignment," aimed at steering and controlling AI systems much smarter than we are. Big tech is all in.
This is presumably a good thing, given the rapid pace of AI development. Nearly all conversations about risk concern the potential consequences of AI systems pursuing goals that diverge from what they were programmed to do and that are not in the interests of humans. Everyone can get behind this notion of AI alignment and safety, but it is only one side of the risk. Imagine what could unfold if AI does do what humans want.
"What humans want," of course, is not a monolith. Different people want different things and have endless ideas about what constitutes the "greater good." I think most of us would be rightly concerned if AI were aligned with Vladimir Putin's or Kim Jong Un's vision of an optimal world.
Even if we could get everyone to focus on the well-being of the entire human species, it is unlikely we would agree on what that looks like. Elon Musk made this clear last week when he shared on X, his social media platform, that he was concerned about AI being pushed too far toward "forced diversity" and being too "woke." (This came on the heels of Musk filing a lawsuit against OpenAI, arguing that the company has not lived up to its promise to develop AI for the benefit of humanity.)
People with deeply held prejudices might genuinely believe that it would serve humanity's overall interest to kill anyone they deem deviant. "Human-aligned" AI is essentially only as good, evil, constructive, or dangerous as the people who design it.
This appears to be why Google DeepMind, the AI research and development arm of Google, recently established an internal organization focused on AI safety and on preventing AI from being manipulated by bad actors. But it is not ideal that what counts as "bad" is determined by a handful of people at this one particular company (and a handful of others like it), with their own personal and cultural blind spots and biases.
The potential problem goes beyond humans harming other humans. What is "good" for humanity has, many times throughout history, come at the expense of other sentient beings. That remains the case today.
In the United States alone, billions of animals are subjected to captivity, torturous practices, and deprivation of their basic psychological and physiological needs at any given time. Entire species are systematically subjugated and slaughtered so that we can have our omelets, burgers, and shoes.
If AI were to do exactly what "we" (whoever programmed the system) wanted it to do, that would likely mean enacting this mass brutality more efficiently, at an even greater scale, with more automation and fewer opportunities for sympathetic humans to step in and flag anything particularly horrifying.
Indeed, this is already happening in industrial agriculture, although on a much smaller scale than is possible. Major producers of animal products, such as US-based Tyson Foods, Thailand-based CP Foods, and Norway-based Mowi, have begun experimenting with AI systems intended to make the production and processing of animals more efficient. These systems are being tested to, among other things, feed the animals, monitor their growth, mark their bodies, and interact with them using sounds or electric shocks to control their behavior.
A better goal than aligning AI with humanity's immediate interests is what I would call sentient alignment: AI that acts in the best interest of all sentient beings, including humans, all other animals and, should it exist, sentient AI. In other words, if an entity can experience pleasure or pain, its fate should be taken into account when AI systems make decisions.
To some, this may seem like a radical proposal, because what is good for all sentient life may not always correspond to what is good for humanity. It will sometimes, even often, be at odds with what humans want or with what is best for the greatest number of us. That could mean, for example, eliminating zoos, disrupting nonessential ecosystems to reduce wild animal suffering, or banning animal testing.
Speaking on a recent podcast, Peter Singer, the philosopher and author of the landmark 1975 book Animal Liberation, argued that an AI system's ultimate goals and priorities matter more than its alignment with humans.
"The real question is whether this superintelligent AI is going to be benevolent and want to produce a better world," Singer said, "and even if we don't control it, it will still produce a better world in which our interests are taken into consideration. They might sometimes be outweighed by the interests of nonhuman animals or by the interests of AI, but that would still be a good outcome, I think."
I'm with Singer on this one. The safest and most compassionate thing we can do is take nonhuman life into consideration, even if the interests of those beings may sometimes conflict with what is best for humans. Deprioritizing humanity to any extent, let alone to this extreme, is an idea that will challenge some people. But it is necessary if we are to prevent our current speciesism from proliferating in new and horrific ways.
What we should really ask is that engineers expand their circles of empathy when designing technology. When we think about the word “safe,” let's think about what “safe” means for all sentient beings, not just humans. When we aim to make AI “good,” let’s make sure that this means good for the world as a whole — not just for one species living in it.
Brian Kateman is the co-founder of the Reducetarian Foundation, a nonprofit organization dedicated to reducing societal consumption of animal products. His most recent book and documentary is "Meat Me Halfway."