Why Artificial Intelligence Leans Evil

Baxter Eaves
2024-11-07
We are worried about AI. Apart from the evident environmental damage from expending vast amounts of energy to train and query models, which has reached such a magnitude that big AI groups are now buying their own nuclear reactors to power them, and the evident financial damage from reallocating billions of dollars (and trillions, if Altman has his way) to technology of questionable value, people are worried about what AI is being used for. We are worried about our children growing up in the age of the algorithm, in which every application on every device customizes itself to keep us engaged and generating revenue, often by degrading our mental states and making us angry, weak-willed narcissists. We are worried about the imminent regression to sub-mediocrity, in which AI-generated garbage overwhelms and drowns out legitimate and beautiful art created by human hearts, minds, and hands. We are worried about becoming beholden to AI: becoming the man in the box of the Mechanical Turk, giving credit to machines for skills and tasks we've fought hard to master, and being thrown under the bus when they fail. We are worried about living in a post-truth world in which we are unable to believe anything we see or hear on a screen or through a speaker [1, 2, 3, 4, &c].
And while AI has given us a lot to worry about, it has done little for us. AI has largely been difficult to use for good. It is failing healthcare. It is failing science. That is not to say that it has done no good. For one, AI has been used to good effect in protein structure prediction, though one could argue that it is also failing in drug design and does little to progress our understanding of mechanisms and pathways. Regardless, I feel it is generally uncontroversial to suggest that AI leans evil.
Why? Because there are more constraints on being good. When we do good we must be careful. We must understand what is going to happen and why, because what we do matters. We want to do no harm, so we must understand risk; we must understand uncertainty. To do this we must build a transparent model of the thing we're trying to do and of the environment in which we're trying to do it, and attach correctly calibrated uncertainty to the outcomes of our actions. AI cannot do this. AI has scaled by taking a decidedly counter-human path: by way of mindless complexity¹. Whereas people seek to explain the world in the simplest terms, AI (today) seeks to overwhelm the complexity in the world with even greater parametric complexity.
"How many parameters do we need?"
"Probably 1.76 Trillion2 should be enough?"
This complexity has made AI models opaque and brittle. And not only brittle, but unpredictably brittle, meaning that any uncertainty they do generate (yes, even with conformal prediction) is likely unrepresentative. When we do evil we do not care about the outcomes of our actions. In fact, the more chaos the better. And certainly, we have seen AI put to better use in domains where understanding is unnecessary and failure is inconsequential³.
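To make the conformal prediction point concrete, here is a minimal Python sketch of my own (a toy, not any particular production model): a crude polynomial fit stands in for the opaque predictor, split conformal prediction sets an interval width from calibration residuals, and coverage is then checked on test data drawn from the same regime and from a shifted one. The guarantee assumes the future is exchangeable with the calibration data; when the world drifts, the promised 90% coverage quietly evaporates.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, shift=0.0):
    """Toy data: y = sin(x) + noise, with an optional covariate shift."""
    x = rng.uniform(-2, 2, n) + shift
    y = np.sin(x) + rng.normal(0, 0.1, n)
    return x, y

# A crude cubic fit plays the role of the opaque predictor.
x_train, y_train = make_data(500)
coefs = np.polyfit(x_train, y_train, deg=3)

def predict(x):
    return np.polyval(coefs, x)

# Split conformal prediction: held-out residuals set the interval half-width
# that should cover 90% of future outcomes -- *if* the future looks like the past.
alpha = 0.1
x_cal, y_cal = make_data(500)
resid = np.abs(y_cal - predict(x_cal))
q = np.quantile(resid, np.ceil((1 - alpha) * (len(resid) + 1)) / len(resid))

def coverage(x, y):
    """Fraction of points that fall inside the conformal interval."""
    pred = predict(x)
    return np.mean((y >= pred - q) & (y <= pred + q))

x_iid, y_iid = make_data(2000)                  # same regime as calibration
x_shift, y_shift = make_data(2000, shift=3.0)   # a regime the model never saw

print(f"coverage, i.i.d. test data:   {coverage(x_iid, y_iid):.2f}")      # close to 0.90, as promised
print(f"coverage, shifted test data:  {coverage(x_shift, y_shift):.2f}")  # typically far below 0.90
```

The intervals look trustworthy right up until the data stop resembling the calibration set, and nothing in the procedure warns you when that happens. That is what I mean by unrepresentative uncertainty.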
AI does not lean evil because it has some sort of science-fiction malevolence (or benevolent malevolence, like reducing the threat humans pose to themselves); it leans evil because of fundamental methodological flaws that make it incompatible with doing good. We are not going to fix this by continuing our current endless cycle of patching a deeply flawed approach, in turn creating a new deeply flawed approach that needs further patching. The only way we can fix this is by doing things entirely differently.
Footnotes
1. If you do not agree with my use of "mindless", I'd like to remind you that Google is building a freaking nuclear reactor.
2. For reference, they say that there are around 86 billion neurons in the human brain.
3. A quick trick to determine whether a task is inconsequential is to ask whether it can be completely automated away or whether a human needs to remain in the loop to assume liability.