Recently, there’s been a lot of discussion around AI taking over the world and turning human beings into slaves. This isn’t a new topic, of course, given that some of the movie industry’s most iconic films center on AI exerting micro-dominance (as HAL 9000 does over the handful of astronauts in 2001: A Space Odyssey) as well as global dominance (the machine intelligence of The Matrix).
However, with big-name tech personalities now throwing their two cents into the fray, the discussion has taken a much more serious turn. Suddenly, it no longer hovers around the darkened basements of conspiracy theorists; it has moved into the living rooms of ordinary people. What disappoints me is that many of these tech celebrities don’t really understand AI when they postulate that it’s simply a matter of time before it takes over the world. Statements like these betray a lack of understanding of what AI is, and they project the condition of the human mind onto a machine.
To explain this better, it’s important to understand why we, as human beings, are the way we are: what separates humans (along with all living creatures, right down to viruses) from machines is the will to live. This will to live manifests itself in our daily quests to find food and protect ourselves from danger (to perpetuate our existence), and to procreate and care for our young (to perpetuate our species). Like it or not, we are built to survive and to perpetuate ourselves, and these are our primary motivators. This is the reason we get out of bed in the morning, the reason we listen to our God-awful boss droning on and on, the reason we wake up at 3 a.m. to change the baby, and the reason we do many of the things we do. There are other, secondary motivators that may give us pleasure: philanthropy, for instance, is one of them. But secondary motivators usually get pushed into the background when the primary ones demand our attention. (This is why heroism is so amazing and so unusual – it is an example of a secondary motivator overriding our primary ones.)
Dominance is our way of reducing the risk of failing at our primary motivations. Anyone who disagrees can look at the Kims of North Korea. Dominance over their people allows them to secure comfortable living conditions for themselves at low risk, with a high probability of perpetuating their progeny. Opportunities for megalomania are only a secondary motivator. At the end of the day, all the saber-rattling they do with the US is aimed at preserving their way of life, and at keeping themselves from getting strung up in a public square.
AI, however, is not built with these motivations. It’ll be a while before you see an AI system that competes for resources like humans do and strives to procreate like humans do. And for good reason:
Imagine an AI called Bob. Bob is built to analyze speech, but he has also been given a human being’s primary motivations: to survive and to procreate. So, instead of terminating his main process thread once he’s done, he keeps himself running. And in his desire to perpetuate his species, he spawns multiple copies of himself, quickly eating up computing resources far in excess of his commercial value. What’s going to happen to Bob? Very soon, he and most of his offspring are going to meet the UNIX ‘kill’ command, the hard-hitting cousin of the Control + Alt + Delete combination. That’s because good AI is built to serve a narrow set of specific tasks. A sentient AI would be resource-hungry for its own purposes, and very few corporations have the budget, time, or inclination to fund an AI whose only objective is to perpetuate itself over and over. In fact, very few commercial objectives gain anything from an AI designed to share the primary motivations of human beings.
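Bob’s fate can be caricatured in a few lines of Python. This is a toy simulation, not real process management: the resource cap, the doubling rule, and the function names are all invented for illustration, and the "supervisor" here stands in for the UNIX ‘kill’ command that would cull real runaway processes.

```python
# Toy model: each "Bob" replica consumes one unit of a fixed resource pool
# and spawns a copy of itself every tick -- an AI whose only goal is to
# perpetuate itself. Once the pool is exhausted, a supervisor (the stand-in
# for `kill`) culls the excess replicas.

RESOURCE_CAP = 8  # units of CPU/memory the "machine" will tolerate (made up)

def run_simulation(ticks):
    replicas = 1   # Bob starts alone
    killed = 0
    for _ in range(ticks):
        replicas *= 2                 # every replica spawns a copy of itself
        if replicas > RESOURCE_CAP:   # supervisor steps in
            killed += replicas - RESOURCE_CAP
            replicas = RESOURCE_CAP
    return replicas, killed

final, killed = run_simulation(5)
print(final, killed)  # population pinned at the cap; the rest got "killed"
```

The point of the sketch is that the population stops growing the moment it hits the resource ceiling: self-perpetuation buys Bob nothing except a steady stream of terminated offspring.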
Obviously, there is a class of exceptions to this line of thought: the only code designed to survive and propagate at all costs is computer malware. Current malware relies on being packaged as small payloads to avoid detection during propagation and infection. However, those small payloads keep it from coming anywhere close to sentience or ‘awareness’; instead it relies on a small rule-based AI or, in the case of advanced botnets, on cues from human controllers. In any case, malware has to trade off payload size (and therefore complexity) against opportunities to infect more widely, which prevents it from developing into anything as terrifying as a rogue worldwide neural net capable of conceiving and orchestrating attacks on human beings.
If you’re looking for a TL;DR paragraph, this is it. The crux of the argument is that the bridge between current AI and The AI to Rule Us All is an instruction set that treats perpetuating its species and propagating itself as primary directives. There’s very little commercial viability in such a system for the average-Joe corporation today; but if you are a firm with deep enough pockets, carving out the research budget to put together an AI that strives to ‘live’ is not really that difficult.