Artificial Intelligence was introduced to me in the film “The Terminator”. In this film, a machine is sent back in time to kill Sarah Connor before she can give birth to a future resistance leader. Surprisingly, in the second film Sarah finds herself relying on an identical machine to protect her son.
Sarah narrates…
“Watching John with the machine, it was suddenly so clear. The Terminator would never stop. It would never leave him, and it would never hurt him, never shout at him, or get drunk and hit him, or say it was too busy to spend time with him”
In Mary Shelley’s Frankenstein, Victor mulls over whether he should give in to his monster’s desire for a wife. It dawns on him that he has created a technology which, if allowed to procreate, might lead to the end of humanity. The Artificial Intelligence (AI) takeover has been a popular theme in science fiction for years. In fact, one could argue that it dates back to 1818, with the publication of Frankenstein.
Thus the concept of an existential risk of our own making entered the popular imagination. Or, arguably, it became the premise that gave birth to several beloved sci-fi movies, such as the 1968 film 2001: A Space Odyssey. In this classic by Stanley Kubrick, a murderous computer named HAL sets out to sabotage a space mission and kill the entire crew.
Most Science Fiction films follow a common theme.
- Man designs AI
- AI becomes self-aware
- AI strives to gain control
- AI causes the destruction of humanity as we know it
Many believe that a reality where AI overtakes humanity may not be so far in the future. Interestingly, even though most people recognize these dangers, our need for this technology allows us to turn a blind eye to the risks.
Science Fiction meets (AI) reality
What we have now in the field of Artificial Intelligence (AI) is weak AI. Weak AI is designed to carry out specific tasks: facial recognition, driving a car, and other IoT-related (smart-home) functions. However, researchers aim to eventually achieve Artificial General Intelligence (AGI), a program that could outsmart human beings at general cognitive tasks such as reasoning and decision-making.
Advances such as these bring to light that human beings aren’t so much afraid of something more intelligent than us. Instead, we fear the loss of control that comes with it. Intelligence breeds control, and living with the possibility of an intelligent system gaining the upper hand is scary. However, the way we address these concerns can be harmful to humans even if the robots never rise.
Max Tegmark, President of the Future of Life Institute, says, “The fear of machines turning evil is another red herring. The real worry isn’t malevolence, but competence. A super-intelligent AI is by definition very good at attaining its goals, whatever they may be, so we need to ensure that its goals are aligned with ours. Humans don’t generally hate ants, but we’re more intelligent than they are – so if we want to build a hydroelectric dam and there’s an anthill there, too bad for the ants. The beneficial-AI movement wants to avoid placing humanity in the position of those ants.”
In an interview with The Guardian, Tegmark said, “I think Hollywood has got us worrying about the wrong thing.”
Five Ways AI Could Alter the Fate of Humanity
1. Losing control of AI
As we march down the path toward superintelligence, the biggest fear is loss of control. As automated systems become more prevalent, human beings inevitably cede autonomy.
Deep learning works on the principle of loosely replicating the ‘neural networks’ of the human brain: a program learns by passing data through layer after layer of artificial neurons. This level of learning is often cited as a stepping stone toward what’s known as strong AI. Notably, much of this learning takes place without direct human supervision. Hence, deep learning makes it plausible to create an AI capable of making complex decisions better than humans possibly could.
However, the field of deep learning has not reached its full potential yet, so there is no way to say with certainty when, or whether, it could be used to create superhuman intelligence.
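To make the “layers and layers” idea concrete, here is a minimal sketch of how data flows through a small neural network. The weights, biases, and layer sizes below are purely hypothetical, chosen for illustration; real deep-learning systems learn these values from data using libraries far beyond this toy example.

```python
import math

def sigmoid(x):
    # Squash any number into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, layers):
    """Pass an input vector through each layer (weights, biases) in turn."""
    activation = inputs
    for weights, biases in layers:
        activation = [
            sigmoid(sum(w * a for w, a in zip(row, activation)) + b)
            for row, b in zip(weights, biases)
        ]
    return activation

# A hypothetical two-layer network: 2 inputs -> 3 hidden units -> 1 output.
layers = [
    ([[0.5, -0.4], [0.3, 0.8], [-0.6, 0.1]], [0.0, 0.1, -0.1]),  # hidden layer
    ([[0.7, -0.2, 0.5]], [0.05]),                                 # output layer
]

output = forward([1.0, 0.0], layers)
print(output)  # a single value between 0 and 1
```

Each layer transforms the previous layer’s output, which is what lets deep networks build up increasingly abstract representations of their input.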
2. AI has no ‘Humanity’
Most concerns regarding AI stem from the endless ways it could be put to use. If an AI uprising were to happen, we would likely see it first in government offices and in multinational companies chasing profits. People are scared because these digital systems have no inherent values or ethics.
Scholars like Nick Bostrom caution against anthropomorphizing superintelligence. Bostrom argues that an AI wouldn’t feel emotions like love or hate, which eliminates the possibility of AI trying to govern the world the way humans do.
However, if an AI has a goal hard-wired into its neural network, it will carry out that task with ruthless efficiency. On the bright side, this could help solve significant world issues like hunger and poverty, since an AI would pursue effective solutions to the problems it is given.
Seen through Bostrom’s argument, an AI algorithm wouldn’t rationalize its choices while working toward a goal. Instead, it would work like a machine simply following a command.
Perhaps that’s what Stephen Hawking had in mind when he said, “The development of full artificial intelligence could spell the end of the human race.”
3. AI could take our Jobs
Perhaps the most daunting possibility of Artificial Intelligence is millions losing their jobs to machines. An AI takeover of jobs might widen the social and economic divide among people. High unemployment could provoke an uprising against the financially well-off upper classes, while leaving the lower strata of society to face poverty.
4. Increasing Dependence
One of the most apparent anxieties about the rise of superintelligence is the overt dependence to which humans might succumb. The decisions we make are already shaped by what’s on our devices, and the further advancement of AI threatens people’s ability to think for themselves. What if there comes a time when we draw a blank on how to think for ourselves because we haven’t done it in so long, and the machines are thinking for us?
Think back to the example of Sarah Connor: even after nearly dying at the hands of a Terminator, she ends up relying on one to protect what she holds most dear.
5. Autonomous Weapons
Using Artificial Intelligence (AI) technology to develop weapons poses the threat of autonomous weapons falling into the wrong hands. Autonomous weapons, by definition, wouldn’t need the assistance of human beings at all, which makes the prospect of a global AI arms race especially alarming.
In an open letter on autonomous weapons, the Future of Life Institute writes:
“If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow.”
In Conclusion
AI might change the world as we know it. However, nobody knows when, if ever, that will happen, so we should be cautious about attaching a dooms-date to this most advanced form of technology. It’s essential to stay informed and aware of how these tools continue to shape our world, enjoying the benefits this technology brings to our lives while also respecting the risks.