It is natural to believe that a creator will always have power or influence over its creation.
Believers in GOD know that GOD created man from dust, and then “breathed the breath of life into his nostrils, and man became a living soul.”
On the other hand, believers in mythology hold that the ancient gods (who were probably GOD’s creatures, just like mankind is) had divine power to create and control their creations.
In fact, Greek mythology states that the goddess Aphrodite could make statues spring to life. Aphrodite took pity on an artist named “Pygmalion” after he fell uncontrollably in love with a statue he had carved, granted his dearest wish, and turned the statue into a beautiful woman called “Galatea”.
The god Hephaestus, the blacksmith of the gods, had the ability to forge an army of metal soldiers; he animated each member of that army to move in a way that looked human.
Today, mankind is like Hephaestus, forging metals, animating machines, and “breathing life”—not only into clay, but into steel, zinc, silicon, aluminium, and other materials—thereby creating artificial forms of intelligence.
But, will artificial intelligence keep elevating us higher, and to a point of total freedom, or complete destruction?
Will artificial intelligence overpower mankind if mankind is not careful?
Available evidence suggests that, if mankind is not careful, it could be overpowered by its own creation, especially when that creation is given autonomy, or a very high degree of it.
What is autonomy?
Autonomy is the ability to make decisions without external influence. In robotics, the term refers to a robot’s ability to make decisions without human interference or influence.
We may agree that the autonomy GOD gave mankind has been used by mankind to do many meaningful things—not to mention the meaningless ones.
Autonomous robots (subjects of AI), just like human beings, also have the ability to make their own decisions and carry out actions.
A well-designed autonomous robot is one that is programmed to perceive what is going on in its surroundings, and to make decisions based on what it perceives, or has been programmed to recognize.
But if an autonomous robot hasn’t been programmed to recognize a factor that was unknown to its designers yet is highly influential, how good would the robot be at the job it has been designed for?
I don’t think it would be that good.
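The perceive-and-decide cycle described above can be sketched as a simple loop. The sensor readings and rules below are invented purely for illustration; no real robot works from a table this small.

```python
# A minimal sense-decide-act sketch. The percept names and the rule
# table are hypothetical, chosen only to illustrate the idea that a
# robot can act only on what it was programmed to recognize.

def decide(percept: str) -> str:
    """Map a perceived condition to the action the robot was programmed for."""
    programmed_responses = {
        "obstacle_ahead": "turn_left",
        "clear_path": "move_forward",
        "low_battery": "return_to_dock",
    }
    # Anything outside the programmed table falls through to a default.
    return programmed_responses.get(percept, "stop")

# One pass through the loop for a few sample percepts:
for percept in ["clear_path", "obstacle_ahead", "mud_pit"]:
    print(percept, "->", decide(percept))
```

Note that “mud_pit” was never programmed in, so the robot can only fall back on its default action—exactly the weakness the question above points at.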
The possibility that autonomous AI can cause problems for mankind
If autonomous robots have been “programmed”, isn’t it possible that they might not have been programmed sufficiently? Insufficient programming could lead to confusion and a lot of problems.
Although it’s not good to be pessimistic, we may agree that if science and technology are mixed with selfish, war-mongering, and materialistic attitudes, then autonomous AI could ruin mankind, and probably bring mankind to a bitter end.
Mankind’s creation (AI) has autonomous abilities/tendencies
In the age of science we’ve witnessed the industrial revolution, the electronic revolution, and many others. In the robotic revolution (which includes both human-decision-based AI and autonomous AI) there are “Predator drones”: robot planes that fly without pilots on board.
They have been used with considerable accuracy to target terrorists in countries like Pakistan and Afghanistan; on the other hand, they have missed their targets on a number of occasions.
Currently, there are cars that can drive themselves. Also, there is a highly advanced robot (named ASIMO—Advanced Step in Innovative Mobility) that can do a lot of things some human beings can’t do. It can move around, walk slowly, run, climb stairs, serve coffee, and even dance.
AI based on human decisions, and AI based on autonomous decisions
AI can be controlled either by human decisions or by autonomous decisions: either human beings make decisions for robots, or robots make decisions and take actions by themselves—autonomously, guided by their programming.
The Predator drone, which operates from the sky, and is used to fire deadly missiles at terrorists, is controlled by a human being who probably sits in front of a computer screen (away from the drone) and selects targets; in this instance, a human being calls the shots, and decides what/where should be targeted.
(No one knows for sure whether human beings, or the drones themselves, are at fault whenever the wrong targets are hit.)
Another example involving human decisions is a car that drives itself by following a GPS map stored in its memory, while being monitored by human beings somewhere away from the car.
These two examples won’t easily erase the nightmare of having fully conscious and autonomous robots that might be insufficiently programmed.
Autonomous robots, like “the Roomba”, are capable of making decisions based on what they perceive in their surroundings.
Equipped with sensors that allow them to perceive their environments, these robots do their jobs without the aid of human beings.
But, if programming is insufficient, will autonomous robots do their jobs properly?
So, generally speaking, there are two different types of robots.
The first type of robot, which requires human decisions, is remote-controlled by human beings; while the second one, which is autonomous, is programmed to follow precise instructions without interference from human beings.
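The two types just described can be contrasted in a toy sketch: a remote-controlled robot waits for a human command, while an autonomous one derives its action from its own sensors and pre-written instructions. All class and method names here are illustrative, not any real robot’s API.

```python
# A hypothetical contrast between the two control modes: human-decided
# versus programme-decided. Nothing here models a real robot.

class RemoteControlledRobot:
    def step(self, human_command: str) -> str:
        # Every action is decided by a human operator, away from the robot.
        return human_command

class AutonomousRobot:
    def __init__(self):
        # Pre-written instructions the robot follows without interference.
        self.program = {"wall": "turn", "open_space": "advance"}

    def step(self, sensor_reading: str) -> str:
        # The action comes from the programme, not from a person;
        # unrecognized readings fall back to a halt.
        return self.program.get(sensor_reading, "halt")

print(RemoteControlledRobot().step("advance"))  # the human decides
print(AutonomousRobot().step("wall"))           # the programme decides
```

The fallback to “halt” in the autonomous case is a design choice: it is the programmer, not the robot, who decides in advance what happens when the sensors report something the programme never anticipated.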
These types of robots already exist and have generated a great deal of attention on air and on the internet.
Robots may be slowly entering everyday human activities, just as they have been entering battlefields and places where humans would not want to waste their time.
If human beings decide not to make decisions—as is the case with autonomous robots—they could be playing with helpful toys that are double-edged swords, with the potential to bring both help and destruction.
Without precise and properly monitored decision-making from human beings, smart autonomous robots (especially those designed for warfare) could bring something unsavoury upon mankind.
Although it’s true that awesome breakthroughs have been made, things have to be put into a healthier perspective, especially since scientists at a conference once voiced the opinion that, within about 20 to 1,000 years, mankind will create robots that could be as smart as mankind itself.
In addition, The New York Times once ran a headline titled: “Scientists Worry Machines May Outsmart Man.”
And if we examine autonomous robots closely, there is little certainty about how well mankind will be able to keep controlling them.