AI will end human life! This is what many people think about the future of life on Earth. Artificial intelligence (AI) refers to a system or machine that acts and works like a human, and the field has been developing since 1955. Improvements in AI have produced significant advances in deep learning, image and speech recognition, and autonomous robotics. Search engines, GPS navigation, Snapchat filters, targeted advertising, and many other routine activities are based on AI.
Exponential growth in computer science over the last ten years, especially in AI, has led to growing concern about the future of humankind. The problem is that, without understanding the limitations of this technology and the way it works, people have built unrealistic expectations on top of its initial successes. AI systems are not really smart; they are just good at pattern recognition. Machines are far from having general intelligence, and humans are not standing still either. Artificial intelligence will not end the human race; it is within the power of humans, with their uniquely brilliant brains, to control AI machines in the future.
One day you open the Amazon website, and right there on the first page you see exactly what you had in mind to buy. You read the news about how a supermarket predicted the pregnancy of a teenage student. When people see these things, they freak out. The first reaction is to say that computers are smart these days, that they can even read your mind. Such thoughts lead to a bigger concern: people think the technologies that can make such amazing predictions and calculations could become so powerful that they threaten humankind’s life on Earth. The fact is that people are overestimating computers’ capabilities. Because computer systems are getting stronger, and people feel the presence of AI in their daily lives, they assume this growth will continue and produce even more powerful machines in the future.
Dubhashi and Lappin note that “these arguments rely on a misplaced analogy between the exponential increase in hardware power and other technologies of recent decades and the projected rate of development in AI”. Artificial intelligence systems are not actually intelligent. To solve complicated problems, humans developed various algorithms, each a simplified version of the way the brain works. One of these inventions was the neural network, which enables a computer to learn by being trained on a database. The method is much like training a dog: each input is paired with a certain output. Dogs can be conditioned to receive orders as input and, as output, act as they have been taught before.
The dog is not processing any data; it is only reacting based on previous data that has been fed to it over and over. It is impossible to train a dog to fetch and then expect it to answer a philosophical question, which requires the kind of complicated parallel processing found in a human mind. Machines are just good at pattern recognition, and the learning method of AI systems is far too simple compared to the human brain. As Olivier aptly states, “after all, human beings designed the computers to perform some of the functions their minds usually carried out, but not all.” The human mind cannot be reduced to the functions of reasoning and calculation.
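The dog-training analogy can be made concrete with a short sketch. The code below (an illustrative example, not taken from the essay’s sources; all names and data are invented) trains a single artificial neuron, a perceptron, which is the simplest building block of a neural network. It “learns” an input–output pattern purely by repeated correction, with no understanding involved:

```python
# A minimal sketch of "training" in the conditioning sense described above:
# a single artificial neuron learns to map inputs to outputs by repetition.
# It recognizes a pattern; it does not understand anything.

def train_neuron(examples, epochs=50, lr=0.1):
    """Perceptron learning rule applied to (inputs, target) pairs."""
    w = [0.0, 0.0]  # weights, one per input
    b = 0.0         # bias (threshold offset)
    for _ in range(epochs):
        for (x1, x2), target in examples:
            # Prediction: fire (1) if the weighted sum crosses the threshold.
            out = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            err = target - out
            # Nudge the weights toward the desired output -- pure repetition.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Teach the neuron the AND pattern: output 1 only when both inputs are 1.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_neuron(data)

def predict(x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0

print([predict(x1, x2) for (x1, x2), _ in data])  # prints [0, 0, 0, 1]
```

Like the conditioned dog, the trained neuron reproduces the pattern it was drilled on and nothing more: ask it anything outside that pattern and it has no notion of what the question even means.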
Humans are relying more and more on AI systems: space programs, airplane control, traffic lights, and recently self-driving cars. AI seems to be outgrowing its role as just a program on a computer that helps people search more easily, recognizes their faces, or predicts the stock market. AI systems are reaching a point where they can directly affect humans. Any program can have errors and glitches; everyone has seen even the most professional applications on their computers or phones crash once in a while. As Dietterich and Horvitz, two prominent computer scientists in the field of artificial intelligence, note, “Some software errors have been linked to extremely costly outcomes and deaths” [NoBD6]. Beyond that, cyberattacks are another possible risk for AI systems: “AI algorithms are as vulnerable as any other software to cyberattack”.
Another concern is that, since AI is there to serve humans, how will it react if it is asked to do something that might threaten others’ lives? Is an AI-based car going to drive at 130 mph because you asked it to get you home as fast as it could? In the movie “I, Robot”, a car is in self-driving mode at 125 mph when the driver decides to take over himself; this handoff between human and AI is one of the most concerning topics and may lead to disasters. These sorts of concerns about AI software are not well understood by people, who tend to compare AI systems with the applications on their phones.
AI machines that affect human lives are programmed differently and tested very carefully in different situations to make sure they are safe for humans. There will be regulations that stop machines from acting outside a safe zone. These disasters are largely preventable; we just need to “… ensure that AI systems responsible for high-stakes decisions will behave safely and properly…” [NoBD6]. Self-driving cars are a good example of addressing this concern. Companies have been developing self-driving cars for years, they are still testing them in different situations and places, and step by step they are introducing new features in new cars, like staying between lane lines or adjusting speed based on the car in front. It is not as if people will wake up one day and see self-driving cars everywhere. The transition is so smooth and so controlled by developers that people may not even notice it.
It is not only existing intelligent machines that people are worried about: “people are often anxious about the possible existence of dangerous intelligent systems”. One possible future for AI is superintelligent systems, which are predicted to surpass human general intelligence and intellectual power.