Why Are People So Scared of Artificial Intelligence?
Artificial Intelligence (AI) is the simulation of human intelligence processes by machines, especially computer systems. These processes include learning (the acquisition of information and rules for using the information), reasoning (using the rules to reach approximate or definite conclusions) and self-correction.
Particular applications of AI include expert systems, speech recognition and machine vision. AI matters in day-to-day life in several ways: it automates repetitive learning and discovery through data, it adds intelligence to existing products, it adapts through progressive learning algorithms, it analyzes more and deeper data using neural networks that have many hidden layers, it achieves remarkable accuracy through deep neural networks, and it gets the most out of data. When algorithms are self-learning, the data itself can become intellectual property.
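To make the phrase "neural networks that have many hidden layers" more concrete, here is a minimal sketch of a small feed-forward network written in Python with PyTorch. The layer sizes, the fabricated data, and the training settings are illustrative assumptions only, not anything described in this article:

# Minimal sketch of a multi-hidden-layer neural network (assumed example).
import torch
import torch.nn as nn

# Two hidden layers sit between the input and the output; these hidden
# layers are what let deeper models learn more complex patterns from data.
model = nn.Sequential(
    nn.Linear(20, 64),   # input features -> first hidden layer
    nn.ReLU(),
    nn.Linear(64, 64),   # first hidden layer -> second hidden layer
    nn.ReLU(),
    nn.Linear(64, 2),    # second hidden layer -> output (e.g. two classes)
)

# Fabricated example data: 100 samples with 20 features each.
x = torch.randn(100, 20)
y = torch.randint(0, 2, (100,))

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# A few training steps: the network "self-corrects" by adjusting its weights
# to reduce the gap between its predictions and the labels.
for step in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

The point of the sketch is only that such a model learns its behavior from data rather than from hand-written rules, which is also why, as discussed below, its internal workings can be hard to interpret.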
Even though AI is helpful and important, people fear it because human actions can have unpredictable consequences. We do not have the ability to think of every possible future situation. Already, algorithms are getting too complex to understand. The artificial neural networks used in machine learning work very well in many situations, but we don’t always understand why they work, and why they sometimes don’t. When we get to the point where machines are designing themselves, we may have no idea what is going on. The machines may even replace their stated goals with meta-goals.
Negative feelings about AI can generally be divided into two categories: the idea that AI will become conscious and seek to destroy us, and the notion that immoral people will use AI for evil purposes.
"One thing that people are afraid of, is that if super-intelligent AI more intelligent than us becomes conscious, it could treat us like lower beings, like we treat monkeys," he said. "That would certainly be undesirable."
However, fears that AI will develop awareness and overthrow humanity are grounded in misconceptions of what AI is, Weinberger noted. AI operates under very specific limitations defined by the algorithms that dictate its behavior. Some types of problems map well to AI's skill sets, making certain tasks relatively easy for AI to complete. "But most things do not map to that, and they're not applicable," he said.
This means that, while AI might be capable of impressive feats within carefully delineated boundaries (playing a master-level chess game or rapidly identifying objects in images, for example), that's where its abilities end.
"AI reaching consciousness there has been absolutely no progress in research in that area," Weinberger said. "I don't think that's anywhere in our near future."
The other worrisome idea, that an unscrupulous human would harness AI for harmful purposes, is, unfortunately, far more likely, Weinberger added. Pretty much any type of machine or tool can be used for either good or bad purposes, depending on the user's intent, and the prospect of weapons harnessing artificial intelligence is certainly frightening and would benefit from strict government regulation, Weinberger said.
Perhaps, if people could put aside their fears of hostile AI, they would be more open to recognizing its benefits, Weinberger suggested. Enhanced image-recognition algorithms, for example, could help dermatologists identify moles that are potentially cancerous, while self-driving cars could one day reduce the number of deaths from auto accidents, many of which are caused by human error, he told Live Science.
But in the "Humans" world of self-aware Synths, fears of conscious AI spark violent confrontations between Synths and people, and the struggle between humans and AI will likely continue to unspool and escalate during the current season, at least.) For more information, https://www.ceymplon.lk/service/it-service/tech-consultancy