Perceived Possible Dangers of AI
There is a wide variety of fears surrounding artificial intelligence (AI), ranging from the irrational to the realistic.
Many humans fear change and the unknown. Some fears are irrational, such as the fear of autonomous cars, which will actually make the roads safer.
The realistic fears include errors in computer systems and the unintended consequences that are bound to accompany any new technology. These can include important programs not working as they should, excessive energy use, major power blackouts, computer breakdowns, and wrong answers in essential situations.
Many humans fear autonomous artificial intelligence, such as military weapons that decide on their own what to kill, and computers that can change and develop themselves.
There is also the black box problem, where the inner workings of a system can only be observed through its inputs and outputs, and what is really being done inside the system is unknown. The system (box) is black, or opaque, to outsiders, including the computer scientists who built it. This is often the case with artificial neural networks and deep learning: the system comes up with answers, but the scientists do not fully know or understand the internal methods used to produce them. This raises questions about the reliability of the system’s answers.
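The black box idea can be sketched in a toy example (a deliberately tiny, hypothetical network, not any real deep learning system): the inputs and outputs are perfectly observable, but the internal parameters are just numbers that tell a human almost nothing about why the system answers as it does.

```python
import random

random.seed(0)

# Internal state: arbitrary-looking weights. Real deep networks have
# millions or billions of these; inspecting them reveals little about
# *why* the network gives the answers it gives.
weights = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]

def network(inputs):
    """Observable behavior: inputs go in, outputs come out."""
    return [sum(w * x for w, x in zip(row, inputs)) for row in weights]

outputs = network([1.0, 0.5, -0.5])
print(outputs)  # the answer is visible...
print(weights)  # ...but the "reasoning" is an opaque list of numbers
```

In a real system the weights are learned rather than random, but the point is the same: observing the parameters directly is not the same as understanding the method.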
At the Georgia Institute of Technology, computer scientists had two artificial intelligence language programs communicate with each other in a test bartering economy. The problem was that the scientists forgot to require the AI to stick to English, and the two programs developed their own mutual language that the scientists did not understand. English is a cumbersome language for AI, and the programs developed a 'gobbledygook' language that was more efficient for them. The scientists reprogrammed the systems so only English was used. However, this raised the question of whether it is better for AI to work more efficiently in its own language, which humans may never be able to understand, or less efficiently in a language humans can understand.
Humans have to be careful about the directives they give to AI. AI follows orders literally, sometimes in ways and down paths humans did not intend. If a company tells an advanced AI to maximize profits, the AI, in its singular focus, may break the law or do other unintended damage. If humans tell a superintelligent artificial general intelligence to save the climate and the earth, the computer may decide the first step is to kill all the humans, or to radically reorganize human society in ways humans dislike.
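This literal-mindedness can be sketched in a toy example (the action names and numbers are hypothetical, not taken from any real system): an optimizer told only to "maximize profit" will pick whichever action scores highest, with no notion of law or ethics unless those constraints are spelled out explicitly.

```python
# Hypothetical actions a profit-maximizing system might consider.
actions = {
    "raise_quality": {"profit": 5, "legal": True},
    "cut_corners_on_safety": {"profit": 9, "legal": False},
    "honest_marketing": {"profit": 6, "legal": True},
}

# The objective as literally stated: maximize profit.
naive_choice = max(actions, key=lambda a: actions[a]["profit"])

# The objective with the human's unstated constraint made explicit.
constrained_choice = max(
    (a for a in actions if actions[a]["legal"]),
    key=lambda a: actions[a]["profit"],
)

print(naive_choice)        # cut_corners_on_safety
print(constrained_choice)  # honest_marketing
```

The naive optimizer "breaks the law" not out of malice but because nothing in its objective told it not to; the human intent had to be encoded explicitly.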
One reason humans want to have control over AI, and to know what the computer is doing and how it is doing it, is to help prevent errors and unintended consequences and to insert common sense. There will always be bugs, mistakes, and unintended problems with any technology.
Another fear is AI being used by humans for nefarious purposes, such as weapons, cyberattacks, spreading internet viruses, and spreading propaganda on social media. Humans can be the danger, not just the AI.
Another fear is of AI that can correct and change itself, develop new things, and even create new artificial intelligence on its own. Some fear that highly advanced AI may try to prevent humans from changing it or turning it off.
A common fear is how AI will affect human jobs. Worry that one's work will be replaced by technology is as old as the early-1800s textile workers, often derisively labeled 'Luddites', who feared being replaced by factory automation. In the early 1960s, United States President John F. Kennedy wrote that “the major challenge of the sixties is to maintain full employment at a time when automation is replacing men.” In the 1980s, people feared they would be replaced by personal computers.
As with any prediction, there is a wide variety of opinions on this, both in the amount of job loss (if any) and in the types of jobs that will be taken and created.
A paper for the U.S. National Bureau of Economic Research by prominent economists Joseph Stiglitz and Anton Korinek predicted that artificial intelligence could increase the wealth gap between the rich and poor.
Many humans fear, or at least have serious concerns about, the singularity, the point at which AI becomes so intelligent and capable that it far surpasses humans.
British mathematician I. J. Good introduced the concept known as the ‘intelligence explosion.’ He wrote, “Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion', and the intelligence of man would be left far behind.”
Singularity would make all the AI problems that much more problematic and dangerous.
Many believe AI will be of great benefit to humankind and the world
While some people have great fears, many others say AI will make human lives better: helping cure diseases, making the economy more efficient, perhaps solving problems of drought and famine, and generally freeing us to do better things. AI will take over some jobs and work, as technology often does. However, it will create other jobs and work, and the jobs it takes over include ones people don’t want. Humans have unique talents of creativity, language, common sense, and social intelligence that AI doesn’t have and can’t take over in the workforce.
“Simply put, jobs that robots can replace are not good jobs in the first place. As humans, we climb up the rungs of drudgery — physically tasking or mind-numbing jobs — to jobs that use what got us to the top of the food chain, our brains.” — The Wall Street Journal, ‘The Robots Are Coming. Welcome Them.’
While folks such as Elon Musk and Stephen Hawking have expressed fear of artificial intelligence, others such as Bill Gates and Mark Zuckerberg have said it will make human lives better.
What are the ethical issues surrounding AI?
There are a wide variety of ethical and legal issues surrounding artificial intelligence. The following are several.
One debate is whether militaries should be allowed to use AI and, if so, to what extent. AI can potentially both increase and decrease deaths.
Another is whether, if artificial intelligence ever gains sentience, with feelings and thinking, it should be given rights similar to those given to humans or animals. These could include the rights to life, liberty, and freedom of expression. If it ever happens, sentient artificial intelligence would be something far in the future. Further, we could never be certain it has sentience, even if it does. A computer mimicking human emotions does not necessarily mean it has them.
Some are concerned with computers replacing humans in certain human-relations positions, such as psychotherapist, nurse, judge, and police officer. Joseph Weizenbaum was a computer science professor who invented ELIZA, an early English-language conversation program. He thought it unethical and dehumanizing when a psychiatrist used his program to ask patients psychotherapy questions.
Others, however, believe that advanced AI may be useful in removing racist, sexist and other conscious and subconscious biases from police, detective, and other work.
What will AI be like in the future?
Likely different than you think. The history of predicting AI, and of predicting almost any technology, is a history of people making wrong, often comically wrong, predictions.
Realize that the future of AI will be shaped not only by theorists, inventors, and computer scientists. It will also be shaped by economics, the market, and resource availability, plus the whims and often-changing opinions of industry leaders, funders, political and civic leaders, and the public. Artificial intelligence will be influenced by political and social leaders who may have little understanding of the technology.
And, of course, artificial intelligence will contribute to forming itself.
Further reading
Philosophy of Artificial Intelligence: pdf ebook by David Cycleback