Consciousness is commonly considered to be a subjective awareness of one’s surroundings, the universe, and oneself. Advanced computer programs have a type of self-awareness in that they can check themselves for errors and even correct them (debugging). However, many humans say that is not the same as sentient self-awareness involving feelings and sensations. A computer can have more information about some things, and about itself, than a human can, so it could be argued that it does have a form of self-awareness and that, at least in some situations, human consciousness is not practically important.
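To make the premise concrete, here is a minimal, hypothetical sketch (in Python, with invented data and an invented repair rule) of the kind of mechanical "self-checking" described above: a program inspecting its own state, detecting an error, and correcting it. Real self-testing and debugging systems are far more elaborate, but this is the sort of thing being referred to.

from math import inf

# Invented example state: an impossible reading, below absolute zero.
state = {"temperature_c": -500}

def self_check(s):
    """Return a list of problems the program detects in its own state."""
    problems = []
    if s["temperature_c"] < -273.15:
        problems.append("temperature below absolute zero")
    return problems

def self_correct(s):
    """Repair any problems the self-check finds (here: clamp to a valid value)."""
    if self_check(s):
        s["temperature_c"] = max(s["temperature_c"], -273.15)
    return s

print(self_check(state))    # ['temperature below absolute zero']
print(self_correct(state))  # {'temperature_c': -273.15}
print(self_check(state))    # []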
Some say that, as with sentience, human-like (or ‘higher than human’) consciousness is required for artificial general intelligence (AGI). AGI is artificial intelligence that can think and act across the whole range of tasks that a human brain can. Others say it is not.
Of course, a lot of it depends on how you define artificial general intelligence (AGI). If you define AGI by function and behavioral results, then having human-like qualities is not a requirement unless those qualities are needed to produce those functions and results. If you define AGI as having all the qualities of a human, including feelings and emotions, then it is a requirement. Many computer scientists who judge things by practical results say that human qualities such as aesthetic taste, emotions, sentience, and consciousness are important only when they are required for function. They don’t believe in having these qualities just to have them.
How human-like consciousness is produced is another big debate. Some say it is a ‘thing’ produced by a specific part of the brain. Others say it is a byproduct of the brain’s complexity, of all its workings and parts, and that a similarly complex, advanced artificial brain will produce it. They say AI programmers do not need to specifically try to create it; it will emerge in a sufficiently complex and intelligent system.
Many say that consciousness, along with sentience and emotions, cannot be produced by a digital computer, and there must be a biological component. Some say that you need sentience and emotions before you can have human-like consciousness and that it is highly unlikely a digital computer can produce the required sentience and emotions.
These debates go on and on, and will likely exist as long as humans exist.
The following short video is an excerpt from a 2016 interview with one of the most important and influential philosophers of artificial intelligence, University of California, Berkeley professor Hubert Dreyfus. The author of What Computers Can’t Do, Dreyfus was an outspoken critic of the ideas of computer scientists such as MIT’s Marvin Minsky. Dreyfus doubted that computers could have human-like consciousness, and many of his predictions about AI proved correct, while those of the computer scientists did not.
In this video, Dreyfus explains why he considered the idea of computers gaining consciousness to be folly.
Discussion questions:
Do you think a computer program that can check and debug itself has a form of consciousness? Does consciousness require human qualities, or can there be different types of consciousness?
Do you think human-like consciousness is required to achieve artificial general intelligence?
Do you think the common definition of consciousness is too human-centric?
Dreyfus died in 2017, and a lot has happened in AI since then (chatbot AI, etc.). Does this change your answers?
Recommended further reading:
Philosophy of Artificial Intelligence (a 52-page introductory PDF book that answers the essential philosophical questions about AI)
There are two avenues for content to populate consciousness: perception and association. Computers have a variety of analogs to perception: cameras, microphones, etc. AI at this point is a glamorous text-prediction process, although it uses the same kind of process as human association. Association occurs because the neural connections between the original content and the associated content are sensitive. AI predictive algorithms work in the same way: the predictive status of words is ranked by the commonality of previous linkage, which is just what neural sensitivity is all about. So the process by which content arrives as computational input for AI is very similar to the process by which content is apprehended by the human mind.
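As a rough illustration of the "ranking by commonality of previous linkage" idea, here is a minimal sketch in Python, using an invented toy corpus, of a bigram-style predictor that ranks candidate next words by how often they have followed the current word before. Real language models are far more sophisticated than this, but the ranking-by-prior-association principle is similar.

from collections import Counter, defaultdict

# Toy corpus standing in for the "previous linkage" between words.
corpus = "the cat sat on the mat the cat chased the dog".split()

# Count how often each word has followed each preceding word (bigram counts).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def rank_predictions(word, k=3):
    """Rank candidate next words by how commonly they followed `word` before."""
    return following[word].most_common(k)

print(rank_predictions("the"))  # [('cat', 2), ('mat', 1), ('dog', 1)]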
That said, consciousness is about awareness, and while the processes of AI may be analogous to the way content arrives in the human mind, there is no reason to believe that a computer is aware of the content in the sense that there is a perspective above and beyond the linear processing of information. If there is no extra level of awareness, then the AI processing is similar to the way the leg jerks up when the kneecap is tapped by a rubber mallet: neural processing, but not at the conscious level.
It's becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with human-adult-level consciousness? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came to humans alone with the acquisition of language. A machine with only primary consciousness will probably have to come first.
What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.
I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.
My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, possibly by applying to Jeff Krichmar's lab at UC Irvine. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461