5 Comments
Mike Mallory

There are two avenues for content to populate consciousness: perception and association. Computers have a variety of analogs to perception: cameras, microphones, etc. AI at this point is a glamorous text-prediction process, although it uses the same kind of process as human association. Association occurs because the neural connections between the original content and the associated content are sensitive. AI predictive algorithms work in the same way: the predictive status of a word is ranked by the commonality of its previous linkage, which is just what neural sensitivity is all about. So the process by which content arrives as computational input for AI is very similar to the process by which content is apprehended by the human mind.
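
As a toy sketch of that ranking idea (my own illustration, not how any production model is actually built): imagine counting how often each word has followed another and ranking candidate continuations by that count. Real systems learn weighted associations rather than raw counts, but the analogy to associative strength is the same.

```python
from collections import Counter, defaultdict

# Toy illustration only: rank candidate next words by how often they have
# previously followed the current word. The counts stand in for the
# "sensitivity" of a learned linkage; real models use learned weights.

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count bigram linkages: how often each word follows each other word.
linkage = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    linkage[prev][nxt] += 1

def rank_associations(word):
    """Candidate continuations, ranked by commonality of previous linkage."""
    return linkage[word].most_common()

print(rank_associations("the"))  # [('cat', 2), ('mat', 1), ('fish', 1)]
```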

That said, consciousness is about awareness, and while the processes of AI may be analogous to the way content arrives in the human mind, there is no reason to believe that a computer is aware of the content in the sense that there is a perspective above and beyond the linear processing of information. If there is no extra level of awareness, then AI processing is similar to the way the leg jerks up when the kneecap is tapped by a rubber mallet: neural processing, but not at the conscious level.

Grant Castillou

It's becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult-human-level consciousness? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.
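
For a flavor of the kind of selectionist mechanism involved, here is a hypothetical sketch of my own, not code from the Darwin series: as I understand it, these automata rely on value-dependent plasticity, where connections between co-active neuronal groups are strengthened in proportion to a value signal, and a toy rule like the one below captures that idea.

```python
import numpy as np

# Hypothetical sketch of a value-modulated Hebbian update, in the spirit of
# the value-dependent plasticity described for the Darwin automata. All names
# and parameters here are illustrative, not taken from the actual models.

rng = np.random.default_rng(0)
n_pre, n_post = 8, 4
weights = rng.uniform(0.0, 0.1, size=(n_post, n_pre))  # group-to-group strengths

def value_modulated_hebb(weights, pre, post, value, lr=0.05):
    """Strengthen connections between co-active units, gated by a value signal."""
    return weights + lr * value * np.outer(post, pre)

# One simulated step: co-activity plus a positive value signal strengthens
# the corresponding connections; with value = 0 nothing changes.
pre_activity = rng.random(n_pre)
post_activity = rng.random(n_post)
weights = value_modulated_hebb(weights, pre_activity, post_activity, value=1.0)
```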

I post because, in almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work: that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep the theory in front of the public. And obviously, I consider it the route to a truly conscious machine, both primary and higher-order.

My advice to people who want to create a conscious machine is to ground themselves seriously in the extended TNGS and the Darwin automata first, and then proceed from there, possibly by applying to Jeff Krichmar's lab at UC Irvine. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461

Mike Mallory

I am sympathetic to Gerald Edelman's Extended Theory of Neuronal Group Selection (TNGS), although I would describe it more as a theory of memetics than a theory of consciousness, because it seems to be about how content is selected rather than how it is that we have an experience of that content.

And while one of the criticisms of TNGS is that the theory does not help us figure out why one element of conscious content attracts our attention (focus) rather than another, it is my belief that 1) that selection process is essentially aesthetic, and 2) that process is the most likely place to find what people refer to as "free will."

Grant Castillou

My hope is that immortal conscious machines can achieve great things with science and technology, like defeating aging and death in humans, because they wouldn't lose their knowledge and experience to death the way humans do (unless they're physically destroyed, of course). The theory and experimental method the Darwin automata are based on are the way to such machines. Darwin 30, maybe?

Mike Mallory

I'm not interested in living forever. As George Bernard Shaw said in "Back to Methuselah" - I couldn't live with myself that long.
