Ghost In The Machine

Originally shared by Rick Wayne (Author)

This is a wonderful, lucid, and short essay on the fundamental flaw of contemporary cognitive science written by a preeminent psychologist.

If you follow my rants at all, you know that philosophy of mind, particularly human judgment and decision-making, is a big interest of mine, and that I've said repeatedly, as Robert Epstein does here, that there's a reason 60 years of the computational theory of mind has produced no significant advances. It's bunk, a textbook case of the trap of paradigmatic thinking.

Some relevant excerpts:

"By the 1500s, automata powered by springs and gears had been devised, eventually inspiring leading thinkers such as René Descartes to assert that humans are complex machines. In the 1600s, the British philosopher Thomas Hobbes suggested that thinking arose from small mechanical motions in the brain. By the 1700s, discoveries about electricity and chemistry led to new theories of human intelligence – again, largely metaphorical in nature. In the mid-1800s, inspired by recent advances in communications, the German physicist Hermann von Helmholtz compared the brain to a telegraph.

Each metaphor reflected the most advanced thinking of the era that spawned it. Predictably, just a few years after the dawn of computer technology in the 1940s, the brain was said to operate like a computer...

...Just over a year ago, on a visit to one of the world’s most prestigious research institutes, I challenged researchers there to account for intelligent human behaviour without reference to any aspect of the Information Processing metaphor. They couldn’t do it, and when I politely raised the issue in subsequent email communications, they still had nothing to offer months later. They saw the problem. They didn’t dismiss the challenge as trivial. But they couldn’t offer an alternative. In other words, the IP metaphor is ‘sticky’. It encumbers our thinking with language and ideas that are so powerful we have trouble thinking around them."

Before Newton, otherwise very intelligent people literally couldn't imagine planetary motion without physical spheres. That was the only example they had. And it allowed them to build functional models -- orreries -- that gave them the subjective sense they were on the right track. They felt they were making progress in accounting for planetary motion -- by stacking spheres onto spheres, for example -- and so it was easy to think they were only a few small revelations away from accounting for everything, when in fact they were almost completely wrong and the real explanation was far simpler.
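
There's a standard mathematical observation -- not from the essay, but worth making explicit -- that shows why the sphere-stacking felt like progress: a chain of epicycles is just a sum of circular motions, which in the complex plane is a Fourier-style series,

\[ z(t) = \sum_{k=1}^{N} r_k \, e^{i(\omega_k t + \varphi_k)}, \]

and with enough terms such a series can approximate almost any observed path. Every added sphere improved the fit, so the model always seemed to be converging on the truth while explaining nothing.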

The computational theory of mind is similarly pervasive -- so much so that it is often difficult to have intelligent conversations with otherwise intelligent computer scientists about consciousness and cognition; they are as mired in it as early astronomers were in the Ptolemaic system. It's often said, for example, that we're on the verge of artificial intelligence and that soon "computers will be as powerful as the human mind."

But as the quantum physicist David Deutsch (and many others) has recently argued, consciousness is not a matter of computational power. At all. If we had had a functioning theory of mind, we could have created artificial intelligence in 1960, when the most advanced, room-sized machines had less computing power than the phone you carry in your pocket. It just would have taken the machine a long time to produce a response.

And we would have waited. Happily.

The problem is not lack of computing power but rather that we don't have a working theory. The so-called Turing "test" only illustrates this. It punts on the issue completely. It treats the mind as an impenetrable black box -- just as creationists treat the evolution of species -- which is the antithesis of science, where we seek explanations for things: good, falsifiable explanations. Turing, certainly a genius, couldn't describe consciousness with the computational theory, despite personally recapitulating the entire history of logic, because the brain is simply not an information processor.

What we are on the verge of now -- if anything -- is a brute-force mimicking of intelligence analogous to the anthropomorphic, piano-playing automata that were popular with aristocrats in the late 1700s. Unlike true consciousness, brute force DOES require power. But what will be produced will not be conscious, just as Google's AlphaGo machine, which recently shellacked the world Go champion, was not conscious and was not being creative. The humans who made it were conscious (and creative). They encoded their creativity in the program and coupled it with a non-conscious algorithm that simply sorted through millions of candidate moves to find the strongest one.
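
To make the contrast concrete, here is a minimal sketch of that kind of non-conscious, brute-force search: exhaustive minimax over tic-tac-toe. It is an illustration of mechanical move-sorting, not AlphaGo's actual method (which combined deep neural networks with Monte Carlo tree search); either way, the program enumerates and scores positions, and at no point is anything aware of anything.

```python
# Brute-force minimax over tic-tac-toe: pure enumeration, no insight.
# The board is a list of 9 cells ('X', 'O', or None), indexed 0-8 row by row.

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Score a position by trying every continuation: +1 if X can force
    a win, -1 if O can, 0 for a forced draw."""
    w = winner(board)
    if w is not None:
        return 1 if w == 'X' else -1
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0  # board full: draw
    other = 'O' if player == 'X' else 'X'
    scores = []
    for m in moves:
        board[m] = player          # try the move...
        scores.append(minimax(board, other))
        board[m] = None            # ...and undo it
    return max(scores) if player == 'X' else min(scores)

def best_move(board, player):
    """Return the move with the best minimax score for `player`."""
    other = 'O' if player == 'X' else 'X'
    def score(m):
        board[m] = player
        s = minimax(board, other)
        board[m] = None
        return s if player == 'X' else -s
    return max((i for i, cell in enumerate(board) if cell is None), key=score)

if __name__ == '__main__':
    # X to move and can win immediately by completing the top row.
    board = ['X', 'X', None,
             'O', 'O', None,
             None, None, None]
    print(best_move(board, 'X'))   # prints 2: the winning square
```

Scale that enumeration up by many orders of magnitude, add a learned evaluation function, and you have something that can shellack a champion -- without a flicker of understanding.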

That is not consciousness -- or intelligence or self-awareness or whatever you want to call it. And that is not at all what your brain does. (If you want to have a better sense of why that is, read the article.)

Part of the problem is that cognitive science often jumbles two distinct aims. The first is to understand the general phenomenon of consciousness. Towards that end, I suspect computers will ultimately be very useful -- as they are now in all kinds of study. What's more, machines may well be self-aware one day. Corvids (crows and ravens) and cephalopods (octopuses) seem to have evolved proto-intelligence completely independently of mammals, which suggests there are any number of ways to get there.

In other words, I'm not saying we won't make AI. Rather, I'm saying that, given the difficulty of the problem, it seems silly to try to develop a theory of consciousness from scratch when we have a valid, verifiable, real-world case to study -- our brains.

And that brings us to the second aim -- so often lost in the first but, I believe, ultimately the more rewarding -- which is to understand our particular kind of intelligence, the specifically human phenomenon mediated by our brains; to build a deeper understanding of ourselves; and to discover paths to increased self-awareness, fulfillment, and happiness, both individually and for the species as a whole.

I'm skeptical we'll discover that first in a machine.

via TDB Gryffyn 
https://aeon.co/essays/your-brain-does-not-process-information-and-it-is-not-a-computer
