Monday, June 30, 2008

The soul of the machine

In an interview, cognitive scientist Douglas Hofstadter had this to say about Ray Kurzweil's 'singularity', that point at which machines achieve the holy grail of computing, artificial intelligence, and take over evolution from us humans:
Well, to me, this “glorious” new world would be the end of humanity as we know it. If such a vision comes to pass, it certainly would spell the end of human life. Once again, I don't want to be there if such a vision should ever come to pass. But I doubt that it will come to pass for a very long time. How long? I just don't know. Centuries, at least. But I don't know. I'm not a futurologist in the least. But Kurzweil is far more “optimistic” (i.e., depressingly pessimistic, from my perspective) about the pace at which all these world-shaking changes will take place.

In any case, the vision that Kurzweil offers (and other very smart people offer it too, such as Hans Moravec, Vernor Vinge, perhaps Marvin Minsky, and many others — usually people who strike me as being overgrown teen-age sci-fi addicts, I have to say) is repugnant to me. On the surface it may sound very idealistic and utopian, but deep down I find it extremely selfish and greedy. “Me, me, me!” is how it sounds to me — “I want to live forever!” But who knows? I don't even like thinking about this nutty technology-glorifying scenario, now usually called “The Singularity” (also called by some “The Rapture of the Nerds” — a great phrase!) — it just gives me the creeps. Sorry!
Douglas Hofstadter still believes in AI, mind you. I think he more or less holds a materialist 'mind is brain' view. His objection to Kurzweil's technological approach is one of method. "A key element in this whole vision is that no one will need to understand the mind or brain in order to copy a particular human's mind with perfect accuracy, because trillions of tiny “nanobots” will swarm through the bloodstream in the human brain and will report back all the “wiring details” of that particular brain, which at that point constitute a very complex table of data that can be fed into a universal computer program that executes neuron-firings, and presto — that individual's mind has been reinstantiated in an electronic medium." Hofstadter thinks this can't be done without first understanding the mind/brain. And in this, he says, we have a long way to go.
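Just to make that scenario concrete, here is a toy sketch of what 'feeding a wiring table into a program that executes neuron-firings' might look like. Everything in it (the network size, the weights, the threshold, the update rule) is my own illustrative assumption, not anything Kurzweil or Hofstadter has specified:

```python
# A toy sketch (my illustration only) of the "wiring table fed into a
# program that executes neuron-firings" idea: a tiny network simulated
# from nothing but a connectivity matrix. All numbers are made up.
import numpy as np

rng = np.random.default_rng(0)

N = 5                                   # hypothetical number of neurons
wiring = rng.normal(0, 0.5, (N, N))     # the "table of wiring details"
threshold = 1.0                         # firing threshold (assumed)
decay = 0.9                             # membrane leak per step (assumed)

potential = np.zeros(N)                 # membrane potentials
fired = np.zeros(N, dtype=bool)         # which neurons fired last step

for step in range(10):
    drive = rng.normal(0.4, 0.3, N)     # hypothetical external input
    # Each neuron integrates input from neurons that fired last step.
    potential = decay * potential + wiring @ fired + drive
    fired = potential > threshold       # fire when threshold is crossed
    potential[fired] = 0.0              # reset neurons that fired
    print(step, fired.astype(int))
```

Even granting the sketch, notice that the entire 'brain' here is just a matrix of numbers being multiplied. Whether that could ever amount to a mind is exactly what is at issue.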

If indeed trillions of nanobots swarm through the bloodstream and create an electronic replica of the human brain, and we succeed in somehow copying all the wirings, I would think all we'll come up with is a computer, not a human mind at all. The problem I see is this: in a classical computer, a bit must be either 0 or 1. Kurzweil's nanobots will be limited to 0's and 1's because I can't imagine these nanobots operating at the quantum level. Even if the nanobots can, for example, copy a certain subatomic wiring in the brain, they'll have to contend with virtual particles, that is, particles that aren't really there, but could possibly be there (or not there). Therefore, being limited to 0's and 1's, they won't be able to capture everything about what a mind is. Humans, for example, aren't limited to 0's and 1's. We can have states of both 0 and 1, or neither 0 nor 1, or we can simply refuse to decide between 0 and 1, or simply not care. Hofstadter believes that we will one day understand the mind. [And perhaps, with the development of a quantum computer (already in its infancy), we might begin to approximate the human mind. In the meantime, allow me to be skeptical of this new venture in computing.]
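For the curious, here is a minimal sketch of the contrast I'm drawing, using a simulated state vector rather than real quantum hardware. The amplitudes and the measurement step are textbook illustration, not anyone's proposal:

```python
# A toy sketch (my own illustration) of the classical-vs-quantum point:
# a classical bit must be 0 or 1, while a qubit's state is a vector of
# amplitudes that can be "both" until it is measured.
import numpy as np

rng = np.random.default_rng(1)

classical_bit = 1                  # nothing in between is representable

# An equal superposition: amplitude 1/sqrt(2) on |0> and on |1>.
qubit = np.array([1, 1]) / np.sqrt(2)

# Measurement probabilities are the squared amplitudes (Born rule).
probs = np.abs(qubit) ** 2         # -> [0.5, 0.5]

# Only at measurement does the qubit yield a definite 0 or 1.
samples = rng.choice([0, 1], size=10, p=probs)
print(classical_bit, probs, samples)
```

The classical bit is stuck at 0 or 1; the qubit is, in a sense, both at once until it is measured. And even that still falls short of 'refusing to decide' or 'not caring.'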

The interview also has Hofstadter saying something interesting. When asked to explain his claim that the "I" is 'nothing but a myth, a hallucination perceived by a hallucination', and its conflict with such things as his compassion, he says, "I can't explain this completely rationally." It is interesting because here he is admitting that reason itself can't account for his beliefs, and yet we are asked to believe (not by Hofstadter in particular, but by science ideologues in general) that the rationality of science is our way to 'salvation.'

In his critique of the Turing Test, software pioneer Mark Halpern had this to say:
The AI champions, in their desperate struggle to salvage the idea that computers can or will think, are indeed in the grip of an ideology: they are, as they see it, defending rationality itself. If it is denied that computers can, even in principle, think, then a claim is being tacitly made that humans have some special property that science will never understand—a “soul” or some similarly mystical entity. This is of course unacceptable to scientists (and even more to aspirants to the title “scientist”).
And here is one of cognitive science's champions, claiming to have no rational way of explaining something. Rationality, I'm sure Hofstadter will agree, can only get you so far. The problem with rationality, I suppose, is its dependence on language. Language, in fact, is an analogue of reason itself. But humans have experience beyond language, a subjectivity that they cannot convey to others. How many times have we described an experience as 'beyond words'? Or try this: describe what a sampaguita smells like to another person who hasn't smelled one before. Yet this 'beyond words' experience is something a machine would have to capture while following a program that uses language.

So can machines think? Will they be able to? Sure, why not? I just don't expect them to think like humans. Artificial intelligence is just that: artificial. Mark Halpern (ibid) offers agnosticism as the most rational approach: "What I would urge on them is agnosticism—an acceptance of the fact that we have not yet achieved AI, and have no idea when if ever we will. That fact in no way condemns us to revert to pre-rational modes of thinking—all it means is that there is a lot we don’t know, and that we will have to learn to suspend judgement. It may be uncomfortable to live with uncertainty, but it’s far better than insisting, against all evidence, that the emperor is well dressed."

4 comments:

cvj said...

Rationality may have its limits, but it does help guard against loony beliefs.

Jego said...

Some people define a 'loony belief' as anything that can't be rationally explained, except when it comes from somebody in their own camp, in a sort of special pleading.

We agree for example that the 9/11 terrorists were in the grip of a loony belief, but to them, we're the loony ones. On the other hand, the eugenics movement is a rational enterprise but we find it abhorrent because of a belief we can't defend rationally without invoking a non-rational and question-begging faith in something like the intrinsic value of a human life.

cvj said...

I think what you're describing is brought about by the false distinction between fact and value.

It is as valid to base a reasoned argument on values as it is on facts. For example, it is still rational to base the argument against eugenics on the intrinsic value of human life.

It would be a different matter if you base the value of human life on the assertion that you are one of God's creatures. That would be based on faith and therefore non-rational.

Jego said...

But you see why I think it's question begging, right? Why do we assume that a human life has intrinsic value?

There is a distinction between fact and value, and it is reasonable to base an argument on values, but where does that value come from? Why, for example, does the life of a street urchin have the same intrinsic value as the life of a Nobel prize winner?