In an interview, cognitive scientist Douglas Hofstadter had this to say about Ray Kurzweil's 'singularity', that point at which machines achieve the holy grail of computing, artificial intelligence, and take over evolution from us humans:
Well, to me, this “glorious” new world would be the end of humanity as we know it. If such a vision comes to pass, it certainly would spell the end of human life. Once again, I don't want to be there if such a vision should ever come to pass. But I doubt that it will come to pass for a very long time. How long? I just don't know. Centuries, at least. But I don't know. I'm not a futurologist in the least. But Kurzweil is far more “optimistic” (i.e., depressingly pessimistic, from my perspective) about the pace at which all these world-shaking changes will take place.
In any case, the vision that Kurzweil offers (and other very smart people offer it too, such as Hans Moravec, Vernor Vinge, perhaps Marvin Minsky, and many others — usually people who strike me as being overgrown teen-age sci-fi addicts, I have to say) is repugnant to me. On the surface it may sound very idealistic and utopian, but deep down I find it extremely selfish and greedy. “Me, me, me!” is how it sounds to me — “I want to live forever!” But who knows? I don't even like thinking about this nutty technology-glorifying scenario, now usually called “The Singularity” (also called by some “The Rapture of the Nerds” — a great phrase!) — it just gives me the creeps. Sorry!
Douglas Hofstadter still believes in AI, mind you. I think he more-or-less holds a materialist, mind-is-brain view. His objection to Kurzweil's technological approach is one of method: "A key element in this whole vision is that no one will need to understand the mind or brain in order to copy a particular human's mind with perfect accuracy, because trillions of tiny “nanobots” will swarm through the bloodstream in the human brain and will report back all the “wiring details” of that particular brain, which at that point constitute a very complex table of data that can be fed into a universal computer program that executes neuron-firings, and presto — that individual's mind has been reinstantiated in an electronic medium." Hofstadter thinks this can't be done without first understanding the mind/brain. And in this he says we have a long way to go.
If indeed trillions of nanobots swarm through the bloodstream and create an electronic replica of the human brain, and we succeed in somehow copying all the wiring, I would think all we'll come up with is a computer, not a human mind at all. The problem I see is this: in a classical computer, a bit must be either 0 or 1. Kurzweil's nanobots will be limited to 0s and 1s because I can't imagine these nanobots operating at the quantum level. Even if the nanobots can, for example, copy a certain subatomic wiring in the brain, they'll have to contend with virtual particles, that is, particles that aren't really there, but could possibly be there (or not there). Therefore, being limited to 0s and 1s, they won't be able to capture everything about what a mind is. Humans, for example, aren't limited to 0s and 1s. We can have states of both 0 and 1, or neither 0 nor 1, or we can simply refuse to decide between 0 and 1, or simply not care. Hofstadter believes that we will one day understand the mind. [And perhaps, with the development of the quantum computer (already in its infancy), we might begin to approximate the human mind. In the meantime, allow me to be skeptical of this new venture in computing.]
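To make the bit/qubit contrast concrete, here is the standard textbook formulation, offered only as a sketch of the distinction, not as a claim about how brains actually work. A classical bit holds exactly one of two values, while a qubit can occupy a superposition of both:

\[
  b \in \{0, 1\} \qquad \text{(classical bit)}
\]
\[
  |\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
  \qquad \alpha, \beta \in \mathbb{C},\;
  |\alpha|^2 + |\beta|^2 = 1 \qquad \text{(qubit)}
\]

Note, though, that measuring \(|\psi\rangle\) still yields a plain 0 or 1, with probabilities \(|\alpha|^2\) and \(|\beta|^2\), so even quantum hardware does not straightforwardly give us a state that 'refuses to decide.'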
The interview also has Hofstadter saying something interesting. When asked to explain his claim that the "I" is 'nothing but a myth, a hallucination perceived by a hallucination', and how it conflicts with such things as his compassion, he says, "I can't explain this completely rationally." It is interesting because here he is admitting that reason itself can't account for his beliefs, and yet we are asked to believe (not by Hofstadter particularly, but by science ideologues in general) that the rationality of science is our way to 'salvation.'
In his critique of the Turing Test, software pioneer Mark Halpern had this to say:
The AI champions, in their desperate struggle to salvage the idea that computers can or will think, are indeed in the grip of an ideology: they are, as they see it, defending rationality itself. If it is denied that computers can, even in principle, think, then a claim is being tacitly made that humans have some special property that science will never understand—a “soul” or some similarly mystical entity. This is of course unacceptable to scientists (and even more to aspirants to the title “scientist”).
And here is one of cognitive science's champions, claiming to have no rational way of explaining something. Rationality, I'm sure Hofstadter will agree, can only get you so far. The problem with rationality, I suppose, is its dependence on language. Language, in fact, is an analogue of reason itself. But humans have experience beyond language, a subjectivity that they cannot convey to others. How many times have we described an experience as 'beyond words'? Or try this: describe what a sampaguita smells like to someone who hasn't smelled one before. This 'beyond words' experience is something a machine would somehow have to capture, even as it follows a program written in language.
So can machines think? Will they be able to? Sure, why not? I just don't expect them to think like humans. Artificial intelligence is just that: artificial. Mark Halpern (ibid) offers agnosticism as the most rational approach: "What I would urge on them is agnosticism—an acceptance of the fact that we have not yet achieved AI, and have no idea when if ever we will. That fact in no way condemns us to revert to pre-rational modes of thinking—all it means is that there is a lot we don’t know, and that we will have to learn to suspend judgement. It may be uncomfortable to live with uncertainty, but it’s far better than insisting, against all evidence, that the emperor is well dressed."