I read an article last week in MIT's house magazine Technology Review that purported to prove, once again, that computers will never have "consciousness", however smart they might appear to be.
I could go through the article demolishing it point by point (for example, we don't even have a good definition of what it means for a person to have consciousness, so it's a bit premature to be saying electronics can never have it...), but it's really not necessary. The best refutation of the article is simply to note that every purported argument that silicon and software can never "really" think works just as well as an argument that brains can't think.
Forget all the clever solipsisms and philosophizing. Fundamentally, there are only three possibilities:
1. We think with our brains.
2. We think with something other than, or in addition to, our brains.
3. We don't really think, we just think we do.
If the opponents of "strong AI" agree with (1), a simple iterative argument suffices. We note that the brain is a physical (albeit biological) system that obeys the laws of physics and of systems. In this case they need to come up with a compelling argument for why a physical system composed of silicon (or some other material) that exactly mirrors the organization and behavior of the brain wouldn't be conscious. If that doesn't convince them, consider a process whereby we take a brain that we all agree to be conscious and replace its neurons, one by one, with their silicon counterparts. Is there a point at which the brain ceases to be conscious? If you argue that it's with the first silicon implant, you would logically also have to agree that anybody who suffered a brain injury destroying that same neuron was no longer conscious, which is clearly absurd. If you claim it's with the last one, that attributes some miraculous properties to the presence of a single biological neuron among billions. And if it's somewhere in between... well, whatever point you choose immediately falls to the argument "why not one more or one less?"
If the argument is essentially (2), the opponents of strong AI need to tell us exactly what it is we do think with, if not our brains. If the answer is another physical (biological) component, we are back in case (1) above. If the answer is something non-physical or immaterial, then we are off in the realms of the unscientific, unprovable and supernatural, and this is an argument that literally cannot be debated. Proponents of this position are quite simply assuming the very thing they purport to prove, i.e. consciousness is not material, therefore you can't build one. Not a very convincing argument, I'm afraid.
And finally there is possibility (3). Perhaps the reason the arguments against strong AI are both convincing to many and equally applicable to brains as to silicon is because we really don't think. Perhaps consciousness is simply an illusion conjured up inside our brains as we weave a narrative for ourselves to explain our mechanistic actions. (Although this does raise the question of whose benefit the narrative is for -- perhaps it is simply an irrelevant side-effect of our evolutionarily beneficial ability to imagine and predict?) Among serious consciousness researchers, there is a disturbing and growing body of evidence that (3) might actually be the case...