Re: virus: kurzweil cuts the mustard

Robin Faichney
Sun, 3 Jan 1999 21:21:41 +0000

In message <>, David McFadzean <> writes
>At 10:17 AM 1/2/99 +0000, Robin Faichney wrote:
>>I'm assuming that it can learn, so if we're talking
>>about an extendable lookup table, you're right. But
>>the point I'm trying to make is that a machine, of
>>whatever sort, has no reason to be more limited in
>>its responses than we are.
>Are you saying it is possible for a machine to act
>like it is conscious, yet not be? (A "zombie" in
>philosophical parlance.)

OK, you got me there -- in the sense that I can't answer either yes or no. Or rather: I can say that no, it's not possible to act entirely as if conscious while not being conscious -- but the reason is not that consciousness is required for certain types of action (where that includes speech acts). It is that there is, in fact, no objective truth about whether a given thing is or is not conscious. It is never simply and straightforwardly true either that a thing is conscious or that it is not. So we cannot say either that consciousness is required for conscious-type action, or that it is not. All we can say is that if a thing acts as if conscious, it will be natural to think, talk and act as if it were conscious.

The problem then is: what if it acts conscious sometimes, but not at others? It is safe to say that machines will be inconsistently conscious-acting for a long time before they reach perfection (if ever). That's a really major problem for practical AI, and one I don't think many people recognise yet. (Though the more general case, of inconsistent user-interface metaphors, certainly is well recognised.)
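Incidentally, the "extendable lookup table" from the quoted passage can be sketched in a few lines. This is purely illustrative -- the class and method names are my own invention, not from any real system -- but it shows the distinction being drawn: a fixed table gives fixed responses, while one that can be extended at runtime has, in principle, no built-in limit on its repertoire.

```python
# A minimal sketch of an "extendable lookup table" responder:
# it maps inputs to canned replies, but can learn new pairs at
# runtime. All names here are hypothetical, for illustration only.

class ExtendableResponder:
    def __init__(self):
        self.table = {}  # maps prompt string -> response string

    def respond(self, prompt):
        # Return a learned response, or admit ignorance.
        return self.table.get(prompt, "I don't know how to respond to that.")

    def learn(self, prompt, response):
        # Extend the table: the same prompt now gets the new response.
        self.table[prompt] = response


bot = ExtendableResponder()
print(bot.respond("Are you conscious?"))   # no entry yet: fallback reply
bot.learn("Are you conscious?", "As far as I can tell, yes.")
print(bot.respond("Are you conscious?"))   # now uses the learned reply
```

Of course, whether such a thing "acts conscious" depends entirely on how rich and consistent its learned repertoire becomes -- which is exactly the inconsistency problem raised above.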

By the way, I realise that long-term inmates of this forum might have a sense of deja vu here, but I'm working on a website on this sort of topic right now -- and this time I mean to make a go of it!