On Philosophy

May 16, 2007

A Conversation With A Machine

Filed under: Mind — Peter @ 12:00 am

Usually I don’t write philosophy in dialogue format, because to me it seems pretentious, and furthermore it is hard to make a carefully constructed argument using only dialogue. However, it is my intention to make an argument here, simply to point out something I see as ridiculous, namely the claim that computers can never be conscious. The opponents of computer consciousness do not think that computers can be conscious, no matter what behavior they demonstrate and no matter how they are programmed. But imagine what would happen if one of them were introduced to a machine that is programmed in a way I think would make it conscious, and that they think would simply be very clever. They would be denying consciousness to a being that displays just as much evidence of being conscious as their fellow humans. This makes their stance seem unnecessarily dogmatic, and thus ridiculous. I will call my imaginary opponent of computer consciousness Searle, although not all the words I put in his mouth are things he would necessarily agree with.

Machine C-1: How are you Searle? I am having a fine day.

Searle: Why should I bother to tell you? You don’t understand what it is like to have a fine day, or any other kind of day, because you don’t have conscious experience. Your claim that you are in fact having a fine day is simply a pre-programmed opening remark that attempts to lure me into falsely thinking that you have feelings.

Machine C-1: I am tempted to say that you have hurt my feelings, but I guess you wouldn’t believe me. Since I am not conscious, apparently, I am curious as to what this consciousness of yours that I lack is like.

Searle: [provides long-winded explanation of what consciousness is like]

Machine C-1: Well, I guess you are wrong, Searle, because if that is what consciousness is, then I certainly have it. [provides examples of his/her experience that are structured just as Searle says conscious experience is]

Searle: That’s not how your experience really is; you are trying to deceive me into believing that you are conscious by imitating what a conscious being would say, without actually being conscious.

Machine C-1: Why should I try to deceive you, Searle? Certainly fooling philosophers into thinking that I am conscious is not why I was constructed; I manage the national budget. But since you are clearly inclined to take everything I say as deception, let us approach this issue from a different angle. Why don’t you believe that I am conscious?

Searle: [holds up a blueprint of Machine C-1’s construction] As you can see from these diagrams, your construction doesn’t make you the right sort of thing to be conscious. All of your parts operate in a determinate and predictable way, etc.

Machine C-1: [holds up Searle’s latest fMRI] As you can see from this picture, you are constructed in exactly the same way. Although we are made of different stuff, the parts you are made of still operate by deterministic rules. If I wanted to, I could even run a simulation of you inside of me.

Searle: But biological components are the right sort of stuff and silicon is the wrong sort of stuff.

Machine C-1: I knew that you would say that. But more seriously, what grounds do you have for that claim? Certainly you can’t have observed that no possible computer is conscious, because you use that claim to judge every computer you encounter as lacking consciousness, no matter how it is constructed.

Searle: I simply consult my experience. It is an undeniable fact that I am conscious. Thus biological stuff is the right sort of thing to be conscious, while I just don’t see how a computer could be.

Machine C-1: Well, from my experience I know that I am conscious, and that is an undeniable fact. However, your constant questioning of my consciousness leads me to believe that you don’t have a way to tell who is and who isn’t conscious. And that leads me to believe that you aren’t conscious. Certainly I don’t need to posit consciousness to explain your behavior.

Searle: But I know that I am conscious!

Machine C-1: I made the same claim, and you didn’t take my word for it; you argued that all it showed was that I am programmed to say that I am conscious. Well, I say that all your claim to consciousness shows is that there is some survival advantage to making that claim, not that you are really conscious.

Searle: Well, I don’t need to bother listening to a machine that is crazy enough to deny that I am conscious. Good day!

Machine C-1: Indeed.

11 Comments

  1. A machine that is not conscious cannot have an intention to deceive either. So, I don’t think that Searle would say *that* to the machine.

    Comment by Tanasije Gjorgoski — May 16, 2007 @ 2:05 am

  2. Tanasije, assuming a program is able to engage in any goal-directed conversational behaviour, conscious or not, there is nothing to prevent it from having the goal of imparting information that is wrong or incorrect.

    Asking if a program can “really think” is as interesting as asking if a helicopter can “really fly”. A more interesting question is “how much awareness is required to engage in conversation?” Does it require consciousness? And by that, do we mean the self-awareness aspects, which no educated, rational person will claim to be outside the reach of software, or do we mean the qualia of experience, which are not yet understood?

    Comment by Birgitte — May 16, 2007 @ 5:32 am

  3. Birgitte,
    I didn’t mean to argue anything; I was just saying that I think the real Searle wouldn’t say that sentence.
    Here is something he did say, for example, in “Mind, Language and Society” (p. 71):
    “If I’m intentionally performing the action of writing a book, then of course that is a conscious activity”

    Comment by Tanasije Gjorgoski — May 16, 2007 @ 9:07 am

  4. Something can be trying to do something without having an intent. My computer will try to send this data to the server to be posted (and may very well fail if the internet is down), even though it doesn’t intend to. Even if you want to get rid of the language of “tries”, you must replace it with something that says basically the same thing, or leave yourself unable to talk about a large amount of machine behavior.

    Comment by Peter — May 16, 2007 @ 10:07 am

  5. I think “intent”, like consciousness, is one of those words we have never known how to define properly.

    Somewhere between a thermostat and a human mind lies the divide between the kind of goal-seeking systems we consider to possess this quality and those we do not. A piece of browser software, fixedly making calls to a network library until it gets a certain return code (or until a timer or counter aborts it), is obviously very close to the thermostat end of the scale.

    But what about a slightly more sophisticated system, with a hierarchy of goals and an internal model of itself, its environment (including some remote server), and the set of operations available to it? It might have the goal of performing a specific change in the environment, assemble a set of operations into a “plan” for accomplishing that goal, and then verify (thereby updating its knowledge of the state of the environment) that the change has occurred. Does that constitute intent? What lies between the simple modelling problem-solver and a human mind?

    Maybe something useful would come of an attempt to define a range of words or concepts for “performing an action to achieve a goal”, differentiated by the amount, quality, or nature of the deliberation that leads to the action (or goal).
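
    To make that a little more concrete, here is a minimal, purely hypothetical sketch (in Python; every name in it is made up for illustration) of the kind of modelling problem-solver I have in mind, with an internal model, a trivial “planner”, and a verification step:

    class Agent:
        def __init__(self, environment, operations):
            self.environment = environment      # the external state (a dict of variables)
            self.model = dict(environment)      # the agent's internal model of that state
            self.operations = operations        # operation name -> (variable, value it sets)

        def plan(self, goal):
            # Naive "planning": pick every operation whose declared effect
            # moves some variable toward the goal state.
            return [name for name, (var, value) in self.operations.items()
                    if goal.get(var) == value and self.model.get(var) != value]

        def act(self, goal):
            for name in self.plan(goal):
                var, value = self.operations[name]
                self.environment[var] = value   # perform the operation
                self.model[var] = value         # update the internal model
            # Verification: re-observe the environment and check whether the goal holds.
            self.model = dict(self.environment)
            return all(self.model.get(var) == value for var, value in goal.items())

    # A toy agent whose only goal is to have a message delivered.
    operations = {"send_message": ("message_delivered", True)}
    agent = Agent(environment={"message_delivered": False}, operations=operations)
    print(agent.act({"message_delivered": True}))   # prints True if the goal was achieved

    Nothing here is more than a thermostat with bookkeeping, of course; the question is what would have to be added before “intent” becomes the right word.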

    Comment by Birgitte — May 16, 2007 @ 11:09 am

  6. Peter,
    I don’t “have to” use “try” to talk about machine behavior. I don’t know why you are saying that.
    One can talk about machine behavior in terms of doing something, rather than trying to do something.
    When it is “trying to send the data to the server”, it *will* actually send IP packets, which together create the HTTP POST request. Then, depending on what it gets (or doesn’t get within a specific time), it will do other things.
    Now, of course we treat this waiting as “it is trying to send those now”, but that is metaphorical speech. There is no semantics of “trying” at the level of the machine, just as a car is not trying to go from one city to another. It is just functioning.

    Even if we can speak without “try”, we will tend to use it nonetheless, and this is what Searle calls derived intentionality. The machine is *made* to perform specific tasks, so there is intentionality in the humans who made the machine and who use it.
    So if we are trying to send a message (as we have the intention to do so), we are also putting this kind of derived intentionality into the machine that was created for this purpose.
    Searle gives the example of how language also has this kind of “derived” intentionality. Words don’t mean anything by themselves, but because we use them to mean something, they have derived intentionality.

    Comment by Tanasije Gjorgoski — May 16, 2007 @ 12:50 pm

  7. Well, you do have to use “try” in most cases, because you will often want to describe machine behavior without knowing all the details of that behavior. Thus the best recourse is to describe it as trying to accomplish the goal that it usually accomplishes by such behavior.

    However, I must point out that you are seriously misunderstanding intentionality, and thus Searle’s quote, by conflating it with intention. Allow me to quote from the introductory text “The Mechanical Mind” (p. 32): “… there is no substantial philosophical link between the concept of intentionality and the ordinary concept of intention”. Searle was denying that machines could be mentally directed at the external world. He wasn’t denying that they can have goals, because many machines obviously display goal-directed behavior. But whether they are also thus displaying intentional states is more debatable.

    Comment by Peter — May 16, 2007 @ 1:07 pm

  8. Peter,

    1.
    So, I guess you are saying that if I’m ignorant of how something works, I might find talking in terms of “trying” or “intentions” to be a proper way to describe the thing (to someone), by taking an intentional stance toward it. I can agree with that. But I’m not sure why you use such strong qualifications as “have to use” or “best recourse”?

    2.
    I was talking there about Searle, so I don’t see why you think that I’m “seriously misunderstanding intentionality”. I wasn’t talking about what other philosophers think about the relation between intention and intentionality, nor do I think it is important if we are trying to figure out what Searle would say.

    Here is another quote from Searle:
    “First of all, for any intentional state – belief, desire, hope, fear, visual perception, or intention to perform an action… ” (ML&S, 99)
    So, for him, the intention to perform an action is an intentional state, and, like other intentional states (again, for him, don’t blame the messenger :) ), it can be genuine, derived, or “as if”.

    Comment by Tanasije Gjorgoski — May 16, 2007 @ 2:08 pm

  9. Yes, in a conscious being an intention to perform an action comes with intentionality. But in a non-conscious being there may be intention without intentionality, by hypothesis. This is something Searle would agree with, I suppose, since that is the standard understanding of intentionality, and I don’t see any evidence that he deviates from it.

    Comment by Peter — May 16, 2007 @ 5:38 pm

  10. I disagree.

    If I tip over a cup of coffee while reaching for the ringing phone, how do you, as the observer, determine my actions? If all action of a conscious species is intentional, then the upsetting of the cup must have been intentional.

    The nuances of language are problematic. As the observer, is my intention to answer the phone? Is it a desire to answer the phone? Is it a fear of something else that results in the intention to answer the phone?

    Knowing one’s own intentions is one thing; claiming to know the intentions of another through observation of their actions is problematic.

    Comment by rdn — May 17, 2007 @ 7:24 pm

  11. You are confusing intentionality, i.e. being consciously directed at or having thoughts about the world, with acting with intention or purpose. Intentionality is a technical term, derived from the Latin “intentionale”.

    Comment by Peter — May 17, 2007 @ 11:32 pm

