A/N: A second year philosophy term paper for philosophy of mind. Word cap: 2000. Mark: one mark off an A+ because of begging the question once in the last paragraph. :'(

The Ghost in the Machine: A Rationale for Computer Therapy in the Treatment of Depression

Suppose my friend Mary confides to me that she is feeling withdrawn and experiencing the symptoms of depression. After a visit to her doctor, Mary is diagnosed with clinical depression, but her circumstances prevent her from attending therapy. Would Mary benefit from seeing a computer therapist for her depression? Could a computer therapist exhibit strong AI?

Perhaps Mary argues that she wants a therapist who is human and not just a "robot". Mary wants a therapist she believes shares her kind of mind, someone to whom she can relate her depressive state. She believes that other people are more than merely complex but inanimate machines, and that a human therapist is therefore objectively better. Before resting on these problematic assumptions, Mary should first consider what the actual difference is between the mind and the brain. Mary, like many of us, assumes the mind is something separate from the brain. Physicalists, on the other hand, claim they are really one and the same, and we shall explore why they come to such conclusions.

First, we should address why we assume the mind is something separate from the brain. Even when we observe the workings of the brain, the mind and perception cannot be described in terms of a mechanism. Even if a computer therapist were intelligent and able to think and feel, looking inside it would reveal only circuit boards and hardware, not thoughts or sensations. The same principle applies to humans: there is no mind organ we can observe by dissecting the brain, and there is likewise no direct way to know the mind of another, human or otherwise.

Mary, like everyone else, also relies on her own past experiences to impute mind to others: she assumes they have felt or experienced the same things that she has. Mary might still argue that she can't observe or relate to the emotions of a computer because doing so requires some sort of empirical evidence, such as behavior. Mary assumes we can know another person only if their mind is like our own, and therefore accessible to our perceptions. Here Mary is appealing to the argument from analogy: when she expresses that she is depressed, she expects another human to compare the experience to their own in order to understand and credit the claim. To illustrate how this looks in the argument from analogy, her experience of depression can be broken into three stages: 1. stimuli (depressing circumstances such as the death of a loved one); 2. mental processing (the conscious experience of depression); 3. reaction (crying, withdrawing, etc.). To someone observing Mary, only the first and third stages are visible; the second remains inherently private. But because the stimuli and reaction correspond with our own, we conclude that the second stage is also a mental process similar to our own. The crucial problem here is that one subjective experience is an insufficient sample from which to make claims about everyone else.

The behaviorist Gilbert Ryle would claim that the second stage of this argument belongs to the official doctrine (Cartesian dualism) that humans are somehow embodied minds. According to Ryle, this view of mental states is what he calls the ghost in the machine, and assuming the mind is something separate from the brain is absurd. Ryle argues we could simply omit the second stage entirely, because to attribute a mental state to someone is really only to attribute behavior. If the key is behavior, then the next step is to measure the behavioral responses of a computer. The standard behavioral test for intelligence in a machine is the Turing test. Consider for a moment that our computer therapist passes the Turing test: its behavior is indistinguishable from a human's. Mary would presumably not be able to tell the difference, and would readily impute mind to a therapist that behaves like a human towards her. If the computer can achieve this, in theory it would be no different from a human therapist. Ryle would argue that such behavior is a sufficient condition for attributing intelligence, or having a mind. And if it were logically impossible for an unintelligent computer to pass the Turing test, then there would be no immediate reason for Mary not to impute mind to a computer that has passed it.
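To make the shape of this purely behavioral test concrete, here is a minimal sketch in Python. The judge, human_respond, and machine_respond callables are hypothetical stand-ins of my own, not anything from Turing's paper; the point is only that the verdict is reached from text behavior alone.

```python
import random

def imitation_game(judge, human_respond, machine_respond, questions):
    """A minimal Turing-test loop: the judge never sees the respondents,
    only their typed answers, and must decide which label hides the machine."""
    respondents = [human_respond, machine_respond]
    random.shuffle(respondents)  # hide which respondent is which
    transcript = {
        label: [(q, respond(q)) for q in questions]
        for label, respond in zip("AB", respondents)
    }
    # The verdict rests entirely on behavior: the text in the transcript.
    return judge(transcript)
```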

However, the Turing test is a purely behavioral analysis of intelligence; it doesn't account for the internal information processing of a system. Turing test critic Ned Block claims that behavioral dispositions alone are not a sufficient condition for intelligence. Block argues that a non-intelligent system could pass the test, because the only intelligence it has is that of its programmers. To illustrate Block's point, imagine a creature called Blockhead. Blockhead looks identical to a human, but he is controlled by a programmed set of responses covering every possible input he could encounter throughout his life. His responses could even be identical to ones Mary might give, yet Blockhead isn't intelligent because he is only mimicking intelligence. Realistically, an AI that passed the Turing test would need to be built on a theory of mind that offers a more comprehensive explanation of what intelligence is.
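A minimal sketch of what such a lookup-table system would amount to may help; the conversation keys and canned replies below are invented for illustration. Every answer is retrieved rather than produced by any internal understanding, which is exactly why Block denies the system any intelligence of its own.

```python
# A toy "Blockhead": the entire conversation so far is the key, and the reply
# is simply looked up. Block's point is that even an astronomically large table
# would only ever retrieve its programmers' intelligence, never exercise any of its own.
CANNED_REPLIES = {
    ("I have been feeling withdrawn lately.",):
        "That sounds difficult. How long have you felt this way?",
    ("I have been feeling withdrawn lately.", "Since my mother died."):
        "Losing a parent is a profound loss.",
}

def blockhead_reply(conversation_so_far):
    # No processing of meaning occurs: either the exact history is in the
    # table and a stored string comes back, or there is nothing to say.
    return CANNED_REPLIES.get(tuple(conversation_so_far), "...")
```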

Block's Blockhead is an extension of John Searle's Chinese Room argument. In that argument Searle demonstrates how a complex system can mimic intelligence (like Blockhead) without having any genuine understanding. Searle would describe Blockhead as a merely syntactic engine: although it follows linguistic convention and has a vast array of programmed responses, it still doesn't understand. The idea is that forced intelligence is not genuine intelligence. Critics may reply that even though the programmed information processing of the computer may not have understanding, the system as a whole does. However, this still falls victim to the behaviorist's biggest problem: the phenomenon of qualia, the way things like pain actually feel, and the understanding of semantics, what things really mean.

The problem once again becomes perception. Mary is convinced that having a mind and intelligence is essential for a good therapist. As we've seen, the physicalists are convinced the mind is a misnomer for the brain, and the behaviorists are convinced mental states are really behavioral dispositions. The mind is inherently unobservable, but behaviorists attempt to tackle this problem with the private language argument. In his private language argument, Wittgenstein asks whether we develop thought through language, or whether the brain evolves to make sense of abstract thought through language. To explore this, he asks us to imagine an individual raised in isolation, away from contact with any other intelligent people. Wittgenstein theorizes that an internal thought process conducted in language would not develop. That is, without a linguistic community to fix the meaning of terms for private sensations, there is no way to determine whether a sensation is universal. Wittgenstein claims we should reject the idea that something like pain consists in the awareness of private qualia.

Wittgenstein's argument, behaviorism and the argument from analogy all suffer similar criticisms. Observing behavior offers no concrete evidence of actual mental states. The mental process is still inherently private, and we shouldn't reject qualia and consciousness so easily.

Perhaps Mary argues that her doctor has told her that her depression consists of a chemical imbalance of serotonin in her brain. Her doctor has made the argument for type-identity theory, which identifies the mental and the physical as types. Type-identity theory attempts to answer the problem of consciousness by claiming mental states are really just types of brain states; this is a form of reductive physicalism. It is certainly plausible that the experience of depression is a chemical imbalance, but perhaps Mary's computer therapist could also experience depression without a serotonin imbalance. The computer doesn't have a flesh-and-blood brain like Mary's, so if depression really were just a serotonin imbalance, then type-identity theory (or any purely reductive physicalist theory) can't account for a depressed computer. This is the problem of multiple realizability, which arises when we try to identify one sensation exclusively with one physical state. A computer has no physical states in common with a human such as Mary, yet hypothetically they could both experience depression. A more complete account might include the causal function of depression.

Functionalism is ontologically neutral, which means it doesn't preclude the possibility of either physicalism or dualism. Functionalism doesn't try to explain how functional states are realized, only that they can be realized in multiple different ways. Mary might be back to wondering whether her computer therapist could experience mental states and be intelligent. Consider Mary's depression as a functional state, one common cause of which is a serotonin imbalance; its effects can include depressive behavior and additional mental states. In this way functionalism seems to escape the criticism that behaviorism suffered. The notion of multiple realizability raises again the idea that, since a human brain is not required for mental states, Mary's computer therapist could have them as well. If Mary rejects the computer therapist again only because it's not a flesh-and-blood human, she is being chauvinistic by precluding the possibility that intelligence could be multiply realized. A functional definition of a therapist that possesses intelligence must be drawn carefully so that it neither includes things that aren't really intelligent nor excludes things that are.

What might a functional definition for Mary's computer therapist look like? John Searle classifies AI as either strong or weak. Simply put, if Mary's computer therapist were strong AI, it would be able to process causal roles between stimuli, mental states, and subsequent behavior. It would also be able to form beliefs about Mary and act on those beliefs. Functional states of mind are closely related to computational states; recalling the argument from analogy for a moment, the two are structurally similar. Both involve an input and an output with internal processing in between, like a computer running a program. If Mary's computer therapist could do these things, then we could treat it as having a mind.
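One way to picture the structural similarity described above is a toy sketch in which a state like depression is defined purely by its causal role: the inputs that bring it about, the beliefs it interacts with, and the behavior it produces. The triggers and outputs here are illustrative assumptions of mine, not a serious psychological model, and nothing in the sketch says how the state must be physically realized.

```python
from dataclasses import dataclass, field

@dataclass
class FunctionalMind:
    """A toy functional model: the 'depressed' state is characterized only by
    its causal role between input, other internal states, and output behavior.
    Nothing here fixes how that role is realized (neurons, silicon, or otherwise)."""
    depressed: bool = False
    beliefs: list = field(default_factory=list)

    def receive(self, stimulus: str) -> str:
        # Input stage: certain stimuli (illustrative triggers) alter the internal state.
        if stimulus in ("bereavement", "serotonin imbalance"):
            self.depressed = True
            self.beliefs.append("things will not improve")
        # Output stage: behavior caused by whatever internal state now obtains.
        return "withdraws and cries" if self.depressed else "carries on as usual"
```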

However, as with the Blockhead argument, forced intelligence that only mimics isn't genuine. There must be a careful distinction between a computer that exhibits strong AI and one that is only a model of the human mind. The latter is what Searle refers to as weak AI, and while he argues weak AI isn't intelligent, it still offers valuable insight into the functions of the human mind. By now, Mary is still wondering where qualia fit into functionalism. This is where functionalism stumbles, by assuming that two individuals with functionally identical states will experience the same qualia. The argument would run as follows: 1. mental states are functional states, and are multiply realizable; 2. qualitative states are mental states; 3. therefore qualia are functional states. So logically we can assume that for each qualitative state there is a functional state, yet suppose two individuals who are functionally identical experience different qualia. One significant argument for this is the inverted spectrum argument; it supposes that two people with identical functional states experience a color such as red differently.

Intuition compels me, and my hypothetical depression patient Mary, to believe we have formed meaningful relationships with other people and that these people are thinking, feeling beings. But our intuitions cannot escape the subjectivity of our own experiences. As is evident from the argument from analogy, it is impossible to infer on the basis of one example that we are correct in imputing mind to others. Intuitions aside, I find no satisfactory verification to convince me of a computer that exhibits strong enough AI to take the place of a trained human therapist. But if it is the case that we simply impute mind, then I would not be justified in discouraging Mary from seeing a computer therapist on that basis alone.

While we can be sure that certain stimuli will provoke a predictable response in other intelligent beings, the mind itself remains essentially private. Because of this, not only is it easy to become chauvinistic towards AI, but it also says something about our wider humanity that we can dehumanize others by refusing to impute mind where we should. By removing personhood, we become able to commit atrocities such as slavery and genocide. Just as surely as I would impute mind to Mary in order to see her as a thinking, feeling being, I see no immediate issue with Mary imputing mind to a computer therapist to help her cope with depression.

As for weak AI in the treatment of depression, current computer therapists have the capacity to use therapeutic techniques such as Carl Rogers's person-centered therapy. Person-centered therapy has been shown to be effective and requires very little input from a therapist. This type of therapy works for a computer because it doesn't require the introduction of new ideas or comprehension of the problem itself, but only of the words the client inputs.
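As a rough illustration of why this style of therapy is computationally tractable, here is a minimal reflection sketch in the spirit of early programs such as ELIZA. The pronoun swaps and the single response template are simplifying assumptions of mine; the point is that the reply is assembled entirely from the client's own words, with no new ideas and no comprehension of the problem.

```python
# A minimal reflective-listening sketch: the reply only mirrors the client's
# own words back as a prompt to continue, introducing no new content.
SWAPS = {"i": "you", "am": "are", "my": "your", "me": "you"}

def reflect(client_statement: str) -> str:
    words = client_statement.lower().rstrip(".!?").split()
    mirrored = " ".join(SWAPS.get(word, word) for word in words)
    return f"Why do you feel that {mirrored}?"

# Example: reflect("I am worthless") -> "Why do you feel that you are worthless?"
```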

However, I argue that the approach of current computer therapists still cannot substitute for a properly trained human therapist. A computer is still following an algorithm, and it has a limited set of programmed responses. Effectively adapting therapeutic techniques presents another immediate problem for the computer. At present, computer therapists have a very limited scope. More dedicated research into this field is clearly needed, but ultimately I am not convinced that complex mental health problems are solvable by an algorithmic solution.

Works Cited

Block, Ned. "Psychologism and Behaviorism." New York University. Web. 3 Apr. 2015.

Campbell, Neil. A Brief Introduction to the Philosophy of Mind. Ontario: Broadview Press, 2005. Print.