Saturday, October 5, 2013

Blog 6: ELIZA, Meaning, and the Future of Psychotherapy

So both in this class and in my DTC 475 class, we have talked a bit about ELIZA, the 1966 chatbot modeled after the general idea of a Rogerian psychotherapist. Despite the fact that the program is very limited in its responses, and that many of us today would struggle to talk to ELIZA without wanting to flip a table, at the time many people took the program seriously. Many people believed that ELIZA could understand them, and would have long, intimate conversations with the program.

I believe this relates to our discussions about whether computers can comprehend meaning. At the time of ELIZA, the program certainly couldn't comprehend meaning; it could only guess and parrot responses based on what you said to it. Often these responses made absolutely no sense given the information you had just fed it. However, while the computer itself doesn't comprehend meaning, the people who interact with the computer can create meaning from the information given, and even perceive meaning where it doesn't exist. I believe this still holds true today--computers give us information without comprehending its meaning, but we can pair appropriate meaning with the information given to us, or even create meaning where none exists.
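
To make that "guess and parrot" behavior concrete, here is a minimal sketch, in Python, of how this kind of pattern matching can work. The rules and wording below are my own invention for illustration, not Weizenbaum's actual script: the program matches your sentence against a handful of patterns, flips the pronouns, and echoes fragments of your own words back at you without any grasp of what they mean.

```python
import random
import re

# A minimal sketch of the kind of pattern matching an ELIZA-style program
# relies on. These rules are made up for illustration; they are not
# Weizenbaum's actual 1966 script.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "yours": "mine",
}

RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.*)", ["Why do you say you are {0}?", "Do you enjoy being {0}?"]),
    (r"because (.*)", ["Is that the real reason?", "What other reasons come to mind?"]),
    (r"(.*) mother(.*)", ["Tell me more about your family."]),
]

FALLBACKS = ["Please, go on.", "I see.", "Can you elaborate on that?"]

def reflect(fragment):
    """Swap first- and second-person words so the reply points back at the user."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(statement):
    """Match the statement against each rule and parrot the captured text back."""
    text = statement.lower().strip(" .!?")
    for pattern, templates in RULES:
        match = re.match(pattern, text)
        if match:
            template = random.choice(templates)
            return template.format(*(reflect(group) for group in match.groups()))
    # No rule matched: fall back on a content-free prompt. The program has no
    # idea what you said; it only knows that nothing in its script fit.
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    print(respond("I feel like nobody listens to me"))
    # e.g. "Why do you feel like nobody listens to you?"
```

When no pattern fits, the program falls back on a content-free prompt like "Please, go on.", which goes a long way toward explaining both how a conversation with ELIZA could feel attentive and intimate, and how it can make you want to flip a table.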

On another note, I'm surprised that there haven't been further efforts and advancements in computer psychotherapy since ELIZA, at least none that I've found. I believe there could be promise in the idea. Yes, ELIZA is a rather frustrating program to talk to, but vast improvements have been made in programming and AI since then. If we could create a program with a substantial psychology database and incorporate what we've since learned about building chatbots, there could be a future for an effective digital psychotherapist.

For the record, I'm not suggesting that we fully replace human therapists--partially because I know quite a few people currently studying psychology who probably wouldn't be too happy with that idea, but mainly because many people are more comfortable with face-to-face interaction and would rather speak with a real person. Having heartfelt moments or exchanging emotions, especially, is much harder to do with a computer.

But on the flip side, there are also people who are most likely too shy or uncomfortable to talk their problems out with another person; the unbiased nature of a computer program might be soothing to them, while a chat function would still provide at least a small sense of personal interaction. Personally, I've always been wary of therapists because, no matter how much education and training a therapist may have, humans are imperfect by nature. I've always been hesitant about the idea of trusting my deepest issues, the most fragile parts of myself, to someone who could have some kind of personal bias, not be knowledgeable in the right area, maybe just not like me or work well with me, or simply be terrible at their job. That's why I feel that pursuing the idea of a digital therapist would be worthwhile--it may not solve all of these problems, but ideally it would provide an unbiased source with comprehensive knowledge.

However, the technology obviously isn't all there yet. For one thing, while chatbots have massively improved since ELIZA, it's still pretty rare that I can talk to Cleverbot without getting a headache. Current diagnostic programs are hardly perfect either; there are plenty of jokes about the misadventures of trying to diagnose oneself on WebMD. While WebMD has a large database and can accept information from you, it cannot understand that information well enough to always give an accurate diagnosis. Overall, this isn't a program that could be created immediately; at least a few more years of work would probably be needed.

There is also the question of whether, in order to be a good psychotherapist, a program would have to be able to comprehend meaning and emotion. I do feel that a digital psychotherapist would need a higher comprehension of meaning than computers of today have, especially when it comes to "reading between the lines" of what a patient says--in a sensitive situation like a psychological session, a patient might not always be able to be completely forward. The program may need to recognize when someone is lying or isn't being direct, and to tell the difference between the two, largely so that it can handle the situation correctly, but also to avoid misdiagnosis (as with WebMD) and the general chatbot frustration that comes as the program tries to figure you out.

I also feel that at least a basic comprehension of emotion might be needed, especially to tell whether the patient is upset or uncomfortable. If a chatbot kept pressing personal questions on a patient who, to a human, was showing clear signs of agitation, the patient might become too scared or disheartened to continue the conversation, and might even be scared away for good.

I could go deeper into this, but I fear this blog post may turn into an actual essay, so I'd better stop here. However, I find the idea of digital psychotherapy a fascinating concept, especially since we recently read about how information theory ended up giving inspiration to psychological researchers. Hopefully, as technology continues to grow smarter, it'll be an idea we'll see developed in the future.

2 comments:

  1. The fact that they decided to make ELIZA a psychotherapist seems extremely fitting. We learned in class about the Turing Test, “a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human” (totally Wikipedia). We came to the conclusion that if you don’t realize you’re talking to a machine, then it’s considered good artificial intelligence. Clearly, as you describe, ELIZA sucks, and it’s hard to believe that anyone—even when it first came out—could have had an intimate conversation with her. However, I thought it was very clever of the designers to make her a psychotherapist—someone who analyzes human emotions. This seems to fit the description of what a perfect A.I. should be—something that knows what being human means and can act human, just without actually being human. I’m not sure what my point is in all this, but you definitely gave me some ideas as to what I might include in my own blog; this was a very interesting post.

    Replies
    1. To add some additional food for thought to your comment:

      You're right, it is interesting that people at the time of ELIZA thought that it was even remotely human. But I wonder, was this the first time that conversation had come from something non-human? Even in the case of the telegraph and the telephone, the messages, while delivered by machine, still originated from humans. In the case of ELIZA, though, while a human programmed the base of the responses, ELIZA appeared to respond on its own. Perhaps this is what led to the confusion -- maybe people just couldn't wrap their heads around a machine generating responses that it didn't understand, just as people didn't understand the non-physical nature of the telegraph (such as the woman who tried to send sauerkraut via telegraph). As blatantly non-human as the responses seem to us now, perhaps to the testers the mere fact that a machine was capable of generating responses at all led to the seemingly logical conclusion that it must be capable of comprehension.

      You bring up another interesting idea: what is a perfect A.I.? Would it be one human enough to easily pass the Turing test? By technical definition, that is probably the case, but is that a dangerous thing to strive towards? I don't know if you've taken DTC 475 with Susan Ross, but she had us read the book "Do Androids Dream of Electric Sheep?" (which is the basis of the movie "Blade Runner," if anyone's seen that). In the book, the androids are so human that the humans constantly struggle to come up with a working test to tell the difference between androids and humans -- tests that newer versions of androids keep surpassing, so the struggle continues. The humans can't seem to handle how similar the androids are to them and keep trying to differentiate between the two, despite there being virtually no difference between an android and a non-empathetic human. The point is, while technically the definition of a perfect A.I. is one that can act completely human, the consequences of creating such an A.I. could be quite interesting, especially if said A.I. has any grasp of emotion. At the very least, we'd have to start asking ourselves a lot of uncomfortable questions about what it means to be human -- if we can create an A.I. that acts perfectly human, what is really the difference between it and us? Besides it probably having a far superior intelligence.

      Ahhh, sorry for the long comment. I always tend to ramble in writing. T_T But your comment gave me some interesting ideas to dwell on.
