Wednesday, October 23, 2013

Blog 8: We are the Machine. The Machine is Us

I admit I felt rather inspired by Molly's blog for this week, as I am also taking DTC 475 and have also been mulling over the similarities between the class materials. I also felt inspired by one of the videos we watched on Monday, The Machine is Us/ing Us, largely because while we've talked in class about how machines are like humans or are becoming like humans, we haven't talked so much about how humans are becoming like our machines.  

So in The Shallows, one of our DTC 475 readings, Carr talks about how information technologies mentally change us. Even as far back as the invention of writing itself, our minds changed: we no longer needed to remember the massive amounts of information that an oral culture did -- it just wasn't necessary anymore (Carr 56-57). Later, with the invention of the typewriter, we see how writers were affected by it; Carr offers a quote from T. S. Eliot, who said that his writing changed from long, fluid sentences to sentences that were short and staccato (Carr 209). 

We have continued to see this with the rise of the digital age. The main point of The Shallows is to detail how the human brain has changed due to the internet: how we have lost some of our capacity to think deeply and have instead shifted to a brain more oriented toward the multi-tasking many of us are used to on the internet, such as working with multiple tabs or programs.

I think part of the reason the internet has especially taken people by storm is just how immersive it is. As we discussed a little bit on Monday, advanced programming has made it so that the average user doesn't have to deal with code in order to post information; it seems to flow almost instantaneously. Typing a blog post for the world to see takes less time, effort, and money than, say, sending a telegram.

One idea I was toying with for the midterm, though, was whether information technologies will become even more immersive, to the point where posting information is almost literally as fluid as thought. Will we reach a point where humans and information technologies are not separate entities, but almost combined into one? Perhaps you would only need to think in order to post content? Or perhaps mouse control would work through a device that tracks your eye movements? I decided against the idea for my midterm project since I don't see this happening in eight years, but I could see something like it happening in maybe as little as fifteen. It would be interesting, as it would revolutionize how we currently think about our relationship with technology.  

What would this idea mean for information, though? In my midterm I talked about a defining moment where we need to decide how we handle information -- a technology like the one I describe might help provoke that moment, if it hasn't already happened. I say that because such technologies may worsen our current problem of not knowing how to deal with the sheer amount of information sharing and reproduction, if the problem hasn't already been solved by then. Such a fluid technology might make it even more difficult to control what users can and cannot access, and browsing and posting remixed information would likely grow even easier. Perhaps it would be the final straw for current copyright holders: either attempt to put a stop to the ease of accessing and reproducing information, or give up and let information remixers do what they want, at least to a certain degree.  

Overall, it seems that information technologies are growing ever more immersive. It will be interesting to see what this immersion will cause us to reconsider -- whether it be psychology, copyright, or information itself. 


Thursday, October 10, 2013

Blog 7: Information Technologies and the Department of Redundancy Department


This image illustrates a connection between various information technologies that we have looked at: that redundancy in information given is incredibly important. The more information given, the better!

It's interesting because in everyday life, such as in writing and in speech, redundancy is not considered a good thing. We are encouraged not to beat around the bush and to get straight to the point. Language that repeats a single point over and over again, or that belabors the obvious, is not considered impressive or interesting by any stretch of the imagination. For example, if I wrote a story with this as an excerpt: "it was raining outside. I decided to take a walk in the rain. The rain got my hair wet. I almost slipped on the freshly rained-on concrete," the reader would probably want to throw something in frustration. The reader understood the information the first time, and it is neither useful nor interesting to repeat it over and over again.

In information technology, though, redundancy is actually valuable: repetition of information helps catch or prevent errors and miscommunications, and we can see this at work throughout history.

For example, Gleick describes redundancy in the language of the talking drum as "the antidote to confusion" (25). Because certain patterns of tones could mean more than one word, adding extra information was a necessity to make sure that the correct message got across. While listening to the talking drum would be more time-consuming than anyone would want a normal conversation to be, redundancy eliminated the potential for errors.

We also see Gleick detail how a lack of redundancy in abbreviated telegraph messages made them vulnerable to error. As Gleick says, "because they lacked the natural redundancy of English prose--even the foreshortened prose of telegraphese--these cleverly encoded messages could be disrupted by a mistake in a single character" (158). While the coded messages were economical, reducing information to only its necessary, if hidden, core, they were prone to errors that couldn't be noticed without the natural redundancy of language -- and in the case of the wool dealer Frank Primrose, even a single error turned out to be incredibly costly.

We also see the importance of redundancy in current digital information technologies. In one of the videos we watched, Claude Shannon - Father of the Information Age, we are told how Claude Shannon built redundancy into digital information: by adding extra bits and various error detection and correction codes to a digital medium, the information can still come through clearly even if part of the data is faulty or corrupted. (In the video, the bit on redundancy starts at about 15:03.)
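To get a feel for how that works, here's a toy sketch of my own (not from the video) of one of the simplest error-correcting schemes, a triple-repetition code: every bit is sent three times, and a majority vote on the receiving end absorbs any single flipped bit per triple.

```python
# Toy illustration of redundancy as error correction: a triple-repetition code.
# Each bit is transmitted three times; a majority vote when decoding
# can correct any single flipped bit within a triple.

def encode(bits):
    return [b for b in bits for _ in range(3)]

def decode(received):
    out = []
    for i in range(0, len(received), 3):
        triple = received[i:i + 3]
        out.append(1 if sum(triple) >= 2 else 0)  # majority vote
    return out

message = [1, 0, 1, 1]
sent = encode(message)          # [1,1,1, 0,0,0, 1,1,1, 1,1,1]
sent[4] = 1                     # corrupt one bit "in transit"
assert decode(sent) == message  # the redundancy absorbs the error
```

Real codes like Shannon and Hamming's are far more efficient than sending everything three times, but the principle is the same: the "extra" information is what lets the receiver notice and repair damage.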

While redundancy in conversation may make you want to strangle someone, it's hard to overstate the value redundancy has had to information technologies throughout history. To bring it all back to the image I posted, when it comes to redundancy and information technologies, there's never too much of a good thing! With information technologies, it may be possible to say that there is really no such thing as Too Much Information.

Saturday, October 5, 2013

Blog 6: ELIZA, Meaning, and the Future of Psychotherapy

So both in this class and in my DTC 475 class, we have talked a bit about ELIZA, the 1966 chat bot and computer program modeled after the general idea of a Rogerian psychotherapist. Even though the program is very limited in its responses, and many of us today would struggle to talk to ELIZA without wanting to flip a table, at the time many people took the program seriously. Many believed that ELIZA could understand them, and people would have long and intimate conversations with the program.

I believe this relates to our discussions about whether computers can comprehend meaning. At the time of ELIZA, the program certainly couldn't comprehend meaning; it could only guess and parrot responses based on what you said to it. Often these responses made absolutely no sense given the information you had just fed it. However, while the computer itself doesn't comprehend meaning, the people who interact with the computer can create meaning from the information given, and even perceive meaning where it doesn't exist. As of today, I believe this still holds true: computers give us information without comprehension of meaning, but we can pair appropriate meaning with the information given to us, or even create meaning where none is found.
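To show what that parroting looks like under the hood, here's a tiny ELIZA-style sketch of my own (the rules are made up for illustration and are nothing like Weizenbaum's full script): the program matches surface patterns and reflects your words back with no grasp of what they mean.

```python
import re

# A minimal ELIZA-style exchange: match a surface pattern, then echo
# the user's own words back inside a canned template. No understanding
# happens anywhere -- which is exactly the point.

RULES = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.I),   "How long have you been {0}?"),
    (re.compile(r"my (.+)", re.I),     "Tell me more about your {0}."),
]

def respond(text):
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1))
    return "Please go on."  # fallback when nothing matches

print(respond("I feel lost lately"))   # Why do you feel lost lately?
print(respond("My job is stressful"))  # Tell me more about your job is stressful.
```

Notice that second reply is ungrammatical -- the program blindly pasted "job is stressful" into its template. A person reading it can still supply the intended meaning, which is just what ELIZA's users did.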

On another note, I'm surprised that there haven't been further efforts and advancements in computer psychotherapy since ELIZA, at least none that I could find. I believe there could be promise in the idea. Yes, ELIZA is a rather frustrating program to talk to, but vast improvements have been made in programming and AI since then. If we could create a program with a substantial psychology database and incorporate our fuller knowledge of building chat bots, there could be a future in creating an effective digital psychotherapist.

For the record, I'm not suggesting that we fully replace human therapists--partially because I know quite a few people currently studying psychology who probably wouldn't be too happy with that idea. But mainly it's worth noting that some people are simply more comfortable with face-to-face interaction and would rather speak with a real person. Heartfelt moments and emotional exchanges, especially, are much harder to have with a computer. 

But on the flip side, there are also people who are likely too shy or uncomfortable to talk their problems out with another person, and the unbiased nature of a computer program might be soothing to them, while a chat function would still provide at least a small sense of personal interaction. Personally, I've always been wary of therapists because no matter how much education and training a therapist may have, humans are imperfect by nature. I've always been hesitant about trusting my deepest issues, the most fragile parts of myself, to someone who could have some kind of personal bias, not be knowledgeable in the right area, simply not like me or work well with me, or just be terrible at their job. That's why I feel that pursuing the idea of a digital therapist would be worthwhile--it may not solve all of these problems, but ideally it would create an unbiased source with comprehensive knowledge.

However, the technology obviously isn't all there yet. For one thing, while chat bots have massively improved since ELIZA, it's still pretty rare that I can talk to Cleverbot without getting a headache. Current diagnostic programs are hardly perfect either; there are plenty of jokes about the misadventures of trying to diagnose oneself with WebMD. While WebMD has a large database and can accept information from you, it cannot understand that information well enough to always give an accurate diagnosis. Overall, this isn't a program that could be created immediately; at least a few more years of work would probably be needed.

There is also the question of whether, in order to be a good psychotherapist, a program would have to comprehend meaning and emotion. I do feel that a digital psychotherapist would need a higher comprehension of meaning than today's computers have, especially to "read between the lines" of what a patient might say--in a sensitive situation like a psychological session, a patient might not always be completely forward. The program may need to comprehend when someone is lying or simply isn't being direct, and to tell the difference between the two, largely so that it can handle the situation correctly and avoid both misdiagnosis (as with WebMD) and the general chatbot frustration of a program trying to figure you out. 

I also feel that at least a basic comprehension of emotion might be needed, especially to tell if the patient is upset or uncomfortable. If a chatbot continued to press personal questions on a patient who, to a human, is showing clear signs of agitation, then the patient might get too scared or disheartened to continue the conversation, and might even be scared away for good.

I could go deeper into this, but I fear this blog post may turn into an actual essay, so I'd better stop here. However, I find the idea of digital psychotherapy a fascinating concept, especially since we recently read about how information theory ended up giving inspiration to psychological researchers. Hopefully, as technology continues to grow smarter, it'll be an idea we'll see developed in the future.