Crawford describes interactivity as a cyclical process in which two counterparts engage in the acts of listening, thinking, and speaking (inputting, processing, and outputting). For interactivity to exist, all three must be at play together between the two sides of the interaction. If any one is missing, interactivity is no longer at play.
I agree with the definition that interactivity involves two parties (human or nonhuman) between whom inputting, processing, and reacting are all necessary. As Crawford says, interactivity “wraps its tentacles around our minds and doesn’t let go”: it captures our full attention and keeps us in the cycle.
For me, the best real-life (non-digital) example of this definition of interactivity is improvisational comedy (or just improv in general). When one of those three components breaks down, the scene, the communication, the entire reason for being on stage in the first place, dies right there, and nothing can be done to save it (RIP, interactivity). The basic rules of improv: LISTEN to what the first actor says, think (quickly) of a response that moves the scene/relationship forward, and respond with agreement and forward momentum (never say no). Miss one of these steps and the scene has nowhere to go, leaving the actors praying to get off the stage. The goal is to keep the interaction going on and on; by negating what the initial party says, the scene essentially stops. It can no longer continue interactively. Example:
A: John, your hair has gotten so long.
B: No it hasn’t. It’s shaved.
(Interaction shut down.)
Without listening, the dialogue becomes disconnected, just a string of unrelated sentences. What is said after the not-listening is merely a reaction. To recover, another reaction is needed, and then interactivity is on life support. (This is my biggest pet peeve in improv. Lack of listening makes thinking/processing impossible.)
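The improv version of the cycle can be sketched as a toy loop. This is purely my own illustration (the function and its naive "starts with no" check are my inventions, not Crawford's): each line must build on the previous one, and a flat negation, a reaction without listening, kills the scene.

```python
# Toy model of the listen-think-speak cycle in an improv scene.
# All names here are my own invention for illustration.

def improv_scene(lines):
    """Play through a scene; the cycle survives only while each line
    builds on (listens to) the one before it instead of negating it."""
    scene = []
    for line in lines:
        scene.append(line)
        # Naive check: a line opening with "no" is treated as a
        # reaction without listening, which shuts the cycle down.
        if line.lower().startswith("no"):
            return scene, "interaction shut down"
    return scene, "cycle alive"

dialogue = [
    "John, your hair has gotten so long.",
    "No it hasn't. It's shaved.",
]
print(improv_scene(dialogue)[1])  # -> interaction shut down
```

Swap the second line for a "yes, and" response and the same loop reports the cycle alive, which is the whole point of the rule.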
Victor leads us on a wild ride into the future of interactivity, where the digital landscape is potentially our oyster. So where to, oyster?
I am a very tactile person. I would rather hold a book and feel the pages, the texture, and the weight; smell the binding; and gauge how old it is by how much musty essence fills my nostrils. The future of physical interactivity, as Victor demands, should head toward more tangible experiences that embrace human capabilities, and away from a mere swipe of the hand or finger. “Why aim for anything less than a dynamic medium that we can see, feel, and manipulate?” Using your hand/fingers as a device for interactivity “…is not visionary. It’s a timid increment from the status quo, and the status quo, from an interaction perspective, is terrible.” Such a great concept. Why do we have to create within the ideas that already exist? Break out of the box and trailblaze! At one point, someone had to move beyond the status quo of dialing buttons. They had a vision, prototyped it, and voilà: touch-screen dialing.
Interactivity, something both new and old according to Crawford, is at this turning point yet again, where stepping away from the status quo and raising the level of interactivity is imperative. I would love to see the day when holographic technology breeds interactivity in which the hands are given the opportunity to feel, touch, mold, and manipulate.
It does seem that the concept of interactivity changes over time. As technology and interactivity advance, what was once considered interactive may not hold up to the current standards of the “definition.” This coincides with what Crawford calls “degrees of interactivity.” It stands out to me in relation to video games: whereas in the 80s/90s games such as Space Invaders and Mortal Kombat were considered super interactive for their time, they sit lower on the scale we have now, where games are more immersive and there is more listening, processing, and speaking going on between player and device.
The digital carbon monoxide detector, as much as I’d want it to be an interactive technology, is not. It just sits on the wall, all alone, waiting for the slight chance to react to the input of CO. This can be months of waiting, years. Or forever. I will occasionally walk by and click the reading to see if there was any recent CO in the air. Then I walk away for days and weeks, looking at it as it glows its green hue in the dark. I wish it could talk to me. Tell me it’s okay. Tell me that there is no CO, as I constantly check the dials on the stove. A simple, “Angela, you do not need to check. The air is clear today,” would be great, every now and then.
A piece of current digital technology that is great but misses the mark for interactivity (as far as I understand it) is our friend and mentor, autocorrect. I put in a word, not necessarily looking for feedback, and it automatically senses, based on context, spelling, and spite, what it wants to change the word to. When it works, it is great, because there are often times that I can’t spell despite my best efforts. However, it is not interactive. The software does its own bit of thinking/processing upon my input, but it doesn’t listen. Nine times out of ten, it gets the word wrong. I speak. It does not listen. It reacts. Then I have to react, usually with anger. I did not want to use the word “good.” I meant “food.” When do I ever talk about “good” things? I am a cynical person who spends most of her time wondering whether it is okay to have a meal between meals. At the end of the day, it is a program that works along input-process-output lines, yet it is missing the fundamental listening step: it runs on a process/react loop without really hearing the user.
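The good/food failure mode can be sketched as a toy, too. This is not how any real autocorrect works; it is my own made-up stand-in, replacing a word with whichever dictionary entry shares the most letters, which is processing and reacting with no listening to intent at all.

```python
# Toy sketch (entirely my own invention) of process-without-listening:
# the "correction" only looks at letter overlap, never at what the
# user actually meant.

def autocorrect(word, dictionary=("good", "great", "fine")):
    """Replace word with the dictionary entry sharing the most letters."""
    def overlap(a, b):
        # Count distinct letters the two words have in common.
        return len(set(a) & set(b))
    return max(dictionary, key=lambda d: overlap(word, d))

print(autocorrect("food"))  # -> good (reacting, not listening)
```

Even when I type "food" correctly, the toy happily "fixes" it to "good", which is exactly the one-way process/react behavior the paragraph above complains about.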