Crawford describes interactivity as a cyclical process in which two counterparts engage in the acts of listening, thinking, and speaking (inputting, processing, and outputting). In order to have interactivity, all three must be at play together between the two sides of the interaction. If one is missing, interactivity is no longer at play.
I agree with this definition: interactivity involves two parties (human or nonhuman), and inputting, processing, and reacting are all necessary. As Crawford says, interactivity “wraps its tentacles around our minds and doesn’t let go”; it captures our full attention and keeps us in the cycle.
For me, the best real-life (non-digital) example of this definition of interactivity lies in improvisational comedy (or just improv in general). When one of those three components breaks down, the scene, the communication, the entire reason for being on stage in the first place, just dies right there and nothing can be done to save it (RIP, interactivity). The basic rules of improv: LISTEN to what the first actor says, think (quickly) about a response that moves the scene/relationship forward, and respond with agreement/forward momentum (never say no). If one of these steps is missed, the scene has nowhere to go and the actors will be praying to get off the stage. The goal is to have the interaction continue on and on. But when one actor negates what the other says, the scene essentially stops; it is no longer able to continue interactively. Example:
A: John, your hair has gotten so long.
B: No it hasn’t. It’s shaved.
(Interaction shut down.)
Without listening, the dialog is disconnected and just becomes unrelated sentences. What is said after the not-listening is merely a reaction. To recover, another reaction is needed. And then…interactivity on life support. (This is my biggest pet peeve in improv. Lack of listening makes thinking/processing impossible.)
Victor leads us on a wild ride into the future of interactivity, where the digital landscape is potentially our oyster. So where to, oyster?
I am a very tactile person. I would rather hold a book and feel the pages, the texture and the weight, smell the binding, and gauge how old it is by how much musty essence fills my nostrils. The future of physical interactivity should head toward more tangible experiences that embrace human capabilities, and away from just a swipe of the hand or finger, as Victor demands. “Why aim for anything less than a dynamic medium that we can see, feel, and manipulate?” Using your hand/fingers as a device for interactivity “…is not visionary. It’s a timid increment from the status quo, and the status quo, from an interaction perspective, is terrible.” Such a great concept. Why do we have to create within the ideas that already exist? Break out of the box and trailblaze! At one point, someone had to move beyond the status quo of dialing buttons. They had a vision, prototyped, and voilà: touch-screen dialing.
Interactivity, something that is both new and old, according to Crawford, is at this turning point yet again, where stepping away from the status quo and raising the interactivity level is imperative. I would love to see the day that holographic technology breeds interactivity where the hands are given the opportunity to feel, touch, mold and manipulate.
It does seem that the concept of interactivity changes over time. As technology and interactivity advance, what was once considered interactive may not hold up to the current standards of the “definition.” This could coincide with what Crawford refers to as “degrees of interactivity.” It stands out to me in relation to video games. Whereas in the ’80s/’90s games such as Space Invaders and Mortal Kombat were considered super interactive for their time, they rank lower on the scale we have now, where games are more immersive and there is more listening, processing, and speaking going on between player and device.
The digital carbon monoxide detector, as much as I’d want it to be an interactive technology, is not. It just sits on the wall, all alone, waiting for the slight chance to react to the input of CO. This can be months of waiting, years. Or forever. I will occasionally walk by and click the reading to see if there was any recent CO in the air. Then I walk away for days and weeks, looking at it as it glows its green hue in the dark. I wish it could talk to me. Tell me it’s okay. Tell me that there is no CO, as I constantly check the dials on the stove. A simple, “Angela, you do not need to check. The air is clear today,” would be great, every now and then.
A piece of current digital technology that is great but misses the mark for interactivity (as far as I understand it) is our friend and mentor, autocorrect. I put in a word, not necessarily looking for feedback, and it automatically senses, based on context, spelling, and spite, what it wants to change the word to. When it works, it is great, because there are often times that I can’t spell despite my best efforts. However, it is not interactive. The software does its own bit of thinking/processing upon my input, but it doesn’t listen. Nine times out of ten, it gets the word wrong. I speak. It does not listen. It reacts. Then I have to react, usually with anger. I did not want to use the word “good.” I meant “food.” When do I ever talk about “good” things? I am a cynical person who spends most of her time thinking about whether or not it is okay to have a meal between meals. At the end of the day, it is a program that works in line with input-process-output, but it is missing the fundamental listening step, working more as a process/react function without really listening to the user.
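To make that distinction concrete, here is a toy sketch in Python of the difference between a react-only autocorrect and one that listens. Everything here (the word list, the letter-overlap scoring, the function names) is invented for illustration; real autocorrect engines are far more sophisticated.

```python
# Toy autocorrect: react-only vs. listening designs.
# Word list and scoring are invented for illustration only.

KNOWN_WORDS = ["food", "good", "flood"]

def closest_word(typed: str) -> str:
    # Naive "processing": pick the known word sharing the most letters.
    return max(KNOWN_WORDS, key=lambda w: len(set(w) & set(typed)))

def react_only(typed: str) -> str:
    # Input -> process -> output, with no listening: the user's
    # word is overwritten without confirmation.
    return closest_word(typed)

def listening(typed: str, confirm) -> str:
    # Input -> process -> *listen*: the guess is offered back,
    # and the user's response closes the loop.
    guess = closest_word(typed)
    return guess if confirm(guess) else typed

# The react-only version silently replaces "fod"; the listening
# version keeps the user's word when the guess is rejected.
print(react_only("fod"))                          # "food"
print(listening("fod", confirm=lambda g: False))  # "fod"
```

The newer iOS-style behavior of offering candidates and letting the typist choose is essentially the `listening` version: the same processing, but with the output cycled back as an input.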
What level of responsiveness would you want from your CO detector? Should it tell you the CO level every time you walk past? What event should trigger its response? You could say that the interactivity there is among three “actors,” the third being the level of CO. If that level does not change, then the detector has no reason to exit the stage, to use your improv metaphor. Is that a bad thing, necessarily, if the alternative is a widely fluctuating level of a poisonous gas? How would you redesign the device’s interaction to make it more responsive, yet appropriate?
re: autocorrect, there are several different versions of it, depending on the application and the operating system. Watch it closely as you continue in ICM (assuming you’re taking it) to see where it chooses its words from. Contextual responsiveness is big in tools like that which use natural language processing, and the range of correct or near-correct guesses in autocorrect is getting better.
In some perfect world, I would want the CO detector to be somehow connected with the other devices that use gas (and emit CO). If I approach the stove, for example, the sensor would send a message to the CO detector, which would then announce that there is currently no leak in the vicinity of the stove. However, the argument could be: is that completely necessary? No. I’m just neurotic and always expect there to be a leak.
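The trigger logic described above could be sketched like this. This is a hypothetical design, not how any real detector works: the threshold, sensor names, and messages are all made up, and the point is only that the device speaks when an event (presence near the stove) fires, not on every pass.

```python
# Sketch of the "speak only when it matters" CO detector described above.
# Threshold and messages are hypothetical.

CO_ALARM_PPM = 50  # made-up alarm threshold

def detector_response(co_ppm: float, near_stove: bool):
    if co_ppm >= CO_ALARM_PPM:
        # Always speak when it matters, regardless of where anyone is.
        return "ALARM: carbon monoxide detected!"
    if near_stove:
        # The presence sensor near the stove fired: offer reassurance.
        return "Angela, you do not need to check. The air is clear today."
    # Otherwise, stay quietly green on the wall.
    return None

print(detector_response(co_ppm=2.0, near_stove=True))
```

This keeps the third "actor" (the CO level) in the loop: the detector stays silent when nothing changes, but the proximity event gives it a reason to listen and respond.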
I just updated the software on my iPhone and noticed a more interactive autocorrect, where it guesses and I can decide which word I want. Which is actually better! I almost prefer no autocorrect at all on phones. I lived in Japan and used a Japanese-based phone, and it was such a relief not to have to deal with Big Brother Autocorrect changing every other word. However, that was 10 years ago and the programming has certainly changed. I shall keep a lookout for it!