What is interaction and how can we describe it? In our recent book "Things That Keep Us Busy: The Elements of Interaction" we take on this challenge and develop what we call an anatomy of interaction. We also develop a detailed account of when it is reasonable to say that interaction actually takes place, by employing the notion of the "window of interaction" (more on that later).
Below I briefly present some of our work on the anatomy of interaction (from Chapter 4 in the book, as a teaser :-)
The basic elements of the anatomy are artifact and user. Interaction takes place between a human and an artifact/system, as described in the figure below (figure 4.3).
Some of the terms used in the figure need to be explained, since they have very specific meanings. First of all, an artifact has certain "states" (a small code sketch follows the list below):
• internal states, or i-states for short, are the functionally important interior states of the artifact or system.
• external states, or e-states for short, are the operationally or functionally relevant, user-observable states of the interface, the exterior of the artifact or system.
And then:
• world states, or w-states for short, are states in the world outside the artifact or system causally connected with its functioning.
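To make these three kinds of states a bit more concrete, here is a minimal sketch in Python (my own illustration, not from the book), using a robot vacuum cleaner as the artifact; the particular state names and values are just assumptions for the example:

```python
from dataclasses import dataclass, field

@dataclass
class ArtifactStates:
    # i-states: functionally important interior states of the artifact or system
    i_states: dict = field(default_factory=dict)
    # e-states: user-observable states of the interface, the exterior
    e_states: dict = field(default_factory=dict)
    # w-states: states in the world outside, causally connected with its functioning
    w_states: dict = field(default_factory=dict)

# Hypothetical values for a robot vacuum cleaner (my own assumptions):
vacuum = ArtifactStates(
    i_states={"battery_percent": 80, "planned_route": "spiral"},
    e_states={"indicator_light": "green", "motor_sound": "off"},
    w_states={"dust_on_floor": "a lot"},
)
```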
To fully describe the anatomy of interaction some more terms are needed (as defined in the glossary in the book):
Action (with respect to an artifact or system): an action that a human interactant can do in its fullness, here defined to include also the intention with the action; only used for human interactants
Cue: the user’s impression of a move of an artifact or system
Move (with respect to an artifact or system): something the artifact or system can do, the counterpart of a human action; only applicable to nonhuman interactants
Operation (with respect to an artifact or system): an artifact’s or system’s impression of an action by a human interactant; something the artifact or system is designed to take as input from a human interactant; only applicable to nonhuman interactants
So, how does it work? Here is an excerpt from the book, page 65.
"Let us first look at the artifact or system end of the interaction. States can change. They can change as a result of an operation triggered by a user action. For digital artifacts and systems i-states as well as e-states are usually affected by an operation. They can also change as a result of the functioning of the artifact or system itself, what we will call a move. For digital artifacts and systems the changes caused by a move will concern first of all i-states, but frequently also e-states, and sometimes w-states.
An operation can be seen as an artifact’s perception of a human action, a projection of an action. Operations can be seen as partially effective implementations of actions. A move can be seen as the artifact counterpart of a human action. To avoid confusion, we choose to call it “move” rather than “action.” Operations and moves are thus artifact centered: they change i-states always, e-states sometimes, and in some cases also w-states (see figure 4.3). [...]
Turning now to the human end of the interaction, we have already pointed out that user actions appear to the artifact or system as operations. Similarly, the moves of an artifact or system appear as cues to the user. A cue is the user’s perception of an artifact move: it is what the user perceives or experiences of a move, the impression of a move. Actions and cues are user-centered concepts. Cues come via e-state changes or w-state changes. When using a word processor the cues mainly stem from the changing images and symbols on the display, but in the case of a robot vacuum cleaner, the important cues will come rather from watching its physical movements, hearing the sounds it makes, and seeing dust and dirt disappear from the floor (all a matter of moves that change w-states). [...]
To summarize: User actions appear to the artifact as operations and are reciprocated by artifact moves that appear as cues to the user. Operations are projected actions. Cues are projected moves."
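To pull the vocabulary together, here is a minimal sketch in Python of one round of interaction as the excerpt describes it: a user action reaches the artifact as an operation, the artifact responds with a move, and the move reaches the user as a cue. The toy lamp, its states, and the method names are my own illustration, not code or terminology from the book:

```python
# A toy example (my own assumptions): a lamp with one button.

class ToyLamp:
    def __init__(self):
        self.i_state = {"power": "off"}   # internal, not visible to the user
        self.e_state = {"light": "dark"}  # user-observable exterior

    def operation(self, user_input: str) -> None:
        """The artifact's 'impression' of the user's action: it registers only
        a button press, never the intention behind it."""
        if user_input == "press_button":
            self.i_state["power"] = "on" if self.i_state["power"] == "off" else "off"
            self.move()

    def move(self) -> None:
        """Something the artifact does itself; here it updates its e-state."""
        self.e_state["light"] = "lit" if self.i_state["power"] == "on" else "dark"


def user_turn(lamp: ToyLamp) -> str:
    # The user's action (intention: "I want light") appears to the lamp
    # only as the operation "press_button".
    lamp.operation("press_button")
    # The lamp's move appears to the user as a cue: the light coming on.
    return f"The lamp looks {lamp.e_state['light']}"


print(user_turn(ToyLamp()))  # -> "The lamp looks lit"
```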
Well, that is a lot. If you find this interesting, read Chapter 4 in the book! Have fun.