HCI research is usually seen as an academic activity meant to create knowledge about interactive experiences between humans and computers, about the technology that makes those experiences possible, and about the process of shaping these technologies. The design of any artifact or system is complex and has to satisfy many requirements, needs, wants, and desires, which means it is not easy to know how to measure the overall success of an interactive system. It follows that the measure of success of HCI research is equally complex.
I will not discuss this problem at any length here, only mention one aspect that I frequently see manifested in papers, articles, and PhD dissertations in the field.
We might call the artifact or system that an HCI researcher is developing, studying, or evaluating the core system, and the (social) system where the design will be implemented and situated the target system (or context, or environment). For instance, if an interactive artifact is supposed to help people handle their email better, the core system can be seen as the email application (device) itself, and the target system is the way people handle and communicate via email in their everyday lives. If we measure the time people spend with their email applications, it seems, at least based on anecdotal evidence, that we are moving towards a situation where email consumes an increasingly large part of people's work day (at least in some professions).
It is commonly appreciated that email systems are not working well and need to be improved. This kind of observation about a specific core system often drives research in a field and is commonly mentioned as the main argument behind a particular study or design of a new system in conference papers. The research ambition becomes to improve the core system. This usually leads to efforts to make email faster and more efficient, to produce less email, or to make the email experience easier or perhaps more pleasurable.
However, if we instead measured and evaluated the impact that email (the core system) has on overall workload and effectiveness (the target system), we might get a different understanding of the consequences of email systems. For instance, it may be that the increase in time spent with email actually improves the efficiency of the target system to a degree that far exceeds the time the core system requires. It may even be that core systems that are badly designed (according to certain principles) lead to desirable improvements in the target system.
We have all seen the typical research paper that describes a new idea implemented in an artifact. The idea is to improve certain aspects of a human activity. The artifact is built into a prototype and tested on students who are far from representative when it comes to the particular activity. The results show, perhaps, that the 'users' liked the prototype, and the researchers state that more research needs to be done but that the results so far are 'positive'. This is, however, far from a measure of success that tells us anything about the validity of the 'new idea' that the artifact was supposed to manifest. The 'new idea' is commonly argued for as a way to improve the target system, while the evaluation is set up to evaluate only the core system.
For those who are well trained in systems thinking, this is not new in any way. C. West Churchman wrote several books that wonderfully display the dangers that arise when systems thinking is not involved in design.
The problem is, of course, that HCI research is already struggling with anything that has to do with the evaluation and testing of new artifacts and systems. The field knows that traditional lab tests only give partial answers, and there has been a strong push to move into the 'wild', that is, to evaluate designs in the context where they are supposed to be used. However, the overall purpose still seems to be about the qualities and issues related to the core system, and seldom about how the target system actually changes and is influenced. In some areas, this type of target-oriented study is called a 'clinical study'.
I am not arguing that all HCI research has to be measured by its real impact on target systems, since I think that would lead to some form of paralysis in our field. But I do think our field would benefit from a more in-depth discussion about how to deal with design-oriented research (which is most HCI research) when it comes to evaluations. Ok, this is a big topic and I have only scratched the surface here. Debate and discussion are welcome.