Friday, October 02, 2015

Core and target systems: HCI research and the measure of success

HCI research is usually seen as an academic activity meant to create knowledge about interactive experiences between humans and computers, about the technology that makes those experiences possible, and about the process of shaping these technologies. The design of any artifact or system is complex and has to satisfy many requirements, needs, wants, and desires, which means that it is not easy to know how to measure the overall success of an interactive system. This means that the measure of success of HCI research is equally complex.

I will not discuss this problem at any length here, only mention one aspect that I frequently see manifested in papers, articles, and PhD dissertations in the field.

We might call the artifact or system that an HCI researcher is developing, studying, or evaluating the core system, and the (social) system where the design will be implemented and situated the target system (or context or environment). For instance, if an interactive artifact is supposed to help people handle their email better, the core system can be seen as the email application (device) itself, and the target system is the way people handle and communicate via email in their everyday lives. If we measure the time people spend with their email applications, it seems, at least based on anecdotal evidence, that we are moving towards a situation where email is consuming an increasingly large part of people's work day (at least in some professions).

It is commonly appreciated that email systems are not working well and need to be improved. This kind of observation about a specific core system in many cases drives research in a field and is commonly mentioned as the main argument behind a particular study or design of a new system in conference papers. The research ambition becomes to improve the core system. This usually leads to efforts to make email more efficient and faster, to produce less email, or to make the email experience easier or maybe more pleasurable.

However, if we instead measured and evaluated the impact that email (the core system) might have on the overall workload and effectiveness (the target system), we might get a different understanding of the consequences of email systems. For instance, it may be that the increase in time spent with email actually increases the efficiency of the target system at a level that far exceeds the time that the core system requires. It may be that core systems that are badly designed (according to certain principles) lead to desired improvements in the target system.

We have all seen the typical research paper that describes a new idea implemented in an artifact. The idea is to improve certain aspects of a human activity. The artifact is built into a prototype and tested on students who are far from representative when it comes to the particular activity. The results show, maybe, that the 'users' liked the prototype, and the researchers state that more research needs to be done but that the results so far are 'positive'. This is, however, far from a measure of success that tells us anything about the validity of the 'new idea' that the artifact was supposed to manifest. The 'new idea' is commonly argued for as a way to improve the target system, while the evaluation is set up to only evaluate the core system.

For those who are well trained in systems thinking this is not new in any way. C. West Churchman wrote several books that in a wonderful way display the dangers when systems thinking is not involved in design.

The problem is of course that HCI research already struggles with anything that has to do with evaluation and testing of new artifacts and systems. The field knows that traditional lab tests only give partial answers, and there has been a strong push and move to the 'wild', that is, to evaluate designs in the context where they are supposed to be used. However, the overall purpose still seems to be about the qualities and issues related to the core system and seldom about how the target system actually changes and is influenced. In some areas, this type of target-oriented study is called a 'clinical study'.

I am not arguing that all HCI research has to be measured by its real impact on target systems, since I think that would lead to some form of paralysis in our field. But I think our field would benefit from a more in-depth discussion about how to deal with design-oriented research (which is most HCI research) when it comes to evaluations. OK, this is a big topic and I have just scratched the surface here. Debate and discussion are welcomed.

Wednesday, September 02, 2015

HCI Pioneers website

Ben Shneiderman is one of the best-known and most highly respected researchers in HCI. His contributions to the field over the years are many and foundational. Now he has taken on the responsibility of collecting and displaying what he calls the "HCI Pioneers". This is an excellent project and highly valuable, since we, as a discipline, have to know and understand our background and history. Reading about these individuals will help all of us, and in particular I can see this site as a wonderful asset to new PhD students in the field.

Ben explains in an email the purpose and work behind this project like this:

"After 40 years of photography and two intense months of work, the website with 45 personal profiles & photos of leading human-computer interaction researchers and innovators is ready for public showing:

       “Encounters with HCI Pioneers: A Personal Photo Journal”  

My goal is to make HCI more visible and tell our history more widely.  These are the people who made HCI designs as important as Moore’s Law in bringing the web and mobile devices to the world.  The ABOUT page has a more complete description of the goals, process, and history.

I hope to add more photos and more personal profiles as time and resources permit."

Again, this is a great resource and thanks to Ben for taking on this project!

Friday, July 24, 2015

Faceless Interaction

I found out today that our article "Faceless Interaction - a conceptual examination of the notion of interface: past, present and future" is now published in print (see ref below). The abstract of the article reads:

"In the middle of the present struggle to keep interaction complexity in check, as artifact complexity continues to rise and the technical possibilities to interact multiply, the notion of interface is scrutinized. First, a limited number of previous interpretations or thought styles of the notion are identified and discussed. This serves as a framework for an analysis of the current situation with regard to complexity, control, and interaction, leading to a realization of the crucial role of surface in contemporary understanding of interaction. The potential of faceless interaction, interaction that transcends traditional reliance on surfaces, is then examined and discussed, liberating possibilities as well as complicating effects, and dangers are pointed out, ending with a sketch of a possibly emerging new thought style."

I am quite proud of this article. And I hope that a lot of people will read it and critique it, and maybe build on the ideas we have developed.

Unfortunately the online version is not open to everyone. I will soon make the text available in some way. If you want it, you can write to me and ask for a copy.

Janlert, L-E., & Stolterman, E. (2015). Faceless Interaction - a conceptual examination of the notion of interface: past, present and future. Human-Computer Interaction, 30(6).

Monday, June 29, 2015

Apps, products and misunderstandings of design

The design and development of apps has in many ways become easier over the years. Today there are tools and development kits that make it possible to fairly easily put together an app that actually works. The app can also easily be released on a market (if accepted by the 'platforms'). An app does not have to be manufactured, packaged, and shipped.

At the same time, it seems as if many of today's most influential interactive products are actual products, that is, they are made of materials, have a shape and form, and have to be manufactured. It is of course possible to see software design and product design as similar, in the same way as we can see similarities between many design fields. But the similarity is usually on a more abstract level than seems commonly understood. Software design, even though similar to some extent, is radically different from product design.

In a great article about Silicon Valley industrial designers, Bill Webb (at Huge Design) is interviewed and gives his very insightful view of how and why product design is not understood and valued among those who live in the world of software 'products'. One of the reasons he mentions is the difference in working with material products versus software products.

One of the most distinct differences is that software can be changed, added to, and removed from over time without having to be brought back to a factory. New updates can be released while it is being used. This is the basic idea behind a lot of modern design and development approaches within interaction design: create a barely functioning version, send it out to users, and keep working on it.

When it comes to 'real' products, this is of course not possible. When you are dealing with manufacturing, products have to be defined in detail, and once manufacturing has started no changes can be made without exceptional effort and cost. Product design, in any form, is a process of irreversibility. It is not possible to go back, to iterate, in the way possible with software products. In the article, Bill Webb nicely explains this simple point and what consequences it has.

One of the consequences of this difference between software and product design, and the one I see as detrimental to the larger field of designing, is the inability to understand each other's design processes. This inability leads to serious management and leadership issues that have become harmful to many startups.

To me, a bigger problem lurking behind this more practical level is the inability to understand that there is always a 'material' reality in each design area that in a distinct and crucial way not only influences the design process but in some ways determines it. When design thinkers and practitioners talk about designing as a generic process and advocate their own design approach, there is almost never any disclaimer about what kind of designing they are addressing or to what extent their approach is relevant for other areas. In many cases the proposed approach is based on or shaped by the 'material' foundation of the specific area and will not easily be transferred to other design areas.

This becomes a major problem for the whole field of 'design thinking', since it leads to many less insightful recommendations about designing that may be tried by others and found not only useless but not at all suitable for their particular design process. In the next step, this can lead (or has already led) to a backlash against an otherwise growing understanding of designing.

Thursday, June 25, 2015

HCI research and the problems with the scientific method

A few years ago I read an article in the New Yorker about the phenomenon of 'declining truth'. I have been thinking about this article since then, and today my PhD student found it and I had the chance to read it again (thanks Jordan). It is an article that asks critical questions about the scientific method in general and specifically about replicability. Reading this article today makes me reflect upon the present status of HCI research and its relation to the scientific method. I will come back to that.

The article is "The truth wears off -- is there something wrong with the scientific method?" by Jonah Lehrer. The article was published in 2010, so things may have changed a bit since then. The questions asked in the article are interesting and challenging to any science practitioner. The topic of the article is a phenomenon that has been described and discussed by several scientists over the last decades. It is by some called the "decline effect". The phenomenon is that scientific results that are highly significant seem to wear off over time. It is not easy to understand what could be the cause of this effect. Lehrer discusses several potential answers to why this effect is showing up; for instance, it could be that scientists are biased when they conduct experiments, or maybe there is a bias in the publication system towards studies with 'positive' results. However, it is not easy to find one simple explanation.

Lehrer describes the history of the idea of 'fluctuating asymmetry', which was discovered in 1991. The discovery showed that barn swallow females prefer males with a symmetrical appearance. The idea drawn from this was, among other things, that aesthetics is genetics. Over several years there were many studies around the world that confirmed the theory by studying other animals, even humans. But a few years later the theory started to be questioned. More and more studies could not find any results that supported the theory. Even so, the theory is still today found in textbooks and has influenced other areas. Lehrer shows that this is not a one-time thing; there are large studies that show this effect in other areas and disciplines.

I find this article interesting and I think it should be read by anyone who deals with research and science. It is not that the article destroys science in any way, but it does add a healthy dose of skepticism when it comes to our general trust in the scientific method, and especially when it comes to the role of empirical research and its relation to truth. As the author concludes, it reminds us of how difficult it is to prove anything, and it asks some good questions about what evidence is.

I think this article also raises good questions for HCI research. Our field is of course not based on scientific inquiry in the same way as physics or biology, but there is a strong belief in our field that at the end of the day ideas have to be tested against nature, that is, empirical studies are needed to make convincing claims.

What is maybe disturbing for HCI research is that if the fields in the article, which are disciplines with a strong history of scientific empirical practice, have problems, then the problem in HCI might be much worse. Despite the problems in these other fields, their tradition and culture of doing scientific studies is something HCI is not close to. Replicability, which is a core concept in these disciplines, is barely mentioned in HCI research and not taken seriously. I am not arguing that HCI research should become more rigorous or scientific; actually the opposite. I do not believe that much research in HCI is of a kind that either requires or is suited to scientific research in the way that the article discusses. However, I see a serious problem in HCI research that is caused by a conflict or internal dilemma stemming from a shared view that HCI is not really a scientific enterprise while at the same time scientific research is still valued and rewarded. This dilemma is easy to see not only at the level of the discipline but in individuals. It is possible in HCI to see condescending remarks about science strangely mixed with a scientific practice that seems to aspire to become 'real' science.

HCI research will of course over time develop a position in relation to science as practiced in other disciplines, in a way that hopefully is adequate for the purposes and context of the field. I am quite optimistic in the sense that I see more debate about HCI research today than just a few years ago. I also find it intriguing to see where the field will go. As far as I can see, there are many potential directions open right now. HCI can move towards a more scientific tradition, a humanistic tradition, a designerly tradition, or towards a new and particular understanding of what research in this field is all about.

Monday, June 15, 2015

Book note: "Thing Knowledge" by Davis Baird

For some time now, my good friend and colleague Ron Wakkary has talked about the book "Thing knowledge - a philosophy of scientific instruments" written by philosopher Davis Baird. Ron has made the case that this is a book worth reading. I finally ordered the book a few days ago and have now read a couple of chapters.

Baird makes the argument that scientific instruments are knowledge in themselves. He draws on a large number of historical and contemporary examples and he makes a convincing case for his thesis. His exploration has led him to develop an 'instrument epistemology'.

What is interesting, and as far as I can understand quite unique, is that Baird pushes the argument of instruments as knowledge further than others who have made similar arguments. He is highly critical of the common 'text bias' that science, or at least the study of science, suffers from. Baird is very clear that his materialist account of epistemology does not mean a critique of traditional word-based knowledge. And, as he also writes, there 'is no single unified account of knowledge' that can fully serve science and technology studies.

A key argument in the book is that an instrument creates a "phenomenon" that in many cases is indisputable, even though it may not be possible to explain or theorize with established knowledge. Instruments express 'working knowledge', another key concept that Baird introduces.

Overall, I find this approach extremely interesting and a possible way for a field like HCI research to explore and develop its practice further. Or maybe it is the opposite: maybe HCI research in many ways is already living with an 'instrument epistemology' and with 'thing knowledge'. So, while being stuck within a 'text bias' paradigm, HCI research is desperately trying to escape it, based on some intuitive understanding that there are other ways of knowing that may be better suited for our field.

So, I am very much looking forward to reading the rest of the book and also to seeing how these ideas can be developed to support a field that constantly works with 'instruments', that creates and develops 'instruments', and that in so many ways is trying to understand how these instruments express knowledge.

Thursday, June 11, 2015

Why isn't there more progress in HCI research?

One of the questions that always comes back to me is whether HCI as a field of research is actually making any progress. Is there any form of knowledge accumulation or growth? Is what we know as a field getting more stable? Are there things we today know with some certainty that were open questions some years ago? My personal answer to these questions is basically "no". I do not think any serious progress is being made. I think it is clear that the field has matured in the sense that there are more researchers and scholars who are able to perform well-developed research. The field has also matured in the sense that there is some deeper understanding of what constitutes the core of the field, even though I doubt that now and then. So, is it possible to figure out if the field is making progress or not?

In a new book "Philosophers of our times", edited by Ted Honderich, the well-known philosopher David J. Chalmers has a chapter titled "Why isn't there more progress in philosophy?" (from which I obviously borrowed the title of this post).

Chalmers explores and elaborates on the question of progress in philosophy by dividing it into three questions: (1) is there progress in philosophy, (2) is there as much progress as in science, and (3) why isn't there more progress in philosophy? These questions force Chalmers into some quite detailed discussion of what "progress" is or how it should be understood. He uses the notion of "large collective convergence to the truth of the big questions" as a definition of progress. This definition is not difficult to understand, even though Chalmers goes into substantial detail in examining each of the words in the sentence, such as "truth", "large", "big", "convergence", etc. Then he makes the comparison with science, which undoubtedly has been extremely successful in making progress. His conclusion is that philosophy is not really making any progress; he speculates on why that is the case, and he discusses the future prospects of progress.

I will not go into any more detail on Chalmers' chapter, just mention that it is possible and interesting to read his chapter while substituting the word "philosophy" with "HCI". Maybe to no one's surprise, HCI comes out much more similar to philosophy than to science (at least in my analysis). If we look at what HCI has achieved during these last 30 years, the results seem more similar to the kind of "progress" that we can see in philosophy than to what we see in science. This conclusion is not necessarily a bad thing for HCI, but it is definitely a result that challenges the common understanding within HCI research of what kind of field it is, and it definitely should influence the hope within HCI of what it should aspire to be.

Friday, May 29, 2015

Got my Amazon Echo today

Quite some time ago I ordered the Amazon Echo. Echo is a voice-controlled virtual assistant. It has a range of questions it can answer and also functions it can perform. It is a bit like Siri on an iPhone.

So far I have only installed it (very simple), added some functions, and tested it. Echo has a good voice recognition system and also understands commands well. I am also quite surprised by the quality of the sound, for instance when Echo plays music.

You give a command by saying "Alexa, play xxxx". Alexa is the wake-up word. It is possible to be anywhere in the room and still control Echo. To those who watched Star Trek, it feels very similar to saying "Computer, xxxx".

Echo does not really have any interface, except for a thin light "circle" on top of the Echo that can change into different colors and also pulsate and 'circle'. I think this is one of the first of a new kind of device that we will see more of; of course, they will disappear into our houses or furniture and not be visible at all in the future. But even if Echo works well so far, does this kind of interaction work over time? Is this how we want to interact? I will come back to Echo again when I have used it for some time.

Wednesday, May 27, 2015

New design journal: She Ji. The Journal of Design, Economics, and Innovation.

It is great to see another ambitious design journal emerge.

This time it is "She Ji. The Journal of Design, Economics, and Innovation" with Ken Friedman as the Editor-in-Chief.

The journal is published by Elsevier in collaboration with Tongji University and Tongji University Press. The first issue will appear in September 2015.

It is an open access journal which makes it even better. The journal also has a highly impressive list of Associate Editors.

Personally, I am very happy to see the PDF about the new journal, where the first page looks like the picture to the right here: a prominent placement of our book "The Design Way". Nice....

If you want to submit to She Ji, please go to the She Ji Web site at URL:

Wednesday, May 13, 2015

Design Judgment: Decision-Making in the 'Real' World

What is design judgment? What is that ability that designers must have to be able to make decisions when they face overwhelming but incomplete information and conflicting goals and desiderata? How do you as a designer know what to focus on and what to leave out? Designers always engage in making judgments: design judgments.

Yesterday I found an article that Harold Nelson and I wrote in 2003. It was published in the Design Journal, and I have not seen it since then. The article is partially built on the chapter on judgment in our book 'The Design Way'.

The title of the paper is Design Judgment: Decision-Making in the 'Real' World. The title reveals that design judgment is not about how to make decisions in a perfect or ideal world; instead, it is about being in the real world with all its richness, contradictions, dilemmas, insufficient knowledge, etc. It is also about making decisions that have a real impact on the world we all live in.

Even though Harold and I have 'preached' about design judgment for years (together with some other design scholars), it is still not a concept that has received enough attention. It seems as if most design research, especially research with the purpose of supporting designers, instead takes the approach of reducing the dependency on judgment by introducing methods, techniques, and tools that will let designers do a good job without being required to make judgments. It sometimes even looks as if the ideal solution would be a design process with methods and tools that would be indifferent to who the designer is, that is, a process that is independent of the designer's ability to make judgments. However, design success can only be reached when design judgment is seen as an unavoidable and crucial aspect of being a designer and enough attention is paid to its development. In this article we take some steps towards such a position.

You can download the article here.