Wednesday, November 14, 2018

A Blind Spot of HCI Research

The HCI research community is engaged with how people interact with digital artifacts and systems. However, looking at the present focus of HCI research, this engagement does not cover all areas of interaction. There are areas of interaction that do not really receive much attention from HCI researchers. We might distinguish between two forms of interaction: 'forced' and 'voluntary'. The voluntary form is what HCI research commonly focuses on, that is, interaction that results from people choosing to interact with a system for their own reasons. Forced interaction usually takes place in workplaces, where people have to use whatever system the organization has chosen.

It is possible to see 'forced' interaction as a 'blind spot' in HCI research. Forced interaction includes, for instance, the systems people use to manage their everyday work: scheduling and tracking activities and processes. Think of administrators and others working in scheduling, accounting, logistics, and resource handling, in areas such as healthcare, education, banking, insurance, and transportation.

We commonly hear stories from people working in organizations about bad interaction and systems not designed for their users. Of course, you can argue that a lot of HCI research is basic research (new technologies, new forms of interaction, etc.) that over time will change these systems for the better. And the shift to user-oriented design and user experience has made a difference, but it is far from enough. Even with the best intentions, the interaction design behind these kinds of everyday systems requires other competencies, knowledge, and approaches.

One key aspect of these systems is that the user does not choose the system; they have to use it. It has nothing to do with their motivation. Nor is the user in any way involved in the development work or able to influence the system itself, or even how the system is to be used.

How can and should HCI research approach this huge problem? What kind of research is needed?



Monday, November 12, 2018

"Critical Theory and Interaction Design"

A wonderful book was just published.

Jeff Bardzell, Shaowen Bardzell, and Mark Blythe had the idea of creating a 'reader' on critical theory and how it relates to interaction design. They invited a group of great people to each pick a critical theory text and write a commentary on it.

I am honored to be part of this, and I selected my favorite critical theory thinker, Herbert Marcuse. The book is a wonderful collection of great texts and insightful commentaries.

See below for a description and more info.

A must-read for any PhD student in HCI and interaction design.

Reference:

Bardzell, J., Bardzell, S., & Blythe, M. (Eds.). (2018). Critical Theory and Interaction Design. MIT Press.

------------------------------
Critical Theory and Interaction Design

Classic texts by thinkers from Althusser to Žižek alongside essays by leaders in interaction design and HCI show the relevance of critical theory to interaction design.

Why should interaction designers read critical theory? Critical theory is proving unexpectedly relevant to media and technology studies. The editors of this volume argue that reading critical theory―understood in the broadest sense, including but not limited to the Frankfurt School―can help designers do what they want to do; can teach wisdom itself; can provoke; and can introduce new ways of seeing. They illustrate their argument by presenting classic texts by thinkers in critical theory from Althusser to Žižek alongside essays in which leaders in interaction design and HCI describe the influence of the text on their work. For example, one contributor considers the relevance of Umberto Eco's “Openness, Information, Communication” to digital content; another reads Walter Benjamin's “The Author as Producer” in terms of interface designers; and another reflects on the implications of Judith Butler's Gender Trouble for interaction design. The editors offer a substantive introduction that traces the various strands of critical theory.

Taken together, the essays show how critical theory and interaction design can inform each other, and how interaction design, drawing on critical theory, might contribute to our deepest needs for connection, competency, self-esteem, and wellbeing.

Contributors
Jeffrey Bardzell, Shaowen Bardzell, Olav W. Bertelsen, Alan F. Blackwell, Mark Blythe, Kirsten Boehner, John Bowers, Gilbert Cockton, Carl DiSalvo, Paul Dourish, Melanie Feinberg, Beki Grinter, Hrönn Brynjarsdóttir Holmer, Jofish Kaye, Ann Light, John McCarthy, Søren Bro Pold, Phoebe Sengers, Erik Stolterman, Kaiton Williams, Peter Wright

Classic texts
Louis Althusser, Aristotle, Roland Barthes, Seyla Benhabib, Walter Benjamin, Judith Butler, Arthur Danto, Terry Eagleton, Umberto Eco, Michel Foucault, Wolfgang Iser, Alan Kaprow, Søren Kierkegaard, Bruno Latour, Herbert Marcuse, Edward Said, James C. Scott, Slavoj Žižek

Friday, November 09, 2018

Today's reading

I have started to 'force' myself to read one paper each morning at work. So far so good. I post a mini comment on the page "Today's Reading" here on my blog. We'll see how many days I will be able to do this.

Thursday, November 08, 2018

Interesting McKinsey study reveals what every business should know about design

The consulting firm McKinsey has studied 300 companies and, based on the results, argues what successful companies need to do when it comes to design. To someone who has worked with this issue for decades, the results are not surprising. But they are encouraging. A summary can be found in this article in FastCompany.


The study ends by presenting four areas that increased revenue and total returns the most:

"1. Tracking design’s impact as a metric just as rigorously as you would track cost and revenue. McKinsey cited one gaming company that tracked how a small usability tweak to its home page increased sales by 25%.

2. Putting users first by actually talking to them. This helps to think outside of a standard user experience. One hotel that McKinsey underlined presented visitors with souvenir rubber ducks embossed with an image of the host city–with the encouragement to collect more rubber ducks from the hotel’s other locations. The initiative improved retention 3% over time.

3. Embedding designers in cross-functional teams and incentivizing top design talent. McKinsey pointed to Spotify as an example because the company gives its designers autonomy within a diverse environment–unlike a consumer packaged goods company, which was bleeding designers because they had to spend time making slide decks look pretty for the marketing team.

4. Encouraging research, early-stage prototyping, and iterating. Just because a product or service is launched doesn’t mean the design work ends. One cruise ship company that McKinsey highlighted spoke with passengers, assessed which activities were most popular by looking at payment data, and analyzed security feeds with machine learning algorithms to find inefficiencies in a ship’s layouts–all in the name of improving user experience over time." (see the article)

Tuesday, November 06, 2018

Interesting thoughts on design sprints

As we hear more and more about 'sprints' as a way to make processes faster, especially when it comes to design, it is good to read this text. Here we have a person who has gone from true believer to skeptic: someone who now doubts the benefits of design sprints, but is not abandoning them.

The post is titled "Why I am breaking up with design sprints" by Michael, who is the Design & Strategy Director at Reason.


Monday, November 05, 2018

Explainable AI, interactivity and HCI

I have lately become aware of a growing movement around the idea that AI systems need to be able to explain their behavior and decisions to users. It is a fascinating topic, sometimes called XAI, as in Explainable Artificial Intelligence.

This is a question that is approached from many perspectives.

There are those who are trying to develop AI systems that can technically explain their inner workings in some way that makes sense to people. In traditional systems, this was not as difficult as it is today with machine learning and deep learning systems. In these new AI systems, it is not clear, even to their creators, how they work and how they reach their advice or decisions. For instance, DARPA has an ambitious program around XAI with the clear purpose of developing technical solutions that will make AI systems able to explain themselves (https://www.darpa.mil/program/explainable-artificial-intelligence).
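To make the idea of a 'technical solution' a bit more concrete: one well-known technique is to approximate a black-box model with a simpler, human-readable surrogate. The small Python sketch below (using scikit-learn; the dataset and models are only placeholders for illustration, not anything from the DARPA program) fits a shallow decision tree that mimics an opaque model's predictions and prints the tree's rules as an approximate 'explanation'.

```python
# A minimal sketch of a 'global surrogate' explanation:
# approximate an opaque model with a shallow, readable tree.
# (Illustrative only; dataset and model choices are placeholders.)
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
X, y = data.data, data.target

# The 'black box': accurate, but hard to inspect directly.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# The surrogate: a depth-limited tree trained to mimic the black box's
# predictions, trading some fidelity for human-readable structure.
surrogate = DecisionTreeClassifier(max_depth=2, random_state=0)
surrogate.fit(X, black_box.predict(X))

# The surrogate's rules serve as an approximate 'explanation' of the black box.
print(export_text(surrogate, feature_names=list(data.feature_names)))
```

Of course, such a surrogate is itself only an approximation, and how faithful and how understandable these explanations really are is exactly where the open questions lie.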

There are also those who approach XAI from a legal point of view. What does it mean to have machines that can make decisions about serious issues without any humans being able to inspect how they reached the decision? Where does responsibility lie? Some argue that AI systems should be held to the same standard as humans when it comes to the law (for instance, Doshi-Velez et al., "Accountability of AI under the law: the role of explanation").

There are also those who argue that explainable AI is needed for practical reasons. For instance, if AI is to really make a difference as a supporting tool in medicine, the systems need to be able to reason and explain themselves (for instance, Holzinger et al., "What do we need to build explainable AI systems for the medical domain?" or de Graaf et al., "How people explain action (and autonomous intelligent systems should too)").

And there are those who approach the topic from a more philosophical perspective and ask broader questions about how reasonable it is for humans to demand that systems be able to explain their actions when we cannot demand the same standard of explanation from humans (for instance, Zerilli et al., "Transparency in algorithmic and human decision-making: is there a double standard?").

There are of course many more possible perspectives. With a growing number of applications influencing our everyday lives, often in safety-critical ways (self-driving cars, decision support systems for medicine, engineering, logistics, etc.), explainable AI is becoming ever more important.

To me, there is also an obvious HCI angle to this. When humans interact with advanced intelligent systems, many interactivity questions emerge. For instance, if systems are not able to explain what they do, and maybe even more, what they can do, we end up with a 'black box' problem. Humans who interact with such a system may have little or no idea about what the system can do. This can lead to several problems; one is that the user may 'trigger' the system to do things without knowing it. When interaction is not transparent, the user might act in ways that are read as 'operations' by the system.

But maybe the most interesting aspect from an interaction point of view is how deep interaction should reach. When humans interact with simple systems, they can be aware of the complete interactability of the system, that is, the ability the system has to interact and act (see Janlert & Stolterman, "Things That Keep Us Busy: The Elements of Interaction"). This is of course not possible with more advanced systems, and even less so with more intelligent ones. So how deep should human interaction reach? Should we just interact with the surface of the system? Or should we be able to, when needed, interact all the way down to the lowest level of the system's abilities?

Anyway, I think that explainable AI is a field where HCI researchers need to engage. It is not only a technical or legal or practical issue; it is to a large extent a question of interaction and interactivity.


Friday, November 02, 2018

Doing design well

Any problem or challenge can be addressed with any approach. However, not every approach is suitable for every kind of problem. For example, humans usually approach the challenge of building a bridge with an engineering approach, even though it is, of course, possible to approach it through art or religion or anything else. Most people have an intuitive sense of when a particular approach is suitable or not, even though they sometimes debate it.

Today a lot of people argue that design is a suitable approach for certain challenges, usually those that require creative or innovative solutions. Design as a human approach for inquiry and change has proven to be extraordinarily powerful. This is also why so many today want to "use" the approach. Some time back, design as an approach was 'packaged' into something called 'design thinking'. The purpose was to make the approach more approachable, easier to understand and use, and also easier to teach to those who had no experience or training in design.

This is all well and good. However, it has led to some disappointments and frustrations, since people who have tried to 'use' the approach have not been as successful as they were promised or believed they would be.

There might be many reasons for this backlash, which has been getting quite a lot of attention recently. Some argue that design as an approach is fundamentally flawed, some that it has been misunderstood, and some argue for moving on to other, more promising approaches.

But there are also some very simple reasons why design is not always done well. There are some truths about using design as an approach that are commonly forgotten.

Here are some of these truths about what is needed to do design well:

1. Any serious human approach aimed at dealing with some aspect of reality (science, art, engineering, business, design, etc.) is intrinsically complex and requires substantial training and experience to do well. Reality is wonderfully rich and complex and cannot be dealt with in any simplified way without leading to unwanted consequences. This is why we have education, disciplines, and professions.

2. You can use an approach without understanding it. Like any approach, it is possible to 'use' design without really understanding its underlying assumptions and principles (its philosophy). However, this increases the risk of using the approach for the wrong purposes and of misappropriating the methods and tools that are core to the designerly approach. This will, in turn, lead to outcomes that are neither desired nor expected.

3. You can use an approach but do it badly. There is no serious approach that can be used without being conscientious about its process, methods, and tools. For instance, you cannot 'use' the scientific approach sloppily; if you do, the outcome will not be recognized as knowledge. The same goes for design. If the process is done badly, the outcomes will not reach the expected level of quality, and that is a consequence of the execution, not of the approach itself.

4. Any approach requires a supporting culture and environment that understands, embraces, and supports the approach. It is extremely difficult to engage in a truly designerly approach if the surrounding environment is not supportive. This is true for all approaches. For instance, we know of companies that are commonly labeled "engineering" companies, meaning that most people in the company are trained engineers. In a company like that, it is easy to use an engineering approach. To engage with a designerly approach in such a company can be extraordinarily difficult and even dangerous.

There are of course many more "truths" than these four.

So, what does this mean? It means that if you want to do design well, you have to:

1. accept that design is highly complex and requires extensive training and experience.

2. truly understand design as a fundamental approach, its philosophical foundation.

3. put in a lot of effort, work hard, and be true to the approach.

4. make sure you have a supportive environment, a design culture.