Explainable AI, interactivity and HCI

I have lately become aware of a growing movement around the idea that AI systems need to be able to explain their behavior and decisions to users. It is a fascinating topic, sometimes called XAI, for Explainable Artificial Intelligence.

This is a question that is approached from many perspectives.

There are those who are trying to develop AI systems that can technically explain their inner workings in some way that makes sense to people. With traditional systems this was not as difficult as it is today with machine learning and deep learning systems, where it is often not clear, even to their creators, how they work and how they have reached their advice or decisions. For instance, DARPA has an ambitious program around XAI with the clear purpose of developing technical solutions that will make AI systems able to explain themselves (https://www.darpa.mil/program/explainable-artificial-intelligence).

There are also those who approach XAI from a legal point of view. What does it mean to have machines that can make decisions about serious issues without any human being able to inspect how the decision was reached? Where does responsibility lie? Some argue that AI systems should be held to the same standard as humans when it comes to the law (for instance, Doshi-Velez et al., "Accountability of AI under the law: the role of explanation").

There are also those who argue that explainable AI is needed for practical reasons. For instance, if AI is to really make a difference as a supporting tool in medicine, the systems need to be able to reason and explain themselves (for instance, Holzinger et al., "What do we need to build explainable AI systems for the medical domain?" or de Graaf et al., "How people explain action (and autonomous intelligent systems should too)").

And there are those who approach the topic from a more philosophical perspective and ask broader questions about how reasonable it is to demand that systems be able to explain their actions when we cannot ask the same standard of explanation from humans (for instance, Zerilli et al., "Transparency in algorithmic and human decision-making: is there a double standard?").

There are of course many more possible perspectives. Explainable AI matters because a growing number of AI applications influence our everyday lives, often in ways critical to safety (self-driving cars, decision support systems for medicine, engineering, logistics, etc.).

To me, there is also an obvious HCI angle to this. When humans interact with advanced intelligent systems, many interactivity questions emerge. For instance, if systems are not able to explain what they do, and maybe even more, what they can do, we end up with a 'black box' problem. Humans who interact with such a system may have little or no idea of what the system can do. This can lead to several problems; one is that users may 'trigger' the system to do things without knowing it. When interaction is not transparent, users might act in ways that the system reads as 'operations'.

But maybe the most interesting aspect from an interaction point of view is how deep interaction should reach. When humans interact with simple systems, they can be aware of the complete interactability of the system, that is, the ability the system has to interact and act (see Janlert & Stolterman, "Things That Keep Us Busy: The Elements of Interaction"). This is of course not possible with more advanced systems, and even less so with more intelligent systems. So how deep should human interaction reach? Should we just interact with the surface of the system? Or should we be able to, when needed, interact all the way down to the lowest level of the system's abilities?

Anyway, I think that explainable AI is a field where HCI researchers need to engage. It is not only a technical, legal, or practical issue; it is to a large extent a question of interaction and interactivity.
