Explainable AI and what it may mean for human-system interaction

The quest for explainable AI is ongoing, and there seems to be a widespread belief that if AI systems could explain their behavior, then all would be well. A great overview of the field is the article "The Dark Secret at the Heart of AI: No one really knows how the most advanced algorithms do what they do. That could be a problem." by Will Knight.

For HCI, this is a growing concern. If AI produces systems that are intelligent enough, the way we interact with them will change drastically.

Instead of asking whether a system can explain its behavior and decisions, we could ask: "what form of interaction with things and systems do we prefer?" For instance, when we work with a colleague to solve a problem or develop something, we usually want the colleague to engage with us, give us feedback, and argue strongly for her view and position. We use this interaction as a way to explore and develop a position that makes sense to us. It is a process that goes back and forth, sometimes smooth, sometimes difficult and full of controversy. It is seldom a fully rational process in which we expect each participant to be able to completely explain his, her, or its position. But we engage in this form of interaction because we know from experience that it commonly produces a better result.

So, what would it mean if AI systems could engage in that form of interaction? It would change the way we look at these systems: away from providers of final answers and toward companions in a process. Instead of trying not to see AI systems as humans (by forcing them to be fully rational and transparent), we would do the opposite, that is, see them as 'humans' with all the flaws and issues that we know we have to deal with when we deal with people.

One consequence is that we would, by default, not trust these systems, in the same way we do not fully trust people. And as a consequence of that, we would have to foster ways of interacting with these systems that make sense given that realization.