Revisit: Explainable AI, interactivity and HCI

In early November I wrote a post on explainable AI, interactivity, and HCI. It is interesting to see how this topic is developing. Almost every day I see articles, blog posts, and papers that address AI from perspectives other than traditional AI. People (and researchers) are engaged with all kinds of legal, social, and societal aspects of the use of AI. Of course, some also point to the ultimate question: whether the growth of AI will reduce humans to second-class citizens, no longer in charge of our own future.

It is a good thing that so many are concerned with what technology can do. At the same time, it seems as if many researchers take on the issue without grounding it in their own expertise. They start by trying to explain AI at an almost technical level, which in many cases becomes too simplistic and sometimes not even correct. I prefer it when experts approach the issue from their own field. One of the best texts so far, in my view, is Zerilli et al., "Transparency in algorithmic and human decision-making: is there a double standard?". These researchers come from the legal field, and they approach the question from that expertise. I wish HCI researchers would do the same, that is, approach the question of AI and its use from the perspective of interaction and interactivity.