I read today in the Chronicle of Higher Education about a forthcoming book, "When Robots Kill: Artificial Intelligence Under Criminal Law" (Northeastern University Press), by Gabriel Hallevy. It is fascinating to see the growing debate about the 'nature' of robots and intelligent systems, and about the extent to which they should be considered to have agency of some kind and therefore also responsibility for their actions. According to the Chronicle, Hallevy makes the case that we already hold other non-human entities responsible for their actions — corporations, for instance, even though they have no spirit, soul or physical body — so why not robots?
As someone who read all of Isaac Asimov's works (a long time ago), I find that these questions are not new. All Asimov followers know his Three Laws of Robotics. Asimov envisioned a future society where robots had intelligence and agency in a way that we are still far from. In his writings he explored what the relationship between humans and robots would or could look like, and what we humans could do to protect ourselves from robots. We are of course still far from the world that Asimov wrote about, but maybe we are getting closer. It seems as if Hallevy's new book is a sign of that.