I read today in the Chronicle of Higher Education about a forthcoming book, "When Robots Kill: Artificial Intelligence Under Criminal Law" (Northeastern University Press), by Gabriel Hallevy. It is fascinating to see the growing debate about the 'nature' of robots and intelligent systems, and to what extent they should be considered to have agency of some kind and therefore also responsibility for their actions. According to the Chronicle, Hallevy makes the case that we already hold other non-human entities, such as corporations, responsible for their actions even though they have no spirit, soul, or physical body, so why not robots?
As someone who read all of Isaac Asimov's works (a long time ago), these questions are not new to me. All Asimov followers know his Three Laws of Robotics. Asimov envisioned a future society where robots had intelligence and agency in a way that we are still far from. In his writings he explored what the relationship between humans and robots would or could look like, and what we humans could do to protect ourselves from robots. We are of course still far from the world Asimov wrote about, but maybe we are getting closer. Hallevy's new book seems to be a sign of that.