Revising Asimov’s Three Laws
J. Storrs Hall is a noted scientist and author. He is chief scientist at Nanorex and has published extensively on nanotechnology and artificial intelligence. His most recent book is titled Beyond AI: Creating the Conscience of the Machine (2007).
Hall spoke at The Singularity Summit this morning on the topic of revising Asimov’s Three Laws of Robotics. As a refresher, Asimov’s laws follow:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
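Read as a rule system, the three laws amount to a strict priority ordering over a robot’s possible actions, with the First Law overriding the Second and the Second overriding the Third. Here is a toy sketch of that ordering only; the `Action` type, its fields, and the `evaluate` function are hypothetical illustrations, not anything from Asimov’s fiction or Hall’s talk:

```python
# Toy sketch: Asimov's Three Laws as a strict priority ordering over actions.
# Every name here is a hypothetical placeholder; the point is only that
# earlier checks (lower-numbered laws) override the later ones.
from dataclasses import dataclass

@dataclass
class Action:
    injures_human: bool         # would the action injure a human?
    inaction_allows_harm: bool  # would *not* acting allow a human to come to harm?
    obeys_order: bool           # does the action carry out a human's order?
    preserves_self: bool        # does the action protect the robot's own existence?

def evaluate(a: Action) -> str:
    # First Law: checked first, so it dominates everything below.
    if a.injures_human:
        return "forbidden (First Law)"
    if a.inaction_allows_harm:
        return "required (First Law: may not allow harm through inaction)"
    # Second Law: obey orders; First Law conflicts were already caught above.
    if a.obeys_order:
        return "required (Second Law)"
    # Third Law: self-preservation, last in the ordering.
    if a.preserves_self:
        return "permitted (Third Law)"
    return "indifferent (no law applies)"

print(evaluate(Action(injures_human=False, inaction_allows_harm=True,
                      obeys_order=False, preserves_self=False)))
```

The hard part, of course, is not the ordering but deciding those predicates in an unpredictable world, which is exactly the problem Hall raises.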
In Asimov’s fiction, the Three Laws were “hardwired into the circuitry” of every robot. But according to Hall, the robotic AGIs (Artificial General Intelligences) of the future will be software and wetware. And “Asimov’s robots didn’t improve themselves. Our AIs, we hope, will.”
So Hall posed the question: “How can you imagine writing a law that is to govern in an environment you can’t predict? It’s like Hammurabi writing laws that anticipate the Enron scandal.” Our new “laws” have to be much more abstract and flexible – more like a conscience. According to Hall, we’ve done this for ages – it’s called raising children.
To punctuate his point, Hall predicted that “by 2050, most corporations will be run by their management information systems. Their first law will be ‘make a profit.’”
Hall’s New Laws of Robotics:
- Law #1 – A robot shall understand as much as possible.
Hall cited Socrates (“there is no good but knowledge, and no evil but ignorance”) as a basis for morality across cultures. The same should apply to AGI.
  - Law #1a – In particular, a robot shall understand mimetic evolution.
Mimetic evolution is the reflection or representation of the reality of human experience (derived from Aristotle’s concept of mimesis, or imitation). This is important because, in Hall’s view, evolution is where morals come from.
- Law #2 – A robot shall be Open Source.
We live in a world largely run by artificial organizations that have no conscience – corporations and governments. But corporations are required by law to have an “open-source motivational system” in the form of auditing, because money is their emotion. The same transparency into a robot’s motives and capabilities will be critical for an AGI.
- Law #3 – A robot shall be Economically Sentient.
Our economic environment is the necessary outcome of evolution. We must train our AGIs to understand and appreciate the power of economics so that they will drive toward optimal decisions.
- Law #4 – A robot shall be “Trustworthy, Loyal, Helpful, Friendly, Courteous, Kind, Obedient, Cheerful, Thrifty, Brave, Clean, and Reverent” and shall do a good turn daily.