Last night, while reading an article on robotics, I was thinking about finding a simple path to being an honorable human. I had a sudden flash of insight connecting the two: Asimov’s Laws of Robotics. What surprised me was the sheer realization that a truly simple honor code had already been created, albeit for robots.
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
The Three Laws of Honorable Humanity:
1. An honorable human may not injure another or, through inaction, allow another human being to come to harm.
2. An honorable human should assist others when they ask for help, except where such requests would conflict with the First Law.
3. An honorable human must protect his own existence and desires as long as they do not conflict with the First or Second Laws.
I also realize that these laws cannot be applied too stringently; they are merely the basis for an honorable life. That is why I changed “must” in the Second Law to “should,” as we have constraints that a robot would not, mainly our personal lives (i.e., families, the need for sleep, an inability to assist due to physical or mental issues, etc.). This list is also incomplete and can be amended with the same extensions as the original, though the Fourth and Fifth Laws are largely irrelevant to a human.
Asimov’s Zeroth Law supersedes the first three and states:
A robot may not harm humanity or, by inaction, allow humanity to come to harm.
The Honorable Human’s Zeroth Law follows from the same changes as the others:
An honorable human may not harm humanity or, by inaction, allow humanity to come to harm.
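Since the laws are really just a priority-ordered rule set, their precedence reads naturally as code. Here is a minimal sketch in Python: the ordering of the checks encodes which law supersedes which. The predicate names (`harms_humanity`, `refuses_reasonable_request`, and so on) are placeholders I made up for illustration, not a claim that moral judgment reduces to booleans.

```python
# A rough sketch of the laws' precedence: each law binds only when it does
# not conflict with the laws above it, so the checks run in priority order.
# The action flags below are placeholder stand-ins for real moral judgment.

def is_honorable(action: dict) -> bool:
    """Check an action against the laws, highest precedence first."""
    # Zeroth Law: may not harm humanity, by action or inaction.
    if action.get("harms_humanity"):
        return False
    # First Law: may not injure another, or allow harm through inaction.
    if action.get("harms_a_person"):
        return False
    # Second Law: should assist when asked, except where such requests
    # would conflict with the First Law (already ruled out above).
    if action.get("refuses_reasonable_request"):
        return False
    # Third Law: protect one's own existence and desires, but only when
    # that protection doesn't conflict with the laws above, so sacrifice
    # in service of a higher law remains honorable.
    if action.get("needless_self_harm"):
        return False
    return True

# Example: accepting personal cost to help someone who asked still passes,
# because the Third Law yields to the Second.
print(is_honorable({"refuses_reasonable_request": False,
                    "needless_self_harm": False}))  # True
```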
I’d love to hear your feedback on this conversion of Asimov’s Laws and whether you believe it to be a good platform on which to base honor. Obviously, a human’s decisions are his own and not those of a near-perfect logic system.