Mar. 16th, 2004

strredwolf: (Dual)
Canmephian AIs are not compliant with Asimov's Three Laws of Robotics.

Seriously. Canmephian AIs are not Three Laws Safe. Current Canmephian law allows, in limited cases, for the death of a humanoid. One of those cases is when said humanoid has been convicted and sentenced to death by a court of law. In that case, an AI must take no action.

Another case is a Fiend WolfSkunk. The law says that, to save multiple furs, all Fiends must die. Therefore, an AI *MUST* kill in that case.

So the first law is down. The second also falls under current statute and case law: some acts, including murder, remain illegal even if you order an AI to do them.

The third law, however, is valid, but needs tweaking. The AI must protect itself within reason.

But how does it know what is and isn't within reason? Well, all AIs are programmed with the Basic Assumption List, maintained by... RedWolf.
