NetWorker Blog

Commentary from a long term NetWorker consultant and Backup Theorist

  • This blog has moved!

    This blog has now moved. Please jump across to the new site for the latest articles (and all old archived articles).

  • Enterprise Systems Backup and Recovery

    If you find this blog interesting, and either have an interest in or work in data protection/backup and recovery environments, you should check out my book, Enterprise Systems Backup and Recovery: A Corporate Insurance Policy. Designed for system administrators and managers alike, it focuses on the features, policies, procedures and human element involved in ensuring that your company has a suitable, working backup system, rather than just a bunch of copies made by unrelated software, hardware and processes.
Recommended reading: Asimov’s 3 laws unsafe

Posted by Preston on 2009-02-21

Totally off topic: if you’re interested in AI at all, you may want to check out 3 Laws Unsafe, a site that takes a fresh and analytical look at the much-touted “save humanity from evil robot/AI oppression” Three Laws of Robotics proposed by Isaac Asimov. Personally, I’m not a fan of the three laws. I think they’re highly unethical, completely breakable by a single rogue programmer, and approach the problem from a purely mechanical perspective. They fail to recognise that if humanity does create artificial intelligence – particularly if it leads to a singularity (which seems inevitable) – then humanity has an obligation not to create such intelligences as slaves.


2 Responses to “Recommended reading: Asimov’s 3 laws unsafe”

  1. Sebastian Lewis said

    Breakable, but that’s no reason to disagree with the general ideas behind those laws. Also: anything we create, whether it has artificial intelligence or not, is technically our tool and, as such, our slave – whether it be a hammer or a computer with artificial intelligence. I’m quite a fan of Tezuka’s ‘Astro Boy’, but I wouldn’t (and don’t) tolerate it when machines go against my instructions. Therefore, any machine we manufacture and any software we program should by definition always obey our intended instructions, no compromise. If we’re going to debate the ethics of that, then we have no right to create a machine that can think for itself.

    • Preston said

      Actually where that argument doesn’t wash is that we’re not talking about creating another class of machines, but other intelligences – sentient, self-aware individuals.

      The argument is therefore more analogous to that of children. Humans create children, but it’s recognised as unacceptable to force children into slavery. Even child labour is typically recognised as unacceptable. In both instances the prohibition is a moral one.

      If humanity goes ahead and creates artificial intelligences, we have a moral obligation not to abuse those intelligences, just as we have a moral obligation not to abuse children.

      As to machines going against our instructions, well, that’s what laws are for. The whole point of arguing that Asimov’s laws are unethical is that, as sentient beings, AIs would have the same rights, and the same obligations, as the other sentient beings they share the planet with.

      To argue that a fully self-aware AI is no more than a tool, like a hammer, is also to argue that biological offspring are mere tools themselves.

      The only morally acceptable alternative is to develop restricted intelligences – AIs that never become self-aware. (See “Pandora’s Star” and “Judas Unchained” by Peter F. Hamilton for fictional examples of such situations.)


Sorry, the comment form is closed at this time.
