First Laws of Robotics
I've just learned that various groups, including the South Korean government, are giving serious thought to the ethical issues raised by the proliferation and increased sophistication of robots. These aren't labor-union worries (is it fair to replace me with a machine?) but rather artificial-intelligence worries (what if they "wake up"? is it okay to diddle an android?). More details here.
The sci-fi geek in me is chuffed: I may never get my flying car, but I finally get to see Asimov's laws of robotics introduced into a legitimate news story. Colony spaceships can't be far behind.
The part of me that writes grumpy blog posts, however, is cynically amused. A lot of people say that it's impossible to create, accidentally or deliberately, a truly sentient robot. A lot of people say that even if we did, we might never be able to prove it. My sense is that it'll be easy to tell. Keep an eye on advanced robots and computers. The first one to be kidnapped, raped, beaten, robbed, sold, and eventually killed will be the first sentient one. Humans have a hard time not brutalizing and murdering other humans. We send dogs to sniff explosives, dolphins to find aquatic mines, and cows to become burgers. What are the odds we'll be civilized and respectful toward steel-and-silicon critters?