I wonder if there is a university course on this subject. If not, there should be one.
"Robo-Ethicists Want to Revamp Asimov's 3 Laws"
by
Priya Ganapati
July 22nd, 2009
Wired
Two years ago, a military robot used by the South African army killed nine soldiers after a malfunction. Earlier this year, a Swedish factory was fined after a robot machine injured one of the workers (though part of the blame was assigned to the worker). Robots have been found guilty of other, smaller offenses such as incorrectly responding to a request.
So how do you prevent problems like this from happening? Stop making psychopathic robots, say robot experts.
"If you build artificial intelligence but don’t think about its moral sense or create a conscious sense that feels regret for doing something wrong, then technically it is a psychopath," says Josh Hall, a scientist who wrote the book Beyond AI: Creating the Conscience of a Machine.
For years, science fiction author Isaac Asimov's Three Laws of Robotics were regarded as sufficient by robotics enthusiasts. The laws, as first laid out in the short story "Runaround," were simple: A robot may not injure a human being or allow one to come to harm; a robot must obey orders given by human beings; and a robot must protect its own existence. Each law takes precedence over the ones following it, so that under Asimov's rules, a robot cannot be ordered to kill a human, and it must obey orders even if that would result in its own destruction.
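As a rough illustration only (the flags, scoring and action model below are invented here, not Asimov's or the article's), the precedence ordering can be read as: when every option violates some law, the robot picks the option whose worst violation sits lowest in the hierarchy.

# Toy sketch of the Three Laws' precedence ordering; all names are hypothetical.

def violated_laws(action):
    """Return the law numbers this hypothetical action would violate.
    The action is a dict of invented flags used only for illustration."""
    laws = []
    if action.get("harms_human"):
        laws.append(1)   # First Law
    if action.get("disobeys_order"):
        laws.append(2)   # Second Law
    if action.get("destroys_self"):
        laws.append(3)   # Third Law
    return laws

def severity(action):
    """Lower is worse: 1 is a First Law violation, 4 means no violation at all."""
    return min(violated_laws(action), default=4)

def choose(candidates):
    """Pick the candidate action whose worst violation is the least severe."""
    return max(candidates, key=severity)

# Ordered to harm a human: refusing (Second Law violation) beats obeying
# (First Law violation), so the order is refused.
obey = {"harms_human": True}
refuse = {"disobeys_order": True}
print(choose([obey, refuse]) is refuse)   # True

# Ordered into self-destruction: obeying (Third Law violation) beats
# refusing (Second Law violation), so the robot obeys.
obey2 = {"destroys_self": True}
refuse2 = {"disobeys_order": True}
print(choose([obey2, refuse2]) is obey2)  # True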
But as robots have become more sophisticated and more integrated into human lives, Asimov's laws are just too simplistic, says Chien Hsun Chen, coauthor of a paper published in the International Journal of Social Robotics last month. The paper has sparked off a discussion among robot experts who say it is time for humans to get to work on these ethical dilemmas.
Accordingly, robo-ethicists want to develop a set of guidelines that could outline how to punish a robot, decide who regulates them and even create a "legal machine language" that could help police the next generation of intelligent automated devices.
Even if robots are not entirely autonomous, there needs to be a clear path of responsibility laid out for their actions, says Leila Katayama, research scientist at open-source robotics developer Willow Garage. "We have to know who takes credit when the system does well and when it doesn’t," she says. "That needs to be very transparent."
A human-robot co-existence society could emerge by 2030, says Chen in his paper. Already, iRobot's Roomba robotic vacuum cleaner and Scooba floor cleaner are part of more than 3 million American households. The next generation of robots will be more sophisticated and is expected to provide services such as nursing, security, housework and education.
These machines will have the ability to make independent decisions and work reasonably unsupervised. That's why, says Chen, it may be time to decide who regulates robots.
The rules for this new world will have to cover how humans should interact with robots and how robots should behave.
Responsibility for a robot's actions is a one-way street today, says Hall. "So far, it's always the case that if you build a machine that does something wrong, it is your fault because you built the machine," he says. "But there's a clear day in the future when we will build machines that are complex enough to make decisions, and we need to be ready for that."
Assigning blame in case of a robot-related accident isn't always straightforward. Earlier this year, a Swedish factory was fined after a malfunctioning robot almost killed a factory worker who was attempting to repair the machine, generally used to lift heavy rocks. Thinking he had cut off the power supply, the worker approached the robot without hesitation, but the robot came to life and grabbed the victim's head. In that case, the prosecutor held the factory liable for poor safety conditions but also laid part of the blame on the worker.
"Machines will evolve to a point where we will have to increasingly decide whether the fault for doing something wrong lies with someone who designed the machine or the machine itself," says Hall.
Rules also need to govern social interaction between robots and humans, says Henrik Christensen, head of robotics at Georgia Institute of Technology's College of Computing. For instance, robotics expert Hiroshi Ishiguro has created a bot based on his own likeness. "There we are getting into the issue of how you want to interact with these robots," says Christensen. "Should you be nice to a person and rude to their likeness? Is it okay to kick a robot dog but tell your kids not to do that with a normal dog? How do you tell your children about the difference?"
Christensen says ethics around robot behavior and human interaction is not so much about protecting either party as about ensuring that the kind of interaction we have with robots is the "right thing."
Some of these guidelines will be hard-coded into the machines, others will become part of the software and a few will require independent monitoring agencies, say experts. That will also require creating a "legal machine language," says Chen: a set of non-verbal rules, parts or all of which can be encoded in the robots. These rules would cover areas such as usability, dictating, for instance, how close a robot can come to a human under various conditions, and safety guidelines that conform to our current expectations of what is lawful.
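To make the idea concrete, a rule encoded in a robot might look like a simple, machine-checkable constraint. The sketch below is purely illustrative; the rule names, contexts, distances and functions are hypothetical and not taken from Chen's paper.

from dataclasses import dataclass

@dataclass
class ProximityRule:
    """One hypothetical 'legal machine language' rule: in a given context,
    keep at least min_distance_m between the robot and any human."""
    context: str
    min_distance_m: float

# Illustrative rule set; the contexts and distances are invented.
RULES = [
    ProximityRule(context="carrying_load", min_distance_m=2.0),
    ProximityRule(context="idle", min_distance_m=0.5),
]

def action_permitted(context: str, distance_to_human_m: float) -> bool:
    """Allow an action only if every rule that applies to this context is satisfied."""
    return all(
        distance_to_human_m >= rule.min_distance_m
        for rule in RULES
        if rule.context == context
    )

print(action_permitted("carrying_load", 1.2))  # False: too close while carrying a load
print(action_permitted("idle", 1.2))           # True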
Still, the efforts to create a robot that can successfully interact with humans over time will likely remain incomplete, say experts. "People have been trying to sum up what we mean by moral behavior in humans for thousands of years," says Hall. "Even if we get guidelines on robo-ethics the size of the federal code, it would still fall short. Morality is impossible to write in formal terms."
Read the entire paper on human-robot co-existence
Ethics: Robots, androids, and cyborgs