Robot with “morals” makes surprisingly deadly decisions
Anyone excited by the idea of stepping into a driverless car should read the results of a somewhat alarming experiment at Bristol’s University of the West of England, where a robot was programmed to rescue others from certain doom… but often didn’t.
The so-called ‘Ethical robot’, also known as the Asimov robot, after the science fiction writer whose work inspired the film ‘I, Robot’, saved robots, acting the part of humans, from falling into a hole: but often stood by and let them trundle into the danger zone.
The experiment used robots programmed to be ‘aware’ of their surroundings, and with a separate program which instructed the robot to save lives where possible.
Despite having the time to save one out of two ‘humans’ from the 'hole', the robot failed to do so more than half of the time. In the final experiment, the robot only saved the ‘people’ 16 out of 33 times.
...
(Full article on the website:
https://uk.news.yahoo.com/first-rob...ingly-deadly-decisions-092809239.html#kKUmvIW
...
This is an interesting development. If robots gain decision-making power and face ethical dilemmas, things can go in unexpected directions.
If robots become more and more like people, the difference between right and wrong becomes less clear (as it is in humans, after all).