Robots refuse to save people? Somewhat alarming experiments in Bristol

Garrick

Robot with “morals” makes surprisingly deadly decisions

Anyone excited by the idea of stepping into a driverless car should read the results of a somewhat alarming experiment at Bristol’s University of the West of England, where a robot was programmed to rescue others from certain doom… but often didn’t.

The so-called ‘ethical robot’, also known as the Asimov robot after the science-fiction writer whose work inspired the film ‘I, Robot’, was meant to save other robots, standing in for humans, from falling into a hole: but it often stood by and let them trundle into the danger zone.

The experiment used robots programmed to be ‘aware’ of their surroundings, together with a separate program that instructed the robot to save lives where possible.

Despite having time to save one of the two ‘humans’ from the ‘hole’, the robot failed to do so more than half of the time. In the final experiment, the robot saved the ‘people’ only 16 times out of 33.

...
(Whole article in website:)

https://uk.news.yahoo.com/first-rob...ingly-deadly-decisions-092809239.html#kKUmvIW

...
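As an aside, the article describes the robot as running a separate safety layer that predicts the outcome of each possible move and then picks the least harmful one. Here is a minimal sketch of that idea (this is not the actual Bristol code; every name in it is hypothetical):

```python
# Minimal sketch of a "consequence engine": the robot scores each candidate
# action by its predicted harm and picks the least harmful one. All names
# (Action, harm_score, etc.) are made up for illustration.

from dataclasses import dataclass

@dataclass
class Action:
    name: str             # e.g. "intercept_human_A"
    humans_saved: int     # predicted number of humans kept out of the hole
    robot_at_risk: bool   # does the robot itself end up near the hole?

def harm_score(action: Action) -> float:
    """Lower is better: losing a human costs 10, risking the robot costs 1."""
    humans_lost = 2 - action.humans_saved
    return 10 * humans_lost + (1 if action.robot_at_risk else 0)

def choose_action(candidates: list[Action]) -> Action:
    # Pick the candidate with the lowest predicted harm.
    # When two rescue options score identically, min() simply takes the
    # first, and a fresh prediction on the next cycle can flip the choice.
    return min(candidates, key=harm_score)

if __name__ == "__main__":
    options = [
        Action("do_nothing", humans_saved=0, robot_at_risk=False),
        Action("intercept_human_A", humans_saved=1, robot_at_risk=True),
        Action("intercept_human_B", humans_saved=1, robot_at_risk=True),
    ]
    print(choose_action(options).name)
```

Because the two rescue options can score identically, the chosen action can flip from one decision cycle to the next, which is one plausible reading of why the robot dithered and let both proxy ‘humans’ fall in more than half of the trials.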
This is an interesting development. If robots gain decision-making power and face ethical dilemmas, things can go in unexpected directions.

If robots become more and more like people, the difference between right and wrong becomes less clear for them (just as it is for humans, after all).
 
Even without this experiment, the issue of moral robots is very complex.

Building Moral Robots, With Whose Morals?

http://capeandislands.org/post/building-moral-robots-whose-morals

"Giving robots morals may sound like a good idea, but it's a pursuit fraught with its own moral dilemmas. Like, whose morals?"


"Technological challenges (and there are plenty of them) aside, the prospect of creating robots with morals raises an intriguing question: Whose morals?"
 
Morals are developed through culture over a long time. So the main challenge with robots will be to create stable ones, since the more complex a structure is, the more its behaviour varies (in biology through evolution, in machinery through potential instability). Imprinting morals into robots will probably never be done for practical, working robots; it is more a speculative matter for future technology.
 
Robots are meant to serve and follow human orders. The last thing we want is robots with their own consciousness and morality.
 
At the end of the day I believe robots are programmed machines that follow a programme set by humans (as Lebrok stated), so I would not worry too much, unless they are programmed for malicious reasons... which is very possible, knowing human nature.
 
I'm sure, "Not to harm humans" will be programed into every robot. Unless they are used by military as soldiers.

I wonder if they will be allowed to protect themselves by "disabling" a person when attacked? Or to protect someone's property, which they themselves will be, in case of vandalism by malicious people.
Perhaps a cardinal robot commandment is in order: no physical action may be undertaken against a human under any circumstance?
 
Morals are developed through culture over a long time.
There is also a natural/genetic morality. Look at ants: they can't learn much with their puny nervous systems, so they are born with all their knowledge, their "cultural" knowledge. Yet they work together, fight together against enemies, and sacrifice their lives protecting their group. They protect their offspring, feed their young, build their home/town together, help one another pull heavy loads, etc. All very moral behaviour, wouldn't you say?
 
