Eupedia Forums

Thread: Robots refuse to save people? Somewhat alarming experiments in Bristol

  1. #1
    Regular Member
    Join Date
    14-12-10
    Posts
    1,603


    Country: Serbia



    Robots refuse to save people? Somewhat alarming experiments in Bristol

    Robot with “morals” makes surprisingly deadly decisions

    Anyone excited by the idea of stepping into a driverless car should read the results of a somewhat alarming experiment at Bristol’s University of the West of England, where a robot was programmed to rescue others from certain doom… but often didn’t.

    The so-called ‘Ethical robot’, also known as the Asimov robot after the science fiction writer whose work inspired the film ‘I, Robot’, saved robots acting the part of humans from falling into a hole, but often stood by and let them trundle into the danger zone.

    The experiment used robots programmed to be ‘aware’ of their surroundings, and with a separate program which instructed the robot to save lives where possible.

    Despite having the time to save one out of two ‘humans’ from the 'hole', the robot failed to do so more than half of the time. In the final experiment, the robot only saved the ‘people’ 16 out of 33 times.
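    The two-layer design described above, an "awareness" program that predicts outcomes plus a separate rule to save lives where possible, can be sketched as a toy decision loop. Everything here (positions, function names, the one-dimensional world) is illustrative and assumed, not the actual Bristol code:

    ```python
    # Toy sketch of the design described in the article: an "awareness"
    # layer that predicts whether a human can still be reached in time,
    # and an ethical layer that picks the human to intercept.
    # All names and numbers are illustrative, not the Bristol experiment's code.

    def predict_outcome(robot_pos, human_pos, hole_pos):
        """Awareness layer: can the robot reach the human before the hole?
        Positions are abstract grid steps; everyone moves one step per tick."""
        robot_dist = abs(robot_pos - human_pos)
        human_dist = abs(hole_pos - human_pos)
        return robot_dist < human_dist  # True means the human is savable

    def choose_action(robot_pos, humans, hole_pos):
        """Ethical layer: among the humans predicted to be savable,
        intercept the nearest one; return None if none can be reached."""
        savable = [h for h in humans if predict_outcome(robot_pos, h, hole_pos)]
        if not savable:
            return None  # no reachable human: the robot stands by
        return min(savable, key=lambda h: abs(robot_pos - h))

    # Robot at 0, hole at 10; one human at 2 (savable), one at 8 (too late).
    print(choose_action(0, [2, 8], 10))  # → 2
    ```

    Even in this crude sketch the reported failure mode is visible: the article's robot often lost time re-evaluating between two equally urgent targets, whereas a fixed tie-break rule (nearest first) commits immediately.
    
    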

    ...
    (Whole article in website:)

    https://uk.news.yahoo.com/first-robo...9.html#kKUmvIW

    ...
    This is an interesting development. If robots gain the power to make decisions involving ethical dilemmas, things can go in unexpected directions.

    If robots become more and more like people, the difference between right and wrong becomes less clear for them (as it is for humans, after all).

  2. #2
    Regular Member
    Join Date
    14-12-10
    Posts
    1,603


    Country: Serbia



    Even setting this experiment aside, the issue of moral robots is very complex.

    Building Moral Robots, With Whose Morals?

    http://capeandislands.org/post/build...s-whose-morals

    "Giving robots morals may sound like a good idea, but it's a pursuit fraught with its own moral dilemmas. Like, whose morals?"


    "Technological challenges (and there are plenty of them) aside, the prospect of creating robots with morals raises an intriguing question: Whose morals?"

  3. #3
    Regular Member
    Join Date
    14-03-15
    Location
    Oslo
    Posts
    12


    Ethnic group
    Finnish and Swedish
    Country: Norway



    Morals are developed through culture over a long time. So the main challenge with robots will be creating stable ones: the more complex a structure is, the more variable the outcome of its function (in biology, through evolution, and in machinery, through potential instability). Imprinting morals into robots will probably never be done for robots in practical use; it will remain more of a speculative matter for future technology.

  4. #4
    Advisor LeBrok's Avatar
    Join Date
    18-11-09
    Location
    Calgary
    Posts
    10,293

    Y-DNA haplogroup
    R1b Z2109
    MtDNA haplogroup
    H1c

    Ethnic group
    Citizen of the world
    Country: Canada-Alberta



    Robots are meant to serve and follow human orders. The last thing we want is robots with their own consciousness and morality.
    Be wary of people who tend to glorify the past, underestimate the present, and demonize the future.

  5. #5
    Earl Maleth's Avatar
    Join Date
    22-03-14
    Location
    Malta
    Posts
    1,919

    Y-DNA haplogroup
    EV13 A7136 y18675G+
    MtDNA haplogroup
    H

    Country: Malta



    At the end of the day, I believe robots are programmed machines that follow a programme set by humans (as LeBrok stated), so I would not worry too much, unless they are programmed for malicious reasons... which is very possible, knowing human nature.

  6. #6
    Advisor LeBrok's Avatar
    Join Date
    18-11-09
    Location
    Calgary
    Posts
    10,293

    Y-DNA haplogroup
    R1b Z2109
    MtDNA haplogroup
    H1c

    Ethnic group
    Citizen of the world
    Country: Canada-Alberta



    Quote Originally Posted by Maleth View Post
    At the end of the day, I believe robots are programmed machines that follow a programme set by humans (as LeBrok stated), so I would not worry too much, unless they are programmed for malicious reasons... which is very possible, knowing human nature.
    I'm sure "do not harm humans" will be programmed into every robot, unless they are used by the military as soldiers.

    I wonder whether they will be allowed to protect themselves by "disabling" a person when attacked, or to protect someone's property (as they will themselves be property) in cases of vandalism by malicious people. Perhaps a cardinal robot commandment is in order: no physical action may be taken against a human under any circumstances?

  7. #7
    Advisor LeBrok's Avatar
    Join Date
    18-11-09
    Location
    Calgary
    Posts
    10,293

    Y-DNA haplogroup
    R1b Z2109
    MtDNA haplogroup
    H1c

    Ethnic group
    Citizen of the world
    Country: Canada-Alberta



    Quote Originally Posted by BaltoHeritageNorway View Post
    Morals are developed through culture over a long time.
    There is also a natural/genetic morality. Look at ants: they can't learn much with their puny nervous systems, so they are born with all their knowledge, their "cultural" knowledge. Yet they work together, fight together against enemies, and sacrifice their lives protecting their group. They protect their offspring, feed their young, build their home/town together, help one another pull heavy loads, and so on. All very moral behaviour, wouldn't you say?

  8. #8
    Regular Member
    Join Date
    21-06-16
    Posts
    11


    Country: United States



    I completely agree with the stated points.
