The call to ban autonomous weapons grows louder

One more time!

I’m not big on banning things unless they pose a public hazard. For example, when drinking and driving became a problem, we had to ban it. Same for texting and driving. We do that to protect society at large.

Lately, however, there has been a vocal call by many (MANY) in the sciences, including over 1,000 Artificial Intelligence researchers, developers, and scientists, along with Steve Wozniak, Stephen Hawking, and Elon Musk, to ban autonomous weapons. Those are weapons that could seek out targets and kill them without the involvement of a human operator. They would, for all intents and purposes, be deciding who will live and who will die, and doing it all on their own.

On the one hand, I’d love to see enormous, Pacific Rim-style robots stomping around, but for the spectacle more than anything else. As for them being able to decide on their own who they’ll kill, that gives me significant pause. We have talked extensively about the shortcomings of AI and the problems with it, as well as the inherent flaws in the decision-making capabilities of people, let alone robots, something we’ll talk more about today (or already did talk about, depending on when you read this).

When it comes to life-and-death decisions, no machine should be in charge, whether it’s in a law-enforcement/military capacity, a medical capacity, or anything else. A human always needs to be involved. Remember, machines don’t decide yet, and the REAL concern is that one day they could decide that none of us deserve to live.

Let’s not forget the lessons learned from Skynet, or Evil Santa (or YouTube’s automated content filter):

UPDATE (Although the video is still working for me):

Hi IS301 at Nevada State College,

Due to a copyright claim, your YouTube video has been blocked. This means that your video can no longer be played on YouTube, and you may have lost access to some features of YouTube.

Video title: Untitled
Includes: Audiovisual content
Claimed by: FOX


Why this can happen

  • Your video might contain copyrighted content.
  • Copyright owners can choose to block YouTube videos that contain their content.

– The YouTube Team

  • Autonomous weapons are a scary thought, and if people like Steve Wozniak, Stephen Hawking, and Elon Musk are concerned, there must be an imminent problem on the horizon. From a layman’s view, current weapons already contain a fair amount of intelligence; for example, drones can fly great distances for long periods of time while on a mission, yet only if the drone is directed to destroy something or someone does it perform that duty. The main point is that a human being is the one making that choice! A completely autonomous drone may use face recognition to find someone with certain features and then destroy them. But what if the target is attending a bring-your-dad-or-mom-to-school day? The results would be catastrophic. I too support banning these weapons based on the margin of error that may arise. It may not be on the scale of an atomic bomb, but the devastation could escalate quickly if this technology is allowed to develop further.

  • First off, I am legitimately impressed at how fast that video was blocked. Second, I am not sure that the autonomous weapons we are able to create now will be any bigger a threat than the weapons we already have. I say this because, as was pointed out, they are not making the real decisions yet: a person programs these machines or gives the orders to kill, at least as far as I understand. However, as llwolak pointed out, this can create a giant margin of error, which I would think those giving the orders or doing the programming would account for before fielding the machines. If I am wrong, and these robots are given full discretion over whom they kill, then I would be extremely curious what the criteria for choosing targets would be.
    On a different note, these robots could be much better than humans in some cases, for instance if they are programmed not to kill but simply to stop targets. This could resolve a lot of the issues with people killing each other out of fear, since the robot would have the advantage of not making decisions based on fear. If we assume their autonomy lies only in how they carry out the orders given to them, then they could be used to stop threats while causing the least damage, and we would not need to worry about people like officers being hurt.

  • Autonomous weapons that can hunt down and kill people without human intervention? That is scary to think about. Technology is going in a direction I had never thought about: killer robots. The only time you see and hear about them is in the movies, not in real life. A human always needs to be involved, and no machine should be in charge. I agree with banning autonomous weapons because I do not think it’s right for machines to seek out targets and kill them without the involvement of a human operator.

  • The risks of using autonomous weapons should be weighed very heavily before they are used, and advances in autonomous weapons should not be made without careful consideration of what the consequences could be if they were to malfunction. Currently, humans do have to give them the commands of where to go and what to kill. Even so, there is still the chance that one will misfire, or not work at all. The name itself implies that the machine takes responsibility for what it does. The biggest issue I have with that is how easily it takes responsibility away from those who are using it should something go wrong.

  • I have to wholeheartedly agree that autonomous weapons are a huge risk to society and to our very way of life. It is deeply concerning when technology can take on the role of deciding who can live and who can die; this is grounds for a futuristic meltdown of our very way of life. The use of drones for war missions is already questionable in my eyes. Although they are saving many lives, there are circumstances in which only a living, breathing human being can effectively make judgment calls. Autonomous weapons are definitely something that needs to be banned worldwide, and violations of that ban need to be met with strong punishments. If some of the greatest minds of all time are seeing the future problems now, just think of what the future could hold for us if we don’t act. It is obviously coming down the line, or there wouldn’t be anyone concerned. Robots are very cool, but there must always be a human element involved.
    P.S. Too bad the YouTube video quit; I like Futurama.

  • I completely agree with banning autonomous weapons because they are definitely a huge risk to society. I didn’t even know such a thing existed. How can a machine decide who should live and who should die? The life and death of a human being should not be a machine’s, or even a human’s, decision. Even if it is for military purposes and meant to be used in wars, what if it kills innocent people, or what if it turns around and kills everyone? Technology has advanced tremendously, but as far as I know machines cannot make decisions; they operate by what they are programmed to do by a human. If robots and machines are still incapable of doing anything besides what they are programmed to do, how could the people creating these weapons even trust that they will make the right decision? This is so scary!

    • I am on board with the scientists who are warning us about where we are headed regarding AI robots. This technology should be used for other things, like artificial limbs that can be cheaply made (with more R&D) and possibly given first to our veterans and then to everyone else. This type of technology can be a blessing or a burden if not properly used. Who gets to have this type of technology matters a lot too; for example, nations that would use it for military purposes such as warfare. It would essentially become the Terminator movie, and then we would have to create a time machine to get out of that mess.

  • The thought of autonomous weapons makes me extremely uncomfortable. Maybe I have seen too many science-fiction movies, but I would not put my trust in anything with artificial intelligence when it comes to controlling weapons of any kind. I feel it is just asking for trouble, and we should not be trying to figure out whether artificial intelligence has the ability to make decisions, feel emotions, or have any other humanlike qualities. I do not feel anything good would come of it. As mentioned in the blog, humans are far from perfect and make poor choices. Now we want to see if artificial intelligence can behave like a human? And for what purpose are we doing this?

    I found it interesting that Stephen Hawking is doing an ‘Ask Me Anything’ on Reddit. As I was scanning the site, I found some interesting questions on artificial intelligence, and I will be interested in hearing Stephen Hawking’s responses. I’m not sure if anyone else is interested, but the website is:

    https://www.reddit.com/r/science/comments/3eret9/science_ama_series_i_am_stephen_hawking/

  • Should an autonomous weapon be taught to make its own decisions about targeting and eliminating a target? Well, right now, who is making these decisions? Is it our commander-in-chief? Is it the boots on the ground? Is there ever any emotion involved in making these decisions? Would a machine be able to run through different approaches faster and possibly come up with a better option than killing a target?
    While I don’t advocate for machines killing people, I do think there is some value to this process, and it should be advanced further. I would want to know that all my options were considered before pulling the trigger. I am afraid human emotions and mental capacities vary depending on the person and the situation. My hope would be that a machine could run through the options in a split second and make a decision other than death for an opponent. Just saying there is another perspective to this.

  • I agree with the other comments posted on this blog that autonomous weapons pose a big risk. They are a bad idea in my opinion, and they are not the same as armor-piercing bullets, chemical weapons, or nuclear bombs, because there is a human element behind those things. Giving robots the programming, the ability to “make decisions” to execute a human being, is only the tip of the iceberg. I do believe that in today’s world of superpowers looking to remain undisputed and countries attempting to counterbalance that power, this could potentially be another reason for an arms race. I wonder if all of these creative and intelligent minds who have come together to urge an end to autonomous weapons know more about the matter than we are currently hearing. I am for technological advances, but autonomous robots patrolling our streets or the like is extremely questionable.

  • Chris Rodiilosso

    That’s crazy, but when you pretend to be God, that is scary to me. I have to keep these somewhat short because I type slowly and the site times me out. What I don’t understand, though, is why we as humans, who desire to control everything, would give up 100% of that control to something else.