Domo Arigato, Mr. Roboto?
Killer robots, aka fully autonomous weapons, have raised concern around the world. A 2015 report from Human Rights Watch and Harvard Law School’s International Human Rights Clinic raises a variety of issues, calling out precursor systems with significant autonomy, including Israel's Iron Dome and the U.S. Phalanx and C-RAM.
“Fully autonomous weapons, also known as ‘killer robots,’ raise serious moral and legal concerns because they would possess the ability to select and engage their targets without meaningful human control. Many people question whether the decision to kill a human being should be left to a machine. There are also grave doubts that fully autonomous weapons would ever be able to replicate human judgment and comply with the legal requirement to distinguish civilian from military targets.”
In April, a boycott of the Korea Advanced Institute of Science and Technology (KAIST) by more than 50 researchers from 30 countries was narrowly averted when KAIST announced it would not participate in the development of lethal autonomous weapons, ZDNet reported. The South Korean university had previously announced plans to collaborate with a defense contractor on a research center for military applications of AI.
From the researchers’ joint statement:
“If developed, autonomous weapons will be the third revolution in warfare. They will permit war to be fought faster and at a scale greater than ever before. They have the potential to be weapons of terror. Despots and terrorists could use them against innocent populations, removing any ethical restraints. This Pandora's box will be hard to close if it is opened. As with other technologies banned in the past like blinding lasers, we can simply decide not to develop them.”
More recently, 160 AI companies and organizations from dozens of countries signed a pledge to “neither participate in, nor support, the development, manufacture, trade, or use of lethal autonomous weapons,” Internet of Business reported.
We’ll take that as a big “no thank you” to killer robots.
Giving Enemies the Front Door Key and the Password?
One source of controversy around IoT for defense isn’t what we do with it, but what the enemy can do with it.
Using 48,000 miles of classified communication network lines, Lockheed’s C2BMC is an IoT-enabled warfighting network. According to the company, it connects the different elements of the U.S. military’s Ballistic Missile Defense System into a single system-of-systems to counteract threats across the globe. “It takes data from hundreds of sensors, radars, and satellites and translates that data into a common language for the missile defense systems to interact and engage the threat,” said JD Hammond, director of Command & Control at Lockheed Martin.
But Pascal Geenens of Radware is among those who fear the military will fall victim to ransomware. “Seemingly innocuous cameras, sensors, and other IoT devices pervade the military, but are just as rife with security issues as any on the planet,” he told Internet of Business. “Once demonstrable vulnerabilities are validated, how much would a government pay to regain control of weapons or other crucial resources?”
And if you think the military would be careful not to let that happen, consider the news earlier this year that a fitness data service’s heat map inadvertently revealed the locations of military bases.
But let’s get back to the really scary stuff.
Skynet – Without Arnold
Remember Terminator 3? That’s when Skynet, the self-aware artificial intelligence network, takes over.
Don’t look now but the Army, Navy, Air Force, and Marines are, according to Defense One, “converging on a vision of the future military: connecting every asset on the global battlefield.”
Every weapon, vehicle, and device will be connected and sharing data, “constantly aware of the presence and state of every other node in a truly global network. The effect: an unimaginably large cephalopoid nervous system armed with the world’s most sophisticated weaponry,” Defense One reports.
Motherboard doesn’t mince words: “Mass surveillance, drone swarms, cyborg soldiers, telekinesis, synthetic organisms, and laser beams will determine future conflict by 2030.”
And there’s no John Connor to save us.
Elon Musk is worried, too. You may remember that the Tesla and SpaceX CEO and prolific tweeter held forth on the dangers of AI. He suggested that, left unchecked, AI posed a far greater risk than Kim Jong Un. That started a back-and-forth with Facebook's Mark Zuckerberg, who called Musk “alarmist.”
Makes sense: It’s not as if AI is entirely unchecked. Raj Dasgupta, a member of the Forbes Technology Council, points out that plenty of work is already exploring how to neutralize technologies “when they go rogue.”
Still, he seems to have sympathy for Musk’s position. “There are a lot of experts who are quite worried about what happens when robots get ‘feelings.’ Perhaps what they should worry about is the phase when they have no feelings – which is right now.”