4 Controversial IoT and AI Military Applications
Have you been following the Google “Project Maven” controversy? If not, we’ll recap it in a moment. But let’s start with one of the key takeaways: artificial intelligence (AI) and the Internet of Things (IoT) will most certainly have military applications, but those applications will be highly controversial – even among the tech geniuses designing the underlying technology.
So, here’s what happened. In 2017, the Pentagon unveiled Project Maven. Its mission was to deploy AI, big data, and machine learning in the battle against ISIS, reports TechSpot. The objective: sift through and analyze thousands of hours of surveillance drone footage.
Google signed on earlier this year to provide the Pentagon with machine-learning algorithms to analyze the footage. Despite assurances that the tools would be used only for non-offensive purposes, Google employees were, well, up in arms. Some signed petitions, some quit, and many organized. And in June, Google announced that it would not renew the contract, which expires next year.
The Financial Times points out that this is about more than one project; the decision may have changed the company’s trajectory in the defense contracting space. Google considered the deal “a beachhead for winning more military contracts.”
Former Deputy Defense Secretary Robert Work, who founded Project Maven, was not thrilled, Bloomberg reports.
“I fully agree that it might wind up with us taking a shot, but it could easily save lives. I believe the Google employees created an enormous moral hazard for themselves. They say, look: this data could potentially, down the line, at some point, cause harm to human life. I said, yes, but it might save 500 Americans or 500 allies or 500 innocent civilians.”
That’s the story. And it got us thinking: What other AI and IoT technologies are being deployed by the military? As Deloitte pointed out way back in 2015, “Military commanders have always lived and died by information – both quantity and quality. No surprise, then, that the US military has been an early adopter of the Internet of Things and is looking to expand its applications.”
Clearly, military IoT projects abound, but many are uncontroversial and have little to do with warfare. Supply chain and security are obvious uses, but what about the really contentious ones? We'll give you three more.
Let’s start with the obvious: Killer robots.
Domo arigato, Mr. Roboto?
Killer robots, aka fully autonomous weapons, have raised concern around the world. A report from Human Rights Watch and Harvard Law School’s International Human Rights Clinic raises a variety of issues. The 2015 paper calls out various autonomous weapons systems, including Israel's Iron Dome and the U.S. Phalanx and C-RAM.
“Fully autonomous weapons, also known as ‘killer robots,’ raise serious moral and legal concerns because they would possess the ability to select and engage their targets without meaningful human control. Many people question whether the decision to kill a human being should be left to a machine. There are also grave doubts that fully autonomous weapons would ever be able to replicate human judgment and comply with the legal requirement to distinguish civilian from military targets.”
In April, a boycott of the Korea Advanced Institute of Science and Technology (KAIST) by more than 50 researchers from 30 countries was narrowly averted when KAIST announced it would not participate in the development of lethal autonomous weapons, ZDNet reported. The South Korean university had previously announced plans to collaborate with a defense contractor on a research center for military applications of AI.
From the researchers’ joint statement:
“If developed, autonomous weapons will be the third revolution in warfare. They will permit war to be fought faster and at a scale greater than ever before. They have the potential to be weapons of terror. Despots and terrorists could use them against innocent populations, removing any ethical restraints. This Pandora's box will be hard to close if it is opened. As with other technologies banned in the past like blinding lasers, we can simply decide not to develop them.”
More recently, 160 AI companies and organizations from dozens of countries signed a pledge to “neither participate in, nor support, the development, manufacture, trade, or use of lethal autonomous weapons,” Internet of Business reported.
We’ll take that as a big “no thank you” to killer robots.
Giving enemies the front door key and the password?
One source of controversy around IoT for defense isn’t what we do with it, but what the enemy can do with it.
Lockheed’s C2BMC is an IoT-enabled warfighting network that runs over 48,000 miles of classified communication network lines. According to the company, it connects the different elements of the U.S. military’s Ballistic Missile Defense System into a single system-of-systems to counteract threats across the globe. “It takes data from hundreds of sensors, radars, and satellites and translates that data into a common language for the missile defense systems to interact and engage the threat,” said JD Hammond, director of Command & Control at Lockheed Martin.
But Pascal Geenens of Radware is among those who fear the military will fall victim to ransomware. “Seemingly innocuous cameras, sensors, and other IoT devices pervade the military, but are just as rife with security issues as any on the planet,” he told Internet of Business. “Once demonstrable vulnerabilities are validated, how much would a government pay to regain control of weapons or other crucial resources?”
And if you think the military would be careful not to let that happen, consider the news earlier this year about a fitness data service that inadvertently revealed military base locations.
But let’s get back to the really scary stuff.
Skynet – without Arnold
Remember Terminator 3? That’s when Skynet, the self-aware artificial intelligence network, takes over.
Don’t look now but the Army, Navy, Air Force, and Marines are, according to Defense One, “converging on a vision of the future military: connecting every asset on the global battlefield.”
Every weapon, vehicle, and device will be connected and sharing data, “constantly aware of the presence and state of every other node in a truly global network. The effect: an unimaginably large cephalopoid nervous system armed with the world’s most sophisticated weaponry,” Defense One reports.
Motherboard doesn’t mince words: “Mass surveillance, drone swarms, cyborg soldiers, telekinesis, synthetic organisms, and laser beams will determine future conflict by 2030.”
And there’s no John Connor to save us.
Elon Musk is worried, too. You may remember that the Tesla and SpaceX CEO and prolific tweeter held forth on the dangers of AI. He suggested that, left unchecked, AI was much more apt to go haywire than Kim Jong Un. That started a back-and-forth with Facebook's Mark Zuckerberg, who called Musk “alarmist.”
Makes sense: It’s not like AI is unchecked. Raj Dasgupta, a member of Forbes Technology Council, points out that considerable research is exploring how to neutralize these technologies “when they go rogue.”
Still, he seems to have sympathy for Musk’s position. “There are a lot of experts who are quite worried about what happens when robots get ‘feelings.’ Perhaps what they should worry about is the phase when they have no feelings – which is right now.”