The Dangers and Ethics of AI

We’ve all heard of self-driving cars, but how about self-flying planes? Earlier this month, DARPA held a contest called the AlphaDogfight Trials (https://www.wired.com/story/dogfight-renews-concerns-ai-lethal-potential/). The goal was to see whether an AI could be developed to pilot an F-16 in a dogfight better than a human can. Teams from several companies competed, and the winner went on to battle a highly trained human Air Force pilot in a simulated dogfight. An AI developed by Heron Systems defeated the other AIs and then defeated the human pilot as well. It wasn’t just a lucky shot, either: there were five test battles, and Heron’s AI shot down the human pilot in all five.

The dogfighting AIs were based on a technique called reinforcement learning. It works by supplying the AI with the rules of a game and scoring criteria. The AI then plays the game over and over (often against itself, or another AI), keeping track of how well it scores each time and which plays led to that score. Over time it learns to favor the plays that lead to the highest scores. Imagine an AI learning Tic Tac Toe this way. It knows it can place X’s in any unoccupied space and that whoever gets three in a row wins the game. Then you give it scoring criteria: 1 point for winning, 0 points for a tie, and -1 for losing. Now the AI will play Tic Tac Toe millions of times. It will start off playing randomly, but it will learn by remembering which actions led to winning and losing each game. It will try to repeat the things it did in games it won and avoid the things it did in games it lost. You can imagine that it would learn very quickly which moves are optimal. And indeed, just like a human, a correctly programmed AI can never lose at Tic Tac Toe, only win or draw.
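To make that concrete, here is a minimal sketch of that learning loop for Tic Tac Toe in Python. It uses a simple tabular, self-play Monte Carlo value update, which is one basic flavor of reinforcement learning; the algorithms used in the AlphaDogfight Trials were far more sophisticated, so treat this purely as an illustration of the idea described above.

```python
import random

# Board: a tuple of 9 cells, each ' ', 'X', or 'O'. X always moves first.
WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]

def winner(board):
    for i, j, k in WIN_LINES:
        if board[i] != ' ' and board[i] == board[j] == board[k]:
            return board[i]                      # 'X' or 'O' won
    return 'draw' if ' ' not in board else None  # full board, or game continues

V = {}  # learned value of a board *after* a move, from that mover's point of view

def choose(board, mark, epsilon):
    """Pick a move: sometimes explore randomly, else exploit the best-valued afterstate."""
    moves = [i for i, c in enumerate(board) if c == ' ']
    if random.random() < epsilon:
        return random.choice(moves)
    return max(moves, key=lambda m: V.get(board[:m] + (mark,) + board[m+1:], 0.0))

def train(episodes=200_000, alpha=0.2, epsilon=0.1):
    for _ in range(episodes):
        board, mark, afterstates = (' ',) * 9, 'X', []
        while True:
            m = choose(board, mark, epsilon)
            board = board[:m] + (mark,) + board[m+1:]
            afterstates.append(board)
            result = winner(board)
            if result:
                # Score from each mover's view: 1 for a win, 0 for a tie, -1 for a loss.
                score = 0.0 if result == 'draw' else 1.0
                for state in reversed(afterstates):
                    V[state] = V.get(state, 0.0) + alpha * (score - V.get(state, 0.0))
                    score = -score  # the other player saw the opposite outcome
                break
            mark = 'O' if mark == 'X' else 'X'
```

After `train()` runs, calling `choose(board, mark, epsilon=0.0)` plays greedily from the learned table, and with enough training episodes it should never lose, only win or draw.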

Tic Tac Toe is a very small game with limited possibilities compared to flying an F-16, but it turns out that the same techniques can scale to much bigger problems. The AI would probably start out by flying directly into the ground, but after several hundred (or thousand, or even million) attempts, it may manage to stay in the air for a few minutes. Then it will start repeating whatever it did to keep flying. Next it will figure out how to shoot down its opponent. Billions of attempts later, it will have learned how to fly, manage its weapons, and even evade the opponent. All of this is very computationally expensive, but modern advances in hardware have made this sort of training viable.
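The scoring criteria for a problem like this have to reward those intermediate steps, not just the final victory. Purely as a hedged illustration (every name here is hypothetical, not anything from DARPA or Heron Systems), the scoring for one timestep of a simulated dogfight might look something like this:

```python
from dataclasses import dataclass

@dataclass
class SimState:
    # Hypothetical snapshot of one simulation timestep.
    crashed: bool = False
    was_hit: bool = False
    opponent_destroyed: bool = False

def reward(state: SimState) -> float:
    """Score one timestep; summed over a flight, this is what the AI optimizes."""
    if state.crashed:
        return -100.0       # flying into the ground ends the episode badly
    r = 0.01                # small bonus for every timestep spent aloft
    if state.was_hit:
        r -= 50.0           # getting shot discourages reckless maneuvers
    if state.opponent_destroyed:
        r += 100.0          # the ultimate objective: win the dogfight
    return r
```

A learner that starts out crashing immediately collects nothing but the crash penalty; once it stumbles into the survival bonus, it starts repeating whatever kept it flying, which is exactly the progression described above.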

These kinds of applications of AI have raised a lot of ethical concerns. As noted in the Wired article above, an open letter was written encouraging world governments to restrict the use of AI in weapons. It was signed by people like the founders of DeepMind (Google’s AI research arm), Stephen Hawking, and Elon Musk. Some of the key issues surrounding AI weapons are obvious, but some are less so. There are accountability questions, of course. When the AI makes a mistake, who’s at fault? Can a programmer be charged with a war crime because there was a bug in their code? Or worse, because the code was functioning properly but “learned” something unexpected? Those are the easily foreseeable questions.

But AI weapons have a unique characteristic: once you have the software, it’s easy and cheap to copy it for use somewhere else. Other weapons of mass destruction often require costly and rare components. Even if I had the plans and the expertise to make a nuclear bomb, I couldn’t actually build one unless I could somehow get uranium. We can work to keep those resources out of the hands of dictators and terrorists. And even if they acquire enough to build a bomb, once they’ve used it, it’s used up. With AI software, once they have it, they have it forever and can reuse it indefinitely at essentially no cost; there is nothing more to acquire.

What about the fact that automated weapons will lower the threshold for war? The less we need humans to fight, the more willing we will be to fight. On the surface that seems okay; we’re not risking human lives, right? But war is always going to be inherently dangerous, and lives will be lost. Perhaps not on the battlefield; perhaps in retaliatory attacks on civilian targets by an overmatched force that doesn’t have the technology to fight the AI but can certainly blow up a tourist hotel with a car bomb.

Many of the world’s greatest minds and experts are debating the ethics of AI, and their answers will shape the future of its use in defense in the coming years.