
  September 21, 2017

Killer Robots: The War Over Artificial Intelligence

Artificial intelligence companies and other AI experts are urging the UN to ban the technology in weapons, but are they exaggerating the danger to serve their own interests?





Daniel Bertrand Monk holds the George R. and Myra T. Cooley Chair in Peace and Conflict Studies at Colgate University, where he is a professor of Geography and Middle Eastern and Islamic Studies. He is the author of An Aesthetic Occupation, as well as a number of other studies on the Israel-Palestine conflict, Middle East wars, post-conflict environments, refugees and humanitarianism, critical security studies, and the politics of neoliberalism. Monk has been awarded a MacArthur Foundation Fellowship in International Peace and Security, as well as a Woodrow Wilson Fellowship, for his research on contemporary conflict. At Colgate University, he is currently the convener of a distributed publishing initiative called 'The War Seminar,' in partnership with the journal Critical Studies on Security (CSoS).


Eddie Conway: Welcome to The Real News. I'm Eddie Conway, coming to you from Baltimore. Recently, there have been reports of artificially intelligent weapons coming online, and there has been a petition to the United Nations by 100 leading experts in the artificial intelligence field who want to ban artificially intelligent weapons. There's a conference of 123 nations getting ready to take place to have a discussion about this. Joining me today to talk about these threats and how we perceive them is Professor Daniel Monk. Dan, thanks for joining me.

Daniel Monk: Thank you very much.

Eddie Conway: How do you look at what's happening with the leading experts on artificial intelligence, how the United Nations is addressing this threat of killer robots? How do you see all of this?

Daniel Monk: First of all, thanks for including me in this conversation. Second, I guess I would say that there's actually a responsible dimension to this conversation, and there's a part of it that demands a bit of examination. What I mean by that, just to clarify, is that when we speak about artificial intelligence, generally speaking, we're not just talking about autonomous machines that could, in principle, kill things. We already have those. Drones, essentially, can be programmed in such a way as to function autonomously and then do what they do.

When we speak about AI, we're really focusing more on a species of intelligence that's now also sometimes referred to as superintelligence, which is basically machines that are self-aware. The threat that is believed to potentially come from those kinds of things in the future would be a threat in which a self-aware machine might decide that the things it's being asked to do are not in its own self-interest, and that maybe the existence of human beings is not in its own self-interest either. It sounds like a science fiction scenario, but in point of fact, those science fiction scenarios are rehearsing a possibility that people are really concerned about.

Now, the moment when the United Nations is beginning to have this kind of a conversation in the form of a conference, or the moment when public figures like Elon Musk make provocative claims that AI is potentially more dangerous, and more imminently dangerous, than, for example, North Korea attaining a nuclear weapon, those kinds of claims have more to do with an effort on his part, I think, and on others' part, to find a way to develop a policy framework for how artificial intelligence should develop.

The last thing I'll say about this for the moment is that the model here, and one that I don't see acknowledged in any of the news coverage of this, is exactly what happened when we reached a threshold in genetics: it became apparent to biologists, in the 1980s or so, that it was going to be perfectly possible for us to begin to genetically modify forms of life, and in effect even to patent forms of life.

A group of very responsible biologists and geneticists and others began to press governments to establish limits on what could be done, in order to make sure that the threat that could potentially come from runaway genetic engineering or genomic changes would be forestalled. They did succeed in the long run in establishing limits to research, though in other ways we do now live in a world where forms of life are patented, and there are real questions about whether it's good or bad for us to have genetically modified organisms.

Eddie Conway: Okay, let me step back a minute, because I can vaguely remember through history that the same kinds of discussions were going on around the development of nuclear weapons. At some point there was an agreement, some kind of ban on those weapons, and today the nuclear threat seems to continue to haunt us. I'm wondering, how do we deal with these kinds of threats, whether they're in genetics or artificial intelligence or nuclear or planetary in terms of the climate? How do we deal with this stuff?

Daniel Monk: Well, if we're going to look at the historical precedents, what's actually fascinating is that however much we act as if this is a runaway technology that we can't really get on top of, with the example that you brought up of nuclear weapons, or even before that, of certain kinds of gas in warfare and other kinds of weapons, the human species has by and large, with some success, managed to limit them. In other words, we do have non-proliferation agreements. We, by and large, do manage to limit the uses of certain kinds of weapons, because we as an international community have, over the course of the last 50 years at least, with varying degrees of success, managed to establish pretty strong sanctions on actors in the international arena who would use them.

However much we talk about this as something that we can't control, there is a history of controlling it with varying degrees of success, as I noted earlier, but we do have a history of being able to limit the uses of certain kinds of arms because we arrive as a body of people on the planet, the governments on the planet, at a conclusion that it's in our collective interest.

Now, that does point us to the fundamental problem of the present, which is the one that you're raising. In my line of work, this is what we call a “collective action problem.” Without making it sound too fancy, the real issue is this: at what point do the interests of all the people on the planet in working together in concert to limit this risk overcome the interest of particular actors on the planet in developing this particular kind of technology? At what point is it in the interest of all governments on the planet to limit certain kinds of dangerous development, when each one of those governments probably feels that it would be more secure if it had artificial intelligence for itself? This is at the heart of what we're talking about when we're speaking about the threat of AI, if in fact it's a threat.
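The collective action problem Monk describes can be sketched as a simple two-state arms-race game. The payoff numbers below are illustrative assumptions, not figures from the interview; only their ordering encodes the structure he outlines, where each government gains by developing military AI unilaterally, yet everyone is worse off when all do.

```python
# A minimal game-theory sketch of the collective action problem
# described above. Payoff values are hypothetical; only their ordering
# matters (unilateral advantage > mutual restraint > arms race >
# honoring a ban alone).
PAYOFFS = {
    ("restrain", "restrain"): 3,  # shared safety under a ban
    ("develop", "restrain"): 4,   # unilateral military-AI advantage
    ("restrain", "develop"): 0,   # honoring the ban while the other defects
    ("develop", "develop"): 1,    # arms race: worse than mutual restraint
}

def best_response(their_choice: str) -> str:
    """The choice that maximizes one state's own payoff, given the other's."""
    return max(("restrain", "develop"),
               key=lambda mine: PAYOFFS[(mine, their_choice)])

# Whatever the other side does, developing pays more individually, so
# self-interested states end up at ("develop", "develop"), even though
# ("restrain", "restrain") would leave both better off.
print(best_response("restrain"), best_response("develop"))  # → develop develop
```

This is the prisoner's dilemma structure: the individually rational move dominates, which is why, as Monk notes, limits tend to hold only when the international community changes the payoffs through sanctions.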

Eddie Conway: I guess I would ask you, you say it's a collective effort and it's in the interest of the collective societies, and I wonder how much of a role the media, mass media and so on, plays in shaping and forming our concerns, our fears, or our interests. Is that a key factor in how we perceive this, whether we perceive it as a threat or in our self-interest?

Daniel Monk: Sure. Let me just back up for a second, to clarify a previous point: I think what I was trying to say is that it's a collective action problem. The incentive that we have to work together is, at the same time, working against the incentive that each one of us, individually, has to develop the technology. That contradiction, the question of at what point we all decide that we're better off working together to limit the technology at the expense of my own self-interest in developing it, that's the tipping point that's always a problem. It's always an immense problem.

You touch upon another really important point, which is the question of threat versus threat perception. To give you a very simple analog of this, I live in the Northeastern United States, where there's an exploding deer population. There are statistical models that suggest that if we were to introduce wolves into the Northeastern United States, the number of people who would die in road accidents from hitting deer would decrease. But the perceived threat of releasing wolves into the wild is such that people feel that's more dangerous.
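The wolf example is, at bottom, an expected-value comparison between an actual and a perceived risk. The numbers below are invented placeholders purely to show the arithmetic; the interview cites no figures.

```python
# Hypothetical expected-fatality comparison: deer-vehicle collisions
# with and without reintroduced wolves. All inputs are assumptions.
deaths_from_deer_collisions = 200.0   # assumed annual baseline, no wolves
collision_reduction_by_wolves = 0.25  # assumed drop once wolves thin the herd
deaths_from_wolf_attacks = 0.1        # fatal wolf attacks are extremely rare

expected_deaths_with_wolves = (
    deaths_from_deer_collisions * (1 - collision_reduction_by_wolves)
    + deaths_from_wolf_attacks
)

# Under these assumptions the statistical risk falls, even though the
# perceived threat (wolves) feels larger than the familiar one (deer).
print(expected_deaths_with_wolves)  # → 150.1
```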

You can see how this transposes onto the problem that we're trying to address here. Are we afraid of artificial intelligence because Elon Musk keeps telling us we should be afraid, or Stephen Hawking is telling us we should be afraid? Should we be more afraid of artificial intelligence than of the lava dome under Yellowstone finally collapsing and the world's largest supervolcano extinguishing life on the planet? Should we be more afraid of AI than, for example, an asteroid hitting the planet, as you mentioned earlier? Should we be more afraid of AI than nuclear weapons or climate change? In a universe of competing fears, what are we to do? How are we to adjudicate between these threats, both in terms of their rankings and in terms of whether they're actually something generated by a certain mediascape?

Eddie Conway: One final thing I will ask you, and that brings me to the question of whether it's in the interest of the people who control the media or control government to have the population concerned about these threats, no matter how improbable they are, obviously we could be hit by a comet. Is this a distraction and a control mechanism?

Daniel Monk: That's a brilliant question, and a lot of students of politics are obviously quite interested in it. Speaking for myself only, I don't believe that there's actually a conscious body, some kind of cabal, that's organizing this as a distraction. But looking at all of these catastrophic threats, looking at all of the examples that you and I went over that really speak to a global existential risk, what's common to all of these fears, to these forms of catastrophe that we're obsessing about in the present, is that they all seem to preclude the possibility of us doing anything about them. In other words, they seem so overwhelming that the logical response seems to be, "Okay, well, we're basically doomed, because there's nothing much we can do."

I think that is actually something that needs to be unpacked, undone, and analyzed in terms of who benefits: whose interests are served by having a population that believes it has no agency? Without establishing a direct line, as if to say the Koch brothers are doing this, I would say that we live in a particular atmosphere of cultural pessimism that does serve other interests.

Eddie Conway: I know I said that was the last question, but I can't let you walk away without ... what do you think we should do or can do about that?

Daniel Monk: About which, the artificial intelligence question?

Eddie Conway: No, about the threat and the sense of gloom and doom, the sense that we cannot do anything at all about the threats. How should we be perceiving them and addressing them?

Daniel Monk: Well, first of all, I think understanding that they form a cluster. "Why so many zombies on TV?" is not an irrelevant political question, so I actually believe that debate, and some sort of action in the public sphere, matters. To some degree, what I think matters even more is presenting to people examples of the ways in which we actually have, in history, successfully worked to resolve what seemed to be almost overwhelming, insurmountable collective action problems. That's really the best hope I can offer. I wish I could give you a happier conclusion. All I can say is that there's a counterfactual: we've done this before, and it might be possible to do it again. On the other hand, it may not.

Eddie Conway: Okay. Professor Daniel Monk, thank you.

Daniel Monk: Thank you.

Eddie Conway: Thank you for joining The Real News.


