A semi-autonomous "loyal wingman" military aircraft during test flight near Woomera, Australia

Photo: Commonwealth of Australia / Department of Defence

"Perhaps Even More Dangerous than Nuclear Bombs": Tech Expert Toby Walsh on the Menaces of AI

In an interview, the Australian technology expert discusses the potentially devastating effects of using artificial intelligence on the battlefield. We can't get rid of AI, he says, but we must bring it into line with society's values.
Interview Conducted By Hilmar Schmundt
Photo: Dean Sewell / The New York Times / Redux / laif

Toby Walsh is considered the "rock star" of Australia's digital revolution. He is a professor of artificial intelligence at the University of New South Wales. His latest book is titled "Machines Behaving Badly: The Morality of AI."

DER SPIEGEL: The Russian war of aggression against Ukraine seems at times like a test run for modern weapons systems. Ukraine relied heavily on Turkish Bayraktar drones from the start, and now Russia could follow suit with Iranian drones. Could these weapons also be used autonomously in the future, controlled with the help of artificial intelligence?

Walsh: Yes, the use of AI-controlled killer robots is only a matter of time. I recently warned in an essay about a new type of Russian anti-personnel mine called the POM-3. The POM-3 is based on a German Wehrmacht design known as the Schrapnellmine, which Allied soldiers jokingly called the "Bouncing Betty." The mine detects footsteps, springs into the air and then detonates at a height of one meter to shred as many soldiers as possible with shrapnel. Russia has announced that it will control this mine with AI software that can accurately distinguish whether approaching troops are its own Russian units, in which case it will not explode, or enemy soldiers, in which case it will go off. Such landmines are bestial, shredding people indiscriminately and often hitting children. That is why they are outlawed internationally: 164 states have pledged not to use them, including Ukraine. Russia is not among them. My criticism of the POM-3 earned me admission to a club of sorts: I ended up on an entry-ban list and am no longer allowed to travel to Russia. I take that as a compliment.

"Autonomous weapons are perhaps even more dangerous than nuclear bombs. That's because building a nuclear bomb requires an incredible amount of know-how: you need outstanding physicists and engineers, you need fissile material, you need a lot of money."

Toby Walsh, AI researcher

DER SPIEGEL: The world's air forces seem to be one step ahead. The Australian Air Force, for example, is currently testing new types of semi-autonomous jet fighter drones together with the American company Boeing. These interceptor drones are intended to provide bomber pilots with escort protection as a so-called "loyal wingman."

Walsh: The term "loyal wingman" is euphemistic and misleading, suggesting that it is simply about protecting human life, in this case the lives of pilots. But it's actually about something else entirely: a global AI arms race has long been underway, of which the public has so far been largely unaware. The U.S. Army is developing a robotic tank called Atlas. The U.S. Navy is working on a fully automated robotic warship called Sea Hunter, which has already autonomously completed a voyage from Hawaii to the California coast. China is also developing AI-controlled missiles. And Russia wants to develop an autonomous, unmanned submarine called Poseidon that can even be equipped with nuclear weapons. That's a nightmare. Can you think of anything more terrifying than a submarine in which, instead of a captain, a computer program decides whether to start a nuclear war?

The autonomous Sea Hunter military ship


DER SPIEGEL: Aren't these just horror scenarios that never come to pass in the end?

Walsh: Not at all. Autonomous weapons are perhaps even more dangerous than nuclear bombs. That's because building a nuclear bomb requires an incredible amount of know-how: you need outstanding physicists and engineers, you need fissile material, you need a lot of money. Nuclear weapons will therefore fortunately remain out of reach for many countries for the foreseeable future. With AI weapons, on the other hand, the situation is quite different. Often, the conventional weapon systems that every small warlord has at his disposal are enough; with the appropriate computer chips and accessories from a 3-D printer, they can then be refashioned into autonomous weapons.

DER SPIEGEL: Would such do-it-yourself weapons or a few autonomous Atlas tanks or Sea Hunter frigates really be decisive for war?

Walsh: I don't know, but they could accidentally trigger a war through a malfunction, which would then be fought with conventional weapons or even worse warfare equipment down the road. Russia, for example, allegedly used a hypersonic weapon in its invasion of Ukraine, one that flies toward its target at many times the speed of sound. In a case like that, there is little time for defenders to react or rule out a false alarm. This accelerates warfare. It could lead to a "flash war," a blitzkrieg in which enemy computer systems fire on each other in the shortest possible time. This automation of war has a destabilizing effect. We have to prevent that at all costs.


The article you are reading originally appeared in German in issue 31/2022 (July 30th, 2022) of DER SPIEGEL.

DER SPIEGEL: Can't semi-autonomous weapons also save lives? The U.S. Phalanx missile defense system for defending warships, for example, responds to attacks much faster than a human could.

Walsh: Sure, if you have to defend against a hypersonic weapon, that's a good thing. But the Phalanx is clearly a defensive system. AI is actually less problematic with defensive weapons. And yet, its use is also tricky. How reliable is such a system? What if the software mistakenly recognizes a passenger jet as an enemy missile and destroys it?

DER SPIEGEL: Who bears responsibility if robotic weapons commit war crimes? As early as 2017, there was serious discussion in the European Parliament about granting some machines something like a special person status, at least in the long term, so that they can be held accountable in the event of misconduct.

Walsh: Yes, but what good would that do? After all, corporations are also considered "legal persons" in some countries. But that's often not very helpful when it comes to white-collar crimes.

DER SPIEGEL: Are offensive killer robots already in use?

Walsh: There's a lot of speculation. But it appears that Turkey has sent an autonomous drone called Kargu, equipped with facial-recognition software, to hunt humans on the Syrian border. Kargu reportedly uses the same facial-recognition algorithms as your smartphone, with all their errors, to identify and kill people on the ground. Imagine how terrifying it would be to be chased by a swarm of killer drones. Even if they don't work particularly reliably, dictatorships could, of course, use them to terrify the population. They would be ideally suited for state terror.

The Phalanx CIWS semi-autonomous weapon system aboard the USS Kearsarge military ship

Photo: Smith Collection / Gado / Getty Images

DER SPIEGEL: Would it even be realistic at all to outlaw AI-controlled weapons, for instance through a counterpart to the Nuclear Non-Proliferation Treaty, as you suggest in your new book "Machines Behaving Badly"?

Walsh: Well, outlawing them may not always work perfectly, but it can prevent worse. There are quite a few examples of weapons that were initially used but were later outlawed. Think of the widespread use of poison gas in World War I. Or think of laser weapons designed to blind soldiers. They were outlawed by a United Nations protocol in 1998 and have almost never appeared on battlefields since, even though civilian laser technology is, as we know, widely used. For anti-personnel mines, the ban doesn't work as well, but at least 40 million of them have been destroyed as a result of the outlawing protocols, saving the lives of many children. It's a similar story with cluster munitions: About 99 percent of the stockpile has been destroyed, even though they were used again in Syria. We can ensure that autonomous weapons become unacceptable by stigmatizing them.

DER SPIEGEL: Just four years ago, you predicted a glorious future for AI in your bestseller "It's Alive." What led to your change of heart?

Walsh: Reality happened! We've since seen a lot of unpleasant side effects of AI. It gradually became clear just how extensively targeted election advertising was being used to, in a sense, hack people's brains into voting for Donald Trump or Brexit, often against their own interests. And through self-learning programs, these attacks have swelled into a perfect storm.

DER SPIEGEL: Does it give you hope that the European Union is currently working on a directive on "Trusted AI"?

Walsh: The EU is really leading the way when it comes to regulating AI. And the European market is big enough that it's worthwhile for global corporations to adapt their AI products to European rules. However, the devil is in the details. Formulating rules is one thing, but the question is how vigorously compliance with the rules will then actually be enforced.

DER SPIEGEL: There are already considerable differences of opinion in the preliminary stages, for example on the question of transparency. Can AI really be transparent and comprehensible – or isn't it always, by definition, partly a black box?

Walsh: Transparency is overrated. People aren't transparent either – yet we often trust them in our everyday lives. I trust my doctor, for example, even though I'm not a medical professional and can't understand her decisions in detail. And even though I have no idea what's going on inside her. But I do trust the institutions that monitor my doctor.

DER SPIEGEL: How can we make sure that an AI is working according to the rules, even though we don't know its code in detail?

Walsh: This is a tricky problem – but it's not limited to AI. Modern companies are also a form of superhuman intelligence. Not even the smartest person on the planet could build an iPhone single-handedly. No one is smart enough to design a power plant on their own. Every large corporation interconnects the intelligence of tens of thousands of moderately intelligent employees to form a superhumanly smart collective – in other words, a kind of artificial intelligence, as it were.

DER SPIEGEL: Couldn't we just pull the plug on an AI system that misbehaves – and that would be the end of it?

Walsh: No way! We can't just turn off the computers of the global banking system; the global economy would collapse. Nor can we turn off the computers of air traffic control; air travel would grind to a halt. We also can't turn off the computers of power plants, because then we would experience a blackout. We are totally dependent on computers, already today. This dependence is only increasing with AI. We can't get rid of it. We can only try to ensure that the values of AI are in harmony with the values of our society.

DER SPIEGEL: You once correctly predicted that a self-driving car would cause a fatal accident with a cyclist and pedestrian, which is exactly what happened one year later. What do you predict for the next five years?

Walsh: With automatic facial recognition, we will see scandals. The American startup Clearview AI has scraped millions of photos without the consent of the people involved. The company was sued for that, but it just keeps going. It's incredible that it hasn't already been sued into bankruptcy. And one more prediction: Deep fakes – that is, videos and photos on the internet manipulated with the help of AI – will increase. In a few years, deep fakes will decide an election or trigger a war – or even both.
