Image courtesy of http://catsondrones.tumblr.com
Despite a rash of positive coverage extolling the possible beneficial uses of unmanned aerial vehicles (UAVs for short), the image of the drone as a killer robot is back (cue the Terminator references), and with a vengeance. This week the U.N. (yes, the United Nations) is taking up the issue of a proposed ban on killer robots. As Ishaan Tharoor of WashPo points out, Human Rights Watch and a number of other international NGOs banded together about a year ago to launch the international campaign against killer robots.
This despite the fact that there are currently no robots out there killing anyone on their own. There is an active “targeted killing” program, which has largely replaced the “Global War on Terror” under the Obama administration and is typically prosecuted by armed MQ-1 Predator and MQ-9 Reaper drones. But these UAVs are remotely piloted; they are not actually robots and do not possess the Terminator-like autonomy needed to live up to the name of killer robots.
As Charli Carpenter of UMass Amherst has astutely pointed out in several venues and forms, this nascent global movement is fixated on the delivery platform (the drones) when it should be focused on the indefensible policy (targeted killing, a.k.a. assassination). After all, this policy certainly could be carried out by other means: Special Forces, manned aircraft launching identical missiles, or Tomahawk cruise missiles (which never seemed to creep us out quite the way a drone does, despite their higher levels of autonomy). Nevertheless, media reports and blog posts continue to conflate killer robots with a seemingly inevitable string of assassinations from above (thanks for the hard-hitting research, Maureen Dowd). The two issues are linked, but they fall along separate lines once one digs into their policy and technical aspects.
A far more comprehensive and nuanced critique of drone warfare is offered by Ian Shaw and Majed Akhter at Understanding Empire in a series of posts under the heading “The Droneification of State Violence”. Here the tropes of killer robots are balanced by a clear geopolitical, historical, and theoretically informed meditation on the vagaries of imperial violence in our postmodern moment. For those seeking more depth than the increasingly shallow conversations surrounding lethal autonomous weapons, this is a much richer account. That is not surprising, given that the series of blog posts is essentially cut and pasted from their co-written academic article in the April 2014 issue of Critical Asian Studies. And even a cursory look at Understanding Empire establishes its partisan slant and academic-activist bona fides, leaving us to ask what is analysis and what is rhetoric. What is clear, though, is that while the targeted assassination policy of the US and the killer robot motif of ‘dronification’ are deeply intertwined, we need to debate each of these elements on its own merits: what today’s policy is doing to global security, and what future advances in technology portend for global politics writ large.
On the other side of the ledger, we find everything from “golly gee”/“whiz bang” admiration for technological wizardry coming directly from the military branches (as well reported by Michael Peck at the War is Boring blog) to the equal parts creeped-out but gleefully awestruck gadget coverage of mainstream technology blogs like Popular Science’s Zero Moment blog, Gizmodo, and Vice’s Motherboard.
This points to a general problem with the state of the conversation, one that is evident even when one digs deeper into the debates at the U.N. Convention on Certain Conventional Weapons this week: how do we separate the hyperbolic rhetoric against the use of autonomous UAVs that don’t exist yet from what is technically possible? Once we ask the engineers and the aviation industry what is technically possible, we run into two problems. First, because the research is geared towards military applications and couched in the interests of national security, universities, companies, and the military are reluctant to be transparent about the current state of autonomous UAVs. Second, even from those supporting this technology we see conjectures (guesstimates, nonetheless) based more on science fiction than science fact. For clearer information on the technology side, one could do worse than Mark Gubrud’s 1.0 Human blog, which seems to strike the right balance of well-informed technical knowledge and deeper consideration of the ethical issues, all while remaining partisan against autonomous weapon systems. Yet one is left with the sense that we desperately need better, real information (rather than conjecture) about what is technically feasible in the near future for these weapons before we can even start an informed debate.
Of course, this formulation (first collect the technical data, then hold an informed debate on UAVs) is problematic on two levels. First, given that all signs point towards more state secrecy now and in the near future rather than more transparency, it is doubtful that such details would be forthcoming in a public arena. Second, it is perhaps also the wrong question to ask at this time, suspending the discussion of ethics while waiting for the technoscience to catch up; more often than not, the ethics are considered after the fact. So while the campaign against killer robots is really a campaign against autonomous technology that doesn’t quite exist yet, we can at least give the movement kudos for taking on the issue before it is a fait accompli.