Military frameworks: technological know-how and the legitimization of warfare

John Kaag and Whitley Kaufman, University of Massachusetts Lowell

Cambridge Review of International Affairs, Volume 22, Number 4, December 2009

Abstract
It is the elusive target of policymakers, ethicists and military strategists: the target of a just war. Since the advent of precision-guided munitions in the mid-1970s, commentators have claimed that surgical-strike technology would advance the cause of jus in bello, ending the longstanding tension between effective military engagement and morality. Today, many policymakers accept that the ethical dilemmas that arise in the fog of war can be negotiated by the technical precision of weaponry. This is, at best, only partially accurate. At worst, its misplaced optimism risks numbing the moral sense of strategists and, just as importantly, the sensibilities of the general populace. We argue that the development of precision-guided munitions (PGM), stand-off weaponry and military robotics may force policymakers and strategists to experience new ethical tensions with an unprecedented sensitivity and may require them to make specific policy adjustments. In the move toward more quantitative approaches to political science and international affairs, it is often forgotten that military ethics, and the ethics of military technologies, turn on the question of human judgment. We argue that the ethical implications of the revolution in military affairs (RMA) are best investigated by way of a detailed discussion of the tenuous relationship between ethical decision-making and the workings of military technology.

    Introduction: revisiting questions concerning technology 1

Our questions concerning military technology may be viewed as flowing from the work of Martin Heidegger, who delivered the lecture 'The question concerning technology' in 1955, but are more accurately understood in the wider context of many Western thinkers who have taken up the interrogation of the moral and epistemic assumptions that seem to accompany and validate technical capabilities. As John Kaag has noted elsewhere, Heidegger delivered his address at a historical moment in which technological advances were beginning to double as the political imperatives and the moral justifications of war (Kaag 2008). The arms races that defined the geopolitical landscape of the second half of the 20th century may be over, but a new form of technological know-how, one that turns on precision rather than magnitude, now threatens our moral sensibilities. This danger manifests itself in two distinct, yet related, ways.

1 The theoretical foundations of this study were first briefly outlined in John Kaag (2008). The current article, however, departs from that article in significant ways in its detailed and exclusive focus on the ethical implications of military (rather than homeland security) technologies. The issue of intelligence-gathering and PGM technologies, first broached by Kaag (2008), has been developed more fully in the seventh section of the current article.

First, we risk confusing technical capabilities and normative judgments by assuming that precision weaponry facilitates ethical decision-making. Here 'facilitate', derived from facilis, means 'to make easier'. Second, we are in danger of allowing techne to facilitate ethics in a more dramatic sense. Here we might consider 'facilitate' as stemming from the verb facere, meaning 'to do or make'. We risk our ethical standards when military technologies are purported to make the thoughtful determinations that have always been the sine qua non of ethics. The employment of robotics on the battlefield stands as an extreme case of this problem. Military robotics remains in its early stages of research and development, but recent reports on battle-ready robots should give ethicists pause. In effect, strategists and theorists have begun to argue that we make the issue of military ethics an easy one by placing ethical mechanisms in our machinery, thereby shifting moral responsibility onto techne itself. We argue that the implementation of these robotics must be preceded by a careful reminder of what ethical judgment entails, that warfare must be regarded as a strictly human activity and that moral responsibility can never be transferred to the technology that is employed therein.

    A brief history of precision in aerial bombardment

The development of precision-guided munitions, satellite navigation and stealth technologies has transformed the character of aerial bombardment. In investigating the ethical pitfalls accompanying the use of these technologies, it is only fair to acknowledge the way in which they have reduced the rate and cumulative total of collateral damage suffered in warfare. While the debate concerning the exact definition of collateral damage continues (whether human casualties or private and public property should be included in this damage), it is impossible to argue that strategic bombing has not undergone a radical transformation in the past century and that, on the whole, this transformation has continued to raise the ethical standards of jus in bello. In World War II, between 300,000 and 600,000 German civilians were killed by Allied aerial attacks; in truth, these estimates might understate the total fatalities, since the Red Army employed tactical airstrikes that were not calculated in this total. The bombing of Dresden on 13 February 1945, compounded by the use of nuclear devices at the end of the Second World War, has come to symbolize the terror of total war and presents a strong argument for the development of surgical-strike capabilities. Over the course of two days, 35,000 refugees and residents of the Dresden area died in a firestorm that could be seen from a distance of 200 miles; and, as scholars continue to note, this outcome was possibly part of the strategy of the Dresden attack (Ramsey 2002, 353). These statistics horrify our modern sensibilities, sensibilities that have been cultivated in an age of precision-guided weaponry. At the time, however, such attacks were standard operating procedure, especially for strategists such as Churchill, who had used widespread strategic bombing since the early years of the century in order to terrorize and subdue colonial populations from India to Egypt to Darfur (Beiriger 1998).

Today, as Edward Holland notes, a single high-altitude non-nuclear bomber can accomplish what once required thousands of B-17 sortie missions dropping approximately 9,000 bombs (Holland 1992, 39). Since the first Gulf War, the United States (US) public has seen an increasing number of photographs and film clips of surgical-strike capabilities, images that might have led one to believe that only precision-guided munitions were employed in the American military effort. In truth, only seven or eight per cent of sortie missions in the first Gulf War were precision-guided, but they were reserved for urban targets and helped military planners avoid direct civilian casualties. Non-precision weaponry was employed in the Kuwaiti theatre, where US forces faced a variety of stand-alone targets. Due to the type of military theatres in Operation Enduring Freedom (7 October 2001) and Operation Iraqi Freedom (19 March 2003), 80 per cent of all bombs or missiles deployed by the US Air Force in Operation Iraqi Freedom were guided by video camera, laser or satellite targeting (CBS News 2003). This improvement in the economy of force has coincided with a decrease in direct civilian casualties. This being said, strategic precision bombing (such as the targeting of an electrical grid or water-treatment facilities) can have lasting effects on the health of a population under attack. For example, the US Department of Commerce reported that approximately 3,500 civilians died directly from the US bombing of Iraq in 1991. In addition, because of the aftermath of direct attacks on Iraq's economic and physical infrastructure, it was calculated that there were 111,000 indirect or 'excess' deaths in 1991 due to what was described as post-war adverse health effects (Daponte 1993). Whereas direct collateral damage and excess deaths used to be at least loosely correlated, the advent of precision-guided munitions (PGM) has allowed strategists to decouple these figures, destroying infrastructure without immediately killing civilians.

The discussion concerning direct collateral damage (immediate death and destruction resulting from a particular attack) and excess deaths helps frame the upcoming treatment of techne and ethical judgement in three distinct ways. First, while PGM help reduce direct collateral damage, which has often been regarded as the ethical metric for just war theorists, these munitions can still adversely affect the health of a given population. In short, PGM strikes can satisfy traditional ethical standards, but in so doing make us numb to additional ethical quandaries that accompany their use. Second, the successful use of military technology to satisfy one set of ethical standards may allow policymakers and strategists to assume that technological advancement is identical to moral advancement. Third, making the distinction between direct civilian casualties and excess death is the kind of ethical judgement that cannot be made by determinate rules. It requires the flexibility and sensitivity that only humans can bring to bear on a given situation.

Ethical judgments and techne: a philosophical overview

Following Plato's Republic and the Gorgias, Aristotle argues in The Nicomachean ethics that the practical matters of ethical judgement are, by definition, indeterminate (aorista) (2002, 1137b29). Experientially, this point seems on the mark, since in the heat of an ethically charged moment individuals face bewildering choices and conflicting ideals. This is the reason for Aristotle telling us that ethics must first be outlined or 'sketched out in rough figure' and not with precision; ethical judgement, on this account, must be cultivated in light of this fact. It is not the case that our precision must be refined in order to account for a particular ethical judgment; rather, Aristotle insists that it is the nature of these [ethical] matters to remain more complex than any set of rubrics generated by techne (1137b17). This is one reason for Plato to argue that Gorgias, and all rhetoricians who attempt to make ethics into a type of science, are unable to claim expert status in the field of ethics. There are no experts in a field that is defined by new and changing situations. Heidegger tried to extend this point in the 1950s, when technocrats began to make their way into the circles of power in Washington and in Europe. At this time, there was a strong yet misguided belief that strategic experts, detached from the emotional setting of the battlefield, could wage successful and just wars.

Aristotle is dubious, stating that cases concerning the human good 'do not fall under any science [techne] or under a given rule but the individual himself in each case must be attuned to what suits the occasion' (1104a10). Moral behaviour happens in situ, in a human interaction with a particular setting and circumstance. In her analysis of The Nicomachean ethics, Martha Nussbaum explains that the general account of ethics is imprecise ... not because it is not as good as a general account of these matters can be, but because of the way that these matters are: the error is not in the law or the legislator, but in the nature of the thing, since the matter of practical affairs is like this from the start (Nussbaum 2001, 302). Nussbaum's analysis of techne and tuche (chance) is very instructive for scholars trying to understand the motivations of military management.

These observations do not foreclose the possibility of developing ethical rules and standards. Indeed, Aristotle, Cicero and Augustine are the philosophic progenitors of the standards of just war theory, especially the outline of jus ad bellum. All of these thinkers, however, were fully aware that determinate laws are by their very nature general, but are used to interpret and assess particular human situations. The application of general rules to specific cases is always an issue of judgment (Nussbaum 2001, 318-340). For example, determining the thresholds of 'just cause', 'legitimate authority' and 'comparative justice' in situations such as the Gulf War or the Bosnian War (1992-1995) is difficult by virtue of the fact that these rules must be tailored to the intricate character of these conflicts.

Cicero and Augustine trace this difficulty not only to the complexity of human interactions, but also to human emotion that can jeopardize moral deliberation and judgment. This belief seems to underpin much of the current discussion about judgments on the battlefield, and it motivates the research and development of technology that might circumvent the ethical mistakes that are attributed to emotionally driven decisions. The hopes for military robotics turn precisely on this position. In a certain sense, Cicero sets this philosophical groundwork, stating that emotions risk overriding the rational calculations that encourage an agent to make genuinely moral decisions (Russell 1977, 5). Along these lines, Augustine held that God and angels did not have emotions and, for this reason among others, did not have to face the difficult moral choices that define human affairs. This stance raises the question: do we approach the status of gods and angels if we are able to mechanize morality, that is, create technologies that free human practitioners from the choice that is at the heart of ethics?

There seem to be two distinct answers to this question. From Augustine, the answer is unequivocally negative. Early in his career, Augustine argued that the freedom of human choice, and the accompanying temptation of desire, emotion and convenience, not only mark but literally define the field of ethics as a field of human investigation. We are not being ethical by ridding ourselves of the burden of human choice. He writes, 'without free choice of human will one could not act rightly' (Augustine 1982, II, I, 5).

According to Augustine, to pretend that a free choice of moral judgment is not haunted by fallibility, by the epistemological blindness of the human condition, is either an act of profound ignorance or, more likely, profound hubris. Conversely, modern military strategists who often rely heavily on precision weaponry seem occasionally to forget the human character of ethics and to assume that fallibility is something to be fully overcome in the course of scientific investigation. In the case of military robotics, addressed in coming sections, this forgetfulness occasionally morphs into a self-conscious attempt to embed decision-making capabilities in the development of new technologies. We believe that such attempts seek to close the question of ethics before it can be opened in a meaningful way. Undoubtedly, these trends in military ethics are born of good intentions, namely the intent to be both efficient and moral. They seem to stem from a more basic trend toward what might be called the quantification of military strategy, a move to employ game-theoretical modelling in optimizing military outcomes. Herman Kahn, a proponent of this approach and one of the founding defence intellectuals at the RAND Corporation in the 1950s, described the dangers and mitigation of emotional decision-making in US nuclear strategy:

It is not that the problems of (warfare) are not inherently emotional. They are. It is perfectly proper for people to feel strongly about them. But while emotion is a good spur to action, it is rarely a good guide to appropriate action. In the complicated and dangerous world in which we are going to live, it will only increase the chance of tragedy if we refuse to make and discuss objectively whatever quantitative estimates can be made. (Kahn 1960, 47)

From Kahn's statement, it follows that if strategists make and discuss objectively quantitative estimates of casualties and destruction in a given attack, they are more likely to avoid the tragedies of war (Ghamari-Tabrizi 2005, 203-204). In one sense, Kahn's position seems to make sense. Tragedies present people being destroyed by forces that are beyond their control. The development of military technology and the corresponding ability to accurately estimate casualties allow military planners to order aerial strikes with a greater sense of their consequences, thereby achieving a greater degree of control over a given situation in the field.

In another sense, however, Kahn's position on techne and quantitative estimates appears to miss its mark. As Nussbaum and others have noted, tragedy shows good people doing bad things. Sometimes the act that the individual intentionally did is not the same as the thing that is actually accomplished. Regardless of the degree of precision, strategists must continue to be aware of the possible, indeed inevitable, disjunction between the intended consequences of attacks and the outcomes of military confrontation in actu. As Clausewitz noted in the early 1800s, theoretical or ideal plans of attack, despite their specificity and precision, will remain out of synch with the sui generis circumstances of particular campaigns. Techne cannot overcome this Clausewitzian 'friction', a term that becomes the central theme of On war. For the sake of our discussion of precision-guided munitions, it is worth noting that this friction is coupled with the phrase 'the fog of war', for the friction between plans and actions turns on the inevitable limitations of human foresight (Clausewitz 1980, 264; Watts 2004). A reliance on mathematics and technical precision does not help us out of ambiguous judgements, for, as Clausewitz explains, 'The road of reason ... seldom allows itself to be reduced to a mathematical line by principles and opinions ... The actor in War therefore soon finds he must trust himself to the delicate tact of judgement' (Clausewitz 1980, 314). Clausewitz believed that strategists who assume that scientific approaches to military strategy can fully overcome fog and friction are making a serious tactical error; we merely extend this point by suggesting that such an assumption results in serious moral hazards as well.

In addition to this point concerning unintended consequences, tragedy presents an even more disturbing situation, namely the occurrence of a tragic conflict. In such intractable instances, an audience looks on as an action is committed by a person whose ethical character would, under other circumstances, reject such an act. Antigone would not normally disobey the mandate of the state, but in her particular situation she is forced to do so in order to fulfil her sense of filial piety. Tragedy in this case performs the conflict between ideals in light of given circumstances. Antigone's act is not physically compelled, nor is its enactment due to ignorance or misinformation. Fog is not to blame in this case. Instead, this type of tragedy turns on the inherent pitfalls of moral decision-making, pitfalls that cannot be obviated by the discussion of quantitative measurements. Tragic conflicts are interesting and instructive because they remind their audiences that ethical decision-making is an unshakably human endeavour that occurs in unique and ever-changing situations. The meaning of right and wrong in these particular situations cannot be determined by a general metric applied to all cases but only by way of a unique interpretation of virtue; indeed, these theatrical scenes continue to fascinate audiences for the sole reason that they are not to be figured out in a scientific manner. In spite of this fact, it seems that Kahn's position on the general ethics of techne received wide acceptance in the wake of the attacks of September 11, 2001 and as the US global war on terror (GWOT) quickly got underway.

Before advancing our argument, it seems wise to pause for a moment to consider the implications of the assertion that has just been made. We hold, in the spirit of Aristotle and Augustine, that the ethics of war turn on the issue of human judgement, and that judgment, by virtue of its sui generis circumstances and emotionally laden character, should be regarded as indeterminate. To say that judgement is indeterminate (aorista) is in no way to succumb to the type of relativism that bars the way of making normative claims. In our reading, Aristotle is no relativist. Instead, he is a middle-range theorist who believes that good judgement can be achieved only through a human's practised attentiveness to particular situations, a knowledge of previous forms of judgement, and the refinement of ideals that, while never fully attained, can serve as causes to pursue. Aristotle's understanding of virtue has been widely criticized for not providing solid guidelines for right action. As JL Mackie noted in 1977,

[T]hough Aristotle's account is filled out with detailed descriptions of many of the virtues, moral as well as intellectual, the air of indeterminacy persists. We learn the names of the pairs of contrary vices that contrast with each of the virtues, but very little about where or how to draw the dividing lines, where or how to fix the mean. As Sidgwick says, he only indicates the whereabouts of virtue. (Mackie 1977, 186)


Mackie is right in the sense that Aristotle's Ethics is not going to set out a hard and fast set of guidelines for ethical conduct. He is wrong, however, to suggest that we should disparage or dismiss Aristotle on these grounds. While Aristotle might be reticent to prescribe certain rules to guide our action, he is quite happy to tell us what is not permissible in making ethical judgements, such as making ethics into a techne. It is the prohibition against the mechanization of judgement that serves as the theoretical groundwork for our current project.

    Framing ethics: PGM, targeting and the technological mandate

Our military capabilities are so devastating and precise that we can destroy an Iraqi tank under a bridge without damaging the bridge. We do not need to kill thousands of innocent Iraqis to remove Saddam Hussein from power. At least that's our belief. We believe we can destroy his institutions of power and oppression in an orderly manner.

Former US Secretary of Defence Donald Rumsfeld, 2003 (cited in Kaag 2008)

The transformation that occurred in military tactics and technology at the beginning of the invasion of Iraq in 2003 was driven by a belief that is dramatically presented in Rumsfeld's statement. Unfortunately, this belief, that technical precision could allow military strikes to neutralize terrorists while sparing non-combatants, is underpinned by a dangerous logical conflation. In the first sentence of his statement, Rumsfeld describes the physical capabilities of modern PGM. A Hellfire missile can destroy one object without destroying another one in close proximity. It is true that PGM can allow tacticians to target a tank without destroying a nearby bridge, yet Rumsfeld makes a shift in his subsequent comments, which describe not technical capabilities but rather normative judgments (Kaag 2008). The designations of 'enemy combatant', 'legitimate target' and 'oppression' are not determinations that can be made by precision weaponry, but by the individuals who participate in the targeting cycles of US military command. Rumsfeld suggests that targeting enemy combatants is simply a neutral matter of destroying one object while leaving other proximate objects untouched. Here, he confuses neutral capabilities with moral permission, and reverses the Kantian ethical maxim that 'ought implies can' by insisting that 'can implies ought'. This is the sort of conflation that worries Heidegger as he witnesses the rise of modern technocracy, a system that allows scientific and technical capacities to drive the formulation and expression of particular normative claims (Kaag 2008).

Technological advancement was the keystone of what Rumsfeld termed 'military transformation', a buzzword that circulated through the Pentagon during the years of the global war on terror (GWOT). This was to be a transformation of capabilities, but it now seems that it risks transforming longstanding moral standards. Rumsfeld's rendering of military targeting downplays the inherent moral decision that is involved in the designation of 'enemy combatant'. Indeed, it seems to allow the neutral sights of stand-off weaponry to replace the fallible and value-based lens by which strategists made decisions in the past. Heidegger underscores this danger in his 1955 comment:

Everywhere we remain unfree and chained to technology, whether we passionately affirm or deny it.


rhetoric as a tool to single out individuals as potential targets. In light of this fact, strategists now face the temptation of relying on technical precision to make moral distinctions in the targeting cycle.

Heidegger suggests that such a danger is real and present. Modernity has already allowed technology to reveal the meaning of the natural world; we are suggesting that technocrats who optimistically speak of military transformation would allow PGM to reveal important meanings in the worlds of security, politics and warfare. For Heidegger, scientific and empirical manipulations designate nothing less than 'the way in which everything presences that is wrought upon by the revealing that challenges' (Heidegger 1993, 323). The risks of this sort of manipulation are front and centre in Heidegger's later work, especially in 'The question concerning technology', the 'Letter on humanism' and 'The turning'. Heidegger believes that in modernity's approach to understanding nature we have reduced it to its instrumental uses. The river is no longer understood as free-flowing, but rather is only understood as the amount of electricity it can generate when it is dammed up. That is to say that in the face of technological manipulation the river becomes merely or solely (bloss) a source of power. Similarly, the open plateau is no longer understood in its openness, but only as being-cordoned-off for the purposes of farming; the tree is not understood in its bare facticity, but only as a form of cellulose that can be used and employed. While Heidegger seems to flirt with romanticism in his comments, he does make a sound point: the technologies that are used to put nature in order become the only means of understanding nature's emergence.

This discussion concerning the 'enframing' of nature may appear far afield from a discussion of the ethical implications of technologies of violence, oppression and militarism. Appearances can be deceiving. Heidegger believes that the unquestioned technological manipulations that place nature on hand and under our control are the same sort of manipulations that allow mass atrocities to occur on the social and political scene. That is precisely the claim that we are making in this paper in regard to the advancement of surgical-strike capabilities. In a quotation that is often cited, and even more often misunderstood, Heidegger states that 'Agriculture is now a motorized food industry, the same thing in its essence as the production of corpses in the gas chambers and the extermination camps, the same thing as blockades and the reduction of countries to famine, the same thing as the manufacture of hydrogen bombs' (cited in Spanos 1993, 315). Heidegger has been criticized since the early 1950s for this comment, for it seems to trivialize the brutality of the Holocaust by making a comparison between genocide and agriculture.

While this cryptic remark deserves scrutiny along these lines, it does seem to suggest that being mesmerized by technological expediency can blind us to, or distract us from, other ways of knowing that do not turn on the rhetoric of utility. This is the case in the use of PGM as much as it is the case in the employment of atomic weapons. The promise of the hydrogen bomb is to create an amount of destruction that is orders of magnitude greater than conventional or fission bombs. Such power can tempt engineers and strategists to develop and test these weapons without attending to the on-the-ground implications of these devices. Combating this form of moral myopia is, in a certain sense, rather easy, for the developers of these weapons did not purport to save lives, but rather to destroy them. The case of PGM is slightly different. The promise of PGM is to kill or disable only legitimate targets while sparing both civilians
and modern military personnel. Such a promise seems like a good one (if the targets are justly selected), but unfortunately this is a promise that technology itself cannot keep. Only human beings can make good on this ethical commitment. Despite this fact, the development of precision technologies has enabled the rhetoric of safe, cheap and efficient 'small wars'. As Michael Adas argues, the elision between surgical-strike technology and the rhetoric of efficient warfare is just the most recent version of the longstanding partnership between technology and imperialism (Adas 2006). This reliance on technical capabilities is not easily criticized by ethicists, since the development of these weapons is made in the name of ethics. When the mouthpieces of war machines co-opt the language of ethics and justice, ethicists face greater and more nuanced challenges.

In light of this discussion, a related question arises: do military professionals understand the moral challenges of particular battle-spaces by way of ethical training, or only through the technical frameworks of the weaponry employed?

Heidegger restates the broad point concerning technological enframing in his later writing: 'When modern physics exerts itself to establish the world's formula, what occurs thereby is this: the being of entities has resolved itself into the method of the totally calculable' (Heidegger 1998, 327). The current revolution in military affairs driven by the US DoD has encouraged modern physics and technology to exert itself in order to establish a formula for modern warfare. The 2006 Quadrennial Defence Review (QDR), which provides objectives and projections for US strategy, aims at minimizing costs to the United States while imposing costs on adversaries, in particular by sustaining 'America's scientific and technological advantage over potential competitors' (QDR 2006, 5). This comment indicates that US military tactics are tacitly employing an egoistic version of a utilitarian standard as their ethical norm (the good is achieved in minimizing costs while maximizing benefits to allies and friends). Many disadvantages of this moral framework have been repeatedly voiced by critics of utilitarianism, one of which is the fact that the metric of utility changes in reference to US military personnel, innocent civilians and enemy combatants. There is, however, one supposed advantage of utilitarianism, namely that it is 'totally calculable'. The calculations of utilitarian measurements are allied closely with the calculations of technical precision, and, in the case of the QDR objective, strategists seem to indicate that technological advantage can aid in making this moral calculation of cost-benefit analysis. The philosophical underpinnings of the QDR are reflected in, and seem to motivate, the research and development of technologies such as military robotics that will fully replace the human soldier in battlefield situations.
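To see why 'totally calculable' is both accurate and beside the ethical point, consider a deliberately crude sketch of the egoistic-utilitarian scoring that the QDR language seems to presuppose. Everything in it (the group categories, the weights, the figures) is hypothetical and of our own invention, not drawn from the QDR or any actual targeting doctrine; the point is only that, once weights and estimates are fixed, the 'moral' verdict reduces to one line of arithmetic.

```python
# Purely illustrative sketch: a toy "egoistic utilitarian" strike score.
# All categories, weights and numbers are hypothetical, not from any real system.
from dataclasses import dataclass

@dataclass
class StrikeEstimate:
    us_casualties: float
    ally_casualties: float
    civilian_casualties: float
    enemy_combatants_neutralized: float

def egoistic_utility(est: StrikeEstimate,
                     w_us: float = -10.0,       # the asymmetric weights ARE the
                     w_ally: float = -5.0,      # moral judgment, yet they enter
                     w_civilian: float = -1.0,  # the model as bare parameters
                     w_enemy: float = 2.0) -> float:
    """Collapse a proposed strike into a single 'calculable' score."""
    return (w_us * est.us_casualties
            + w_ally * est.ally_casualties
            + w_civilian * est.civilian_casualties
            + w_enemy * est.enemy_combatants_neutralized)

# The arithmetic is trivial; every contestable judgment has already been made
# upstream, in the weights and in the casualty estimates themselves.
print(egoistic_utility(StrikeEstimate(0.1, 0.5, 3.0, 12.0)))
```

The computation is 'totally calculable' in exactly the sense at issue: it runs regardless of whether the weights, or the decision to weigh lives differently at all, can be morally defended.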

Ethical reflections on battle-ready robots

Perhaps no single idea better expresses the technological fantasy of futuristic warfare, or even of transcending war, than the idea of robot soldiers. Robots, we are told, have already stepped out of the science fiction pages and onto the battlefield (Carafano and Gudgel 2007, 1). In fact, while the US military already has several thousand robots in operation, these machines are not fully autonomous systems but are remotely operated by human beings in real time. These include the Air Force Predator drones, a type of unmanned aerial vehicle (UAV) with both surveillance and combat capability, which has been used to kill important al-Qaeda operatives with Hellfire missiles. Of equal importance is the land-based bomb-disposal robot, crucial against improvised explosive devices (IEDs) in Iraq, the cause of the large majority of US casualties. Robots are currently used to disarm bombs, explore caves and buildings, and scout dangerous areas so that human soldiers can be spared from these dangerous tasks. Some Israeli military robots are equipped with submachine guns and with robotic arms capable of throwing grenades; however, as with US robots, the decision whether to use these weapons is in the hands of a remote human operator. While these remotely operated machines are important technological advances and in some ways are already dramatically changing the way war is fought, it is misleading to call them 'battlefield robots', nor do they appear to raise especially complex or novel ethical or policy questions beyond what has already been discussed in reference to PGM.

However, we now face (so we are told) the prospect of genuinely autonomous robot soldiers and vehicles, those that involve artificial intelligence (AI) and hence do not need human operators. The Future Combat Systems project, already underway at a projected cost of US$300 billion, aims to develop a robot army by 2012, including a variety of unmanned systems with the capacity to use lethal force against enemies, requiring the ability to locate and identify an enemy, determine the enemy's level of dangerousness and use the appropriate level of force to neutralize the target, though it is unclear what degree of autonomy these unmanned systems will have. The US military is now one of the major sources of funding for robotics and artificial intelligence research (Sparrow 2007, 62). While at present true robot soldiers remain mere vapourware, this has not stopped enthusiasts of futuristic warfare from speculating about the imminent transformation of war. John Pike, recently writing in the Washington Post, declares that 'Soon, years, not decades from now, American armed robots will patrol on the ground as well [as in the air], fundamentally transforming the face of battle' (2009, B03). Wallach and Allen tell us that current technology is converging on the creation of '(ro)bots whose independence from direct human oversight, and whose potential impact on human well-being, are the stuff of science fiction' (Wallach and Allen 2008, 3). According to a 2005 article in the New York Times, 'The Pentagon predicts that robots will be a major fighting force in the American military in less than a decade, hunting and killing enemies in combat' (Weiner 2005). Whereas Isaac Asimov's famous Laws of Robotics mandated that no robot may injure a human, these robots will, in contrast, be programmed for the very opposite purpose: to harm and kill human beings, that is, the enemy.

The deployment of genuinely autonomous armed robots in battle, capable of making independent decisions as to the application of lethal force without human control, and often without any direct human oversight at all, would constitute a genuine military as well as moral revolution. It would involve entrusting the ultimate ethical question to a machine: who should live and who should die? Of course, machines already make lethal decisions. An ordinary land mine, for example, uses lethal force against soldiers or vehicles by detecting their presence based on pressure, sound or magnetism; advanced mines are even capable of distinguishing between enemy and friendly vehicles. However, the very lack of such discriminating capacity is the reason international conventions have prohibited the use of such weapons, since they do not reliably distinguish between soldiers and civilians (or even animals) and can be deadly long after the conflict is finished. Hence the development of genuinely robotic lethal decision-makers, capable of making rational decisions as to what constitutes a legitimate target, would in theory surmount this objection and would constitute an unprecedented step in military technology.

A machine capable of making reliable moral judgments would presumably require 'strong AI', that is, actual intelligence equivalent or superior to our own, a project that to date remains a speculative possibility. It seems therefore quite premature to consider the ethical ramifications of genuinely autonomous lethal robot soldiers. Indeed, the very project threatens to be self-defeating if the underlying motivation for robot soldiers is to replace humans in situations that are dangerous or otherwise undesirable. For a machine that achieved mental capacity equivalent to that of human beings could arguably claim equivalent moral status as well, and as such have an equal right to be protected from the dangers of warfare (it should be noted that the Czech word from which we derive 'robot' means serf or slave). Of course the robots might be better suited for dangerous missions, having built-in armour and weaponry to protect them. However, some proponents (such as Arkin) call for designing these robots without an instinct of self-preservation; even if this is possible, the denial of a right of self-protection to a moral agent is itself ethically problematic. Alternatively, it is possible that such autonomous machines would lack some crucial element required for attaining moral status and hence could be treated as mere machines not protected by the rights of soldiers. However, we do not even know whether a being is capable of moral decision-making without being itself a moral agent. It thus seems pointless even to try to answer such questions at this stage, until we know whether such beings are possible and what they would be like (for example, whether they would have desires and purposes just like us, or whether they would be capable of suffering) (Sparrow 2007, 71-73). A prior moral issue involves asking just what the goals are in developing such robot soldiers: to protect humans from harm? To save money? To wage war more effectively? To make war more ethical and humane to both sides? Clearly, the purpose with which we engage in this project will influence the nature of the robots created and their ethical legitimacy.

The rhetoric and the predictions for an imminent AI robot army run so far ahead of any actual engineering capabilities for the near future that the disproportionate attention seems to be more a product of the seductive fascination of technology than of realistic engineering possibility. These robot soldiers offer the dream of a transformed way of waging war. In a 2005 New York Times article, Gordon Johnson of the Joint Forces Command at the Pentagon is quoted as stating the advantages of robots over human soldiers: 'They don't get hungry. They're not afraid. They don't forget their orders. They don't care if the guy next to them has been shot. Will they do a better job than humans? Yes' (Weiner 2005). Roboticist Ronald Arkin hopes that robots, with their (hypothetically) superior perceptual skills, will be better able to discriminate in the fog of war and also make ethically superior decisions (Arkin 2007, 6). John Pike suggests that the very existence of war and genocide is to be blamed on human weakness, and makes utterly fantastic claims for the ability of robot soldiers to usher in a new millennium of peace. For Pike, the problem with human soldiers is not merely their physical limitations and their cost but, even more fundamentally, their psychological limitations, including particularly their vulnerability to human emotions such as sympathy and compassion that make them hesitant to kill. Pike cites the celebrated 1947 study by SLA Marshall as support for the proposition that most soldiers will not even fire their weapons at the enemy. However, Marshall's evidence has long been discredited as speculative at best, and as sheer invention at worst. Note that even if soldiers are hesitant to fire, it is unclear whether that hesitation is due to sympathy, fear or even mundane factors such as the need to clean one's weapon.

The widespread fascination with the possibility of robot soldiers, and the credulous acceptance in the media of claims about their imminent arrival long before there is any realistic possibility of producing them, suggests that what is really at work is what historian David Noble has labelled 'the religion of technology' (Noble 1999). Noble argues that the Western (and especially American) obsession with technology has long been a sort of secular religion. That is, its aims (however purportedly scientific) have paralleled the religious goal of salvation by overcoming human imperfection and creating a new and better being (often an immortal one). The motivations behind technology have rarely been merely practical and mundane, but rather rooted in a desire to transcend the imperfections of the human condition and become godlike. As Moravec claims, 'Technical civilization, and the human minds that support it, are the first feeble stirrings of a radically new form of existence' (Moravec 2000, 12). Noble contends that this quasi-worship of technology has resulted in the unfortunate double tendency to engage in escapist fantasies rather than realistic assessments of what is technically feasible, and at the same time to display a pathological dissatisfaction with, and deprecation of, the human condition (Wallach and Allen 2008, 207).

Both of these tendencies are on display in the current fascination with robot soldiers. For one thing, as Wallach and Allen point out, the more optimistic scenarios for AI are based on assumptions that 'border on blind faith' (Wallach and Allen 2008, 194). They quote as an example Michael LaChat's prediction that the artificially intelligent being will become a 'morally perfect entity'; Wallach and Allen remark that the word 'entity' should be replaced by 'deity' (Wallach and Allen 2008, 194). For another, even if such robots were technologically feasible, the wild claims that they would wholly transform or even end war are simply of a piece with the historical tendency to declare that any major new powerful weapon will mean the end of war. John Pike, writing in the Washington Post, predicts that this robot army will make America's military irresistible in battle and usher in a new robotic Pax Americana that would end the large-scale organized killing that has characterized six millenniums of human history. Such predictions inevitably neglect the ability of enemies to discover new tactics and strategies to outwit the technological advantages of their opponents, and moreover the fact that, once created, it is impossible to keep a monopoly on a new technology. Once both sides had access to robot armies, it seems unlikely that war would become any less frequent, or any more humane; if anything it would likely become more lethal and destructive, especially if Pike is right that the major advantage of robot soldiers would be the utterly ruthless efficiency with which they would be able to take human life.

Arkin follows a long tradition in artificial intelligence of locating human limitations in our emotions that prevent us from reasoning clearly; for him, emotions distort our reasoning faculty and produce biases such as the 'scenario fulfilment fallacy', in which people select information to conform to their pre-existing expectations (Arkin 2007, 6). Robotics appears to offer a path toward escaping the fog of war through eliminating those elements of the human thought process that interfere with the clarity and precision of reason. This outlook reflects the influence of Cartesian dualism and its radical distinction between reason/mind and emotion/body. This extreme and implausible dualism seems to be motivated by the technophile's goal of separating out the sources of ambiguity in human judgment from those elements that can be made clear and distinct, so that a perfect reasoning machine can be made that is not subject to human foibles.

In fact, it remains an open question, to put it mildly, whether an autonomous intelligent agent could be created without endowing it with emotions comparable to those of humans; indeed, a substantial literature rejects the Cartesian assumption that emotion and reason are separable at all (Damasio 2005). Furthermore, it is quite possible that emotions may be necessary to ethical judgment; one who could not feel compassion for the sufferings of others might not be capable of making good moral decisions. In fact, while Arkin and others observe that Augustine worried that emotions such as anger or fear can distort moral judgment in war, what they neglect to note is that for Augustine the problem is not emotions in themselves but the wrong emotions. For Augustine, what makes war morally permissible is precisely that it is fought with the right emotion, love rather than anger or pride (Russell 1987).

Essential to the moral evaluation of the use of genuinely autonomous robots capable of inflicting lethal force is, of course, the degree of reliability of their moral judgments. Would it be ethical to use robots that can discriminate soldiers from civilians somewhat less effectively than humans can, on the grounds that they were cheaper than human soldiers? What if robots could discriminate better overall than humans, but had a particular blind spot (say, an inability to recognize hospitals as non-military targets)? Would it be ethical to deploy them anyway? It is perhaps no surprise that advocates of the new robot army insist that the robots would not merely be capable of equalling human moral capacity, but of surpassing it. Arkin, while disavowing any claim that robot soldiers would be capable of being perfectly ethical on the battlefield, nonetheless is convinced that 'they can perform more ethically than humans are capable of' (Arkin 2007, 6). He cites a series of findings demonstrating the ethical failings of United States soldiers, including the fact that substantial numbers of soldiers report having mistreated non-combatants, especially when angry; that some units modify the Rules of Engagement (ROE) in order to accomplish a mission; and that a third of soldiers reported facing ethical situations in which they did not know how to respond (Arkin 2007, 8). Arkin's project, funded by the DoD, is to provide a set of 'design recommendations for the implementation of an ethical control and reasoning system potentially suitable for constraining lethal actions in an autonomous robotic system so that they fall within the bounds prescribed by the Laws of War and Rules of Engagement' (Arkin 2007).
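A minimal sketch may make both the appeal and the worry concrete. What follows is not Arkin's architecture; it is our hypothetical rendering of a rule-based 'constraint check' of the general kind such a description suggests, in which a proposed lethal action is vetoed unless it satisfies a fixed list of conditions. Every input it consults (combatant status, protected-site status, a proportionality threshold) is an assumption of the sketch, and each is precisely the sort of judgment this article argues cannot be settled in advance.

```python
# Hypothetical sketch of a rule-based lethal-force "governor"; not Arkin's system.
from dataclasses import dataclass

@dataclass
class ProposedAction:
    target_is_combatant: bool       # someone (or some model) has already decided this
    target_is_protected_site: bool  # hospital, shrine, school ...
    expected_collateral: float      # an estimate, with all of its uncertainty
    military_value: float           # likewise an estimate

def governor_permits(action: ProposedAction,
                     proportionality_ratio: float = 1.0) -> bool:
    """Veto lethal force unless every hard-coded constraint is satisfied."""
    if not action.target_is_combatant:
        return False
    if action.target_is_protected_site:
        return False
    # 'Proportionality' reduced to a numeric threshold fixed at design time.
    if action.expected_collateral > proportionality_ratio * action.military_value:
        return False
    return True
```

The code runs, but it only relocates the problem: the booleans and estimates it consumes are outputs of human (or further machine) judgment, and the threshold is a moral decision frozen into a parameter before any particular situation is encountered.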


We have already criticized the widespread assumption that unethical conduct in war can largely be attributed to emotions and their distorting impact on reason. Equally problematic is the claim that a significant cause of unethical behaviour is a lack of understanding as to what ethics requires: that is, the failure of soldiers to grasp the clear demands of the rules of war. The unstated assumption, of course, is that moral ambiguity is attributable to human ignorance or irrationality, rather than being an intrinsic and inevitable element of ethical reasoning. Arkin (2007), for example, seems to believe that modifying the ROE constitutes in itself a violation of ethics or the laws of war. But it is fallacious to assume that a change in the ROE reflects a moral failing, for the modified ROE may be no less morally acceptable than the prior ones, or might even render the ROE more consistent with morality. For example, Evan Wright (2008) describes a situation in the Iraq War in which the initial ROE, permitting the targeting only of those carrying weapons, imposed an undue restriction on the troops, for it precluded the use of force against 'forward observers' for mortar attacks: men dressed in civilian clothes and carrying binoculars and cell phones but no weapons, who report the coordinates of American troops so as to make the mortar rounds more accurate. Wright describes how a modification in the ROE worked its way up the chain of command, as marines requested permission to fire on forward observers, permission that was eventually approved (Wright 2008, 102). It is quite plausible that a forward observer is a legitimate target both under the laws of war and under morality despite not actually carrying a weapon, though the moral analysis is by no means obvious (an example of the 'fog of war'). The presumed advantage of a robot soldier, its deterministic inability to modify the rules, is by no means an obvious improvement over the human judgment process; it is far from obvious that human flexibility (even with its capacity for misuse) is intrinsically worse than a robot's mechanical determination to follow rules that may themselves be flawed.

    Mechanizing judgment

Advocates of robot soldiers will no doubt argue that the problem lies in the ambiguity of the prior rules; by more precisely specifying who is a legitimate target, we can avoid the need for flexibility. But such a claim is unconvincing, for the above example arguably demonstrates intrinsic ambiguity in morality rather than ambiguity due to perceptual limitations or lack of clarity in the rules. For whether someone counts as a legitimate target is necessarily a matter of degree: at one end of the spectrum is the man firing the gun; at the other end is the civilian playing no role in the attack. In between is a continuum of cases varying by the level of involvement or support being provided in the attack. While radioing in directions to a mortar team is probably sufficient to render one a combatant (despite not carrying arms), other cases are not so easy, for instance civilians who merely warn the mortar crew that Americans are coming, or civilians who provide food or water to the crew, or who merely give them words of support. It is unlikely that any set of rules can be prescribed in advance to determine when lethal force is permissible.
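The difficulty can be restated in programming terms. The enumeration below is our hypothetical illustration, not anyone's fielded rule set: any attempt to list 'levels of involvement' in advance either leaves gaps at exactly the cases that matter or draws an arbitrary line exactly where the moral weight lies.

```python
# Hypothetical illustration of the 'continuum' problem for rule-based targeting.
# The roles and verdicts below are invented for illustration only.

RULES = {
    "firing a weapon": "combatant",
    "radioing mortar coordinates": "combatant",       # the Wright example, arguably
    "warning the mortar crew of an approach": None,    # the rule set is silent
    "carrying water to the mortar crew": None,         # the rule set is silent
    "shouting encouragement": None,                    # the rule set is silent
    "bystander holding a cell phone": "non-combatant",
}

def classify(observed_role: str) -> str:
    verdict = RULES.get(observed_role)
    if verdict is None:
        # The hard cases fall through; adding more entries only moves the
        # arbitrary line, it does not remove the need for judgment.
        raise ValueError(f"no rule covers: {observed_role!r}")
    return verdict
```

Any such table can be extended indefinitely, but each new entry is itself a moral decision taken in advance of the situation it is meant to govern, which is the point at issue.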

Nor is this the end of the intrinsic moral ambiguity of such situations. David Bellavia, for example, describes an incident in which insurgents were using a small child of five or six as a forward observer. In such a situation, even though there was no doubt about the boy's role and his essential function in targeting the Americans, the American soldiers nonetheless declined to target the child on moral grounds; as Bellavia and Bruning explain, 'Nobody wants a child on his conscience' (Bellavia and Bruning 2007, 10). The fact that a robot soldier would presumably lack a conscience and be able to kill the five-year-old is hardly evidence of its superiority to human soldiers, at least in moral terms. Moreover, the question of age raises yet another continuum problem: at what age does a person become sufficiently morally accountable to be a legitimate target? It is unlikely that any rule can be formulated in advance to cover such situations; the ability to respond flexibly and contextually to such ambiguity is a reflection of the human capacity to exercise moral judgment in complex situations.

Nor can the problem of perceptual ambiguity be eliminated by deploying robots in place of humans, despite frequent assertions to the contrary. As Max Boot states, '[t]he US military operates a bewildering array of sensors to cut through the fog of war' (Boot 2003). There is no doubt that machines can dramatically improve on humans in such matters as visual acuity, electronic surveillance and so forth. And in many cases this will make a robot capable of better complying with the rules of war and even ethics, for example if it can determine definitively that the apparent civilian is in fact a forward observer and not merely a spectator with a cell phone. But it is highly doubtful that even the best machines could eliminate the intrinsic perceptual ambiguity of the battlefield. In Noel Sharkey's example, it is unlikely that a robot could decide whether a woman is pregnant or carrying explosives without the use of the human skill of 'mere common sense' (quoted in Flemming 2008). Perceptual ambiguity will always be part of the fog of war and cannot be eliminated by technological solutions. Indeed, the very question of how much perceptual evidence is required before deciding it is appropriate to resort to lethal force is itself inextricably intertwined with moral judgment. Thus even if one could ascertain that the person were in fact communicating with Iraqi soldiers, that would not of course dictate that he or she is a legitimate target; such a judgment would require appreciation of psychological and moral complexity, for example whether he or she is merely encouraging them or providing necessary technical assistance.

Even more troubling is the possibility that ethical principles themselves may be modified to suit the needs of a technological imperative. A remarkable example of this is Arkin's discussion of how to choose between the many contested moral theories; Arkin rejects one of these moral theories, virtue theory, on the grounds that it 'does not lend itself well by definition to a model based on a strict ethical code' (Arkin 2007, 43). There may of course be good substantive reasons for rejecting a given moral theory, but to do so based on the technical criterion of operationalizability is to let ethics be guided by techne rather than techne be guided by ethics. The drive to unite ethics with technology risks subordinating the former to the latter; ethical principles may be distorted by the need to implement them in an algorithmic form suitable to machine architecture. Wallach and Allen's comments warrant consideration along these lines when they note that designing a robot ethics based on reasoning provides 'a more immediately tractable project' than one based on moral emotions (Wallach and Allen 2008, 108).

    To take another example, Arkin calls for making the moral criterion of

even hard moral choices, conducted by superior and infallible machines. Wallach and Allen express the concern that we have started on the slippery slope toward the abandonment of moral responsibility by human decision makers (Wallach and Allen 2008, 40). Or, even worse, it may be that the value of robot soldiers is that they will be unconstrained by human weaknesses such as compassion that limit military effectiveness, and hence ruthless and unconstrained in their use of force against the enemy.

There is an alternative view of the role of robots in war, though it has had far less attention because it is less dramatic and glamorous. Instead of envisioning robots as idealized replacements for human soldiers, one might see the role of robotics as assisting human decision-making capacity. As Roger Clarke argues, 'The goal should be to achieve complementary intelligence rather than to continue pursuing the chimera of unneeded artificial intelligence.' While computers excel at computational problems, humans are unsurpassed in what one might broadly call common sense, which Clarke explains to include unstructured or 'open-textured' decision-making requiring 'judgment rather than calculation' (Clarke 1993, 64). Humans are, as Wallach and Allen assert, far superior to computers in managing information that is incomplete, contradictory or unformatted, and in making decisions when the consequences of actions cannot be determined (Wallach and Allen 2008, 142). In other words, human superiority will remain in the field of ethics itself, and above all the ethics of the battlefield, where situations are complex, changing and unpredictable, and the rules themselves open-ended. None of this is to deny the crucial role for remote-controlled or semi-autonomous robot units taking over tasks that are especially dangerous; but moral judgments about the taking of life, or even the destruction of property, must remain the domain of the human soldier, at least for the foreseeable future. For all that technology can do to improve human life, there is no reason at present to believe that it can solve ethical problems that have challenged humans for thousands of years, or to eliminate the fog of war.
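As a concrete (and entirely hypothetical) illustration of what 'complementary intelligence' might look like in software, consider a human-in-the-loop pattern in which the machine collates and ranks sensor reports while any use-of-force decision is explicitly returned to a named human operator. The names and structure below are our own sketch, not a description of any fielded system.

```python
# Hypothetical sketch of 'complementary intelligence': the machine collates and
# ranks; a named human operator makes, and owns, the use-of-force decision.
from dataclasses import dataclass
from typing import List

@dataclass
class SensorReport:
    source: str
    observation: str
    confidence: float  # 0.0 to 1.0

def summarize(reports: List[SensorReport]) -> str:
    """The computational part machines are good at: collation and ranking."""
    ranked = sorted(reports, key=lambda r: r.confidence, reverse=True)
    return "\n".join(f"[{r.confidence:.2f}] {r.source}: {r.observation}" for r in ranked)

def request_human_decision(reports: List[SensorReport], operator: str) -> bool:
    """The open-textured part stays human: the system presents, asks and records."""
    print(f"Decision requested from {operator}:\n{summarize(reports)}")
    answer = input("Authorize? [y/N] ")
    return answer.strip().lower() == "y"
```

Nothing in such a design eliminates the fog of war; it simply keeps the judgment, and the responsibility, where the argument above says it must remain.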

    Aftershocks: how military technologies affect intelligence-gathering

    Smart bombs are neither smart nor moral. The use of Hellfire missiles to carry out the targeted assassination of alleged terrorists, as in the case of the targeting of the Hussein brothers in downtown Mosul, is morally and legally problematic. As PW Singer's Wired for war (2009) underscores, the use of stand-off technologies and unmanned vehicles in such surgical strikes can create a videogame/voyeuristic approach to warfare in which military personnel who control these drones from remote locations lose touch with the realities of the battlefield (Singer 2009). The use of similar weaponry, including battle-ready robots, which might result in the killing of innocent civilians due to technical malfunction or faulty intelligence-gathering and communications, is more obviously questionable. In January 2002, former Deputy Secretary Paul Wolfowitz expressed hopes for the impending US military campaigns: they aimed to apply a very small force on the ground and leverage it in a dramatic way not only through precision-guided munitions but through precision communications that would get those munitions accurately to the right target instead of accurately to the wrong target. Wolfowitz's comment, however, also tempered this optimism about technological precision in the formation of strategy by concluding that accuracy by itself doesn't do you any good if your target identification is wrong (cited in Soloman 2007, 82). His comment seems to apply equally to the use of PGM and to any future developments of battle-ready robots, both of which could be used to moral or immoral ends depending on the plans and purposes of military commanders.

    At first glance, it seems that Wolfowitz understood at least one of the moral lessons of precision-guided munitions, namely that precision is only as good as the intelligence that is used in a given targeting cycle. Learning this lesson, however, can lead to ethically problematic conclusions. As Kaag has argued, 'Painting and destroying specific enemy targets with laser-guided accuracy depends on the reliability of intelligence, and this intelligence is often garnered from the interrogation and coercion of enemy prisoners. The demands of PGM and robotic targeting, the need to specify an enemy's exact position and character, may place undue burden on interrogators who feel responsible for providing this information' (Kaag 2008). Wolfowitz's seemingly cautious remark concerning PGM must therefore be understood in the wider scope of the global war on terror. On 27 December 2001, a week before Wolfowitz's comment, Rumsfeld (at that point, Wolfowitz's immediate superior) announced the establishment of Guantanamo Bay as a holding site for detainees. The interrogation techniques used at this site, many of which were previously used to break trainees at the Armed Forces SERE (Survival Evasion Resistance Escape) School, were sanctioned by DoD officials in the months surrounding Wolfowitz's comments concerning the use of PGM.
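    The arithmetic behind the lesson that precision is only as good as the intelligence feeding it is worth making explicit. With purely illustrative figures of our own choosing, and without reference to any actual system:

```python
# Illustrative figures only; neither number describes any actual system.
weapon_accuracy = 0.95        # chance the munition hits the designated point
intelligence_correct = 0.60   # chance the designated point is the right target

p_correct_strike = weapon_accuracy * intelligence_correct
print(f"Probability of a correct strike: {p_correct_strike:.2f}")  # 0.57
```

    On these assumptions a nearly perfect weapon still fails to strike the right target more than four times in ten; the moral weight therefore shifts to the intelligence that feeds the targeting cycle.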

    We are not claiming that the development of military technologies directly causes abuse or torture; this would be overstating the point. We are, however, suggesting that the demand to use precision-guided munitions and military robotics in a moral way will place unprecedented pressure on interrogators to garner the intelligence needed to identify appropriate targets. This may have already resulted in compromising the standards set by the Geneva Convention for the treatment of prisoners of war or, more likely, in the wholesale dismissal of these standards. Exposing the relationship between military technology and interrogation practices is not meant to shift responsibility away from the strategists and commanders who enact morally questionable policies. Instead, we echo writers such as Bauman and Coward in observing that the structures of technology and bureaucracy can contribute to the articulation of new forms of violence while masking the unique and deeply problematic character of this violence (Coward 2009, 45). There is in this case a complex symbiosis between stand-off and precision weaponry and the intelligence-gathering techniques that might inform its use. This point is driven home when we recognize that even the initiation of the Iraq War, undoubtedly the most technologically advanced war ever waged, was motivated in part by intelligence produced through allegedly savage interrogation procedures. Stephen Gray, who investigated Central Intelligence Agency (CIA) detention centres, alleges that the supposed connection between al-Qaeda and Saddam Hussein was corroborated by intelligence gathered from Iban al Shakh al Libby, who provided this information only after being tortured in prisons in Egypt (Agence France Press 2006).

    Much more could be said about this topic in light of the history of philosophy. For example, Friedrich Schiller wrote his Aesthetic letters in 1794, in the midst of the upheaval that followed the French Revolution, and warned that human beings could lose their moral bearings in two distinct ways. On the one hand, they could turn to savagery, in which one prioritizes feeling and emotion over reason and science, in his words, when feeling predominates over principle. On the other, they could become barbarians and prioritize science and techne at the expense of human feeling and sentiment, in Schiller's words, when principle destroys feeling (Schiller 2004, 34). Schiller's warning comes home to us when we examine the relationship between advanced military technologies, forms of techne that aim to remove all human feeling and sentiment from the battlefield, and recent methods of intelligence-gathering, methods that appear to break basic ethical principles. Indeed, such an investigation may expose the unique way in which barbarism and savagery enable one another in the course of modern warfare.

    Conclusion

    In the dialogue Protagoras, Plato recounts the myth of Prometheus bringing technology to humankind. The gift of techne threatened to result in the destruction of all humans, since humans lacked any standards for the proper use of these dangerous powers. Zeus, fearing the possible extermination of humans, sent Hermes to deliver them the gift of justice (dike) to bring order and conciliation to men as a necessary supplement to technology. Moreover, Zeus insisted that the knowledge of justice be distributed among all people, and not given merely to a small number of experts, for, he says, cities cannot be formed if only a few have a share of these as of other arts [technon] (Plato 1990, 321-323).

    Plato's warning about the relation between techne and ethics is even more valid in an age when technology can cause far more damage, far more quickly, than was imaginable to the ancient Greeks. The seductive power of technology promises war on the cheap, cheap both in blood and in treasure, and, even more importantly, it holds out the possibility of a war purified of all moral tragedy. Technology perpetually threatens to co-opt ethics. Efficient means tend to become ends in themselves through the technological imperative, in which it becomes perceived as morally permissible to use a tool merely because we have it (often by means of the fallacious argument that if we don't use it someone else will). Or the very ease of striking a target becomes the rationale for doing so; the technology determines what counts as a legitimate military target rather than vice versa.

    The allure of the technocratic ideal reverses Plato's warning by promising that ethics can be made into a field of expert knowledge, circumventing the difficult process of moral deliberation and judgment. The fantasy of robot soldiers is but the extreme of all of these trends; here moral choice is taken out of the hands of human soldiers and indeed of humans altogether, and the technocratic expert is the technology itself, the machine making accurate moral choices. We have argued here that technology can never eliminate the challenge of difficult moral choices and moral dilemmas, though it is in the very nature of technology to continually tempt us to think it can do so. This dangerous illusion results in inappropriately low thresholds for the decision to go to war, a failure to engage in moral deliberation on such tricky moral issues as targeted assassination, and the paradox of pushing us into even greater moral wrongs such as torture in order to provide the precise intelligence needed for technology to be successful.

    Techne is most dangerous when it is sold to policymakers and the public with the promise of a perfectly just war due to modern intelligence and smart weaponry. But moral judgment will always be difficult and controversial in all circumstances, and above all in war, where the cost in human life and welfare is so high and where collateral damage is inevitable. Technology has great potential to make war less destructive and to avoid harming innocent bystanders. Yet technology can never be a substitute for ethics itself; the decision to go to war, and the means of fighting war, will always belong in human hands.

References

Adas, Michael (2006) Dominance by design: technological imperatives and America's civilizing mission (Cambridge, Massachusetts: Belknap Press of Harvard University Press)
Agence France Press (2006) Confession that formed the base for invasion of Iraq was gathered under torture, 27 October, <http://www.commondreams.org/headlines06/1027-04.htm>, accessed 23 January 2009
Aristotle (2002) The Nicomachean ethics, transl Saul Broadie and C Rowe (Oxford: Oxford University Press)
Arkin, Ronald (2007) Governing legal behavior: embedding ethics in a hybrid deliberative/reactive robot architecture, GVU Technical Report GIT-GVU-07-11, College of Computing, Georgia Tech
Asaro, Peter (2006) What should we want from a robot ethic?, International Review of Information Ethics, 6:2, 9-16
Augustine (1982) On the free choice of will, transl Thomas Williams (New York: Hackett)
Beiriger, Charles (1998) Churchill, munitions, and mechanical warfare (Ann Arbor: University of Michigan Press)
Bellavia, David and John Bruning (2007) House to house (New York: Free Press)
Boot, Max (2003) The new American way of war, Foreign Affairs, 82:4, 41-58
Bosquet, Antoine and Michael Dwyer (2009) Scientific way of warfare: order and chaos on the battlefields of modernity (New York: Columbia University Press)
Carafano, James and Andrew Gudgel (2007) The Pentagon's robots: arming the future, Backgrounder, 19 December 2007, 53-61
CBS News (2003) Today's bombs smarter, cheaper, 25 March 2003
Clarke, Roger (1993) Asimov's laws of robotics: implications for information technology, IEEE Computer, December 1993
Clauswitz, Carl von (1980) Von krieg (Bonn: Dummler Press)
Coward, Martin (2009) Urbicide: the politics of urban destruction (New York: Routledge)
Damasio, Antonio (2005) Descartes' error: emotion, reason, and the human brain (New York: Penguin)
Daponte, Beth Osborne (1993) A case study in estimating casualties from war and its aftermath: the 1991 Persian Gulf War, Medicine & Global Survival, 3:2, <http://www.ippnw.org/Resources/MGS/PSRQV3N2Daponte.html>, accessed 24 January 2009
Flemming, Nic (2008) Robot wars will be a reality within 10 years, Daily Telegraph, 27 February 2008
Ghamari-Tabrizi, Sharon (2005) The worlds of Herman Kahn (Cambridge, Massachusetts: Harvard University Press)
Graham, Gordon (1997) Ethics and international relations (Cambridge, United Kingdom: Blackwell)
Heidegger, Martin (1993) The question concerning technology in David Krell (ed) Martin Heidegger basic writings (San Francisco: Harper), 307-342
Heidegger, Martin (1998) Hegel and the Greeks in Pathmarks (Cambridge, UK: Cambridge University Press)
Holland III, Edward (1992) Fighting with a conscience: the effects of an American sense of morality in the evolution of strategic bombing campaigns, thesis presented to the US Air Force School of Advanced Airpower Studies, Maxwell AFB, Alabama, May 1992, <http://research.maxwell.af.mil/papers/saas/holland.pdf>, accessed 13 December
Kaag, John (2008) Another question concerning technology: the ethical implications of homeland defense and security technologies, Homeland Security Affairs, 4:1
Kahn, Herman (1960) On thermonuclear war (Princeton, New Jersey: Princeton University Press)
Mackie, J (1977) Ethics: inventing right and wrong (New York: Penguin)
Moravec, Hans (2000) Robot: from mere machine to transcendent mind (New York: Oxford University Press)
Noble, David (1999) The religion of technology (New York: Penguin Books)
Nussbaum, Martha (2001) The fragility of goodness (Cambridge, UK: Cambridge University Press)
Pike, John (2009) Coming to the battlefield: stone-cold robot killers, Washington Post, 4 January 2009
Plato (1990) Protagoras, transl Walter Lamb (London: Loeb Classics)
Ramsey, Paul (2002) The just war: force and political responsibility (New York: Rowman & Littlefield)
Russell, Frederick (1977) The just war in the Middle Ages (Cambridge, UK: Cambridge University Press)
Russell, Frederick (1987) Love and hate in medieval warfare: the contribution of Saint Augustine, Nottingham Medieval Studies, 31, 108-124
Schiller, Friedrich (2004) Aesthetic education of man, transl R Snell (New York: Courier Publications)
Singer, Peter (2009) Wired for war: the robotics revolution and conflict in the 21st century (New York: Penguin Press)
Soloman, Lewis D (2007) Paul D Wolfowitz: visionary intellectual, policymaker and strategist (New York: Greenwood)
Spanos, William (1993) Heidegger and criticism (Minneapolis: University of Minnesota Press)
Sparrow, Robert (2007) Killer robots, Journal of Applied Philosophy, 24:1, 62-77
US Department of Defense (2002) Deputy Secretary Wolfowitz's interview with the New York Times, news transcript, 7 January, <http://www.defenselink.mil/transcripts/transcript.aspx?transcriptid=2039>, accessed 3 January 2009
Wallach, Wendell and Colin Allen (2008) Moral machines (New York: Oxford University Press)
Watts, Barry (2004) Clauswitzian friction and future war (Washington: Institute for National Strategic Studies)
Weiner, Tim (2005) New model army soldier rolls closer to battle, New York Times, 16 February 2005
Wright, Evan (2008) Generation kill (New York: Berkley Caliber)
