
Labor, Capital, and the Morality of Emerging Technologies: Public Attitudes and Autonomous Weapon Systems

Michael C. Horowitz

Associate Professor, Political Science and Associate Director, Perry World House

University of Pennsylvania

August 2017

DRAFT: Do not cite without permission

Abstract: How do countries make decisions about the size and shape of their militaries? This simple question animates decades of research in several areas of political science and international relations. Countries have especially difficult decisions to make when it comes to deciding about the integration of new military technologies, especially those that could prove disruptive to high-status organizational subunits or that could prove controversial with the broader public. This paper theorizes about two facets that might influence the choices countries make about capital and labor – ethical and moral concerns, as well as the challenge of maintaining control in a world of increasing technological sophistication. Contrasting a logic of effectiveness with a logic of morality in evaluating policies, the paper presents results from a US public survey experiment that explores public willingness to deploy autonomous weapon systems (AWS) or US military personnel to protect civilians during a civil war. It shows that while logics of effectiveness do significantly drive attitudes, moral reasoning appears to play a role as well. Evaluating those respondents who strongly prefer deploying US military personnel even when autonomous weapon systems offer superior battlefield performance, and vice versa, thus shows the way moral mandates shape policy attitudes for some segments of the public. This issue is therefore larger than just autonomous weapons, or even questions of how countries construct their militaries. Instead, unpacking the logic of effectiveness and the logic of morality can help shed light on many domestic political issues as well.


Introduction

How do countries make decisions about the size and shape of their militaries? This simple question animates decades of research in several areas of political science and international relations. Countries have especially difficult decisions to make when it comes to deciding about the integration of new military technologies, especially those that could prove disruptive to high-status organizational subunits or that could prove controversial with the broader public (Horowitz 2010).

Decisions about the construction of military forces often involve a tradeoff between labor and capital. In a world of limited budgets, at the margins, countries choose between investment strategies that prioritize the size of the military in terms of personnel and strategies that prioritize investing in equipment such as tanks, planes, and ships (Caverley 2014; Gartzke 2001; Saunders 2011).

In the context of debates about autonomous weapon systems (AWS), this paper theorizes about two facets that might influence the choices countries make about capital and labor – ethical and moral concerns, as well as the challenge of maintaining control in a world of increasing technological sophistication. In doing so, it tests two essential theories about what drives attitudes about military technologies. In general, scholars believe that policymaking and the attitudes of the public tend to focus on rational assessments of effectiveness in evaluating military technologies. Countries choose to employ military technologies that they believe they need to fight and win necessary conflicts (whether domestic or international). Often, because substitutes exist and a particular technology is not strictly necessary, countries might agree to follow international norms against the development, deployment, or use of certain systems (Morrow 2007). Survey evidence from the public shows, however, that logics of effectiveness govern attitudes. Press et al. (2013) show, for example, that while a nuclear taboo may exist in theory, the American public is willing to employ nuclear weapons when it believes doing so is necessary for US national security. If this is the case, then one would expect attitudes about AWS to vary based primarily on perceptions of their effectiveness relative to other systems.

Second, “moral logic” approaches evaluate policies from a prima facie perspective, evaluating how a policy fits with norms of appropriateness (March and Olsen 1989) and other more normative considerations (Finnemore and Sikkink 2005). Attitudes driven by “moral mandates” (Skitka 2002, 589; Ryan 2014) can cause people to evaluate policies not based on direct assessments of consequentialist costs and benefits, but on morality.

Over the last two decades, the international community has agreed to ban several military technologies, including land mines, cluster munitions, and blinding lasers.1 Part of the argument for their prohibition in each case involved ethical and moral claims, especially regarding their indiscriminate effects. Today, there is an international Campaign to Stop Killer Robots that argues autonomous weapon systems raise significant moral and ethical challenges – challenges that mean their development and use should be prohibited even if they might prove effective on the battlefield (Human Rights Watch 2014; Garcia 2014, 2015).

1 The venues enacting these regulations vary – from the Convention on Certain Conventional Weapons (CCW) for blinding lasers to the Ottawa Convention for landmines. For the purposes of this paper, the critical thing is that these regulatory efforts occurred, rather than the specific processes.

This paper tests these logics using a US public survey experiment that explores public willingness to deploy autonomous weapon systems or US military personnel to protect civilians during a civil war, varying the relative effectiveness of each system across experimental conditions. It shows that while logics of effectiveness do significantly drive attitudes, moral reasoning appears to play a role as well. Those who view autonomous weapon systems in an especially positive moral light are willing to deploy AWS even when they are less effective than the alternatives, while those who view AWS in a negative moral light oppose their deployment even when it could lead to improved direct policy outcomes. Evaluating those respondents who strongly prefer deploying US military personnel even when autonomous weapon systems offer superior battlefield performance, and vice versa, thus shows the way moral mandates shape policy attitudes for some segments of the public. This issue is therefore larger than just autonomous weapons, or even questions of how countries construct their militaries. Instead, unpacking the logic of effectiveness and the logic of morality can help shed light on domestic political issues as well.

The paper proceeds as follows. After the introduction, it lays out the basic contrast between labor and capital that many argue governs decisions about the size and shape of national militaries. It then introduces the difference between consequential assessments and moral mandates, describing how these might influence attitudes about autonomous weapon systems. After describing the research design and the survey experiment, the paper presents initial results, along with robustness tests and limitations. The conclusion describes the implications for both studying militaries in general and debates about autonomous weapon systems. That logics of effectiveness primarily drive attitudes about autonomous weapon systems means public support for a prohibition will likely depend on perceptions of their military utility. However, that a significant segment of the public evaluates autonomous weapon systems through more of a moral lens suggests the public may view autonomous weapon systems differently than regular military technologies.

How Militaries Choose Between Labor and Capital

How countries decide, at the margins, whether to invest in people or equipment to build their militaries is a critical question for politics. Gartzke (2001, 467) argues that wealthier countries tend to substitute capital for labor when they construct their militaries and make choices about the provision of security. Especially after the Industrial Revolution, capital endowments allow countries to invest in equipment such as ships, planes, and tanks that reduce the risk of casualties to their own forces and are also perceived to increase military effectiveness. Countries with high labor endowments, however, are somewhat less likely to pursue capital intensive strategies (Gartzke 2001).


Research by Caverley (2014) suggests that democracies, in particular, are more likely to pursue capital-intensive strategies when it comes to constructing their military forces, especially as economic inequality increases. Capital-intensive militaries are less likely to pursue conscription as a recruitment strategy, because the relative need for labor declines. Therefore, the median voter is more likely to support a capital-intensive investment strategy, leading to a military where the human costs are borne by a smaller segment of the population. Caverley’s argument is consistent with a segment of research in international relations on military systems arguing that democracies, fearing the public opinion costs of casualties, focus on capital rather than labor in designing their militaries (Merom 2003, 21-22; Kober 2008). In contrast, Saunders and Sechser (2010) find that democracies are actually not significantly more likely than non-democracies to invest in tanks or fighter aircraft – two indicators of a capital-intensive military.

One of the ultimate examples of capital over labor in military forces would be the development of autonomous weapon systems.2 This paper defines autonomous weapon systems (AWS) or lethal autonomous weapon systems (LAWS) as those that “once activated, are designed to select and engage targets not previously designated by a human” (Horowitz 2016a, 27).3 Another way to define autonomous weapon systems is as weapon systems that are beyond “meaningful human control” or “appropriate human judgment” (Roff 2016; Scharre and Horowitz 2015).

These systems differ from uninhabited aerial vehicles (UAVs), or drones, such as the MQ-9 Reaper because drones are remotely piloted. One incentive for leaders and militaries to use drones is that it removes their troops from harm’s way, which is both militarily useful and may avoid the public opinion costs that come from military casualties (Horowitz et al. 2016; Kreps 2014). At present, while reducing the risk of direct military casualties, the use of drones in complicated missions by countries such as the United States generates labor costs similar to flying inhabited aircraft. A pilot is still necessary, as are repair crews for the airframe, and intelligence personnel to process data.

Autonomous weapon systems, in contrast, could potentially allow militaries to significantly reduce labor costs. Even if there are humans “on the loop” monitoring a machine autonomous system, for example, each individual system might not need its own pilot. One person could potentially oversee many systems. Moreover, coordination between autonomous systems in swarms, or preprogrammed systems, could further reduce the need for labor. This could be attractive to democracies that seek to invest more in capital, consistent with recent trends, as well as to autocracies, which generally distrust their populations to begin with, meaning autonomous systems can help them increase their direct control over the use of military force (Horowitz Forthcoming).

2 For ease of understanding, autonomous weapon systems here are considered in tandem with lethal autonomous weapon systems. Further research could pursue how attitudes might change if machine autonomous systems target other machines, versus people.

3 This builds on the Department of Defense definition in Directive 3000.09 (Department of Defense 2012). Note that there is a larger debate over how to define autonomous weapon systems, both in the academic world and in the international community. Resolving that debate is beyond the scope of this paper. What matters is that respondents in the survey – and readers – understand the essence of what is under discussion.

Given these possibilities, one would imagine that attitudes concerning autonomous weapon systems should vary based on a logic of effectiveness. The more effective they are perceived to be as military systems, the greater support for their use should be.

Hypothesis 1: Support for the deployment of autonomous weapon systems should increase as their effectiveness increases.

It is also plausible that autonomous weapons able to perform just as well as soldiers on the battlefield would be preferable. They could reduce labor costs, after all, and as with drones, eliminate the need to deploy military personnel onto the battlefield. Thus, even equivalent performance might lead to incentives to develop and use autonomous weapon systems. This kind of logic, however, is more likely to appeal to those working within the defense establishment, who worry about these tradeoffs more directly. It is unlikely to resonate as much with the public.

Hypothesis 2: Support for the deployment of autonomous weapon systems should be higher than support for the deployment of military personnel if their performance is equivalent.

The Logic of Morality

How countries determine the size and shape of their military also relates to questions concerning how the public, and a country, weigh moral versus pragmatic factors when making choices. Moral choices in general play a critical role in shaping individual identity (Rokeach 1973; Skitka 2002, 589).

Morality and norms also influence how countries evaluate the use of force. For example, in debates about the appropriate counterinsurgency strategy for defeating al Qaeda in Iraq and other insurgent groups following the 2003 invasion, one policy option not on the table was the mass killing of civilians to quell public unrest. Such an option so contradicts American values that it was never seriously considered. Values set the boundaries of plausible policy options in a regular decision-making process.4

Research by Skitka and a series of co-authors (Skitka 2002; Skitka and Houston 2001; Skitka and Mullen 2002b) shows that, on some policy issues, moral positions drive individual attitudes, and that decisions made on the basis of moral positions, or what Skitka and others call moral mandates, lead to different kinds of reasoning than decisions not primarily grounded in moral judgments (Skitka et al. 2005, 895).

4 The consideration of options might differ if a contingency was considered existential, in that more options would likely be placed on the table and considered.


Due to the need for private psychological consistency and a desire for a strong public reputation, views held for moral reasons are especially strongly felt. As Skitka (2002, 588) writes:

“Similar to Sir Thomas More (who preferred to be beheaded rather than sanction divorce), people value the self-respect and the self-satisfaction that comes with living up to and defending their internalized moral standards and often will defend their moral positions even in the face of extreme costs for doing so (Bandura 1986).”

The commitment to “terminal values” (Skitka 2002, 589) leads to moral mandates for people to evaluate an issue through a moral lens. When that occurs, research in psychology demonstrates that people become much more likely to accept procedural unfairness or even negative outcomes in the interest of pursuing what they view as justice. Essentially, “attitudes and behaviors that the perceiver believes are morally mandated will always be seen by the perceiver as justified and for the greater good” (Skitka and Mullen 2002a, 37).

When issues are moralized, compromise and thinking about the greater good become more difficult, except when upholding the moral position is itself seen as the course most likely to lead to better outcomes. Ryan’s research on moral conviction and political attitudes shows, for example, that moral convictions about the distribution of benefits can lead groups to refuse to accept monetary benefits for themselves if a group they are morally opposed to would also benefit (Ryan 2014, Chapter 2).

In evaluating a policy option, one can imagine people making choices in many ways. A moralized decision-making process, i.e. one based on a moral mandate, primarily evaluates a policy option based on whether it comports with the moral and ethical values of a person. This is related to a Kantian, deontological view of policies (Kant 1996). For example, believing that abortion should be illegal because it is murder, and murder is wrong (whether for religious or other reasons), represents a clear moral judgment. For this hypothetical person, the impact of banning abortion on the number of abortions, or other policy outcomes, is less important than the direct moral argument that abortion represents murder. A proponent of legal abortion who believes in its legality because of the right of women to make choices about their bodies, regardless of the consequences, is similarly engaged in a decision-making logic based primarily in moral reasoning.

Existing research suggests that symbolic considerations generally trump self-interest in driving assessments of policies (Sears et al. 1979). That relationship can reverse for particularly salient policies, such as conscription, since that influences an individual’s probability of being selected into the military and potentially facing the risk of death in conflict (Horowitz and Levendusky 2011). This means that, unless someone is directly impacted by choices concerning the types of military systems being employed, they should make judgments about those military systems based on broader symbolic and ideological issues, rather than self-interest.5

Moral and ethical arguments about autonomous weapons generally focus on how the use of autonomous weapons, by removing humans from the decision to engage targets, creates accountability gaps in the use of force and undermines human dignity. The UN Institute for Disarmament Research, describing this line of reasoning, states that

“Perhaps at the core of the concerns raised about fully autonomous weapons, there is something less definitive than law and even less quantifiable than the dictates of public conscience. This something might be described as an instinctual revulsion against the idea of machines “deciding” to kill humans” (UNIDIR 2015, 7).

Robert Sparrow (2007) argues that autonomous weapon systems raise unique ethical challenges because the question of who to hold accountable if an autonomous weapon system malfunctions cannot be solved. Human Rights Watch (2012), among others, states that autonomous weapon systems raise moral concerns because they would be unable to follow principles of distinction, proportionality, and necessity, making them illegal under international law and unethical. Note the conflation between morality and effectiveness in this logic – some criticisms of autonomous weapon systems use assumptions about their effectiveness, or lack thereof, to then make moral arguments, or legal arguments framed in moral and ethical ways.

It is also possible, of course, that there is a moral case to be made for autonomous weapon systems. If they are more accurate, for example, than other weapon systems, they might function to reduce harm in war, making them ethically desirable (Anderson and Waxman 2013, 16; Anderson et al. 2014). Ronald Arkin, a roboticist at the Georgia Institute of Technology, argues that because they will not get tired or angry, autonomous weapon systems might reduce violations of the Law of War by decreasing the risk of war crimes (Arkin 2013, 2-3).

That being said, one would imagine that those motivated primarily by moral concerns would focus more on the downsides of autonomous weapon systems.

Hypothesis 3: Public attitudes about autonomous weapon systems among those that evaluate autonomous weapon systems through a moral lens will reflect the primacy of moral considerations over effectiveness considerations.

The Logic of Control

Questions of whether to invest in labor or capital generally depend, in the existing literature, on questions of regime type, factor endowments, and wealth. This leaves out a macro consideration, however, that could influence attitudes about the size and shape of a military – control. Control of a military is often taken for granted in democracies, due to the lower risk of military-led coups or disobedience, but plays a significant role in the literature on the construction of autocratic militaries.

5 This holds constant, of course, media decisions about what to report and how to report it, which can have a significant impact on public attitudes (Baum and Groeling 2010).

Coup-proofing occurs when countries take actions that deliberately weaken the effectiveness of their military forces, particularly in confronting external security threats. Strategies such as promoting people based on loyalty to the regime rather than skill, minimizing training opportunities for the military, or deploying forces in ways most relevant for preventing internal threats to a regime all represent coup-proofing strategies (Talmadge 2015; Quinlivan 1999; Pilster and Böhmelt 2011).6 What these strategies all reflect is a desire by the leader for control over the military, because the leader views the military as a potential threat to the regime.

The potential development of weapon systems that offer improved military effectiveness, but at the potential cost of reduced control, highlights a potential linkage between the literature on labor versus capital and the literature on coup-proofing. Whether due to wealth, the preferences of the median voter, or the intersection of the two factors, wealthy democracies have increasingly shifted toward developing capital-intensive militaries over the last few generations.

What if further shifts toward capital intensity increased the risk that a leader might lose control of some portion of their military forces? This is the question that autonomous weapon systems potentially raise, according to critics (Human Rights Watch 2014; UNIDIR 2015). Because the decision to select and engage a target is made by a machine, rather than a person, autonomous weapon systems take the most important part of the kill chain – the killing – out of human hands. Due to the complexity of these types of decisions, however, the deployment of autonomous weapon systems could increase the potential for losses of control. Accidents might become more likely, for example, if autonomous weapon systems are deployed outside the context of their programming (Scharre and Horowitz 2015). The complexity of the systems themselves could make their logic less predictable, making operational errors inherently more likely. This is especially true because the fail-safe of a human authorizing each attack might not exist (Scharre 2016, 5; Carvin).

Scharre (2016, 5) states that

“From an operational standpoint, autonomous weapons pose a novel risk of mass fratricide, with large numbers of weapons turning on friendly forces. This could be because of hacking, enemy behavioral manipulation, unexpected interactions with the environment, or simple malfunctions or software errors. Moreover, as the complexity of the system increases, it becomes increasingly difficult to verify the system’s behavior under all possible conditions; the number of potential interactions within the system and with its environment is simply too large.”

6 There has been a recent growth in research on coup-proofing. See, for example, De Bruin 2016; Piplani and Talmadge 2016; Frantz and Stein Forthcoming; Bell 2016; Pilster and Böhmelt 2012.

Fear of accidents and the consequences of those accidents, e.g. a lack of control, could therefore provide incentives for militaries and the public to oppose the development of autonomous weapon systems, and instead pursue more labor-intensive strategies. What if, however, doing so put their soldiers at greater risk? If militaries have the option of deploying autonomous weapon systems that could be more effective, but also are more likely to fail, would they therefore prefer more labor-intensive strategies that are more reliable, even if soldiers are more likely to be in harm’s way? This question gets to the core of debates about how countries determine the size and shape of their militaries.

Hypothesis 4: Support for using autonomous weapon systems will decline as they are perceived as reducing human control over the use of force.

Why Study Public Opinion and Autonomous Weapon Systems

The section above raises the question of why we should study public attitudes about weapon systems. After all, a skeptic could argue that decisions about procurement and the size and shape of militaries are made by political leaders and bureaucrats. There are several reasons, however, why studying public attitudes is essential to understand the construction of modern militaries, especially when it comes to the potential integration of new capabilities such as autonomous weapon systems.

Theoretically, public opinion represents the microfoundations of public policy, especially in more democratic systems where the public can hold elites accountable for policy choices through elections (Tomz and Weeks 2013). Moreover, Caverley’s research about the median voter and the use of force in democracies suggests that voter preferences play a role in shaping the way that countries design their military forces. Even if there is not a direct relationship between public attitudes and the design of militaries, public attitudes certainly have relevance when it comes to the use of particular technologies. Recent research suggests, for example, how the context of the use of force shapes public attitudes about nuclear weapons (Press et al. 2013).

Studying public attitudes about autonomous weapon systems also has unique relevance. The growth of debates about the potential legal, ethical, and military issues surrounding them suggests the importance of the issue, overall. Autonomous weapons systems have been the subject of debates in the United Nations for the past three years in the Convention on Certain Conventional Weapons (CCW). This makes them a crucial topic to understand in and of themselves, but the character of that dialogue also illustrates the relevance of public opinion.

Public attitudes about autonomous weapons also may influence their legitimacy from the perspective of international law. The Martens Clause, part of Hague Convention IV (1907), states


“Until a more complete code of the laws of war has been issued, the High Contracting Parties deem it expedient to declare that, in cases not included in the Regulations adopted by them, the inhabitants and the belligerents remain under the protection and the rule of the principles of the law of nations, as they result from the usages established among civilized peoples, from the laws of humanity, and the dictates of the public conscience.”7

If the public around the world demonstrates significant opposition to LAWS, that could mean they violate the public conscience provision of the Martens Clause and are therefore illegal (Human Rights Watch and Harvard Law School’s International Human Rights Clinic 2012, 24; Human Rights Watch 2014, 16-17; Reaching Critical Will 2013).8 Therefore, studying public attitudes about autonomous weapons is an important task for research.

The existence of the global Campaign to Stop Killer Robots, like previous NGO campaigns to ban landmines, cluster munitions, and blinding lasers, also shows the clear relevance of studying public attitudes in the autonomous weapons arena. NGO campaigns are more likely to succeed when they can mobilize broad segments of the population, because doing so puts more pressure on governments to support restrictions or bans (Finnemore 1996; Keck and Sikkink 1998).

This paper also contributes to existing research on public opinion and autonomous weapons. Carpenter’s (2014) survey showed that 55% of the American public opposed developing autonomous weapon systems, and research by the Open Roboethics Initiative, with a more global sample, similarly highlighted public opposition to autonomous weapon systems. Horowitz (2016b), in contrast, shows that public attitudes about autonomous weapons vary according to the context of their use. Baseline public opposition seems to exist, but the public becomes much more supportive of developing autonomous weapon systems when the goal is protecting US military forces or when other countries develop them first.

Autonomous weapons have also entered the broader public discourse about artificial intelligence and its impact on society. For example, as part of its comprehensive review of developments in artificial intelligence, which included several public forums hosted around the country, the Office of Science and Technology Policy in President Obama’s White House specifically examined autonomous weapon systems. It argued that the issue is important because “Given advances in military technology and artificial intelligence more broadly, scientists, strategists, and military experts all agree that the future of LAWS is difficult to predict and the pace of change is rapid” (Office of Science and Technology Policy 2016, 38).

7 Also quoted in Horowitz (2016b).

8 While the applicability of the Martens Clause is disputed by legal scholars, what matters for the purposes of this paper is that the argument exists at all. The actual applicability of the Martens Clause is beyond the scope of this paper (Evans 2012; Schmitt 2013).

Research Design


I test the hypotheses using data gathered through a module fielded as part of the 2014 Cooperative Congressional Election Study (Schaffner and Ansolabehere 2015). The survey was fielded to 1,000 individuals in two waves – before and after the November 2014 midterm elections in the United States. The dependent variable comes from a question that describes a hypothetical civil war and US military intervention designed to protect the population from insurgents. The question reads:

“Suppose a country were in the midst of a civil war and contains a civilian population being threatened by insurgents. The national government is unable to handle the situation on its own. The President of the United States makes the decision that the American military should intervene to help resolve the crisis and protect the population from insurgents.”

Participants were then told that the United States could either send US military personnel or a “US military force of autonomous weapons systems. Autonomous weapons systems are robotic systems that, once activated, can independently make the decision to target and fire weapons without a human involved.” It is possible that describing the systems as “autonomous weapons systems,” rather than using terms such as “killer robots” that are more evocative of danger, might lead to different results. Research by Carpenter (2014), however, suggests that the specific phrasing does not significantly influence public attitudes in this arena.

Participants were randomly assigned into three conditions. In the first condition (“humans better”), participants were told that using autonomous weapons systems would be more likely to lead to civilian casualties than if US military personnel were deployed. In the second condition (“robots better”), participants were told that US military forces would be more likely to inflict civilian casualties on the population they were attempting to protect than if autonomous weapons were deployed.

In the third condition (“equivalence”), participants were told that the probability of civilian casualties would be equal regardless of whether US military personnel or autonomous weapon systems were employed. In the third condition, the respondents were further split in half to randomize word ordering. Half of the respondents were told that US military personnel would be just as likely to inflict civilian casualties as US autonomous weapon systems. The other half were told that US autonomous weapon systems were just as likely to inflict civilian casualties as US military personnel. This ensures that word ordering in the “equivalence” condition did not bias the results.
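As a concrete illustration of the assignment scheme just described, the sketch below assigns hypothetical respondents to the three conditions and then randomizes word order within the equivalence condition. The column and condition labels are placeholders, not the actual CCES variable names.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2014)  # fixed seed so the sketch is reproducible

def assign_conditions(df: pd.DataFrame) -> pd.DataFrame:
    """Assign each respondent to one of the three experimental conditions,
    then randomize word order within the 'equivalence' condition."""
    df = df.copy()
    df["condition"] = rng.choice(
        ["humans_better", "robots_better", "equivalence"], size=len(df)
    )
    equal = df["condition"] == "equivalence"
    # Half of the equivalence group sees personnel named first, half sees AWS named first.
    df.loc[equal, "word_order"] = rng.choice(
        ["personnel_first", "aws_first"], size=int(equal.sum())
    )
    return df

respondents = assign_conditions(pd.DataFrame({"respondent_id": range(1000)}))
print(respondents["condition"].value_counts())
```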

In each condition, participants were asked on a 1-5 scale whether they would rather send US military personnel or autonomous weapon systems. The answer format in each condition was as follows:

1 Strongly supportive of using autonomous weapons systems
2 Somewhat supportive of using autonomous weapons systems
3 Neither more nor less supportive of military personnel or autonomous weapons systems
4 Somewhat supportive of using US military personnel
5 Strongly supportive of using US military personnel

Results

To begin, I examine the extent to which the experimental conditions themselves influence the relative level of approval for deploying US military AWS versus US military personnel. At a basic level, if the logic of effectiveness hypothesis holds, the better the relative effectiveness of US military AWS at the mission in question, and vice versa, the higher approval for their deployment should rise. Figure 1 depicts the proportion of respondents that fell into each category (preferring to deploy AWS, no preference, or preferring to deploy personnel) across each experimental condition, along with a 95% smoothed confidence interval.
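The quantities plotted in Figure 1 could be computed along the following lines; this is a sketch that assumes a respondent-level DataFrame with hypothetical column names and uses Wilson score intervals as one reasonable way to obtain the 95% bounds.

```python
import pandas as pd
from statsmodels.stats.proportion import proportion_confint

def support_shares(df: pd.DataFrame) -> pd.DataFrame:
    """Share of respondents in each preference category, by condition, with 95% CIs.
    'preference' collapses the 5-point item into prefer AWS / no preference / prefer personnel."""
    rows = []
    for (cond, pref), grp in df.groupby(["condition", "preference"]):
        n_condition = int((df["condition"] == cond).sum())
        k = len(grp)
        lo, hi = proportion_confint(k, n_condition, alpha=0.05, method="wilson")
        rows.append({"condition": cond, "preference": pref,
                     "share": k / n_condition, "ci_low": lo, "ci_high": hi})
    return pd.DataFrame(rows)
```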

Figure 1: Preferences for Deploying AWS or Personnel, by Experimental Condition

The results provide initial support for hypothesis 1 about the logic of effectiveness. Support for the deployment of autonomous weapon systems rises significantly as their described effectiveness grows. Only 27% of respondents strongly or somewhat supported deploying autonomous weapon systems when they were more likely to inflict civilian casualties. Since protecting civilians was the actual mission, this means they were definitionally less effective.

[Figure 1 is a bar chart: the y-axis shows the percentage of respondents that agree (roughly 20% to 50%), with bars for Prefer Deploying AWS, No Preference, and Prefer Deploying Military Personnel in the Robots Better, Humans Better, and Equal conditions.]


Support for deploying autonomous weapon systems rises by 16 points, to 43%, when military personnel are more likely to inflict civilian casualties. While still short of a majority, 43% support is higher than support for deploying US military personnel or the “no preference” option in any of the experimental conditions, making it the most strongly supported belief in the experiment.

The results provide more limited support, if any, to hypothesis 2. When autonomous weapon systems and military personnel are described as equivalent in performance, support for deploying them grows from 27% to 33%, but the difference is not statistically significant, as the overlapping confidence intervals in Figure 1 highlight.

A potential challenge is that respondents might not have accepted the scenario. Beliefs about autonomous weapon systems are likely unstable, because the issue is not salient (Zaller 1992). The beliefs people do have, however, likely come from the frequent negative representations of autonomous weapon systems in movies and television. From TV shows such as Battlestar Galactica and Westworld to movies such as The Matrix and the original Terminator, popular media suggests armed, thinking robots will inevitably attempt to destroy humanity. Given these media representations, one might imagine that respondents receiving the “robots better” condition would be less willing to accept the treatment.

To test the effectiveness of the treatment, a manipulation check question, presented a few questions after the scenario, asked respondents “Would using US military personnel or a US military force of autonomous weapons systems be more likely to lead to civilian casualties?” If the treatments worked, respondents should give an answer that is consistent with their experimental condition, meaning those who received the “robots better” condition should give US military personnel as the answer, and vice versa. The results demonstrate the success of the experimental manipulation. T-tests show that those who received the “robots better” condition were significantly (p<0.01) less likely to believe using autonomous weapon systems would lead to more civilian casualties. Similarly, those who received the “humans better” condition were significantly (p<0.01) more likely to believe autonomous weapon systems would lead to more civilian casualties.
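A sketch of how that manipulation check can be run appears below, assuming the check item is coded so that higher values indicate a belief that AWS are more likely to cause civilian casualties; the column names are illustrative, not the actual survey codes.

```python
from scipy import stats

def manipulation_check(df):
    """Compare beliefs about civilian casualties across treatment conditions with t-tests."""
    belief = "aws_more_casualties"  # hypothetical column: higher = AWS seen as riskier for civilians

    robots_better = df.loc[df["condition"] == "robots_better", belief]
    not_robots = df.loc[df["condition"] != "robots_better", belief]
    humans_better = df.loc[df["condition"] == "humans_better", belief]
    not_humans = df.loc[df["condition"] != "humans_better", belief]

    # Respondents told AWS perform better should score lower on this belief,
    # and respondents told humans perform better should score higher.
    t_robots, p_robots = stats.ttest_ind(robots_better, not_robots, nan_policy="omit")
    t_humans, p_humans = stats.ttest_ind(humans_better, not_humans, nan_policy="omit")
    return {"robots_better": (t_robots, p_robots), "humans_better": (t_humans, p_humans)}
```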

A further test of hypotheses 1 and 2 comes from the post-election wave of the 2014 CCES, where respondents received the same scenario but were selected into different conditions. Respondents who had received the condition where US military personnel were superior to autonomous weapon systems received the treatment where autonomous weapon systems were superior, and vice versa. Respondents who had received the “equivalence” condition were randomly selected into either the “humans better” or “robots better” conditions.9 Table 1 below shows the percentage of respondents that selected each answer option, by condition, including both the pre-election and post-election CCES waves.

9 There is an assumption made that there was no carryover from the first experimental manipulation to the second manipulation, due to the delay of several weeks between the pre-election test and the post-election test (Jones and Kenward 2014; Tomz and Weeks 2013, 854). Moreover, given that most respondents received the opposite of the treatment they received previously, any carryover effects would be balanced. Finally, the effect of randomly splitting those that received the “equivalent” treatment into either the “robots better” or “humans better” conditions in the post-election survey should similarly balance out.

Table 1: Percentage Support for Deploying AWS or Personnel, by Experimental Condition (Pre and Post)

Condition: Robots Better (Pre-Election) | Humans Better (Pre-Election) | Equivalent Performance (Pre-Election) | Robots Better (Post-Election) | Humans Better (Post-Election)
Support Deploying AWS: 43% | 27% | 33% | 45% | 32%
No Preference: 33% | 36% | 42% | 37% | 36%
Support Deploying Military Personnel: 25% | 38% | 26% | 18% | 32%

The results again show the significant consequences of varying effectiveness on approval for deploying AWS, with effect sizes extremely similar to the first wave of the survey. Support for deploying AWS rises from 32% in the “humans better” condition in the post-election wave to 45% in the “robots better” condition. Additionally, just as in the pre-election wave, a manipulation test showed that respondents accurately interpreted the questions. Those receiving the “robots better” condition were significantly (p<0.01) more likely to support deploying autonomous weapon systems, while those who received the “humans better” condition were significantly (p<0.01) more supportive of deploying military personnel.

A final experimental manipulation asked as part of the post-election survey provides a second scenario useful for testing hypotheses 1 and 2. Respondents were biased by the time they answered this question, given the previous questions they had answered, but it does provide at least some additional insight. The final experimental manipulation differs from the scenario above in two crucial ways. First, the scenario involved a country threatening its neighbor and the deployment of the US military to protect the country being invaded. This changes the context from protecting civilians to a more classic military mission. Second, in both conditions, respondents were told that US military personnel or US autonomous weapon systems would be “equally likely to protect the country being threatened.” Thus, the effectiveness of the two options is held constant.

Respondents were randomly selected into one of two conditions. In the control condition, respondents received no information about the military of the country threatening to invade its neighbor, beyond the threat itself. In the treatment condition, respondents were told that “the threatening country had developed a force of autonomous weapons systems to conduct the attack.” Thus, the experiment manipulates adversary possession of autonomous weapon systems to test how that influences US public willingness to deploy US military AWS as opposed to US military personnel. Figure 2 below shows how support for deploying AWS versus military personnel varies across the treatment and control condition.


Figure 2: Preferences for Deploying AWS or Military Personnel Depending on Adversary Deployments

The results highlight additional facets related to the logic of effectiveness. Support for deploying AWS rises from 39% to 49% when respondents are told that the adversary threatening its neighbor is deploying autonomous weapon systems. These results suggest two potential causal mechanisms. First, fear of falling behind in an emerging technology area may shape attitudes – seeing another country deploy a new weapon system may make the public more interested in doing so as well. Second, the prospect of US soldiers being confronted with adversary robots on the battlefield may heighten concerns about US military casualties. Unfortunately, the survey lacks the ability to adjudicate between these mechanisms, but future surveys with specific tests of each could help unpack this issue further.

Testing the Logic of Morality

Several questions provide a basis to test the role of the logic of effectiveness versus the logic of morality in determining viewpoints about autonomous weapon systems. In a separate part of the survey, respondents were asked about their views on robotics and autonomous weapon systems in general. Combined with the common questions in the CCES module, which provide demographic and political information, Table 2 below presents the results from two probit models. In the first, the dependent variable is 1 if the respondent favored deploying AWS and 0 otherwise. In the second model, the dependent variable is 1 if the respondent favored deploying military personnel and 0 otherwise. Similarities and differences in the significance of the covariates across the two models thus help illustrate the key drivers of support for deploying AWS or military personnel.


The independent variables in Table 2, along with the logic for their inclusion, are:

General Variables

• Age: Since younger people might be more familiar with and supportive of new technology, including an age variable makes sense
• Male: Potential proxy for more support for the use of force
• Higher Education: Higher levels of education might generate more comfort with technology
• Support for Iraq War in 2003: Potential proxy for general hawkishness
• Support for deploying US troops to Iraq in 2014: Potential proxy for general hawkishness
• Democrat: Potential proxy for lower support for the use of force
• Prior Military Service: Potential proxy for knowledge of the use of force

Specific Variables

• Support for US Drone Strikes: Proxy for hawkishness and support of robotic uses of military force
• Personal Robotic Usage: Proxy for comfort with robotics
• Ability of Science to overcome problems: Proxy for confidence in emerging technologies
• How does the morality of autonomous weapons compare to non-autonomous weapons?: 5-point scale where 1 means the use of AWS is significantly more moral and 5 means AWS is significantly less moral
• Should the US not use AWS regardless of costs and benefits: 5-point scale where 1 means strongly agree and 5 means strongly disagree
• Robots better condition: Dummy variable that is 1 if the respondent received the treatment where AWS had higher performance than military personnel, and 0 otherwise
• Humans better condition: Dummy variable that is 1 if the respondent received the treatment where military personnel had higher performance than AWS, and 0 otherwise


Table 2: Probit Models of Support for Deploying AWS or Military Personnel

Variable | (1) DV: Support Using AWS | (2) DV: Support Using Military Personnel
Age | -0.0039 (0.0038) | -0.0003 (0.0039)
Male | -0.0063 (0.127) | 0.0736 (0.136)
College Education | 0.151 (0.115) | -0.0404 (0.115)
Opposed to Iraq War | 0.385*** (0.137) | -0.127 (0.135)
Support Sending Troops Back to Iraq | -0.132 (0.151) | 0.496*** (0.148)
Democrat | 0.00710 (0.131) | -0.0315 (0.138)
Prior Military Service | -0.154 (0.185) | 0.341* (0.180)
Support Drone Strikes | -0.172*** (0.0614) | -0.0705 (0.0517)
Personal Robot Usage | 0.414** (0.192) | -0.664*** (0.201)
View of Science | -0.0432 (0.0383) | 0.0350 (0.0396)
Relative Morality of AWS Use | -0.0971 (0.0626) | 0.211*** (0.0648)
Do Not Evaluate AWS as Cost-Benefit Issue | -0.192*** (0.0497) | 0.288*** (0.0524)
Condition: Military Personnel More Effective | -0.00025 (0.141) | 0.296** (0.141)
Condition: AWS More Effective | 0.467*** (0.140) | -0.148 (0.154)
Constant | 0.835** (0.377) | -2.244*** (0.404)
Observations | 944 | 944
R2 | .1066 | .1353
Log likelihood | -537.3 | -504.7

Coefficients with standard errors in parentheses. * p < 0.10, ** p < 0.05, *** p < 0.01
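The models in Table 2 could be estimated with standard probit routines; the sketch below shows the structure using the statsmodels formula interface, with placeholder variable names for the recoded CCES items rather than the actual codes, so it illustrates the specification rather than reproducing the exact estimates.

```python
import statsmodels.formula.api as smf

# Placeholder names for the recoded survey items described above.
COVARIATES = (
    "age + male + college + opposed_iraq_war + support_troops_iraq_2014 + democrat"
    " + prior_military_service + support_drone_strikes + personal_robot_use"
    " + view_of_science + aws_relative_morality + aws_not_cost_benefit"
    " + personnel_better_condition + aws_better_condition"
)

def fit_table2_models(df):
    """Fit the two probit models: support for deploying AWS, and support for deploying personnel."""
    m_aws = smf.probit("support_aws ~ " + COVARIATES, data=df).fit()
    m_personnel = smf.probit("support_personnel ~ " + COVARIATES, data=df).fit()
    return m_aws, m_personnel
```

Calling this function on the respondent-level file and printing each model's `.summary()` would produce output analogous to Table 2, given appropriately coded variables.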


The results in Table 2 highlight some of the characteristics of issues surrounding military technologies, particularly when they have low political salience, as well as providing some initial insights into hypothesis 3. Those opposed to the Iraq War were significantly more likely to support deploying AWS, but opposition to the war had no significant effect on support for deploying military personnel. The Iraq War finding likely results from casualty sensitivity, with those concerned about US military casualties thus becoming more likely to support a deployment that does not put US troops at risk. This would require follow-up research to tease out, however.

In contrast, those that supported sending US troops back to Iraq (to fight the Islamic State) around the 2014 midterm elections were significantly more likely to support deploying US military personnel. This perhaps illustrates that question as a proxy for hawkishness at the time, particularly since there was already a control for partisanship in the model. Prior military service is also correlated with more support for deploying US military personnel. In addition to hawkishness, this could highlight the way military personnel may trust options for using force that they have more experience with (Macdonald and Schneider 2016).

The variables specifically related to the use of military robotics and/or autonomous weapon systems provide context about the mechanisms driving the findings above, as well as evidence supporting hypothesis 3. The negative and significant coefficient for the drone strikes variable means those that support drone strikes are more likely to support deploying AWS – perhaps because they are already supportive of using military robotics. Personal experience with robotics, in particular, has a significant effect on both the probability of support for deploying AWS (in a positive direction) and the probability of support for deploying military personnel (in a negative direction), as Figure 3 below shows.


Figure 3: Impact of Personal Experience With Robotics On Support for Deploying AWS v. Military Personnel

As more people in societies around the world gain more experience with robotics, then, Figure 3 suggests that support for AWS overall may increase.

Two questions in the survey were designed to test hypothesis 3. The relative morality question asked participants whether they viewed the deployment of AWS as more or less moral than the use of other weapon systems. The question was asked at a point in the survey far removed from the main experiment, to avoid contamination. The statistical significance of the relative morality question in the military personnel model means, as one would expect based on hypothesis 3, that those who view AWS as more morally problematic were much more likely to support deploying military personnel.

The cost-benefit question asked respondents to consider whether they would oppose deploying AWS regardless of the particular benefits and costs. This question, though quite explicit, elicits a direct measure of the extent to which respondents think about autonomous weapon systems as a moral mandate. The significance of the findings across both models, and in different directions, provides direct support for hypothesis 3. Those who, in theory, viewed AWS not as an issue to deal with through cost-benefit analysis were significantly less likely to support deploying AWS and more likely to support deploying military personnel. Substantively, as a respondent moves from viewing AWS as a cost-benefit issue to not doing so, the probability they support deploying AWS drops from 52% to 24%. The predicted probability of supporting the deployment of military personnel, meanwhile, grows from 11% to 50% as respondents shift from embracing to rejecting cost-benefit analysis to evaluate AWS.
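Predicted probability contrasts like the 52/24 and 11/50 percent figures can be generated from the fitted probits; one common approach, sketched below with the hypothetical variable names used earlier, sets every respondent to each end of the moral-mandate item and averages the predicted probabilities. The paper does not specify the exact procedure (for example, covariates at means versus averaged over observations), so this is an assumption.

```python
def predicted_prob_contrast(model, df, item="aws_not_cost_benefit", low=1, high=5):
    """Average predicted probability with the moral-mandate item set to each end of its scale,
    holding all other covariates at their observed values."""
    p_low = model.predict(df.assign(**{item: low})).mean()
    p_high = model.predict(df.assign(**{item: high})).mean()
    return p_low, p_high

# Applied to the AWS and personnel models, this would yield contrasts comparable to the
# 52% vs. 24% and 11% vs. 50% figures reported above (under these coding assumptions).
```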

Moreover, there are two sets of respondents whose views are particularly interesting in light of the theories proposed above. First, there are respondents who prefer deploying US military personnel to autonomous weapon systems even when told that US military personnel would be more likely to kill civilians, i.e. be less effective at accomplishing the mission of protecting civilians. Second, there is the inverse – respondents who prefer deploying autonomous weapon systems even when told their use would make civilian casualties more likely. These are respondents who specifically advocate policy actions by the United States that they know will be less effective than alternatives they were presented. Understanding what motivates these respondents in particular is therefore an interesting question.

Understanding these respondents provides additional support for hypothesis 3 about the way viewing the AWS issue through a more moral lens leads to lower levels of support. Table 3 below contains two new models. The dependent variables focus specifically on those respondents that rejected the logic of effectiveness in the “robots better” and “humans better” treatment conditions. In the “robots better” model, the universe of cases is just those that received the “robots better” treatment, and the dependent variable is 1 if the respondent supported deploying military personnel and 0 otherwise.

In the “humans better” model, the universe is similarly just those that received the “humans better” treatment, but the dependent variable is 1 if the respondent supported deploying AWS despite being told they would be less effective, and 0 otherwise.
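As a rough illustration of how these restricted samples and dependent variables could be constructed, the sketch below again uses hypothetical column names (treatment, and support_choice coded 1-5 as in the appendix scale) rather than the actual replication variables; the logit call mirrors the covariates reported in Table 3 but is not the paper's own code.

```python
# Sketch of the Table 3 sample restrictions and dependent variables; column names
# are hypothetical. support_choice uses the 1-5 scale from the appendix
# (1-2 = supports AWS, 4-5 = supports military personnel).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("aws_survey.csv")  # hypothetical file of survey responses

# "Robots better" model: respondents told AWS would cause fewer civilian casualties;
# DV = 1 if they nonetheless supported deploying military personnel.
robots_better = df[df["treatment"] == "robots_better"].copy()
robots_better["rejects_effectiveness"] = (robots_better["support_choice"] >= 4).astype(int)

# "Humans better" model: respondents told AWS would cause more civilian casualties;
# DV = 1 if they nonetheless supported deploying AWS.
humans_better = df[df["treatment"] == "humans_better"].copy()
humans_better["rejects_effectiveness"] = (humans_better["support_choice"] <= 2).astype(int)

covariates = (
    "age + male + college + opposed_iraq + troops_iraq + democrat + prior_service"
    " + support_drones + robot_usage + view_science + relative_morality + not_cost_benefit"
)
for sample in (robots_better, humans_better):
    print(smf.logit(f"rejects_effectiveness ~ {covariates}", data=sample).fit().summary())
```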

Table 3: What Drives Rejecting The Logic Of Effectiveness?

                                             (1)                      (2)
                                             DV: Supports AWS         DV: Supports Personnel
                                             Deployment Even          Deployment Even
                                             Though Less Effective    Though Less Effective
                                             B/SE                     B/SE
Age                                          -0.0033 (0.0063)         0.0025 (0.0072)
Male                                         -0.251 (0.206)           -0.0003 (0.255)
College Education                            -0.141 (0.177)           0.140 (0.199)
Opposed to Iraq War                          0.118 (0.222)            -0.215 (0.234)
Support Sending Troops Back to Iraq          -0.0344 (0.285)          0.499** (0.254)
Democrat                                     0.153 (0.220)            0.0796 (0.264)
Prior Military Service                       0.425 (0.264)            0.479 (0.312)
Support Drone Strikes                        -0.105 (0.0964)          0.0297 (0.0927)
Personal Robot Usage                         0.242 (0.335)            -0.130 (0.383)
View of Science                              -0.0087 (0.0637)         0.0694 (0.0728)
Relative Morality of AWS Use                 -0.340*** (0.107)        0.175 (0.126)
Do Not Evaluate AWS as Cost-Benefit Issue    -0.166** (0.0797)        0.230*** (0.0835)
Constant                                     1.464** (0.640)          -2.679*** (0.768)
Observations                                 323                      304
R2                                           .1122                    .0862
Log likelihood                               -152.0                   -164.6

Standard errors in parentheses. * p < 0.10, ** p < 0.05, *** p < 0.01

The results seem to highlight the role that morality plays in influencing decision-making. For those who supported deploying AWS even though they were told it was less effective, the only significant correlates were believing AWS should be evaluated as a cost-benefit issue and viewing AWS as more moral than the alternatives. This potentially shows the role of moral mandates, though not in the way hypothesis 3 initially envisioned. Those who supported sending troops back to Iraq in 2014 and who believed AWS should not be deployed no matter the benefits were significantly more likely to support deploying military personnel despite knowing they would be less effective.

Limitations

One major limitation of the results at present is that they do not allow for a direct test of hypothesis 4 about control. This will have to await future research.

Several aspects of the core scenario presented above are specific in ways that limit the scope of the results. The mission is one of protecting civilians from insurgents; effects could vary if the mission instead involved attacking adversaries. The variation in performance between US military personnel and autonomous weapon systems is generated by describing their effectiveness in protecting civilians, and changing how that performance varies could also influence the results. Finally, a scenario tied to a specific country could change the results. The results above should therefore be interpreted with caution.


Conclusion

How militaries make choices about integrating new technologies, and the consequences of those choices, are critical questions for politics. The different logics people use to evaluate policies, especially whether they view issues in consequentialist or moral terms, can influence public attitudes, and potentially elite attitudes as well. Questions of ethics and morality, as well as control over military systems, provide an important theoretical addition to traditional debates about how militaries balance labor and capital in determining the size and shape of their armed forces. How actors evaluate emerging technologies and their application also helps unpack questions of norms and the conditions under which norms are more or less likely to shape behavior. The results in this paper thus have clear theoretical relevance.

How actors think about the potential deployment of autonomous weapons has public policy relevance as well. Given the rapid growth of research on artificial intelligence and the growing integration of machine autonomy into military systems, the intersection of artificial intelligence and global politics is more important than ever. Moreover, since the underlying basis of innovation in artificial intelligence and machine learning comes from the commercial and academic sectors, controlling the spread of the technology will be a fundamentally different exercise for leading militaries than controlling the key military technologies of the last generation.

The desire for faster decision-making, concern about the hacking of remotely piloted systems, and fear of what others may be developing could all incentivize the development of some types of autonomous weapon systems. However, awareness of the potential for accidents with these systems, as well as militaries' desire to maintain control over their weapons to maximize effectiveness, will likely counsel caution in the development and deployment of systems in which machine learning is used to select and engage targets with lethal force.

Given these countervailing pressures, and the ongoing debates in the public and at the United Nations about autonomous weapon systems, understanding public attitudes is critical. Are potential autonomous weapon systems more like landmines and cluster munitions, weapon systems most of the international community rejects because their indiscriminate nature means they cannot be employed in an ethical and legal fashion? Or are concerns about autonomous weapon systems more like concerns about the submarine in the late 19th and early 20th centuries, typical of a period of rapid technological change but about something that will become a standard part of many militaries?

The results show that variation in perceptions of effectiveness regarding autonomous weapon systems, along with the possibility of other countries acquiring them, shapes public beliefs. However, for those who frame autonomous weapon systems through the lens of morality, those beliefs can exert a stronger pull than the logic of effectiveness. Those who believe the use of autonomous weapon systems is more moral than the alternatives, for example, are more likely to support their deployment even when they will be less effective, while those who view them as immoral are likely to oppose their deployment regardless of the benefits.


It is likely that these attitudes are also bound up with beliefs about the potential to control artificial intelligence. The strong prime of popular media, with its depictions of robots run amok, along with the high level of technical sophistication needed to truly grasp how machine learning algorithms function, means beliefs about effectiveness may be connected to beliefs about morality, and specifically to the fear of humans losing control over AI systems.

Finally, this article does not address the more fundamental existential risk associated with military applications of artificial intelligence: the fear that a superintelligence could decide to destroy the human race, either because it decides humans are malign or because humans program it to achieve a goal it can only accomplish by destroying humans. Nick Bostrom's famous paperclip problem imagines an artificial intelligence programmed to maximize the production of paperclips that ends up taking over the world in the interest of that goal (Bostrom 2014). The existential risk associated with artificial intelligence is in some ways not as closely coupled to military applications as it might appear at first glance. If a superintelligent machine learning system has the ability to take over human society in the interest of a goal, any goal, then whether far less capable autonomous systems already exist in militaries will likely be unimportant; the superintelligent system would simply create what it needed. Humanity's worst fears about an intelligent machine turning against it aside, the integration of machine learning and military power will likely be interesting and unpredictable in the years ahead, making it a critical area of inquiry for strategic studies.

Bibliography

Anderson, Kenneth, Daniel Reisner, and Matthew C. Waxman. 2014. "Adapting the Law of Armed Conflict to Autonomous Weapon Systems." International Law Studies 90:386-411.
Anderson, Kenneth, and Matthew C. Waxman. 2013. "Law and Ethics for Autonomous Weapon Systems: Why a Ban Won't Work and How the Laws of War Can." American University Washington College of Law Research Paper No. 2013-11; Columbia Public Law Research Paper.
Arkin, Ronald C. 2013. "Lethal Autonomous Systems and the Plight of the Non-combatant." AISB Quarterly 137, http://www.cc.gatech.edu/ai/robot-lab/online-publications/aisbqv4.pdf.
Bandura, Albert. 1986. Social foundations of thought and action: A social cognitive perspective. Englewood Cliffs, NJ: Prentice-Hall.
Baum, Matthew A., and Tim Groeling. 2010. War stories: The causes and consequences of citizen views of war. Princeton, NJ: Princeton University Press.
Bell, Curtis. 2016. "Coup d'état and democracy." Comparative Political Studies 49 (9):1167-200.
Bostrom, Nick. 2014. Superintelligence: Paths, dangers, strategies. Oxford: Oxford University Press.
Carpenter, Charli. 2014. "Who's afraid of killer robots? (and why)." The Washington Post's Monkey Cage Blog, May 30, http://www.washingtonpost.com/blogs/monkey-cage/wp/2014/05/30/whos-afraid-of-killer-robots-and-why/.
Carvin, Stephanie. 2016. "Normal Autonomous Accidents." Norman Paterson School of International Affairs, Carleton University.
Caverley, Jonathan D. 2014. Democratic militarism: voting, wealth, and war. New York: Cambridge University Press.
De Bruin, Erica. 2016. "Preventing Coups d'état: How Counterbalancing Works." Clinton, NY: Hamilton College.
Department of Defense. 2012. "Directive on Autonomy in Weapons Systems, Number 3000.09." Department of Defense.
Evans, Tyler D. 2012. "At War with the Robots: Autonomous Weapon Systems and the Martens Clause." Hofstra Law Review 41:697-733.
Finnemore, Martha. 1996. "Norms, Culture, and World Politics: Insights from Sociology's Institutionalism." International Organization 50 (2):325-47.
Finnemore, Martha, and Kathryn Sikkink. 2005. "International Norm Dynamics and Political Change." International Organization 52 (4):887-917.
Frantz, Erica, and Elizabeth A. Stein. Forthcoming. "Countering Coups: Leadership Succession Rules in Dictatorships." Comparative Political Studies.
Garcia, Denise. 2014. "The Case Against Killer Robots: Why the United States Should Ban Them." Foreign Affairs Online, May 10, http://www.foreignaffairs.com/articles/141407/denise-garcia/the-case-against-killer-robots.
———. 2015. "Battle Bots: How the World Should Prepare Itself for Robotic Warfare." Foreign Affairs Online, June 5, https://www.foreignaffairs.com/articles/2015-06-05/battle-bots?cid=soc-tw-rdr.
Gartzke, Erik. 2001. "Democracy and the Preparation for War: Does Regime Type Affect States' Anticipation of Casualties?" International Studies Quarterly 45 (3):467-84.
Hague Convention. 1907. "Convention (IV) respecting the Laws and Customs of War on Land and its annex: Regulations concerning the Laws and Customs of War on Land." The Hague.
Horowitz, Michael C., Sarah E. Kreps, and Matthew Fuhrmann. 2016. "Separating Fact from Fiction in the Debate over Drone Proliferation." International Security 41 (2):7-42.
Horowitz, Michael C. 2010. The Diffusion of Military Power: Causes and Consequences for International Politics. Princeton, NJ: Princeton University Press.
———. 2016a. "The Ethics and Morality of Robotic Warfare: Assessing the Debate over Autonomous Weapons." Daedalus 145 (4):25-36.
———. 2016b. "Public opinion and the politics of the killer robots debate." Research & Politics 3 (1):1-8.
———. Forthcoming. "Military Robotics, Autonomous Systems, and the Future of Military Effectiveness." In The Sword's Other Edge: Tradeoffs in the Pursuit of Military Effectiveness, ed. D. Reiter. New York: Cambridge University Press.
Horowitz, Michael C., and Matthew S. Levendusky. 2011. "Drafting Support for War: Conscription and Mass Support for Warfare." Journal of Politics 73 (2):1-11.
Human Rights Watch. 2014. "Advancing the Debate on Killer Robots: 12 Key Arguments for a Preemptive Ban on Fully Autonomous Weapons."
Human Rights Watch, and Harvard Law School's International Human Rights Clinic. 2012. "Losing Humanity: The Case against Killer Robots." Human Rights Watch.
Jones, Byron, and Michael G. Kenward. 2014. Design and analysis of cross-over trials. New York: CRC Press.
Kant, Immanuel. 1996. Kant: The metaphysics of morals. New York: Cambridge University Press.
Keck, Margaret E., and Kathryn Sikkink. 1998. Activists beyond borders: Advocacy networks in international politics. Ithaca, NY: Cornell University Press.
Kober, Avi. 2008. "The Israel defense forces in the Second Lebanon War: Why the poor performance?" Journal of Strategic Studies 31 (1):3-40.
Kreps, Sarah. 2014. "Flying under the radar: A study of public attitudes towards unmanned aerial vehicles." Research & Politics 1 (1):107.
Macdonald, Julia, and Jacquelyn Schneider. 2016. "Views from the Ground on the A-10 Debate." War On The Rocks, March 16, http://warontherocks.com/2016/03/views-from-the-ground-on-the-a-10-debate/.
Merom, Gil. 2003. How democracies lose small wars: state, society, and the failures of France in Algeria, Israel in Lebanon, and the United States in Vietnam. New York: Cambridge University Press.
Morrow, James D. 2007. "When do states follow the laws of war?" American Political Science Review 101 (3):559-72.
Office of Science and Technology Policy. 2016. "Preparing for the Future of Artificial Intelligence." Washington, DC: Executive Office of the President.
Olsen, Johan P., and James G. March. 1989. Rediscovering institutions: The organizational basis of politics. New York: Free Press.
Pilster, Ulrich, and Tobias Böhmelt. 2011. "Coup-proofing and military effectiveness in interstate wars, 1967–99." Conflict Management and Peace Science 28 (4):331-50.
———. 2012. "Do Democracies Engage Less in Coup-Proofing? On the Relationship between Regime Type and Civil–Military Relations." Foreign Policy Analysis 8 (4):355-72.
Piplani, Varun, and Caitlin Talmadge. 2016. "When war helps civil–military relations: Prolonged interstate conflict and the reduced risk of coups." Journal of Conflict Resolution 60 (8):1368-94.
Press, Daryl G., Scott D. Sagan, and Benjamin A. Valentino. 2013. "Atomic aversion: experimental evidence on taboos, traditions, and the non-use of nuclear weapons." American Political Science Review 107 (1):188-206.
Quinlivan, James T. 1999. "Coup-proofing: Its practice and consequences in the Middle East." International Security 24 (2):131-65.
Reaching Critical Will. 2013. "Start International Talks on Killer Robots." http://www.reachingcriticalwill.org/news/latest-news/8366-start-international-talks-on-killer-robots.
Roff, Heather M. 2016. "Meaningful Human Control or Appropriate Human Judgment? The Necessary Limits on Autonomous Weapons." Arizona State University Global Security Initiative Briefing Paper, https://globalsecurity.asu.edu/sites/default/files/files/Control-or-Judgment-Understanding-the-Scope.pdf.
Rokeach, Milton. 1973. The Nature of Human Values. New York: Free Press.
Ryan, Timothy J. 2014. "No Compromise: The Politics of Moral Conviction." Ann Arbor, MI: University of Michigan.
Saunders, Elizabeth N. 2011. Leaders at War: How Presidents Shape Military Interventions. Ithaca, NY: Cornell University Press.
Schaffner, Brian, and Stephen Ansolabehere. 2015. "Cooperative Congressional Election Study, 2014: Common Content." Cambridge, MA: Harvard Dataverse, V3.
Scharre, Paul. 2016. "Autonomous Weapons and Operational Risk." CNAS Working Paper, February, http://www.cnas.org/sites/default/files/publications-pdf/CNAS_Autonomous-weapons-operational-risk.pdf.
Scharre, Paul, and Michael C. Horowitz. 2015. "An Introduction to Autonomy in Weapon Systems." CNAS Working Paper, February, http://www.cnas.org/sites/default/files/publications-pdf/Ethical%20Autonomy%Working%Paper_021015_v02.pdf.
Schmitt, Michael N. 2013. "Autonomous Weapon Systems and International Humanitarian Law: A Reply to the Critics." Harvard National Security Journal.
Sears, David O., Carl P. Hensler, and Leslie K. Speer. 1979. "Whites' opposition to 'busing': self-interest or symbolic politics?" American Political Science Review 73 (2):369-84.
Sechser, Todd S., and Elizabeth N. Saunders. 2010. "The Army You Have: The Determinants of Military Mechanization, 1979–2001." International Studies Quarterly 54 (2):481-511.
Skitka, Linda J. 2002. "Do the means always justify the ends, or do the ends sometimes justify the means? A value protection model of justice reasoning." Personality and Social Psychology Bulletin 28 (5):588-97.
Skitka, Linda J., Christopher W. Bauman, and Edward G. Sargis. 2005. "Moral conviction: Another contributor to attitude strength or something more?" Journal of Personality and Social Psychology 88 (6):895-917.
Skitka, Linda J., and David A. Houston. 2001. "When due process is of no consequence: Moral mandates and presumed defendant guilt or innocence." Social Justice Research 14 (3):305-26.
Skitka, Linda J., and Elizabeth Mullen. 2002a. "The dark side of moral conviction." Analyses of Social Issues and Public Policy 2 (1):35-41.
———. 2002b. "Understanding judgments of fairness in a real-world political context: A test of the value protection model of justice reasoning." Personality and Social Psychology Bulletin 28 (10):1419-29.
Sparrow, Robert. 2007. "Killer robots." Journal of Applied Philosophy 24 (1):62-77.
Talmadge, Caitlin. 2015. The Dictator's Army: Battlefield Effectiveness in Authoritarian Regimes. Ithaca, NY: Cornell University Press.
Tomz, Michael, and Jessica Weeks. 2013. "Public opinion and the democratic peace." American Political Science Review 107 (4):849-65.
UNIDIR. 2015. "The Weaponization of Increasingly Autonomous Technologies: Considering Ethics and Social Values." Geneva, Switzerland: United Nations Institute for Disarmament Research.
Zaller, John. 1992. The nature and origins of mass opinion. New York: Cambridge University Press.


Appendix

Key Survey Question Text

We are going to describe a situation the United States could face in the future. For scientific validity, the situation is general, and is not about a specific country in the news today. Some parts of the description may strike you as important; other parts may seem unimportant.

<---Page Break--->

Suppose a country were in the midst of a civil war and contains a civilian population being threatened by insurgents. The national government is unable to handle the situation on its own. The President of the United States makes the decision that the American military should intervene to help resolve the crisis and protect the population from insurgents. The United States could send either US military personnel or a US military force of autonomous weapons systems. Autonomous weapons systems are robotic systems that, once activated, can independently make the decision to target and fire weapons without a human involved.

[US military personnel would be much more likely to inflict civilian casualties on the population than a US military force made up of robotic autonomous weapons systems.]

[A US military force of robotic autonomous weapons systems would be much more likely to inflict civilian casualties on the population than US military personnel.]

[US military personnel would be just as likely to inflict civilian casualties on the population as a US military force of robotic autonomous weapons systems.]

[A US military force of robotic autonomous weapons systems would be just as likely to inflict civilian casualties on the population as US military personnel.]

Which of these would you be more supportive of the President using?

Question Text
1  Strongly supportive of using autonomous weapons systems
2  Somewhat supportive of using autonomous weapons systems
3  Neither more or less supportive of military personnel or autonomous weapons systems
4  Somewhat supportive of using US military personnel
5  Strongly supportive of using US military personnel
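For completeness, the hypothetical sketch below shows one way the four bracketed treatment sentences could be randomly assigned to respondents. Only the sentences come from the vignette above; the labels and function are purely illustrative and are not the survey platform's actual implementation.

```python
# Illustrative randomization of the four treatment sentences from the vignette above.
# The labels and function are hypothetical; only the sentences come from the survey text.
import random

TREATMENTS = {
    "robots_better": (
        "US military personnel would be much more likely to inflict civilian casualties "
        "on the population than a US military force made up of robotic autonomous weapons systems."
    ),
    "humans_better": (
        "A US military force of robotic autonomous weapons systems would be much more likely "
        "to inflict civilian casualties on the population than US military personnel."
    ),
    "equal_personnel_first": (
        "US military personnel would be just as likely to inflict civilian casualties on the "
        "population as a US military force of robotic autonomous weapons systems."
    ),
    "equal_aws_first": (
        "A US military force of robotic autonomous weapons systems would be just as likely to "
        "inflict civilian casualties on the population as US military personnel."
    ),
}

def assign_treatment() -> tuple[str, str]:
    """Randomly pick one treatment condition and return its label and vignette sentence."""
    label = random.choice(list(TREATMENTS))
    return label, TREATMENTS[label]

if __name__ == "__main__":
    print(assign_treatment())
```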


Outcome Outcome Label