The Great Commission and the Great Game

By Brian J. Auten

 

I’ll begin by asserting that intelligence ethics is in the midst of its awkward teenage years. The specialty spent its schoolyard days in the post-9/11 and Second Iraq War debate over the morality of torture and enhanced interrogation, which led to breakthrough US and UK work on “just intelligence,” a specialty-focused journal, and broader anthologies and books by the likes of Jan Goldman, Jim Olson, David Perry, and Michael Skerker.[1] Now, after active, productive, and wider-ranging elementary and tween years, the mid-to-late teen has spent the past three or so years up in the bedroom, with headphones, flipping through niche music on a smartphone. It has applied earlier insights to a handful of subcategories — most pointedly, the ethics of surveillance (“just surveillance”[2]) and drone use[3] — but broader analyses have been scarce, apart from a few (unfortunately overpriced) specialty overviews.[4] The following should be taken as a couple of research program suggestions as the specialty graduates and enters its early twenties.

Temptation and Moral Injury in Human Intelligence Collection

 

More often than not, discussion of the moral challenges of good, old-fashioned human intelligence (HUMINT) collection is left to fiction. Indeed, the central tension of John le Carré’s forthcoming A Legacy of Spies involves competing ethical frameworks — the 21st-century British intelligence apparatus versus the Cold War agent handling of “Smiley’s people.” While I certainly appreciate espionage fiction — and, trust me, I have already pre-ordered le Carré’s newest — we need more academically oriented ethical, theological, and historical reflections on the nuts and bolts of HUMINT collection: elicitation, recruitment, and running agents.

For instance, how does temptation work in elicitation and recruitment? In intelligence, elicitation is about structuring the environment so as to move someone to share personal information and details about their access to sensitive materials, and, over time, to grow comfortable enough to pass that sensitive information along. “A traitor needs two things,” le Carré explained in The Secret Pilgrim, “somebody to hate, and somebody to love.” To convince someone to share their hates and loves and, again over time, to provide sensitive information, the agent recruiter plays the tempter, and the recruited agent the successfully tempted. In theological ethics, particularly in extra bellum circumstances, what does it mean for an employee of the state to spend a career as a professional tempter? And what about non-state intelligence collectors? Jim Rockford’s recruitment of neighborhood sources can’t hold a candle to the rapid proliferation of corporate threat intelligence centers with international reach. Today, you might find a professional intelligence collector who is an employee of the Walt Disney Company, say, or of one of the panoply of cybersecurity specialists like ThreatConnect, CrowdStrike, and FireEye. These are people who, like their government counterparts, are fully active in the temptation business.

And what might the literature on moral injury have to say about HUMINT? Much has been written on the subject in recent years, but the lion’s share of effort is, understandably, focused on soldiering,[5] albeit with a few tie-ins to the debate over torture. If HUMINT collection involves tempters and the successfully tempted, how potentially injurious is this activity to psyches, relationships, and souls? Eric Ambler (again, note the over-reliance on fiction here) famously quipped that counterespionage professionals were “[among men,] the most suspicious, unbelieving, unreasonable, petty, inhuman, sadistic, [and] double-crossing set of bastards.” If true, intelligence officers would seem fruitful ground for an application of the moral injury literature. For intelligence officers and recruited agents alike, there are also pressures associated with living behind enemy lines under sustained pretense. The long-term use of cover identities (think of The Americans), a lengthy stint as a recruitment-in-place followed by defection, or long-term work as a dangle or double agent — in each of these examples, what does living deceptively for years on end do, morally, to one’s social and psychological fiber?

 

Autonomy and Responsibility in the Use of Technology for Intelligence Collection and Analysis

 

In this post-Snowden, Predator-rich environment, there will be no shortage of ethically oriented work on drones, “big data,” and surveillance. In these areas, I expect continued (and sometimes agonized) wrestling matches over ad bellum questions of authority and privacy — who authorizes, and under what circumstances? — and over in bello concerns about discrimination — that is, once one designates legitimate versus illegitimate targets, how does one operationally distinguish between them? These challenges aren’t going away, and they will require even deeper reflection as algorithms and machine automation take lead roles in the collection, initial processing, and analysis of intelligence.

Who is the ethically responsible party when an algorithm does the heavy lifting? We are nowhere close to William Gibson’s “aunties,” the autonomous algorithms from his recent novel The Peripheral who (pronoun choice intentional) monitor and police public order. But we are certainly at a place where algorithms are regularly applied to screening, filtering, identifying patterns and links, and ameliorating information glut. For now, algorithms assist, but people — moral agents — have not been completely removed from the process. There are the programmers, contractors, or government employees who set an algorithm’s parameters and sensitivity levels. There are the intelligence analysts who evaluate, assess, and combine the algorithm’s results with other reporting. Given the human element in our present man-machine national security combinations, some solid work could probably be done by blending ethical inquiry with some of the older classics on technology, complexity, and risk, like Charles Perrow’s Normal Accidents and Robert Jervis’s System Effects.
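
To make that division of labor concrete, here is a minimal, purely hypothetical sketch in Python. The indicator names, thresholds, and review step are my own illustration, not any agency’s or vendor’s actual system, but they capture the arrangement described above: humans set the sensitivity, the algorithm screens and filters, and an analyst decides what a flag actually means.

```python
# Hypothetical illustration only: a human-set sensitivity threshold,
# an algorithm that screens and filters, and an analyst review queue.

from dataclasses import dataclass


@dataclass
class ActivityRecord:
    employee_id: str
    after_hours_logins: int    # example "indicator" -- purely illustrative
    bulk_downloads_gb: float   # example "indicator" -- purely illustrative


def flag_anomalies(records, login_threshold=10, download_threshold=5.0):
    """The algorithm's share of the work: screen and filter.

    The thresholds are the human element -- set and adjusted by a
    programmer, contractor, or government employee."""
    return [r for r in records
            if r.after_hours_logins > login_threshold
            or r.bulk_downloads_gb > download_threshold]


def route_to_analyst(flagged):
    """The moral agent's share: a person evaluates each flag alongside
    other reporting before anything further happens."""
    for record in flagged:
        print(f"Analyst review requested for {record.employee_id}")


if __name__ == "__main__":
    sample = [
        ActivityRecord("A-001", after_hours_logins=2, bulk_downloads_gb=0.1),
        ActivityRecord("A-002", after_hours_logins=14, bulk_downloads_gb=7.5),
    ]
    route_to_analyst(flag_anomalies(sample))
```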

However, the ethical scene becomes trickier as systems approach the “fully autonomous” end of the continuum. Naturally, most academic energy has been spent on autonomous weapon systems and on distinctions between semi-autonomous and fully autonomous “killer robots” — the latter being, according to some, thoroughly counter to international humanitarian law (IHL). In political science and strategic studies circles, debates over autonomous weapons systems revolve around the establishment of international norms and levels of regulation,[6] but in the area of ethics, Kara Slade, a Ph.D. candidate at Duke Divinity School (and a former NASA research engineer), published a 2015 article exploring what autonomous drones mean for theological anthropology.[7] Much, much more can be done here — and it should not be limited to the ethics of autonomous weapons, but should extend to autonomous alert and alarm systems, border protection and policing systems, and autonomous machine creation and use of intelligence.

There is, for example, a strong push for autonomous insider threat detection — computer-run flagging of “indicators” that an employee may be stealing information, committing espionage, or has a comparative propensity to do so.[8] In man-machine scenarios, there are quality (or reality) checks on outputs, but in machine-only variants, outputs might automatically precipitate internal affairs and counterespionage investigations. This last point connects well with some of the current hand-wringing over the loss of faith in expertise and the negative impact of algorithms on human decision-making. As experts, not to mention the very idea of expertise, lose cachet in an era of skepticism and “truthiness,” will algorithms be accepted as substitute experts? And what happens when those algorithms work in systems that skew ever more closely toward the “fully autonomous” side of things?[9]
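
A second hypothetical sketch, extending the toy example above, shows how small the technical difference between the two variants can be. The only change is whether a flag is routed to a human reviewer or automatically opens an investigation; everything upstream is identical, which is precisely why the question of the responsible party sharpens as the human checkpoint disappears.

```python
# Hypothetical illustration of the man-machine vs. machine-only distinction:
# the only change is whether a human checkpoint stands between the flag
# and its consequences.

def open_investigation(record: str) -> None:
    # Machine-only variant: the output itself precipitates an
    # internal-affairs or counterespionage investigation.
    print(f"Investigation opened automatically for {record}")


def queue_for_analyst_review(record: str) -> None:
    # Man-machine variant: a quality (or reality) check by a person
    # stands between the algorithm's output and any consequence.
    print(f"Queued for analyst judgment: {record}")


def handle_flag(record: str, fully_autonomous: bool) -> None:
    if fully_autonomous:
        open_investigation(record)
    else:
        queue_for_analyst_review(record)


if __name__ == "__main__":
    handle_flag("employee A-002", fully_autonomous=False)  # human in the loop
    handle_flag("employee A-002", fully_autonomous=True)   # human removed
```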

To summarize, the above should be considered a brief set of “thoughts in progress” and potential research programs for those interested in the intelligence ethics specialty. As I hope one can see, the field is ripe for additional work, and could benefit from the insights of scholars and practitioners steeped in historical, orthodox Christianity and the Christian ethical tradition.

 

Brian J. Auten currently serves as a supervisory intelligence analyst with the United States government and is an adjunct professor in the Department of Government at Patrick Henry College in Purcellville, Virginia. All views, opinions, and conclusions are solely those of the author and not of the US government or any entity within the US intelligence community. This article was submitted and approved through his agency’s pre-publication process.

 

Footnotes

 

[1] Jan Goldman, The Ethics of Spying, Vol. 1 (2006), Vol. 2 (2009); James M. Olson, Fair Play: The Moral Dilemmas of Spying (2007); David L. Perry, Partly Cloudy: Ethics in War, Espionage, Covert Action, and Interrogation, Vol. 1 (2009); Michael Skerker, An Ethics of Interrogation (2010).

 

[2] For an overview of the “just surveillance” literature, see Brian Auten, “Just Intelligence, Just Surveillance, and the Least Intrusive Standard,” Providence: A Journal of Christianity and American Foreign Policy (Spring 2016), https://providencemag.com/2016/09/just-intelligence-just-surveillance-least-intrusive-standard/.

 

[3] Grégoire Chamayou, A Theory of the Drone (2015); Bradley Jay Strawser, Killing by Remote Control: The Ethics of an Unmanned Military (2013); Kenneth Himes, Drones and the Ethics of Targeted Killing (2015); Jameel Jaffer, The Drone Memos: Targeted Killing, Secrecy and the Law (2016); Scott Shane, Objective Troy: A Terrorist, a President, and the Rise of the Drone (2015); Hugh Gusterson, Drone: Remote Control Warfare (2016); John Kaag and Sarah Kreps, Drone Warfare (2014); Christopher J. Fuller, See It/Shoot It: The Secret History of the CIA’s Lethal Drone Program (2017).

 

[4] Jai Galliott and Warren Reed (eds.), Ethics and the Future of Spying (2016); Ross Bellaby, The Ethics of Intelligence: A New Framework (2016); Darrell Cole, Just War and the Ethics of Espionage (2014).

 

[5] David Wood, What Have We Done: The Moral Injury of Our Longest Wars (2016); Rita Nakashima Brock and Gabriella Lettini, Soul Repair: Recovering from Moral Injury After War (2013); Robert Emmet Meagher and Jonathan Shay, Killing from the Inside Out: Moral Injury and Just War (2014); Nancy Sherman, Afterwar: Healing the Moral Wounds of Our Soldiers (2015); Edward Tick, Warrior’s Return: Restoring the Soul after War (2014); Tom Frame, Moral Injury: Unseen Wounds in an Age of Barbarism (2016). Also, see Timothy Mallard, “The (Twin) Wounds of War,” Providence: A Journal of Christianity and American Foreign Policy (Fall 2016), https://providencemag.com/2017/02/twin-wounds-war-spiritual-injury-moral-injury/; and Marc LiVecche, “With Malice Toward None: The Moral Ground for Killing in War,” Ph.D. thesis, University of Chicago (2015). Jonathan Shay’s older works were groundbreaking in this area: Achilles in Vietnam: Combat Trauma and the Undoing of Character (1994); and Odysseus in America: Combat Trauma and the Trials of Homecoming (2003).

 

[6] Dustin Lewis et al., “War-Algorithm Accountability” (2016), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2832734; Human Rights Watch, “Losing Humanity: The Case Against Killer Robots” (2012), https://www.hrw.org/sites/default/files/reports/arms1112_ForUpload.pdf.

[7] Kara N. Slade, “Unmanned: Autonomous Drones as a Problem of Theological Anthropology,” Journal of Moral Theology, Vol. 4, No. 1 (2015). Slade will be addressing an October 2017 conference at Duke, “One Nation Under Drones: Conference Against Drone Warfare,” which I expect is also tied to the timed release of a Duke University Press book: Lisa Parks and Caren Kaplan (eds.), Life in the Age of Drone Warfare.

 

[8] See, for example, Iffat A. Gheyas and Ali E. Abdallah, “Detection and Prediction of Insider Threats to Cyber Security: A Systematic Literature Review and Meta-Analysis,” Big Data Analytics (August 2016), https://bdataanalytics.biomedcentral.com/articles/10.1186/s41044-016-0006-0. For the use of algorithms in detecting police misconduct, see Rob Arthur, “We Now Have Algorithms to Predict Police Misconduct,” FiveThirtyEight, 9 March 2016, https://fivethirtyeight.com/features/we-now-have-algorithms-to-predict-police-misconduct/.

 

[9] See, for example, Thomas Nichols, The Death of Expertise: The Campaign Against Established Knowledge and Why It Matters (2017); Cecilia Mazanec, “Will Algorithms Erode Our Decision-Making Skills?,” National Public Radio, 8 February 2017, http://www.npr.org/sections/alltechconsidered/2017/02/08/514120713/will-algorithms-erode-our-decision-making-skills.

