What is algorithmic ethics for digital humanists? This paper explores the nascent field of “algorithmic ethics” and its potential for shaping research and practice in the digital humanities.
The ubiquity of computational systems in our lifeworld is bringing scholarly attention to the societal effects of algorithms. Ed Finn, Hannah Fry, and Safiya Umoja Noble, among others, have shown that algorithms are not socially neutral, illustrating how they reflect, shape, and reinforce cultural prejudices. How should digital humanists identify and categorize ethically complex algorithms?
Computer scientists use so-called “Big O” notation to represent the time and space complexity of algorithms, classifying them, for instance, as constant, logarithmic, linear, linearithmic, or quadratic according to how the number of operations they perform grows with input size. In essence, computer scientists categorize algorithms by abstracting from concrete details of implementation such as the operating system, the processor(s), and other empirical characteristics of the computing environment. Instead, they consider the “worst case” scenario in order to discern the upper bound of an algorithm’s computational complexity.
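The contrast between complexity classes can be made concrete with a minimal sketch (not drawn from the paper; the function names `linear_count` and `quadratic_count` are hypothetical illustrations) that tallies the basic operations performed on an input of size n:

```python
def linear_count(n: int) -> int:
    """Operations for a single pass over n items: O(n)."""
    ops = 0
    for _ in range(n):
        ops += 1  # one basic operation per item
    return ops


def quadratic_count(n: int) -> int:
    """Operations for comparing every pair of n items: O(n^2)."""
    ops = 0
    for i in range(n):
        for j in range(n):
            ops += 1  # one basic operation per pair
    return ops


# Doubling the input doubles the linear count
# but quadruples the quadratic one.
for n in (10, 20, 40):
    print(n, linear_count(n), quadratic_count(n))
```

Note that neither function refers to any particular machine or operating system; the counts depend only on input size, which is precisely the abstraction Big O formalizes.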
Might digital humanists develop an analogous notation for categorizing algorithms according to their potential social effects at scale? That is, should digital humanists ask a similar question when evaluating the ethical complexity of algorithms, namely, how algorithms might negatively affect human actors under “worst case” scenarios as they scale? Asking such a question, however, requires digital humanists to retain and study the empirical contexts in which algorithms are deployed, a crucial disanalogy from the way computer scientists employ “Big O” notation to indicate computational complexity.
Drawing on the growing literature on algorithmic ethics, this paper suggests ways of working toward a code of ethics for algorithms, one based on identifying potential “worst case” scenarios at different scales in order to anticipate bias and mitigate social harm arising from the use of algorithms in the digital humanities.
 Felicitas Kraemer, Kees van Overveld, and Martin Peterson, “Is There an Ethics of Algorithms?” Ethics and Information Technology 13, no. 3 (September 2011): 251–60, doi:10.1007/s10676-010-9233-7.
 Ed Finn, What Algorithms Want: Imagination in the Age of Computing (Cambridge, MA: MIT Press, 2017).
 Hannah Fry, Hello World: Being Human in the Age of Algorithms (New York: W. W. Norton & Company, 2018).
 Safiya Umoja Noble, Algorithms of Oppression: How Search Engines Reinforce Racism (New York: NYU Press, 2018).
 Brent Daniel Mittelstadt et al., “The Ethics of Algorithms: Mapping the Debate,” Big Data & Society 3, no. 2 (December 2016): 12, doi:10.1177/2053951716679679.