Conference Agenda

Session Overview
Session: #SA1: Embodied Data Paper Session 1
Time: Wednesday, 24 July 2019, 9:00am - 10:30am

Session Chair: Quinn Dombrowski
Location: Salons 4 & 5, Grand Ballroom, Marriott City Center

Presentations

XM<LGBT/>: Enacting the QueerOS

Abbie Levesque

Northeastern University, United States of America

In Debates in the Digital Humanities 2016, Barnett et al. laid the groundwork for a software version of Kara Keeling’s “Queer OS.” The digital humanities have recently been inquiring into the ways tools and systems think, and how those ways of thinking are shaped by the cultures that created them. If tools and systems are built to think as hegemonic groups do, then they re-inscribe the values of the hegemony onto their data. When Kara Keeling issued the initial call for a “Queer OS,” she used “OS” to refer to a framework of thinking, not necessarily an actualized piece of software. And notably, Barnett et al. did not actually build a QueerOS. In a research project on queer writing center practices, I built a coding language called XM<LGBT/> as part of the qualitative coding process. With this coding, I have attempted to put into practice a qualitative research methodology for writing center research that uses a queer logic system and is coded with the LGBTQA community’s interests in mind. That is, I have built a small enactment of the call for a QueerOS. I wish to discuss the applications of queer computer systems in rhetoric and composition research, and the ways this work differs from traditional uses of qualitative coding. I also wish to discuss the limitations of the system and possible future directions for both queered methodologies and queer coding systems.



Tejidos Autómatas: Latin American Indigenous Textile Symbology through Cellular Automata Models

Iván Terceros

CIESPAL, Ecuador

The act of programming rests on accepting the binary as a constitutive engine of life, a Western convention rooted in the dialectic, through which the world is understood in terms of beginnings and ends, on and off, good and bad, ones and zeros. While our interpretation of the world involves extremely complex structures, binarity persists as the conventional, foundational act at the micro level. This reflection is fundamental to the study of Decolonial Computing.

Tejidos Autómatas is a project developed as a series of workshops in which participants from diverse fields and cultural backgrounds reflect on the political and cultural conditions of the philosophy of technology as a hegemonic Western construction, and attempt to propose other ways of thinking about technology through Indigenous philosophical systems, expressed in the study of the symbology of Indigenous textiles as a source of inspiration for alternative coding systems.

This series of experiments uses cellular automata models, particularly John Horton Conway's Game of Life, as a concrete abstraction of the application of defined rules to a social system within a binary framework, a framework which is then to be modified according to the philosophical foundations of various Latin American Indigenous peoples (chiefly Andean).
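For readers unfamiliar with the model, the following is a minimal sketch of the Game of Life's binary rule system, in Python rather than the P5.js used in the workshops; the grid size, glider seed, and step count are illustrative choices, not workshop materials.

```python
# Minimal sketch of Conway's Game of Life: a binary (alive/dead) rule system.
# Grid size, seed pattern, and step count are illustrative choices only.

def step(grid):
    """Apply Conway's rules once to a 2D grid of 0s and 1s (edges treated as dead)."""
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count the eight neighbors, treating cells beyond the edge as dead.
            neighbors = sum(
                grid[r + dr][c + dc]
                for dr in (-1, 0, 1)
                for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0) and 0 <= r + dr < rows and 0 <= c + dc < cols
            )
            # Binary rules: a live cell survives with 2 or 3 neighbors;
            # a dead cell becomes alive with exactly 3.
            if grid[r][c] == 1:
                new[r][c] = 1 if neighbors in (2, 3) else 0
            else:
                new[r][c] = 1 if neighbors == 3 else 0
    return new

# A "glider" seed on a 6x6 grid, run for four generations.
grid = [[0] * 6 for _ in range(6)]
for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
    grid[r][c] = 1
for _ in range(4):
    grid = step(grid)
    print("\n".join("".join("#" if cell else "." for cell in row) for row in grid), end="\n\n")
```

The workshops' proposal is precisely to modify rule systems of this kind, replacing the binary survive/die dichotomy with rules derived from Indigenous philosophical assumptions.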

Over a couple of weeks, the workshop draws on several fields of study: basic foundations of general systems theory, Niklas Luhmann's theory of social systems, reflections on the coloniality of knowledge from decolonial thought, semiotic and anthropological studies of Indigenous textile design, basic programming with P5.js, structures of Indigenous cosmology, and cellular automata models. It concludes with sessions in which participants design hypothetical social models based on Indigenous philosophical assumptions, to be tested as new cellular automata that interpret a digital Indigenous textile.



The Ethics of (Digital Humanities) Algorithms: Toward a Code of Ethics for Algorithms in the Digital Humanities

Clifford B. Anderson

Vanderbilt University, United States of America

What is algorithmic ethics for digital humanists? This paper explores the nascent field of “algorithmic ethics”[1] and its potential for shaping research and practice in the digital humanities.

The ubiquity of computational systems in our lifeworld is bringing scholarly attention to the societal effects of algorithms. Ed Finn,[2] Hannah Fry,[3] and Safiya Umoja Noble,[4] among others, have shown that algorithms are not socially neutral, illustrating how they reflect, shape, and reinforce cultural prejudices. How should digital humanists identify and categorize ethically complex algorithms?

Computer scientists use the so-called ‘Big O’ notation to represent the time and space complexity of algorithms. They classify algorithms, for instance, as constant, logarithmic, linear, linearithmic, quadratic, etc., aiming to understand how they scale with inputs. In essence, computer scientists categorize algorithms by abstracting from concrete details of implementation like the operating system, processor(s), and other empirical characteristics of the computing environment. Instead, they focus on the number of operations that algorithms take as they scale with inputs, considering the 'worst case' scenario to discern the upper bounds of their computational complexity.
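To make the classification concrete, here is a minimal sketch in Python; the function names and toy operation counts are illustrative inventions, not examples from the paper.

```python
# Minimal sketch of how Big O classifies algorithms by operation counts as inputs scale.
# The functions and counts are illustrative; this is not a formal complexity analysis.

def constant_lookup(items):  # O(1): one operation regardless of input size
    return items[0]

def linear_scan(items, target):  # O(n): the worst case touches every element
    for item in items:
        if item == target:
            return True
    return False

def quadratic_pairs(items):  # O(n^2): compares every pair of elements
    count = 0
    for a in items:
        for b in items:
            count += 1
    return count

for n in (10, 100, 1000):
    data = list(range(n))
    linear_scan(data, -1)  # worst case: the target is absent, so all n elements are checked
    print(f"n={n}: constant=1 op, linear~{n} ops, quadratic={quadratic_pairs(data)} ops")
```

Note that nothing in the sketch depends on the machine running it; only the growth of the operation counts matters, which is exactly the abstraction the paper interrogates.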

Might digital humanists develop an analogous notation for categorizing algorithms according to their potential social effects at scale? Should digital humanists ask a similar question when evaluating the ethical complexity of algorithms, namely, how algorithms might negatively affect human actors under 'worst case' scenarios as they scale? Asking such a question, however, requires digital humanists to retain and study the empirical context in which algorithms are deployed, a crucial disanalogy with the way computer scientists employ 'Big O' notation to indicate computational complexity.
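Purely as a thought experiment, the following sketch shows one way such an annotation might retain deployment context alongside a worst-case label; the HarmScale categories and the ethical_complexity decorator are hypothetical constructions for illustration, not notation proposed by the paper.

```python
# A purely hypothetical sketch of an "ethical complexity" annotation.
# HarmScale and ethical_complexity are invented for illustration; they are
# not notation proposed by the paper.

from enum import Enum

class HarmScale(Enum):
    INDIVIDUAL = "worst case affects single users"
    COMMUNITY = "worst case affects identifiable groups"
    SOCIETAL = "worst case affects populations at large"

def ethical_complexity(worst_case: HarmScale, context: str):
    """Attach a worst-case social-impact label plus the empirical deployment context."""
    def decorate(fn):
        fn.worst_case = worst_case
        fn.context = context  # unlike Big O, the deployment context is retained
        return fn
    return decorate

@ethical_complexity(HarmScale.COMMUNITY, context="ranking search results for name queries")
def rank_results(results):
    return sorted(results)

print(rank_results.worst_case.name, "-", rank_results.context)
```

The point of the sketch is the disanalogy the paper names: where Big O deliberately discards the computing environment, an ethical classification must carry its empirical context with it.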

Drawing on the growing literature on algorithmic ethics,[5] this paper suggests ways of working toward a code of ethics for algorithms based on identifying potential 'worst case' scenarios at different scales in order to anticipate bias and mitigate social harm from the use of algorithms in the digital humanities.

Works Cited

[1] Felicitas Kraemer, Kees van Overveld, and Martin Peterson, “Is There an Ethics of Algorithms?” Ethics and Information Technology 13, no. 3 (September 2011): 251–60, doi:10.1007/s10676-010-9233-7.

[2] Ed Finn, What Algorithms Want: Imagination in the Age of Computing (Cambridge, MA: MIT Press, 2017).

[3] Hannah Fry, Hello World: Being Human in the Age of Algorithms (New York: W. W. Norton & Company, 2018).

[4] Safiya Umoja Noble, Algorithms of Oppression: How Search Engines Reinforce Racism (New York: NYU Press, 2018).

[5] Brent Daniel Mittelstadt et al., “The Ethics of Algorithms: Mapping the Debate,” Big Data & Society 3, no. 2 (December 2016): 12, doi:10.1177/2053951716679679.