Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only sessions on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).

 
Session Overview
Date: Tuesday, 23/Jul/2019
8:00am - 8:40am #ACH2019 Org: ACH2019 Organizers' Meeting
Session Chair: Jennifer Guiliano
Session Chair: Alison Langmead
Note: This meeting is for members of the steering committee of ACH2019 only.
Duquesne University, Dougherty Ballroom B, Power Center 
8:15am - 9:00am Workshop Check-In (AM & Full day only)
Pre-registered participants are welcome to check in for the workshops and conference.
Duquesne University, Power Center, Shepperson Suite, 5th floor 
8:15am - 2:00pm #Registration: Conference Registration
Only pre-registered conference attendees are welcome to check in during this session for the main conference.
Duquesne University, Power Center, Shepperson Suite, 5th floor 
9:00am - 12:00pm #WkshpAM1: From Discovery to Archaeology Workshop
Duquesne University, Union Building, Room 119 
 

From Discovery to Archaeology: Working with Rare-Book MARC Records as Data

Dolsy Smith, Jennifer King, Leah Richardson

The George Washington University, United States of America

This three-hour (or half day) workshop will introduce participants to methods, tools, and techniques for working with MARC records as data. While covering some basic computational approaches to cleaning and analyzing a large set of catalog records, we will give special attention to the complexities and ambiguities that make the MARC format and its associated descriptive standards -- especially in the context of rare-book cataloging -- both a rich source of meaning and a challenging topology for computation. Our metaphor of an archaeology is meant to suggest the multiple strata of information that can reside in such a collection. In addition to information about the contents and publication history of the items in the collection, these records contain evidence of the history of the collection itself (i.e., its provenance and its shape over time), as well as markers of changing descriptive practices.

The goals of the workshop are as follows:

  • To formulate research questions about collections as data, with a focus on rare-book collections;

  • To engage in shared inquiry into the challenges associated with normalizing and analyzing MARC records;

  • To gain familiarity with writing Python code to address such questions.

Participants will get hands-on practice in the following:

  • Loading and parsing a set of MARC records with Python;

  • Serializing a subset of MARC fields and subfields for analysis;

  • Cleaning and normalizing the data in those fields;

  • Doing basic data visualization with Python libraries;

  • Using Python in the Jupyter environment (which facilitates code documentation and reproducibility).
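The parsing itself is typically handled by a MARC library such as pymarc (not shown here). Purely as a standard-library sketch of the cleaning-and-normalizing step, a hypothetical helper like the following can pull a usable year out of the free-text date strings common in rare-book records:

```python
import re

def normalize_pub_year(date_field):
    """Pull a four-digit year (1400-2099) out of a messy date string.

    Rare-book date subfields mix brackets, question marks, and prefixes
    like 'c' for copyright; returns an int year, or None if none found.
    """
    match = re.search(r"(?<!\d)(1[4-9]\d\d|20\d\d)(?!\d)", date_field)
    return int(match.group(1)) if match else None

# Strings of the kind found in publication-date subfields
samples = ["[1850?]", "c1850.", "MDCCCL [1850]", "n.d."]
print([normalize_pub_year(s) for s in samples])  # [1850, 1850, 1850, None]
```

In practice a function like this would run over the date subfield of each record in the serialized set.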

While we will provide a set of MARC records for participants to work with, they are also invited to use their own set of records (provided those records can be downloaded ahead of time and saved as a binary or text file). In addition, we take a de-centered, anti-hierarchical approach to instruction, seeking to create an environment in which workshop leaders and participants alike can share their expertise as programmers, catalogers, rare-book scholars, or simply curious readers. Prior experience in either MARC cataloging or Python is NOT required. The only prerequisite is a willingness to experiment, to be challenged, and to participate in a shared enterprise of discovery: in other words, curiosity. We prize curiosity as a form of expertise in its own right.

 
9:00am - 12:00pm #WkshpAM2: Introduction to Sound Analysis Workshop
Duquesne University, Gumberg Library, Room 408 
 

Introduction to Sound Analysis

Tanya E. Clement1, Brian McFee2

1University of Texas at Austin, United States of America; 2New York University, Steinhardt

Learning to work with computational approaches to sound studies must begin with a basic understanding of the ways in which sound is represented as data in the computational environment. The sound analysis workflows that we will share with participants include basic steps for employing common, free and open-source Python libraries for accessing, processing, analyzing, and visualizing audio data. These workflows will be presented as Jupyter notebooks that users can easily adapt and run from local computers during and after the workshop. Topics that we will cover in this 3-hour workshop include:

  1. An introduction to setting up your computer for sound analysis, with a brief introduction to using Binder and the Docker image we will set up for participants;
  2. An introduction to sound analysis using Python, Python libraries (such as McFee’s Librosa package, https://github.com/librosa/librosa/), and Jupyter notebooks;
  3. A demonstration, with an example of slicing and dicing audio, that will introduce participants to basic audio concepts in signal processing such as sampling rates and the fundamental frequency;
  4. A hands-on group activity with a data set we have constructed, including poetry performances from the SpokenWeb project (https://spokenweb.ca/), that will include:
     a. Silence/non-silence detection and auto-segmentation using unsupervised learning approaches such as K-means;
     b. Vocal/non-vocal detection using a pre-designed model and data set, during which participants will be introduced to how a supervised model is trained;
     c. Speaker segmentation and grouping using a pre-designed model and data set, during which participants will be introduced to how a supervised model is trained.
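The workshop notebooks rely on libraries like Librosa; purely as a rough, standard-library-only illustration of step 4a (silence/non-silence detection via K-means over frame energies), a sketch might look like the following (the 440 Hz test tone and the frame length are arbitrary choices):

```python
import math

def frame_rms(signal, frame_len=256):
    """Root-mean-square energy of non-overlapping frames."""
    return [
        math.sqrt(sum(x * x for x in signal[i:i + frame_len]) / frame_len)
        for i in range(0, len(signal) - frame_len + 1, frame_len)
    ]

def kmeans_2(values, iters=50):
    """Tiny 1-D, two-cluster k-means; label 0 = the low-energy cluster."""
    lo, hi = min(values), max(values)  # initialize centroids at the extremes
    for _ in range(iters):
        labels = [0 if abs(v - lo) <= abs(v - hi) else 1 for v in values]
        lo_vals = [v for v, lab in zip(values, labels) if lab == 0]
        hi_vals = [v for v, lab in zip(values, labels) if lab == 1]
        if lo_vals:
            lo = sum(lo_vals) / len(lo_vals)
        if hi_vals:
            hi = sum(hi_vals) / len(hi_vals)
    return labels

# Synthetic audio at 8 kHz: one second of a 440 Hz tone, one of silence, one of tone
sr = 8000
tone = [math.sin(2 * math.pi * 440 * t / sr) for t in range(sr)]
signal = tone + [0.0] * sr + tone

labels = kmeans_2(frame_rms(signal))  # 0 = silence frame, 1 = sound frame
```

On this synthetic signal the two clusters separate cleanly; real recordings would call for Librosa's feature extractors rather than raw RMS energy.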

This workshop is appropriate for beginners to programming and sound analysis. No experience needed. Participants should bring their own laptops unless a computer lab can be provided.

 
9:00am - 12:00pm #WkshpAM3: Toward Embodied Knowledge Workshop
Duquesne University, Union Building, Room 109 
 

Toward Embodied Knowledge: Observing and Notating Movement

Rommie Leigh Stalnaker1, Susan Lynn Wiesner2

1Independent Artist, United States of America; 2University of Maryland Performing Arts Library

This workshop proposal steps outside of the digital in order to return to the digital. Recent experiences in DH circles have supported our belief that those in DH are becoming more interested in the possibilities for research offered by the performing arts and the humanities writ large. That interest often stems from a desire to understand embodied knowledge: how it is transferred, preserved, and developed through, and in support of, digital projects.

Yet DH researchers and scholars who consider embodiment often do so without the expertise of those for whom the body is the central object of study. Instead, they pursue technologies without a grounding in the elements of nonverbal communication and existing systems for movement observation. An example is the use of Rudolf Laban’s theories, specifically the notation system developed by his student/collaborators Albrecht Knust and Irmgard Bartenieff: Labanotation (US)/Kinetography Laban (UK/EU). Too often, work using Laban ignores basic principles of these systems and the complexities of movement, specifically structured movement as in dance.

Therefore, we propose a ½ day workshop on movement observation and notation systems that provide means of transference and preservation of embodied knowledge. Our purpose is twofold: to (a) remedy this deficiency, and (b) expand a community of practice within DH to explore future projects which incorporate embodied knowledge. Four specific notation systems will be explored: Feuillet notation, Labanotation, Motif Writing, and memory. To support an understanding of these systems, we will use the Laban/Bartenieff Movement System and explore the Body, moving in Space, with Effort, creating Shape. As attendees gain more knowledge they can begin discussions of the value of using LBMS and notation in a variety of digital projects.

Open to all, participants from any area of DH will have opportunities to both read and write graphic movement notation and to observe movement and discuss options for notating it. We hope to build a bridge between artists, technologists, and digital humanists. As an end result we do not expect a “completed work”, but rather a conversation starter, an idea builder as we present ways to work with existing systems. Who knows, perhaps together we will start to design a new system of digital movement notation!

 
9:00am - 12:00pm #WkshpAM4: I Can Haz Consultation? Workshop
Duquesne University, Gumberg Library, Room 202 
 

I Can Haz Consultation?: Investigating the Theory and Practice of the Digital Humanities Consultation

Jean Ann Bauer

Princeton University, United States of America

In North American digital humanities, the “DH consultation” is a ubiquitous practice. To be in DH, on some level, means either being paid or volunteering to consult with people about digital humanities and potential projects. From specialized digital humanities centers to digital humanities librarians to solo practitioners (be they students, faculty, university staff, or members of the GLAM sector), everyone does consultations. But what are we doing when we offer to “consult” on digital humanities?

This workshop will go beyond “tips and tricks” to invite participants to thoughtfully examine this fundamental practice of digital humanities in a community setting.

Questions relevant to the practice of consultations in digital humanities include:

  • How do consultations play into (or subvert) power structures in the academy and the tech sector?
  • Do consultations present digital humanities as a service or a collaboration? When does that matter?
  • How do we surface the invisible and emotional labor of consultations (such as finding a mutually agreeable time, hearing someone out, shaping someone else’s thoughts into next steps in real time)?
  • What is the relationship between digital humanities consultations and the long-studied and -taught “reference interview” from library and information science?
  • How do expectations about consultations play into contingent labor in digital humanities?
  • How does a “traditional” digital humanities consultation need to change to fully involve community partnerships and promote knowledge/resource sharing among people with highly diverse lived experiences?
  • Do people who receive consultations owe credit to their consultants?

Participants will not answer all of these questions, but they will be provided with a space where they can be raised, investigated, and respectfully debated. To maximize participation and accessibility, workshop activities will be a mix of small group discussions, group writing, and sharing experiences. By approaching consultations from multiple theoretical and practical directions, the workshop will be welcoming to DH newcomers and veterans. The preferred setting for this workshop is a three hour time slot and 25 participants.

The workshop convener has over 10 years' experience giving DH consultations, first as a graduate student, then as a technologist and digital humanities librarian, and now as a center director. The convener is a native English speaker and fluent in Spanish, and so could provide bilingual materials and support Spanish-language groups/writing sessions in the workshop (other language groups are also welcome). Possible outcomes of the workshop include new resources for the community, along the lines of a consultation “syllabus”, a DH consultants’ “Bill of Rights”, or some other set of best practices or thought provocations.

 
9:00am - 3:00pm #ACHExec: ACH Executive Meeting
Session Chair: Vika Zafrin
Session Chair: Matthew Gold
Note: This meeting is for current ACH Executive Members only
Duquesne University, Dougherty Ballroom B, Power Center 
9:00am - 3:00pm #PreConf: Inclusive Pedagogy Pre-conference
The Inclusive Pedagogy Pre-Conference Workshop is sponsored by the Humanities Intensive Learning & Teaching Institute (HILT).
Duquesne University, Dougherty Ballroom A, Power Center 
 

Inclusive Digital Pedagogy: Theory and Praxis

Katherine Walden1, Anelise Hanson Shrout2, Lisa Tagliaferri3, Anne Cong-Huyen4, Simon Appleford5, Rebekah Walker6, Danica Savonick7, Tiffany Salter2, Kush Patel4, Joe Bauer4

1Grinnell College; 2Bates College; 3Massachusetts Institute of Technology; 4University of Michigan; 5Creighton University; 6Rochester Institute of Technology; 7SUNY Cortland

The rise of interdisciplinary, cross-institutional, and transnational collaboration in digital humanities pedagogies is at once constituting and constituted by ground-up solidarities against neoliberalism, sexism, and racism in higher education. These networks of knowledge-making and community-building, however, also bring with them unique place-based and role-centered challenges:

  • In what ways and to what extent might we be able to build and sustain a common ground when contingencies and precarities are at the heart of where we are and what we do?
  • In what ways might we still engage our institutions and this increasing “trend” towards professionalizing curricular and co-curricular programs around digital literacy, digital computation, and digital competencies?

Bringing together faculty, librarians, staff, and “alt-ac” digital humanists, this proposal aims to engage colleagues and participants at the ACH conference who are committed to building inclusive pedagogical strategies that advance an intersectional feminist ethos of mutual empowerment and shared expertise. Conceptualized as a “track” or “stream” of linked sessions, this proposal makes the resources of an in-depth workshop on inclusive digital pedagogy available in more formats, to more facilitators, and for more ACH community members.

The goal of these linked sessions is to move toward intersectional feminist conversations around digital pedagogy that empower students, equip faculty, and acknowledge diverse forms of labor and contribution. These sessions also seek to promote and cultivate digital pedagogy communities informed by movements like #transformDH, #AnticolonialDh, #OurDhIs, postcolonial DH, and more broadly critical cultural studies. Starting with an acknowledgement of the Indigenous territories on which we live and work, a diverse slate of facilitators will work to reimagine traditional hierarchies and structures of power, center ignored and minoritized voices, and move toward addressing the still unanswered questions of co-shaping a more equitable and inclusive digital future.

 
9:00am - 3:00pm #WkshpFull: Collections as Data Jam
Duquesne University, Dougherty Ballroom C, Power Center 
 

Collections as Data Jam

Nick Budak1, Rebecca Koeser1, Natalia Ermolaev1, Abigail Potter2, Meghan Ferriter2, Stewart Varner3, Laurie Allen3, Ben Hicks1, Thomas Padilla4

1Princeton University; 2Library of Congress; 3University of Pennsylvania; 4UNLV

As the digital products of humanistic research grow in number and complexity, DH project teams have repeatedly borrowed and adapted from the software development industry to bolster the sustainability of their digital scholarship. At the same time, some have asked what could be gained from embracing an ephemeral approach to DH work.

The “data jam” is a concept that draws from both the longstanding “hackathon” tradition as well as the relatively recent phenomenon of the “game jam” - brief, fast-paced and cooperative events where participants produce an ephemeral, proof-of-concept final product. In the industry, the creative output of hackathons includes the now-ubiquitous Facebook “like” button. Although hackathons are associated with an exclusionary hacker culture that perpetuates hacking as the defining activity of a technological “priesthood” class, the recent emergence of collaborative coding platforms like glitch.com has the potential to democratize hacking as an inclusive, collaborative activity more in alignment with the principles of DH. Moreover, research indicates that the peer-based learning and networking associated with the hackathon model can serve as a community-building tool that could put those new to DH on a level playing field with established practitioners.

The success of the Collections as Data initiative has revealed the gulf between the needs of scholars using computational methods on humanities collections and the accessibility of those collections as usable data. The initiative stipulates that such collections designed “for everyone” will inevitably fail the individual who approaches the data for a particular purpose, looking to perform their singular brand of research. Yet the hidden strength of data is that it allows us to forge our own paths of entry into the collection, eschewing preset personas and established methods.

This workshop seeks to apply the methodology of the data jam to spontaneously uncover new points of entry into established collections as data. Prior to the workshop, a set of exemplary collections as data will be provided to participants, encouraging them to come with ideas. Over the course of a full day, participants will break into teams and utilize collaborative coding tools to produce an ephemeral digital product, with the ultimate goal of creating a novel window into a particular collection that goes beyond the personas envisioned by the Collections as Data initiative. The products of the jam will be showcased by participants and analyzed by a panel responsible for rewarding particularly innovative and thought-provoking entries.

The workshop builds on experiences at Princeton University hosting a DH hackathon and “Playing with Data” events in addition to the lessons of the Collections as Data initiative. As the workshop is also intended to serve as a community-building tool, it particularly welcomes the participation of those new to DH. No particular digital or humanistic skills are required, and teams will be arranged to ensure even distribution of skills and experience.

 
12:00pm - 1:00pm Workshop Lunch
Workshop lunches are for registered participants of the Inclusive Pedagogy Pre-Conference Workshop, the Collections as Data Jam, and the ACH Executive Meeting.
Duquesne University, Power Center, Shepperson Suite, 5th floor 
12:15pm - 1:00pm Workshop Check-In
Only participants in afternoon workshops are welcome to check in at this event.
Duquesne University, Power Center, Shepperson Suite, 5th floor 
1:00pm - 4:00pm #WkshpPM1: The Manifold Scholarship Platform Workshop
Duquesne University, Union Building, Room 119 
 

The Manifold Scholarship Platform: A Hands-On Workshop

Matthew Gold1, Zach Davis2, Jojo Karlin1, Terence Smyre3, Krystyna Michael1

1The Graduate Center, CUNY, United States of America; 2Cast Iron Coding; 3University of Minnesota Press

This workshop will present the continuing development of Manifold ( http://manifoldapp.org ), an open-source scholarly communication and book publishing platform. Created by the University of Minnesota Press, the GC Digital Scholarship Lab, and Cast Iron Coding, and funded by the Andrew W. Mellon Foundation, Manifold aims to present scholarly publishing -- whether monograph, journal, or Open Educational Resource -- in a new networked and iterative form that has strong ties to print and rich multimedia. Members of the Manifold team representing development (Cast Iron Coding), publishing (University of Minnesota Press), and digital humanities (GC CUNY) will walk participants through the platform, from installation to uploading and customizing projects within the interface. Participants will learn what Manifold is, how it works, how it integrates into existing university-press publishing workflows, and how to use it on their own for a variety of publishing and pedagogical needs.

 
1:00pm - 4:00pm #WkshpPM2: Emotional Labor and Digital Humanities Workshop
Duquesne University, Union Building, Room 109 
 

Emotional Labor and Digital Humanities

Laura Braunstein

Dartmouth College, United States of America

Emotional labor has recently become a topic of interest in digital humanities, as well as in other disciplines that address issues of labor, productivity, change, workplace relationships, and organizational psychology. Recent scholarship by Paige Morgan and others has argued that emotional labor is indeed central to DH. Digital humanities practice often involves new skills, unfamiliar technologies, re-thinking disciplinary assumptions or scholarly practices, and collaborative labor across disciplines, ranks, and positions of authority. It can generate anxiety, resistance, disorientation, and, for some, feelings of frustration, incompetence, resentment, or helplessness.

Librarians, technologists, graduate assistants, and contingent staff often serve as mediators or guides for students, faculty, staff, or other researchers as they discover new forms of scholarship or technologies. The emotional labor involved in making such transitions is often overlooked, both as a particular kind of skilled labor and as an instrumental element in the success of digital projects. Such specialized labor, while sometimes recognized in other professional contexts as a type of teaching, advising, mentorship, or counseling, is often taken for granted or disavowed by practitioners of DH. This may be because it isn’t a type of labor for which most scholarly professionals are specifically trained, or because there is a perception that such labor “isn’t my job.” The result may be missed opportunities to develop collaborations that are more productive, equitable, and meaningful. Alternatively, as Morgan has argued, “if emotional labor is ongoing, and acknowledged as work that deals with risk-focused, administrative, and scholarly decisions, then it can contribute to reframing the relationship between scholars and librarians as one of more equal partnership, rather than mere service provision.”

The goals of the workshop include:

  • Defining “emotional labor” and its role in DH.

  • Demonstrating how emotional labor impacts the relationships among practitioners of DH.

  • Inviting participants to share their own examples of emotional labor in their work-spaces or projects.

  • Experimenting with different strategies for performing emotional labor in the context of DH work, and assessing their effectiveness.

  • Reflecting on these experiences and examining their implications for future work in the area.

This workshop will also explore issues of race, gender, religion, and socioeconomic diversity as they apply to emotional labor in scholarly communities. Attendees at the workshop will be encouraged to provide some of their own contexts as well that may allow us to address specific questions of diversity in DH. By giving participants a space to share case studies and exchange information on projects in their own areas of study, we will work toward developing a professional network that will place emotional labor at the center of its conversations about DH.

Outcomes:

  • Attendees will define “emotional labor” and its role in DH and scholarly communication

  • Attendees will develop strategies for performing emotional labor in the context of DH, and assess their effectiveness

  • Attendees will assess needs and capacity for developing professional networks dedicated to understanding and supporting emotional labor in DH

Maximum participants: 20

 
1:00pm - 5:00pm #WkshpPM3: Immersive Pedagogy Workshop
Duquesne University, Gumberg Library, Room 408 
 

Immersive Pedagogy Workshop: How to Use VR for Teaching and Learning in the Humanities Classroom

Emma Ruth Slayton1, Jessica Linker2, Alex Wermer-Colan3, Chris Young4

1Carnegie Mellon University, United States of America; 2Bryn Mawr College, United States of America; 3Temple University, United States of America; 4University of Toronto, Canada

3D and VR technologies are becoming increasingly popular tools for teaching and learning in academia. The applications of these technologies are broad: they have been used to advance humanistic inquiry through immersive visualizations of spaces, artifacts, and data. To respond to growing interest in using these technologies in higher education, this workshop will guide participants through developing and using pedagogical material addressing critical and practical needs for teaching with 3D and VR in higher education.

The workshop presenters comprise a CLIR postdoctoral fellow inquiry group; in our various capacities at libraries and humanities centres across North America, we each regularly collaborate with students, librarians, and faculty to expand support for and use of emerging technologies for digital scholarship. We have each observed, and hope to resolve, a number of pedagogical challenges posed by training researchers and students to use 3D/VR as an effective tool, particularly for humanistic inquiry. Although some institutions offer robust technical support, what is lacking at the moment is integrated critical engagement of how these technologies can and do leverage data and analysis of recurrent and emergent humanities topics.

To address challenges in adapting immersive technology for pedagogical purposes, the workshop presenters received grant-funding from CLIR and the Mellon Foundation to host a workshop series at Carnegie Mellon University in June 2019, entitled “Immersive Pedagogy: A Symposium on Humanities Teaching and Learning with 3D, Augmented and Virtual Reality.” Our symposium will bring together librarians, educational technologists, and faculty to generate accessible, scaffolded pedagogical material that integrates scholarly inquiry with technical training. Teaching materials developed at this conference, which prioritize projects related to US Latinx, Latin American and Caribbean Studies, will showcase how 3D and VR technologies and data curation practices intersect with methodologies derived from studies of cultural heritage, minority archives, race and ethnicity, women of color feminist theory, community outreach, public humanities, and accessibility. To further develop and disseminate the lessons learned during our “Immersive Pedagogy Symposium,” participants of this workshop will contribute to explorations at ACH on emerging technology for digital pedagogy and scholarship by sharing the outcomes of this symposium, including the pedagogical material created.

The aim of our workshop will be to: 1.) discuss the materials created during the course of the conference, and the process by which they were developed, 2.) provide attendees with guidelines for best practices when using these technologies, 3.) disseminate information about software, tools, and languages available to incubate 3D/VR projects, and 4.) promote sustained, critical analysis of humanistic uses of 3D/VR technology in the participants’ digital scholarship initiatives. Participants will have the opportunity to work with experts to generate or assess their own plans to integrate immersive technology on their campuses. Specific topics may include: expanding critical understanding of the underpinnings and consequences of using these techniques for research or education, locating relevant immersive media for classes and disciplines, generating syllabi that enhance knowledge of 3D and VR tools, and developing lesson plans designed to supplement and contextualize the immersive experiences.

 
1:00pm - 5:00pm #WkshpPM4: An Intro to Music Encoding Workshop
Duquesne University, Gumberg Library, Room 202 
 

An Intro to Music Encoding for Data Driven Scholarship

Anna E Kijas1, Sarah Melton2, Raffaele Viglianti3

1Boston College; 2Boston College; 3Maryland Institute for Technology in the Humanities, University of Maryland

Music encoding is a way to create machine-readable data about music documents, and it has many applications, including long-term preservation, computational analysis, digital editions, and digital publishing. This workshop is part of a multi-institutional effort to create a sustainable workflow for encoding music documents that can be used for music data-driven scholarship. This workshop will introduce new encoders to a straightforward workflow: generating an MEI file using music notation software, creating metadata in the MEI header section, and rendering an MEI file in a modern browser. While there are a number of music encoding standards available, this workshop will focus on encoding music documents according to the Music Encoding Initiative (MEI) guidelines. Participants will use open-source tools during this workshop, including MuseScore notation software, Atom Editor, and Verovio. By the end of the workshop, participants will be able to generate an MEI file from a music document, create various metadata for an MEI file, know where to locate additional MEI resources, identify and select appropriate tools and software for working with MEI, and render an MEI file. This workshop will not provide in-depth coverage of the MEI guidelines, applying XSLT or schemas, optical music recognition (OMR), other encoding standards, or details of the encoding of the music notation itself, which in this case is left to automatic conversion. This workshop will be most useful to encoders who are new to XML and encoding standards and who work with music or performance-related documents in cultural heritage institutions, or in areas such as metadata, preservation, digital humanities, music, and computational musicology.
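As a minimal illustration of the header-metadata step, the skeleton of an MEI file can be built with Python's standard library alone. The element names here follow the MEI guidelines (`mei`, `meiHead`, `fileDesc`, `titleStmt`, `pubStmt`), but the result is deliberately tiny and not schema-validated:

```python
import xml.etree.ElementTree as ET

MEI_NS = "http://www.music-encoding.org/ns/mei"
ET.register_namespace("", MEI_NS)  # serialize MEI as the default namespace

def q(tag):
    """Qualified tag name in the MEI namespace."""
    return f"{{{MEI_NS}}}{tag}"

# Minimal header: mei > meiHead > fileDesc > titleStmt / pubStmt
mei = ET.Element(q("mei"))
head = ET.SubElement(mei, q("meiHead"))
file_desc = ET.SubElement(head, q("fileDesc"))
title_stmt = ET.SubElement(file_desc, q("titleStmt"))
ET.SubElement(title_stmt, q("title")).text = "Example Song"
ET.SubElement(file_desc, q("pubStmt"))  # publication metadata would go here
ET.SubElement(mei, q("music"))          # encoded notation would go here

xml_out = ET.tostring(mei, encoding="unicode")
```

Rendering such a file in the browser is then handled by Verovio, as covered in the workshop.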

Please note: this workshop will be most useful to attendees with a basic ability to read music notation.

 
2:00pm - 3:00pm Break-Duq: Coffee Break
Coffee break is for registered participants of conference workshops and meetings only.
Duquesne University, Power Center, Shepperson Suite, 5th floor 
6:30pm - 9:00pm #Opening: Opening Reception
This event is a reserved event. Those who have registered will be confirmed at the door of the Museum for entrance to the event. There is no on-site registration at the Warhol.
The Andy Warhol Museum 
Date: Wednesday, 24/Jul/2019
8:00am - 4:00pm #Reg1: Registration/Check-In
Grand Ballroom Foyer A, Marriott City Center 
8:00am - 5:00pm #BookExhibit1: Book Exhibit 1
City Center A, Marriott City Center 
8:30am - 8:50am #Welcome: Welcome and Conference Opening
Grand Ballroom Foyer A, Marriott City Center 
9:00am - 10:30am #SA1: Embodied Data Paper Session 1
Session Chair: Quinn Dombrowski
Salons 4 & 5, Grand Ballroom, Marriott City Center 
 

XM<LGBT/>: Enacting the QueerOS

Abbie Levesque

Northeastern University, United States of America

In Debates in the Digital Humanities 2016, Barnett et al. laid out the groundwork for a software version of Kara Keeling’s “Queer OS.” The digital humanities have recently been inquiring into the ways tools and systems think, and how those ways of thinking are affected by the cultures that created those tools and systems. If tools and systems are created to think like hegemonic groups do, then they re-inscribe the values of the hegemony onto their data. When Kara Keeling called for the initial “Queer OS,” she was using OS as a way to refer to a framework of thinking, not necessarily actualized software. And notably, Barnett et al. did not actually build a QueerOS. In a research project on queer writing center practices, I built a coding language called XM<LGBT/> as part of the qualitative coding process. With this coding, I have attempted to put into practice a qualitative research methodology for writing center research that uses a queer logic system and is coded with the LGBTAQ community’s interests in mind. That is, I have built a small enactment of the call for a QueerOS. I wish to discuss the applications of queer computer systems in rhetoric and composition research work, and the ways this differed from traditional uses of qualitative coding. I also wish to discuss the limitations of the system and possible future directions for both queered methodologies and queer coding systems.



Tejidos Autómatas: Latin American Indigenous Textile Symbology through Cellular Automaton Models

Iván Terceros

CIESPAL, Ecuador

The act of programming rests on the idea of accepting the binary as a constitutive engine of life, a Western convention rooted in dialectics, through which the world is understood in beginnings and endings, on and off, good and evil, ones and zeros. Although interpreting the world involves highly complex structures, binarity persists as the conventional, micro-level, foundational act. This is a fundamental reflection for the study of Decolonial Computing.

TejidosAutómatas is a project developed as a series of workshops in which participants from diverse fields and cultural backgrounds reflect on the political and cultural conditions of the philosophy of technology as a hegemonic Western construction, and attempt to propose other ways of thinking about technology through Indigenous philosophical systems, expressed in the study of the symbology of Indigenous weavings as a source of inspiration for alternative coding systems.

This series of experiments uses cellular automaton models, particularly John Horton Conway's Game of Life, as a concrete abstraction of the application of defined rules to a social system within the binary, which is then modified according to the philosophical foundations of various Latin American Indigenous peoples (primarily Andean).

Over a couple of weeks, the workshop draws on several fields of study: basic foundations of general systems theory, Niklas Luhmann's theory of social systems, reflections on the coloniality of knowledge from decolonial thought, semiotic and anthropological studies of Indigenous textile design, basic programming with P5.js, structures of Indigenous cosmology, and cellular automaton models, closing with sessions devoted to constructing hypothetical designs of social models based on Indigenous philosophical assumptions, to be tested in new cellular automaton models that interpret a digital Indigenous weaving.
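The Game of Life that anchors these experiments can be sketched in a few lines. The following minimal Python version is illustrative only (the workshops themselves use P5.js); it applies Conway's standard rules on a wrapping grid:

```python
# One step of Conway's Game of Life on a toroidal (wrapping) grid of 0s and 1s.
def step(grid):
    rows, cols = len(grid), len(grid[0])

    def live_neighbors(r, c):
        # Count the eight surrounding cells, wrapping at the edges.
        return sum(
            grid[(r + dr) % rows][(c + dc) % cols]
            for dr in (-1, 0, 1)
            for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)
        )

    # A live cell survives with 2 or 3 neighbors; a dead cell is born with exactly 3.
    return [
        [
            1 if (n := live_neighbors(r, c)) == 3 or (n == 2 and grid[r][c]) else 0
            for c in range(cols)
        ]
        for r in range(rows)
    ]
```

A vertical "blinker" of three live cells, for instance, flips to horizontal on each step; the workshop's hypothetical models then replace Conway's rules with ones drawn from Indigenous philosophical systems.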



The Ethics of (Digital Humanities) Algorithms: Toward a Code of Ethics for Algorithms in the Digital Humanities

Clifford B. Anderson

Vanderbilt University, United States of America

What is algorithmic ethics for digital humanists? This paper explores the nascent field of “algorithmic ethics”[1] and its potential for shaping research and practice in the digital humanities.

The ubiquity of computational systems in our lifeworld is bringing scholarly attention to the societal effects of algorithms. Ed Finn,[2] Hannah Fry,[3] Safiya Umoja Noble,[4] among others, have shown that algorithms are not socially neutral, illustrating how they reflect, shape, and reinforce cultural prejudices. How should digital humanists identify and categorize ethically complex algorithms?

Computer scientists use the so-called ‘Big O’ notation to represent the time and space complexity of algorithms. They classify algorithms, for instance, as constant, logarithmic, linear, linearithmic, quadratic, etc., aiming to understand how they scale with inputs. In essence, computer scientists categorize algorithms by abstracting from concrete details of implementation like the operating system, processor(s), and other empirical characteristics of the computing environment. Instead, they focus on the number of operations that algorithms take as they scale with inputs, considering the 'worst case' scenario to discern the upper bounds of their computational complexity.
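This worst-case counting can be made concrete with a toy comparison (a hedged Python sketch, not drawn from the paper): when the target is absent, linear search inspects every element, O(n), while binary search halves the range at each step, O(log n).

```python
def linear_search(items, target):
    """Worst case inspects every element: O(n) comparisons."""
    ops = 0
    for x in items:
        ops += 1
        if x == target:
            break
    return ops

def binary_search(items, target):
    """Worst case halves the sorted range each step: O(log n) comparisons."""
    lo, hi, ops = 0, len(items) - 1, 0
    while lo <= hi:
        ops += 1
        mid = (lo + hi) // 2
        if items[mid] == target:
            break
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return ops
```

On a sorted list of 1,024 items with an absent target, linear search performs 1,024 comparisons and binary search around 10: the scaling behavior, not the constants, is what the notation captures.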

Might digital humanists develop analogous notation for categorizing algorithms according to their potential social effects at scale? Should digital humanists ask a similar question when evaluating the ethical complexity of algorithms, namely, how algorithms might negatively affect human actors under 'worst case' scenarios as they scale? However, asking such a question requires digital humanists to retain and study the empirical context in which algorithms are deployed, a crucial disanalogy from the way computer scientists employ the 'Big O' notation to indicate computational complexity.

Drawing on the growing literature on algorithmic ethics,[5] this paper suggests ways of working toward a code of ethics for algorithms based on identifying potential 'worst case' scenarios at different scales in order to anticipate bias and mitigate social harm from the use of algorithms in the digital humanities.

Works Cited

[1] Felicitas Kraemer, Kees van Overveld, and Martin Peterson, “Is There an Ethics of Algorithms?” Ethics and Information Technology 13, no. 3 (September 2011): 251–60, doi:10.1007/s10676-010-9233-7.

[2] Ed Finn, What Algorithms Want: Imagination in the Age of Computing (Cambridge, MA: MIT Press, 2017).

[3] Hannah Fry, Hello World: Being Human in the Age of Algorithms (New York: W. W. Norton & Company, 2018).

[4] Safiya Noble, Algorithms of Oppression: How Search Engines Reinforce Racism (New York: NYU Press, 2018).

[5] Brent Daniel Mittelstadt et al., “The Ethics of Algorithms: Mapping the Debate,” Big Data & Society 3, no. 2 (December 2016): 12, doi:10.1177/2053951716679679.

 
9:00am - 10:30am#SA2: Nuances of Data Roundtable
Session Chair: Chris Alen Sula
Marquis B, Marriott City Center 
 

Nuances of Data: What Can DH Contribute?

Chris Alen Sula

Pratt Institute, United States of America

Critics, both inside and outside of the digital humanities, have noted a range of ways in which data tends to encode, reinscribe, perpetuate, amplify, and even favor existing power structures—or, at the very least, that data is limited in its radical potential for dismantling those structures. These criticisms extend beyond databases themselves and include related processes of analysis and visualization. It is often suggested that digital humanists can offer special contributions to these discussions, in part because of their disciplinary perspectives and in part because their data provide particularly useful test cases: historical data, with its attendant ambiguities; data about people and their identities; data generated within or about social change; and so on. It is similarly suggested that scrutinizing this "small data" will benefit discussions of "big data" writ large in our world today.

This roundtable will begin with a broad framing of data challenges noted in the DH literature, followed by discussion with participants about nuances of their work with data and how those nuances might prove instructive beyond the (digital) humanities. We will consider both cases of success, in which participants have developed novel ways of working with data, as well as cases of blockage or failure, which perhaps point to limitations on what can (or should) be done with data. Participants should come prepared to discuss past and present examples of their work with databases, data collection, algorithms, analysis, visualization, data management, data governance, or other related topics.

 
9:00am - 10:30am#SA3: Space and Place Paper Session 1
Session Chair: Katherine Walden
Marquis A, Marriott City Center 
 

Engaging the Archive and its Absences: Lessons From New York City’s Nineteenth-Century Spanish-Language Press

Kelley Kreitz

Pace University, United States of America

This presentation considers the use of geospatial visualization as a means of recovering the Spanish-language press of late nineteenth-century New York City. Although New York City served as a thriving Spanish-language publishing center in the nineteenth century (and many of the leading institutions of that community held offices in the neighborhood surrounding Park Row during its heyday as the center of the city's popular English-language newspapers), much of the history of New York's Spanish-language press during that period remains understudied. Moreover, with most of the sources for uncovering this history scattered (often spottily) across archives, few resources exist for engaging students with the early history of Latinx writing in New York City.

"Engaging the Archive and its Absences" draws on a mapping project, C19LatinoNYC.org, that I have been conducting with students in introductory Latinx literature courses, which involves plotting addresses found in archival sources to reimagine the community of writers, editors, printers, and booksellers who once led New York's nineteenth-century Latinx press. I will consider cultural mapping as a research and pedagogical tool for confronting absences in the archive and for making history not just knowable, but also teachable in new ways that enable students to critique and confront structural inequality and systematic oppression. This presentation speaks especially to those who research and teach courses in Latinx Studies. It is also meant to spark interdisciplinary conversation, especially among those working in adjacent fields that must confront absences and omissions in the archive, including hemispheric studies, black Atlantic studies, and indigenous studies.



OpenGazAm: Digitization and Toponym Resolution for a Historical Gazetteer Of The Early Modern Americas

Rombert Stapel1, Ashkan Ashkpour1, Martin Reynaert2

1International Institute of Social History (Netherlands); 2Meertens Instituut (Netherlands), Tilburg University (Netherlands)

In 1797, Boston-based geographer Jedidiah Morse published the first edition of his momentous ‘The American Gazetteer’. It includes around 7,000 unique place name descriptions in the newly founded United States and in the European colonies in both North and South America, and the Caribbean.

The American Gazetteer provides a unique contemporary view of the Early Modern American continents. Its entries range from just a couple of words to several pages and contain basic information on the geographic location and administrative hierarchies of the localities, as well as descriptive notes. Much emphasis is placed on distances, navigability of waterways, types of traded commodities, climates, facilities, and so forth – all relevant for merchants seeking new fortunes.

The goal of the OpenGazAm-project has been to create a Linked Open Data Gazetteer that will be interoperable with the World Historical Gazetteer[1] and Pelagios.[2] The HGIS de las Indias, a recently finished GIS and Linked Open Data representation of an eighteenth-century gazetteer of the Spanish Americas, is already incorporated in both platforms.[3] It will be linked to Morse’s contemporary gazetteer, thus creating a valuable combined resource. Digital historical gazetteers such as the HGIS de las Indias and The American Gazetteer are indispensable in modern humanities research. Existing non-historical digital gazetteers, such as GeoNames, have much difficulty in identifying and disambiguating historical toponyms. Spelling variations, changing place names, and discontinued localities, are omnipresent in historical sources and hamper quick and easy identification of places.

In this paper, we will present the results of the OpenGazAm project and focus on two of the main challenges in its creation: (1) our approach to the digitization of the printed text, and (2) an evaluation of toponym resolution techniques for correctly identifying the places described.

Digitization. No edition of the text exists. Therefore, we had to resort to OCR techniques in order to create a near-gold-standard version of the text. We will present a short evaluation of different techniques. By far the best results were achieved with the Handwritten Text Recognition toolkit Transkribus,[4] further enhanced by applying a novel toolkit for OCR post-correction, Text Induced Corpus Clean-up (TICCL).[5] TICCL is part of the Dutch national CLARIAH infrastructure for the humanities,[6] and was specially adapted for this project.

Toponym resolution. A particular challenge is identifying the correct toponym and disambiguating between similarly named entities. The American Gazetteer contains, for instance, 29 Washingtons and 9 Trinidads. We therefore need to automatically extract contextual information mentioned in the toponym descriptions (county, state, province, ‘… miles S.W. from …’, ‘along the river …’) in order to successfully disambiguate places. However, such contextual information is also time-specific. For example, the system must recognize that the ‘Huntsville in Georgia’ of 1797 is the present-day ‘Huntsville in Alabama’. We have manually identified 200 places and linked them to existing (mainly modern) digital gazetteers. Here, we will evaluate different techniques for toponym resolution in Morse’s gazetteer and make recommendations for best practices.
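The disambiguation step might be sketched as follows. This is a hypothetical illustration only: the candidate records, field names, and coordinates below are invented, not the project's actual data model.

```python
# Hypothetical candidate gazetteer records for one ambiguous toponym (invented data).
CANDIDATES = {
    "Washington": [
        {"state": "Georgia", "lat": 33.74, "lon": -82.74},
        {"state": "Pennsylvania", "lat": 40.17, "lon": -80.25},
        {"state": "Connecticut", "lat": 41.63, "lon": -73.31},
    ],
}

def resolve(toponym, entry_text):
    """Return the single candidate whose state appears in the entry text,
    or None when the context does not settle the ambiguity."""
    text = entry_text.lower()
    matches = [c for c in CANDIDATES.get(toponym, []) if c["state"].lower() in text]
    return matches[0] if len(matches) == 1 else None
```

Here `resolve("Washington", "a post-town in Wilkes county, Georgia")` picks the Georgia entry, while an entry with no usable context returns None, flagging it for manual review, much as the project's 200 hand-identified places serve as an evaluation set.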


[1]http://www.whgazetteer.org/

[2]http://commons.pelagios.org/

[3]http://www.hgis-indias.net/

[4]http://www.transkribus.eu/

[5]https://github.com/LanguageMachines/PICCL

[6]http://www.clariah.nl/



Changing Places: Using Spatio-Temporal Maps to Link Literary Texts with Movement

Anindita Basu Sempere

Université de Neuchâtel, Switzerland

We can study the relationship between a literary text and place in several ways, such as by close reading the text itself and prioritizing described places, or by visiting real places that are mentioned in a text and doing scholarship in situ. In digital humanities, text and place have often been paired through GIS mapping of literary locations, for example “LitMap” by Dr. Barbara Hui, which maps extracts of W.G. Sebald’s novel The Rings of Saturn. As Hui describes, computation allows us to “read literature spatially.”

With computation we can also add a time element so that texts can be read spatio-temporally. While we are accustomed to data visualizations that combine maps and timelines to present data in an experiential manner, such as population change over time, adding a time element to a literary map can evoke a sense of movement between places, which adds a new layer to literary mapmaking. By highlighting change or patterns of stability/instability, spatio-temporal maps link text to both place and movement.

In this paper, I will present two digital humanities projects that map literary archives spatio-temporally. “Summer of Darkness” is an iOS app about the summer when Frankenstein was written that comprises original letters, journal entries, and literary texts. “Mapping Bishop” is a web-based spatio-temporal map of Elizabeth Bishop’s texts from the early 1950s that is under active development. Both projects combine mapping, a textual archive, and a playback element to create an experiential or mimetic approach to literary scholarship. I will also discuss preliminary observations from my mapping of these archives.



An Experience of Digital Landscape: Iconography, Interaction, and Immersion

Jesse Rouse

UNC Pembroke, United States of America

The study of cultural landscapes often takes advantage of mixed methods approaches, incorporating qualitative description, quantitative measure, or both, to understand space and place. The separation between the qualitative and quantitative is often driven by the difference in the theoretical underpinnings or applied goals that guide the study’s attempt to understand this ‘real world’ place. When considering a digital landscape, studies tend to focus on the quantitative aspects of the representation, which is understandable as the landscape itself is now made up of nothing more than 1s and 0s. However, the person maneuvering through the digital landscape, as well as the writers and artists who created the landscape, likely do not think of the space in only a quantitative way, but instead focus on their experience of the place that is characterized in much the same way that someone experiencing a ‘natural’ environment would.

Throughout the majority of landscape studies there has been an emphasis on what can be physically seen, yielding descriptions that revolve around aspects such as what is visually present (or absent), the potential for interaction, and the observer’s visual perception. The Japanese landscape architect Tadahiko Higuchi offered an approach to considering the visual and spatial structure of landscape that is readily applicable to both real-world landscapes and digital, or virtual, landscapes.

This presentation will show how the visual indices and composites suggested by Higuchi can be applied to both real and virtual landscapes in three examples. The first will show how these indices and composites can be used to interpret a modern natural landscape. The second will show how these indices and composites can be used to consider a virtual landscape within a video game environment. The third will show how Higuchi’s tools, which were originally used to plan landscape experiences, can be used to design landscapes for a virtual environment. These three examples will highlight ways we can approach both the interpretation and creation of our experience of landscape and how the separation of the real and virtual is not so great when looking at our environments.

 
9:00am - 10:30am#SA4: Up Close and Personal Roundtable
Session Chair: Aimée Hope Morrison
Salon 2 & 3, Grand Ballroom, Marriott City Center 
 

Up Close and Personal: Ethical Social Media Research in a Distant and Big Data World

Aimée Morrison1, Philip Miletic1, Arun Jacob2, Eileen Mary Holowka3, Stormy Compeán Sweitzer4

1University of Waterloo; 2University of Toronto; 3Concordia University; 4Case Western Reserve University

This panel considers and proposes small-scale methodologies for ethical social media research. Normative “big data” and “distant reading” methods promote critical distance and objectivity as ideals. In our work together and individually, however, we find that these methods and these orientations to the source texts can be dehumanizing and exploitative towards the social media subjects under study. In our panel, we explore various alternatives to big data methods, or the re-working of these methods, to propose a practice of ethical, respectful, and productive “close reading” and “small data.” In other words, the panel describes and advocates for social media research that is up close and personal.

Philip Miletic's presentation will frame the ethical questions related to social media research in a DH frame as we see them, using examples of specific projects as well as a more general overview of the methodological and theoretical underpinnings of current practice. This will set the stage and offer context for the other papers to follow.

Arun Jacob's presentation will discuss geofences, the virtual perimeters established around target locations to transform locative media information into saleable alternative data. Jacob will speak about how geofencing is likely to reproduce existing societal disparities via data-driven discriminatory techniques, reconfiguring state power in new immaterial forms.

Eileen Mary Holowka's paper will examine how, since June 2018, the hashtag #SeeMyInvisible has brought together a collection of stories and a small archive of visually rendered invisible illnesses. This presentation uses #SeeMyInvisible to reflect on how best to ethically analyze and bear witness to the small, but important, everyday life writing practices of those with chronic invisible illnesses.

Stormy Compeán Sweitzer will discuss how social media has changed both the social narrative and physical representation of what it means to be a “biker chick,” as well as how female motorcyclists discover and organize riding communities. Data gained through social media offer situated and naturalistic insight into how such online and digitally-supported offline communities operate.

 
9:00am - 10:30am#SA5: Mixed Media Paper Session
Session Chair: Brandon Walsh
Marquis C, Marriott City Center 
 

Resisting Canon and Colonialism in the Digitization of "Oriental" Manuscripts

Caroline Schroeder

University of the Pacific, United States of America

The past decade has witnessed a wave of manuscript digitization projects initiated by museums, libraries, and individual scholars. This paper will address digitization of some primary sources essential for the study of late antiquity and Byzantine history and religion. Many of these initiatives will advance the study of Greek and Latin texts, as well as Hebrew—the primary languages of the Christian canon and the early to medieval Christian tradition in the West. Research in Syriac, Coptic, and Christian Arabic, however, is essential for understanding the development of religion in the late antique and early Medieval or Byzantine periods. The digitization of their sources has lagged behind. Focusing specifically on Coptic manuscripts—the texts of early and medieval Christian Egypt—this paper will explore the role of colonialism in the history of Coptic manuscript collections and archives and how to resist reinscribing both colonial epistemologies and traditional notions of “canon” after the “digital turn” in archival and manuscript studies.

Digitization has been heralded as a means of increasing access and availability of texts that may be inaccessible for various reasons, including the dispersal or dismemberment of the original archives or repository. Technology is seen as a possible means to reassemble these dismembered texts and archives, to reunite fragments of papyri and codices virtually online. It is also heralded as a way to save texts that still reside in the Middle East, in zones of political, military, or cultural conflict. Finally, some scholars hope it will bring more exposure to traditions that up until now have been seen as marginal to the dominant Greek and Latin traditions. This paper will interrogate two premises: first, that digitization can “recover” or “reconstruct” an original, now dismembered ancient or medieval archive; second, that current digitization efforts are disrupting the dominant canonical paradigms in the study of late antique, Byzantine, and Medieval religious history. The paper will argue that digitization cannot fully repatriate, reconstruct, or save damaged or dispersed physical archives. But the digital can transform our relationships with the sources of early Christianities if we pay critical attention to the limits of the digital, so as not to reify colonial archaeological, archival, and canonical practices in the digital realm.

This paper will first discuss the original collection of Coptic manuscripts in the context of colonial occupation of Egypt, excavations in Egypt, and the antiquities trade. It will then examine the progress, possibilities, and potential problems of digitization initiatives at specific libraries and museums with significant Coptic collections: British Library, Vatican, Bibliothèque Nationale, Österreichische Nationalbibliothek, etc. The paper will also assess the efforts of digital humanities projects based outside of those particular archival locations.

The paper draws on methodologies from post-colonial digital humanities, Native American digital humanities (especially regarding issues of repatriation and digitization of cultural heritage), archival theory, Coptic Studies, and manuscript studies. I am requesting a 20-minute paper time.



Visual Scripting of Craft Techniques to Create Digital Humanities Tools That Can Contribute to Design Methodology

Lisa Marks

Georgia Institute of Technology, United States of America

Computers have been connected to craft since the punch cards of Jacquard looms inspired the first conditional machine, the Analytical Engine. This history persists through the fields of digital humanities, with a new push to document historic craft through 3D modeling. This paper discusses a methodology that uses visual scripting and algorithmic modeling not only to document craft and its underlying mathematical systems, but also to contribute to continuing traditions of Cultural Heritage Crafts.

Given the current and unprecedented rate at which crafts are becoming endangered, it is important that the digital humanities go beyond documentation and offer methodologies to prevent extinction. Since craft resurgence is often sparked by small shifts in technique, giving makers a new piece of language in the lexicon of a craft can be invaluable. This is all the more important in the face of the US withdrawal from UNESCO, a leader in the documentation of Intangible Cultural Heritage.

This paper presents the techniques used to model interactive interfaces for a variety of craft techniques, documenting the geometry as well as creating new forms, including three-dimensional bobbin lace, knitting with semi-rigid materials, and combining tatting with knitting in novel structures. Notably, while computation plays a vital role, these new forms all leverage features that machines cannot make, keeping the craft, and the jobs it creates, in the hands of craftspeople.

I am requesting 20 minutes to present the techniques described in this paper, as well as select results from a new graduate course on these methods at an R1 university.

Keywords: 3D modeling, Algorithmic Modeling, Cultural Heritage, Craft, Design



PodcastRE - Saving and Studying New Sounds

Eric Hoyt, Jeremy Morris

University of Wisconsin, Madison, United States of America

Podcasting is just over ten years old as a media form and practice, but it has ushered in an explosion of amateur and professional cultural production. There are now over one million podcast feeds in over one hundred languages. There’s a podcast on almost every subject imaginable, from popular shows like Serial and Radiolab to lighter fare like the wrestling podcast Wrestlespective or shows that cover social issues like sexuality, identity, race, or politics (e.g., Strange Fruit, This Week in Blackness). We are in the midst of what many are calling a “Golden Age of Podcasts”: a moment where the choice for quality digital audio abounds, and where new voices and new listeners connect daily through earbuds, car stereos, or office computers. The audience keeps growing as well, exceeding 60 million American listeners last year.

Yet despite the excitement over this vital media form, and despite the plethora of content being produced, the sounds of podcasting’s nascent history remain mystifyingly difficult to analyze. There are few resources for anyone interested in researching the form, content, or history of podcasts and even fewer tools for preserving and analyzing the sonic artifacts being produced during this golden age of audio. What today’s podcasters are producing will have value in the future, not just for its content, but for what it tells us about audio’s longer history, about who has the right to communicate and by what means. We may be in a “Golden Age” of podcasts, but if we’re not making efforts to preserve and analyze these resources now, we’ll find ourselves in the same conundrum as many radio, film, or television historians: writing, researching, and thinking about a past they can’t fully see or hear.

In response to these problems, we are leading a data preservation and analytics initiative for podcasts. Supported by the University of Wisconsin-Madison and the NEH Office of Digital Humanities, PodcastRE (short for Podcast Research and openly accessible at http://podcastre.org) is the largest publicly oriented research collection of its kind: over one million audio files and metadata records from over 5,000 different podcast feeds (and growing daily). In the first half of our presentation, we will discuss technical and philosophical questions of archiving podcasts. We will reflect on the irony that one of our best methods for saving this born digital medium is tape (LTO) and address other preservation challenges of this form (for example, in an era of dynamic ad placement, which version and how many different versions of a podcast should be saved?). In the second half of the paper, we will showcase the data analytics and visualization features that the PodcastRE team has developed (two of which are already available at http://podcastre.org/analytics). Through analyzing podcast durations, sonic frequencies, and metadata at scale, we can detect over-represented and under-represented topics and voices within this rapidly evolving medium. This paper will be of interest to archivists and scholars working with sound, media, and contemporary digital culture.
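Harvesting metadata records of the kind described above typically starts from a podcast's RSS feed. The following is a minimal sketch assuming a standard RSS 2.0 feed with enclosure elements; it is not PodcastRE's actual pipeline, and the sample feed is invented:

```python
import xml.etree.ElementTree as ET

# A tiny invented RSS 2.0 fragment standing in for a real fetched feed.
SAMPLE_FEED = """<rss version="2.0"><channel><title>Demo Show</title>
<item><title>Episode 1</title>
<enclosure url="http://example.com/ep1.mp3" type="audio/mpeg"/></item>
</channel></rss>"""

def episodes(feed_xml):
    """Extract one metadata record (title and audio URL) per episode."""
    root = ET.fromstring(feed_xml)
    records = []
    for item in root.iter("item"):
        enclosure = item.find("enclosure")
        records.append({
            "title": item.findtext("title", default=""),
            "audio_url": enclosure.get("url") if enclosure is not None else None,
        })
    return records
```

An archive can then periodically re-fetch each feed, download the enclosure URLs, and store both the audio and the metadata, which is also where the dynamic-ad-placement question above bites: re-fetching the same URL later may yield a different file.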



Changing Lanes: A Reanimation of Shell Oil’s Carol Lane

Melissa Suzanne Dollman

University of North Carolina at Chapel Hill, United States of America

From 1947 to 1974 Shell Oil Company sponsored a public relations program that engaged single and married women drivers. They especially targeted married women who helped plan leisurely road trips for their families, and single “gals” who wanted to see the country. Over twenty different women portrayed its figurehead, the pseudonymous Carol Lane. As Shell’s women’s travel expert, Lane spoke to women’s clubs, professional associations, and station owners’ wives; “penned” a newspaper column, a book, and travel guides; and starred in sponsored films and made radio and television appearances. Through speaking engagements and live demonstrations she enticed women to drive, economically pack a single suitcase, organize road trips for their families, and amuse children while traveling. Shell Oil also honored civic-minded women with a traffic safety award in her name. I am researching each individual Lane performer’s biographical details to understand how they integrate into and inform the amalgamated biography. Furthermore, Lane was part of an entire network of people who were contacted by this public relations effort. As such, she presents a specific test case, defined by time and geography, with which to explore a unified campaign for engagement with women by a multinational corporation, specifically Shell Oil.

I will present the current status of my digital dissertation, performing a walk-through of the site and screening a clip from a video-essay chapter in progress. I will also discuss the early stages of implementing the prosopographical hybrid method that guides my own, and in future others’, research into Carol Lane as a composite and a network. On an ArcGIS platform I am developing my eventually public-facing digital dissertation, presenting the data (3,000+ magazine and newspaper articles, books, and biographical records) I have collected and input myself, and am utilizing it in a number of ways, including data visualization and videographic criticism. I look forward to input on the efficacy of the variety of multimedia factoids I am offering, such as maps, census tables, and raw, searchable, related data in a (currently) Airtable database.[i]


[i] “About Factoid Prosopography,” Factoids: A Site that Introduces Factoid Prosopography, https://factoid-dighum.kcl.ac.uk/.

 
9:00am - 4:00pm#Install1: Installations: Birding the Future and The Cybernetics Library
Scholars will be present from 10 am to 1 pm to answer questions regarding these installations.
Salon 6, Grand Ballroom, Marriott City Center 
 

Birding the Future

Krista Caballero1, Frank Ekeberg2

1Bard College, United States of America; 2Independent Artist

Birding the Future is an artwork that explores current extinction rates by specifically focusing on the warning abilities of birds as bioindicators of environmental change. The installation invites visitors to listen to endangered and extinct bird calls and to view visionary avian landscapes through stereographs, sculpture, and video.

Birds provide a unique window into the entanglements of our time. Unrestricted by human-imposed borders, approximately 5 billion birds migrate every year thereby linking cultures, countries, and ecologies and revealing issues collectively shared. It is also estimated that a third of all bird species will have disappeared by the end of this century. Declining bird populations in practically all habitat types signal profound changes over our entire planet, changes that affect our ecologically-bound cultural identities. Birding the Future poses three questions in response to this crisis: What does it mean that we can only see and hear extinct species through technology? What might happen as the messages of birds are increasingly being silenced? How can traditional ecological knowledge be combined with technological advances to surpass what any one way of knowing can offer?

Calls of endangered birds are extracted to create Morse code messages based upon tales, stories, and poetry in which birds speak to humans. These messages are combined with unmodified calls of extinct birds, which act as a memory of the past and underscore technological reproduction as the only means to hear certain species. Using a real-time control algorithm (Pd), the projected extinction rate is scaled to the duration of the exhibition by decreasing the density and diversity of bird calls.
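The scaling idea can be sketched outside Pd. Below is a minimal, purely illustrative Python version: the species count, exhibition length, and loss fraction are invented placeholders, not values from the installation.

```python
# Illustrative sketch (Python rather than Pd): thin the pool of audible
# bird calls over an exhibition to mirror a projected extinction rate.
# All numbers are hypothetical.

def calls_remaining(elapsed_s, exhibition_s, n_species=30, loss_fraction=1/3):
    """Number of species still audible after `elapsed_s` seconds.

    Linearly interpolates from the full pool at opening to the
    projected surviving fraction at closing.
    """
    progress = min(max(elapsed_s / exhibition_s, 0.0), 1.0)
    surviving = n_species * (1 - loss_fraction * progress)
    return max(int(round(surviving)), 0)

# Over an 8-hour (28,800 s) day, the soundscape thins from 30 species
# toward the projected surviving two thirds.
print(calls_remaining(0, 28800))       # 30
print(calls_remaining(28800, 28800))   # 20
```

A real-time patch would drive sample playback density with the same curve rather than a species count.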

A series of stereographs offer a loose narration through the soundscape. Popular from the mid-nineteenth century through the early twentieth century, the stereoscope has been chosen as the viewing instrument for its potential to heighten perceptual awareness and provide a historical link to human impact on the environment. The viewer’s gaze wanders back and forth between foreground and background, and by doing so continuously shifts one’s point of view within the frame. In this way the stereograph plays with the act of looking and the viewer is challenged to consider how the filters through which one looks then translate into ways knowledge is constructed. Composite photographs of real and imagined environments connect regional issues with a global perspective. On the back, textual analysis explores the complexities of our more-than-human world via poetry, data and other relevant habitat and behavioral information. To date there are six series: Queensland Australia, Arabian Peninsula, Norway, Mid-Atlantic USA, Frankfurt, and a series focused on laboratory birds.

For ACH 2019 we propose an interactive day-long installation that will include stereographs from each series and the sound installation described above. Birding the Future is highly adaptable and has been installed nationally and internationally in multiple types of configurations based upon particularities of the site. Artists will supply technical equipment and work with conference organizers to determine location.

To view artwork, please visit: https://www.birdingthefuture.net

Dependent upon space and interest, the following video could also be installed: https://vimeo.com/238204874



The Cybernetics Library: Revealing Systems of Exclusion

Sarah Hamerman1,2,3, Melanie Hoff1,4, Charles Eppley1,2,6, Sam Hart1,2, David Isaac Hecht1,5, Dan Taeyoung1,5,7

1Cybernetics Library; 2Avant.org; 3Princeton University Libraries; 4School for Poetic Computation; 5Prime Produce Apprenticeship Cooperative; 6Fordham University; 7Columbia University GSAPP

We propose a 4-day installation comprising a physical library collection, a digital interface, and a software simulation system. We are a research/practice collective that explores, examines, and critiques the history and legacy of cybernetic thought via the reciprocal embeddedness of techno-social systems and contemporary society. The installation aims to expose to users patterns of systemic bias latent within those systems and their use. The collection will be housed in custom-built, secure furniture and made accessible to all attendees of the conference.

Our collective comprises members from a diverse set of backgrounds and practices, including art, architecture, technology, publication, librarianship, gender studies, media/cultural studies, cooperatives, fabrication, design, simulation, queer studies, and more. We work on the project independent of institutional affiliations, but have had numerous successful collaborations, and organized the independent conference from which our ongoing project emerged.

From this outsider position, our project seeks to refigure and make more accessible the relationships between people, technologies, and society. The project has been manifested through activities such as community-oriented artistic installations, reading groups, workshops, and other public programs. The project also incorporates ongoing development of tools, platforms, and systems for enhancing, deepening, and extending engagement with the knowledge it organizes and to which it provides access. The project aspires to support its collaborators and users by serving as a connecting node for disparate communities that share intellectual or activist goals for exploring and advancing art, technology, and society.

The first version of the software simulation system used cataloging data to form associations between the usage histories of users of the library system, as well as linking content from works accessed during the initial conference to the topics presented by the speakers (in the context of a multi-layered visual representation). Another system, part of an installation at a program around the theme of "uncomputability", prompted users to participate in the construction of a collective poem by scanning in books from the collection which had meaningful associations for them. Another highly interactive implementation allowed users to engage their practices of sharing knowledge through metaphors of gardening: cultivation, care, attention, and community.

Our installations have been featured by The Queens Museum, The Distributed Web Summit by The Internet Archive, The School For Poetic Computation, Prime Produce, The Current Museum, vTaiwan, and Storefront for the Commons.

While the specific implementation of the installation for the ACH conference is still in the preliminary stages of development, we are building on the themes of direct engagement and collective, emergent explorations of structures of knowledge that can reveal hidden assumptions and biases latent in our approaches to technology and society. Based on our history of successful, memorable installations and collaborations, we are confident that this installation will contribute a valuable critical, conceptual, and technological resource to the conference. We hope to produce an ecology for new collaborations, unexpected encounters, and deeper explorations of the themes and methods of the conference, and would be happy to provide more detail soon.

 
10:30am - 10:45am#Break1: Break
Those with Poster and Demonstration contributions should set up during this break.
Grand Ballroom Foyer A, Marriott City Center 
10:45am - 12:15pm#Demo: Demonstration Session
City Center A, Marriott City Center 
 

Quick Red Flag Check Tool

Laurel Zuckerman

Na, France

Museums, collectors, auction houses, dealers, and art historians have long expressed frustration with the difficulty of verifying the provenance of artwork against a large list of looted items, red-flagged individuals, victims, or key words. This demonstration shows how the free, web-based document analysis software Voyant-Tools can serve as a potential solution to this problem. A series of step-by-step slides and videos will be presented with the aim of introducing participants to the tool and enabling them to apply it to their own documents, databases, and special cases.

In a series of proof-of-concept tests conducted in 2017-18, publicly available data sets concerning auction sales and museum collections were loaded into Voyant-Tools and analyzed automatically against 1000 last names drawn from the Art Looting Investigation Unit Final Report 1946 and saved in a reusable list. Data files were loaded "as is" without any additional formatting, classifying, cleansing, or harmonization of format. Voyant-Tools immediately created an interactive Word Cloud summary of red flag names in the provenance, as well as, at the granular level, the detailed text of each mention, essential for disambiguation and analysis. Flagged data were exportable and shareable. Furthermore, a whitelist, once created, could be named and shared for reuse by other users on other corpora.
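The core operation, matching provenance text against a list of flagged surnames, can be sketched in a few lines. This is not Voyant's implementation; the names and provenance strings below are invented placeholders (the real list came from the ALIU Final Report 1946).

```python
import re

# Hypothetical red-flag surnames; the actual demo uses ~1000 names
# from the ALIU Final Report 1946.
RED_FLAGS = {"mustermann", "dupont"}

def flag_mentions(provenance, red_flags=RED_FLAGS):
    """Return the red-flag surnames found in a provenance string."""
    tokens = {t.lower() for t in re.findall(r"[A-Za-zÀ-ÿ']+", provenance)}
    return sorted(tokens & red_flags)

# Invented example records.
records = [
    "Sold by Galerie Dupont, Paris, 1941; private collection.",
    "Acquired directly from the artist, 1952.",
]
for r in records:
    print(flag_mentions(r))  # ['dupont'], then []
```

As the abstract notes, a hit like this only flags a name for human disambiguation; it does not itself establish that the flagged individual is the one referenced.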

Successful tests with Voyant-Tools suggest that the identification of red flags in provenance can be performed quickly and inexpensively and, most important, by anyone. This could remove an important resource barrier for cash-strapped museums and collectors, as art historians need not be involved in this initial "gruntwork", but can be mobilized later in the process, for knowledge-intensive steps like disambiguation (identifying the individual referenced by the name), setting research priorities, and performing in-depth archival research on the flagged items. One of the side benefits observed was transparency, not only in the data but in the process itself.

The initial results were presented at the Art Crime Conference in Amelia, Italy in June 2018. Based on feedback, it appears that the main barrier to adopting the provenance red flag check tool is lack of familiarity and training. The purpose of the proposed workshop is to introduce the participants to Voyant and to ensure that anyone interested has sufficient knowledge to successfully use this powerful tool to check provenance texts, documents or data sets on their own.

Further information on the demo is available:

1) Video: How to check for Red Flag Names with Voyant Whitelists: https://youtu.be/KhXGcLBTUqM

2) Slides for the Red Flag Check Demo: https://docs.google.com/presentation/d/1PjVR_GDJKCtTvFcc_MtkzqZWg3Shu_FyKf_wj7Q8eQk/edit?usp=sharing

3) Data Sources for the Red Flag Check Demo: https://docs.google.com/document/d/16amV18LHTs8gymdv3UGSnLk7JdivWdW21_hHP8R81zE/edit?usp=sharing


CoAuthOR: Collaborative Authorship with an Opinionated Robot

Cody VanZandt

JPMorgan Chase & Co., United States of America

The authoring of text has always been - and will continue to be - mediated by technology. Recent advances in natural language processing and machine learning, however, have fundamentally altered the character of this mediation. Text messaging applications predict our next few words. Commercial “spell-checkers” provide nuanced advice on our style and tone. Email services even craft complete responses to our emails. Increasingly, these computational interventions in our writing are too complex for all but the most sophisticated users to fully understand.

To focus attention on these complex interventions, I present an interactive demonstration of an outrageously intrusive web-based text editor. The key feature of this editor is a sophisticated, opinionated, and obstinate AI that edits documents in real time without the consent of its human collaborator. Built from supervised and unsupervised machine learning technologies, this AI does not content itself with grammatical corrections. Indeed, it tweaks the document to conform to its own inscrutable stylistic preferences and generates new text via a pre-trained neural network. Before launching the text editor, would-be human co-authors can configure the intensity and character of the computational interventions (there will be AIs pre-trained to mimic Ernest Hemingway and other famously stylized authors), but once the writing begins, there is no turning back. The "undo" button is turned off.

I intend this interactive experience to be enlightening, amusing, and frustrating in equal parts. Furthermore, I hope it encourages participants to reflect on both the perils and opportunities of human-computer (collaborative?) text composition. For better or worse, our word processing tools are becoming cleverer and our bestselling authors (à la Robin Sloan) more willing to dabble in natural language generation. If we are to be prepared for the future of composition, then we must meet these peculiar technologies on their own terms and inside their own text editors.



The Original Mobile Games: Playable Game History on Mobiles

Stephen Jacobs

Rochester Institute of Technology, United States of America

The Original Mobile Games is an app for handheld mobile platforms that features digital emulations of up to 27 handheld puzzle/maze games originally produced from the late 1800s to the mid-1940s. The app is a co-production of The Strong National Museum of Play, the Rochester Institute of Technology, and educational games studio Second Avenue Learning, Inc.

The games were selected from the collections of The Strong, and the app features brief historical profiles of each game and photographs of the actual items in the museum's archives. The first game featured in the app, Pigs in Clover, was the "Angry Birds" of its day: the factory produced 8,000 per day and was still 20 days behind on orders. An informal tournament for best time held between five US senators resulted in newspaper articles and a political cartoon. A satirical article from a Chicago paper claimed the game had brought life in the U.S. to a standstill. While life hadn't stopped, a wave of knockoffs followed and an enduring model of gameplay had arrived. These "dexterity games" or "ball-in-maze" puzzles feature a wide range of gameplay and themes. Pop culture events of the times, such as the birth of the Dionne Quintuplets, the launching of the Queen Mary, and the international competition to reach the North Pole, were among the many moments celebrated in these games.
This paper will detail the initial efforts by RIT students and faculty to emulate the games, the process by which RIT and The Strong decided to productize the work and select the games, the decision to work with an independent commercial studio to help "polish" the app for commercial release, and the lessons learned throughout.



Fostering Open Scholarly Communities with Commons In A Box

Matthew Gold1, Charlie Edwards2, Jody Rosen2, Kathleen Fitzpatrick3, Paul Schact4, Kashema Hutchinson1

1The Graduate Center, CUNY, United States of America; 2New York City College of Technology, CUNY; 3Michigan State University; 4SUNY Geneseo


Commons In A Box (CBOX) (https://commonsinabox.org/) was developed by the team behind the CUNY Academic Commons, an academic social network for the 25 campuses of the City University of New York (CUNY). Built using the WordPress and BuddyPress open source publishing platforms, CBOX simplifies the process of creating commons spaces where members can discuss issues, collaborate, and share their work. The original version of the software, CBOX Classic, was developed with support from the Alfred P. Sloan Foundation, and powers sites for hundreds of groups and organizations worldwide, who use it to create social networks where members can collaborate on projects, build communities, publish research, and create repositories of knowledge.

Over the past two years, the Commons In A Box team has partnered with the New York City College of Technology, CUNY (City Tech) to create a new version of CBOX based on City Tech’s OpenLab (https://openlab.citytech.cuny.edu/), an open digital platform for teaching, learning, and collaboration that has been used by over 27,000 members of the City Tech community since its launch in 2011.

The result, CBOX OpenLab, offers a powerful and flexible open alternative to costly proprietary educational platforms, enabling faculty members, departments, and entire institutions to create commons spaces specifically designed for open learning.

Funded by a generous grant from the NEH Office of Digital Humanities, the CBOX OpenLab initiative seeks to enhance humanities education – and public understanding of humanities education – by enabling the work of students, faculty, and staff to be more visible and connected to the outside world. It also seeks to deepen engagement between the digital humanities and pedagogy by providing a process by which digital humanities practitioners can contribute software (plugins) to the project.

During this session, participants will share their experiences of fostering open scholarly communities with CBOX. These include: the Humanities Commons (https://hcommons.org/), a trusted, nonprofit network serving more than 15,000 scholars and practitioners in the humanities; the Futures Initiative (https://futuresinitiative.org/), which pursues greater equity and innovation in higher education through student-centered pedagogies, graduate student preparation, research, and advocacy; and KNIT (https://knit.ucsd.edu/), a digital commons for UC San Diego. We will introduce the CBOX OpenLab platform, illustrating its use cases with examples drawn from City Tech's OpenLab. Finally, we will engage attendees in discussion of the benefits and challenges of open scholarship, pedagogy, and community-building, exploring how they might adopt CBOX at their own institutions and contribute to its future development.

The panel includes graduate students, alt-ac professionals, and faculty members drawn from institutions across the country, who bring a wide range of perspectives to the discussion, and have deep experience in working hands-on to build open scholarly communities.

The session argues that it is vital for educational institutions to lead the way in putting free, open-source software in the hands of our students, faculty, and staff and empowering them to create and customize vibrant, attractive spaces where they can share their work with one another and the world.



Wikibabel: Procedural Knowledge Generation using Epistemology, Encyclopedias, and Machine Learning

Zach Coble

New York University, United States of America

Wikibabel is a digital art project that examines shifts in contemporary epistemology through an alternate version of Wikipedia. The site, a searchable database that is aesthetically and functionally similar to Wikipedia, is created with a process that uses machine learning and natural language processing to analyze the entirety of the existing online encyclopedia for its linguistic and structural style, and then creates new articles based on those patterns. The project employs parody and satirical critique to explore how Wikipedia's quest for a "neutral point of view," strained by the changes in credibility brought about by the social web, struggles to convey complex and controversial topics within the rigid article template's Harvard Outline format.

Wikibabel reinterprets a common practice in game design called procedural content generation, which refers to the programmatic generation of game content such as levels, graphics, and textures. When applied to knowledge rather than backdrops, Wikibabel creates pages that are grammatically correct, though unlikely to find a home in most encyclopedias. However, as Aristotle noted, nature abhors a vacuum, and when reading a Wikibabel article, the reader takes cues from its appearance as an encyclopedia to fill in the missing meaning. As a digital sculpture that strips down the encyclopedia to its bare essentials - the aesthetic cues, the article template, and the linguistic style - Wikibabel challenges western society's emphasis on epistemology based on logic and pragmatism, with results that are amusing, confusing, and, if you are not paying close attention, entirely believable.
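The abstract does not publish Wikibabel's pipeline, but the general idea of learning a corpus's surface style and emitting new text in that style can be illustrated with a toy bigram (Markov-chain) generator; the corpus below is an invented placeholder and the real project uses far more sophisticated NLP.

```python
import random
from collections import defaultdict

# Toy bigram model: learn word-to-word transitions from a corpus,
# then emit new text that mimics its surface style.
def train(corpus_words):
    model = defaultdict(list)
    for a, b in zip(corpus_words, corpus_words[1:]):
        model[a].append(b)
    return model

def generate(model, start, n_words=10, seed=0):
    rng = random.Random(seed)
    out = [start]
    for _ in range(n_words - 1):
        nxt = model.get(out[-1])
        if not nxt:
            break  # dead end: word never followed by anything in training
        out.append(rng.choice(nxt))
    return " ".join(out)

corpus = "the city is a city of bridges and the city endures".split()
model = train(corpus)
print(generate(model, "the"))
```

Every emitted transition was seen in training, so the output is locally plausible yet globally meaningless, which is precisely the gap the reader's encyclopedia expectations fill in.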

The project was my 2018 MA thesis project at the digital arts program at the Interactive Telecommunications Program (ITP) at New York University, and was deeply informed and guided by my experience as a librarian working in digital humanities. The site is accessible at http://wikibabel.world and is a creative approach to questions I regularly address in my professional work.



Seeker: A Visual Analytic Tool for Humanities Research

Heather Corrinne Dewey2, Beomjin Kim1, Jeffrey Malanson1, Benjamin Aeschliman1

1Purdue University Fort Wayne; 2Indiana University-Purdue University Indianapolis

Seeker is a new digital humanities tool for document analysis and visualization. It is the product of an interdisciplinary collaboration between Computer Science and Humanities scholars. Seeker is designed for exploratory search of a large volume of data and offers multi-tiered interfaces and data management features that allow users to locate, contextualize, and visualize the terms and concepts most relevant to their research, to see the correlations between those terms and all other terms appearing in the document set, and to identify the most important passages and documents that merit further study. Seeker analyzes a document set (that can include thousands of documents) to identify the most commonly used terms, including their overall frequency and the number of unique documents within which they appear. Users can also search for specific terms, for which the program will identify the location, and a list of all associated terms based on the frequency with which they appear in the same paragraph and the same sentence, and the average proximity of those terms to the user-selected term. These results are displayed statistically as well as visually. Seeker allows the user to assess the overall content of a large document set as well as the specific context in which user-selected terms appear.
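The frequency and co-occurrence statistics described above are straightforward to sketch. The following is not Seeker's code; the three-document corpus is an invented stand-in for the thousands of documents the tool targets, and co-occurrence is counted per document rather than per paragraph or sentence.

```python
from collections import Counter

# Invented mini-corpus standing in for a large document set.
docs = [
    "liberty and union now and forever",
    "the union of the states secures liberty",
    "commerce between the states",
]

def term_stats(docs):
    """Overall frequency and document frequency for every term."""
    total, doc_freq = Counter(), Counter()
    for d in docs:
        words = d.split()
        total.update(words)
        doc_freq.update(set(words))
    return total, doc_freq

def cooccurring(docs, term):
    """Terms appearing in the same document as `term`, with counts."""
    co = Counter()
    for d in docs:
        words = d.split()
        if term in words:
            co.update(w for w in words if w != term)
    return co

total, df = term_stats(docs)
print(total["union"], df["union"])        # overall count, document count
print(cooccurring(docs, "liberty").most_common(3))
```

Seeker additionally reports the average proximity of associated terms and renders the results visually; that layer sits on top of counts like these.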

This dual approach, along with filters that allow users to quickly organize a large document set into multiple user-defined categories (e.g., by chronology or author), can also be marshaled to support more focused analytical work, enabling users to perform a variety of analyses, including the identification of relevant themes; the comparison of multiple authors; and an assessment of historiographical, linguistic, and conceptual change over time. Seeker is unlike existing digital tools because scholars can use the program to determine which documents from a large set merit investigation, while most existing systems are geared toward organizing and analyzing a set of sources that has already been manually screened by the user.



Designing an Original Video Game in Academia

Patricia Seed, Jessica Kernan

University of California-Irvine, United States of America

This paper will highlight the differences between designing a video game for industry and designing one in an academic environment. The authors, a video game professional and an academic, collaborated to complete an educational game on a historical topic, which won a prize at a recent international competition. The challenges faced by industry and academia are distinct, and we will highlight those of each, providing an introduction to the video game design process.



The MV Tool: Embodying Interdisciplinary Research

Jenny Oyallon-Koloski, Michael Junokas

University of Illinois at Urbana-Champaign, United States of America

The movement visualization tool (mv tool) is a motion-capture playground that drives the mv lab, an embodied research environment to observe and analyze human movement patterns as rendered in cinematic space. The mv tool has two modes, each of which serves complementary research purposes. The capture mode allows users to record physical mover and virtual camera data, with the aim of progressively building a movement visualization database and relevant metadata. The interactive mode, which we will demonstrate, engenders an embodied research space for both rigorous and playful figure and camera movement experimentation. Real-time interaction with analytical concepts from Laban/Bartenieff Movement Studies (LBMS) and cinematographic frameworks is a central goal of this research, allowing participants a better understanding of their own movement patterns and the theoretical infrastructures shaping this exploratory space.

The mv tool’s interactive mode is operated by two agents: one moving in a Kinect-driven motion-capture space (the moving agent) and one operating the software and, when applicable, manipulating camera settings in the digital environment (computer/camera agent). A screen relaying live, interactive visualizations of an abstracted body “skeleton” allows the mover to see their movement pathways. Through a set of calibration movement sequences, the computer agent can visualize their movement interacting with various movement paradigms drawn from Laban Movement Analysis’ taxonomy of human movement and Rudolf Laban’s categorizations of where the body is moving in relation to its environment (Space).

The mv tool’s authors are also interested in studying how the camera moves in cinematic space, and the tool therefore allows both the moving agent and the computer/camera agent to manipulate the camera’s position relative to the mover in conjunction with cinematographic paradigms of camera movement. Camera movement in the mv tool can be restricted according to Laban’s categorization of movement directionality and pathways, allowing the abstracted skeleton and the frame to shift together in myriad ways. How does our observation of human movement change when the camera is still while capturing a movement phrase versus moving in parallel or in counterpoint—in any of the three dimensions—to that movement? The foundational concepts from LBMS serve as guiding principles for the tool’s creation, but we expect the development of this work to challenge and ultimately strengthen the empirical sustainability of the Laban material.

The mv lab participants are particularly interested in this tool’s ability to study historical research questions; beyond the experimentation with movement frameworks from Laban Movement Analysis, we seek to re-create examples of choreographed figure and camera movement in cinema to isolate those stylistic elements from other characteristics of film form, the complexities of mise-en-scène and sound, in particular. Camera movement’s range of motion in the mv tool can therefore be adapted and limited to better study historical cinematographic constraints, for example, by restricting the types of camera movement based on technological availability from a particular period. The accessibility and multifunctionality of the mv tool allows it to serve many academic purposes, from increased kinesthetic intelligence to the development of computer vision protocols for moving image analysis.



Scholarly Publishing with Manifold

Matthew Gold1, Jojo Karlin1, Liz Losh2, Jacque Wernimont3, Carole Stabile4

1The Graduate Center, CUNY, United States of America; 2College of William & Mary; 3Dartmouth College; 4University of Oregon

This panel brings together authors, editors, publishers, and educators exploring the Manifold scholarly platform as a space for publishing digital scholarly monographs, journals, archival materials, and open educational resources. The Manifold Scholarship project, an open-source platform funded by the Mellon Foundation (http://manifoldapp.org), invites new networked and iterative forms that have strong ties to print and support rich digital multimedia publishing with post-publication annotation and highlighting. The presenters will outline how the choice of the Manifold platform has impacted resource creation, curation, and engagement. Across a range of spaces -- the online publication of a book series; the publication of a collected edition; the creation of an OER repository for a university system; and the formation of a repository of archival materials of authors from underrepresented communities -- this panel will highlight how the Manifold platform has fostered new spaces for scholarly publishing.

Presenter 1 will provide an overview of the Manifold platform, locating its history in the open access movement and the turn towards web-based annotation of scholarly materials. This presentation will cover the origins of the project in the _Debates in the Digital Humanities_ book series and demonstrate some of the features of the platform, while also pointing to future directions that the project will undertake.

Presenters 2 and 3 will discuss how a platform can support the intersectional, feminist scholarship of the Reanimate publishing collective, particularly in terms of media accessibility. The Reanimate Project, part of the Manifold Digital Services pilot program, focuses on the publication of archival work by women working in media and engaged in activism from the 1930s to 1950s; it sheds light on untold stories of the influences of race, gender, class, and other axes of identity and oppression on women in media. However, much of this writing has never been published, and the market forces on academic publishing are structural obstacles to its recovery. This presentation will describe how the Manifold platform is being used to present these archival materials.

Presenters 4 and 5 will discuss the publication of _Bodies of Information: Intersectional Feminism and Digital Humanities_. Taking intersectional feminism as the starting point for doing digital humanities, _Bodies of Information_ is diverse in discipline, identity, location, and method. Presenters 4 & 5 will discuss the translation of the collected edition from print object to online publication, highlighting how the project highlights issues of materiality, values, embodiment, affect, labor, and situatedness.

Presenter 6 will discuss Manifold as a space for creating Open Educational Resources. Building Manifold projects with other graduate teaching fellows at the CUNY Graduate Center (teaching across many CUNY campuses: Baruch, Queens, Hunter, John Jay) and for the presenter's own class at Brooklyn College, the presenter will describe the process of creating texts that can be used and annotated by students across multiple departments, colleges, and universities.



XR in the Digital Humanities

Micki Kaufman1, Lynn Ramey2

1CUNY Graduate Center, United States of America; 2Vanderbilt University, United States of America

Computing capabilities for rendering high-quality three-dimensional graphics have progressed remarkably in recent years, largely in response to competition in the gaming and defense industries. While public awareness of and engagement with XR (Extended Reality), which comprises virtual reality (VR) and/or augmented reality (AR), have risen sharply, scholars are taking a measured and thoughtful approach. Engaging with the new technology, scholars remain meta-critical about how high-speed computational capabilities like XR can effectively represent the multiple dimensions in their digital humanities research. An increasing number of multidimensional projects by digital humanities scholars focus on the modeling and simulation of real, historical physical spaces, and/or the articulation of imaginary or data-derived spaces for pedagogy and research in the humanities. A common thread of the use of three-dimensional representations and techniques is that they are at once both extremely complex and stunningly intuitive, both to render and to interpret. The ability for DH to flourish while comprising such internal contradictions suggests the capabilities of multidimensional technology to distill and refine the essential points of complexity by articulating them in those dimensions. In this manner, scholarship in XR seeks to reveal the underlying essence of DH projects by employing rich, deep and immersive experiences in pedagogy, data visualization, modeling and simulation.

This proposal is envisioned as a walk-up booth/room with one to a few tables and/or headsets set up, allowing visitors to experience DH content in virtual reality. The content will demonstrate research and pedagogical content, 360° video, and other forms of immersive content for the humanities. One panelist will demo the exploration of three-dimensional interactive spaces for data visualization and storytelling, another will present content on VR and embodiment to understand medieval textual transmission, and there may be additional in-person presenters. However, irrespective of the number of 'panelists' in person, the panel can include content provided by scholars who are unable to attend. Technical limitations may require this content to be comparatively simple (360° video vs. interactive, for example), but it would allow more scholars to have ACH members and attendees engage with their work remotely. If accepted, the scope of the effort can and will be modified based on the available time frame and the presenters and/or sources for hardware and content that are identified.



A Digital Game for a Real-World French Course

Rachel Faerber-Ovaska

Youngstown State University, United States of America

In this poster presentation, I describe the classroom implementation of a hybrid digital and face-to-face learning game in two face-to-face undergraduate French courses. It is of interest to instructors considering game-based learning and blended learning.

Previous classroom research indicated some of my French language students felt isolated, overwhelmed, and confused about how to learn a foreign language. Foreign language anxiety (Ewald, 2007; Levine, 2003; Macintyre, Burns, & Jessome, 2011) and lack of language learning strategies (Chamot, 2005; Tran & Moni 2015) have been shown to impair classroom language learning. Furthermore, upon completion of the two-semester foreign language requirement, some students lack language-learning strategies to support ongoing, independent maintenance and development of their cultural and linguistic skills. Drawing on Gee’s (2003) game learning principles, I created a game to address these problems in my context.

Created with free digital tools and resources, this game's main objectives are to support and augment the learning of French language and culture, and to promote student acquisition of effective language learning habits and strategies. To win the game, structured as a multi-week race with weekly progress check-ins, diverse teams of student players select French learning activities, carry them out (outside of the classroom), and finally document them on the digital game board shared with the class. The activities teams may select are designed to promote student-led out-of-school learning, peer collaboration, and co-regulation of learning, with an eye to improving and extending learning outcomes. All activities involve learner interaction with physical or virtual francophone cultural and linguistic resources.

The game’s effects are evaluated via learner survey responses and feedback, learner artifacts, and instructor observations. Excerpts from the completed set of game activities, and the basic digital game board, are available online for viewing. Implications of the study are discussed in terms of improving classroom learning and engagement via digital pedagogy application, incentivizing student-led learning, as well as practical observations for effective creation and implementation of similar learning games in other courses.



The Caselaw Access Project: A Complete Data Set of United States Caselaw

Jack Cushman

Harvard Library Innovation Lab, United States of America

This session presents the Caselaw Access Project (https://case.law), a complete collection of all precedential court decisions decided in United States jurisdictions between 1658 and 2018. The data is available as structured text and metadata, either by individual API calls or bulk download.

CAP is a significant data set for digital humanities, representing the entirety of United States common law. Setting aside the legal significance of caselaw, CAP also represents some 6.5 million instances, across three centuries, of a human describing a moral/ethical dispute and its resolution.

The session will present the practical use of the CAP data set and early applications, as well as the history of its creation, lessons for construction of similar data sets, and considerations and limitations in its use.
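The API access described above can be sketched minimally as constructing a query URL against the project's REST endpoint. The endpoint path and parameter names (`search`, `jurisdiction`, `full_case`) are assumptions based on the project's public API; consult https://case.law for the authoritative reference.

```python
from urllib.parse import urlencode

# Assumed CAP API endpoint; see https://case.law for the official docs.
BASE = "https://api.case.law/v1/cases/"

def build_query(search_term, jurisdiction=None, full_case=False):
    """Return a CAP API URL for a full-text case search."""
    params = {"search": search_term}
    if jurisdiction:
        params["jurisdiction"] = jurisdiction
    if full_case:
        # Ask for the full structured text, not just metadata.
        params["full_case"] = "true"
    return BASE + "?" + urlencode(params)

print(build_query("habeas corpus", jurisdiction="us", full_case=True))
```

The same data is also available as bulk downloads, which is the more practical route for corpus-scale analysis.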



Psychasthenia 3: Dupes

Victoria Szabo1, Joyce Rudinsky2

1Duke University, United States of America; 2University of North Carolina, Chapel Hill, United States of America

Psychasthenia 3: Dupes is a Unity-based videogame art project that explores the hard problem of how we can retain awareness of the narrow-casted nature of our everyday lives in the face of ubiquitous data-collection, analysis, and digital remediation of everyday life. The game draws upon challenging workplace relationships and gamified assessment environments, revealing the ubiquity of data shadow construction, the erosion of personal privacy, and the amplified power of the external instantiation of an avatar self. Dupes is set in a dystopic, yet banal, workplace environment, where every interaction, whether “in person” or online, is logged and judged against a series of internal evaluation factors. These success factors are in turn revealed at the end of the game in the form of a comprehensive Success Report, which resembles a credit report in its presentation and measures, and which forecasts what your ultimate workplace fate will be.

The seeming premise of the game is that you as the user must complete an HR personality test before the end of a gameworld workday. However, you are continually interrupted with other demands. Over the course of the day you visit the company shrink, attend a staff meeting, stop by the communal water cooler, and are summoned for a meeting with the boss, followed by a dispiriting trudge back to your basement cube. During these side trips, your interactions with archetypal co-workers are secretly logged, the interactions themselves playing a critical part in building up your “success” profile. Each character reflects a different workplace archetype: The Psychotic, The Artiste, The Narcissist, The Celebrity, The Sophist, The Bombast, The Charismatic, The Dominator, The Ingenue, The Shopper, the Sycophant, the Melancholic, and the Egoist. Interactions with each character reveal his or her core attributes, with your responses increasingly limited as the day goes on. During the endgame, these archetypal figures recombine into a modern day tarot, augmenting and illuminating the success index ostensibly compiled from the formal test. The system reveals the characters representing your spiritual twin and your nemesis, with a numerical Success Index derived via the OCEAN Five Factors of Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism - but here re-imagined as Gullibility, Grinding, Gladhanding, Subjugation, and Internalization.

The revealed estrangement of the holistic individual from authentic human experience is predicated on the assumption that nothing within the workplace remains outside the evaluatory system. “Human” indirection within the game reveals the extent to which a gamified, logged quotidian experience becomes subject to exploitation and summary judgements reappropriated into an evaluatory matrix. Dupes also ultimately complicates our ever-more-entangled relationships with computer-mediated communication by leaving the user in the uneasy position of not being sure whether they are merely individuals playing a game, or whether they are being logged and judged through their game interactions with a larger purpose in mind. The subject position of the user as a Dupe puts him or her back into the endless regression of surveillance and recuperation, making us as the game’s creators also inherently complicit with the system.



Co-creating Affect: Towards an Ontology of Joy

Nikki Stevens1, Molly Morin2, Liz Lessner3, Jason Barrett-Fox2

1Arizona State University, United States of America; 2Weber State University; 3George Mason University

Inspired by the literature and practices of posthumanism, feminist technoscience, and agential, relational, and sensorial objects, this project offers an initial inquiry into the deliberate and collaborative production of joy. Like knowledge, joy is communal, contextual, evolving, relational. Scholars like Rosi Braidotti and Katherine Hayles problematize the primacy of hierarchical, anthropocentric perspectives on relational networks. Inspired by these feminist/posthumanist approaches, we are moved to ask how joy can be a “valid” product of human-object entanglements. This experimental approach takes frameworks from STS and posthumanism and remixes them to place joy as a first-order product (opening the door for other affective experiences to also be valid productions of human-material entanglements, not to mention future sites of and partners to knowledge production).

Scholarly networks develop and transmit knowledge as their primary product, but sculptural, interactive objects create a platform for the affective, illogical, and uncanny to be developed and translated in ways that support transmission within social, material and, importantly, scholarly networks. This project proposes that the interaction between art and technology might unseat the hegemony of “knowledge” as the primary product of scholarly engagement.

Art exists in the liminal space between rational and affective experiences. Its role as a boundary object shifts visibility, provokes affective experience, translates textual, tactile, symbolic, and visual information simultaneously. Plastic art media both respond to stimuli and create new propositions in their material form. By introducing new responsive means, networked and algorithmic technologies extend art’s capacity for active participation in cultural production. As two artists working with technology and interaction, a software engineer, and a rhetorician, we embrace the non-linear, phenomenological power of physical interaction with objects and harness it in the production of joy. We borrow from the art world’s poetic, performative, and tangible discursive practices to create space for a theory of joy.

This demo offers an interruption - of the ideas of scholarship as knowledge production, of the unidirectional format of many sessions - inviting attendees to join us in creating an entangled, embodied, collective experience of joy. Joy as institutional concern now resides mainly in the entertainment sector, and the research on joy is most often used in service of capitalism and enterprise (Wall Street sentiment analysis, Facebook quiz/online games as thinly veiled surveillance apparatus, etc.). We ask what knowledges may come when we center joy as necessary to the human inquiry the academy seeks to foster. This demo will take the form of a participatory sculptural assemblage of joy. As boundary objects, works like this might allow us to reflect on the multivalent experience of joy and begin to structure a theory of joy as integral to scholarship. We invite you to contribute to this initial inquiry through an accessible, participatory, feminist, and ultimately human(e) co-production of joy. While participants are invited to bring along their own materials and artifacts, diverse means for embodied joy collection will be available on site.



How to Keep Reading: An Interactive Panel on "Mediate: A Time-Based Media Annotation Tool"

Joel Burges, Emily Sherwood, Joshua Romphf, Solveiga Armoskaite, Darren Mueller, Patrick Sullivan

University of Rochester, United States of America

Media literacy is one of the most pressing concerns for research and teaching in the humanities due to the centrality of multi-modal content—images, sounds, and text—in our culture. From film and television to video games, music videos, social media, music, and podcasts, multimodal content is ubiquitous in our everyday lives. Mediate is a web-based platform through which researchers and teachers can pursue scholarly inquiry and curricular development that enhances media literacy about time-based media. It allows users to upload video or audio, generate automated markers, annotate the content on the basis of customizable schema, and produce real-time notes. Once the annotation process is finished, users have a wealth of preserved data and observations about the media; these often result in data visualizations in addition to traditional print analyses. The utility of Mediate ranges from students—and, indeed, computers—learning to close-read time-based media narratively, visually, and sonically to scholars working individually or collectively to reevaluate central questions at both microscopic and macroscopic scales in the history of moving images or sound recording. While Mediate was developed for film and media researchers, we have expanded our use cases to include scholars and courses in linguistics, literature, music history and theory, data science, and visual and cultural studies.

During this presentation, we will discuss the development and use cases for Mediate in media studies, linguistics, literary studies, music theory and history, and data science contexts, and provide an interactive demonstration of the current prototype in which audience members can use the tool. We will thus show how Mediate enables scholars and students to read in the following ways: to annotate film and television in a way that yields data—and deep understanding—about the formal tendencies of both mediums across time; to analyze grammatical structures in advertising as they relate to images, soundtracks, and possibly gestures; to unpack how the vocal inflection of an individual singer relates to the lyric content of a particular song; and to teach computers how to recognize sound effects and visual devices that in some sense require the human ability to read. Through these cases, we intend to make an argument for how Mediate reveals the persistent necessity for humanist forms of reading that have come under increasing attack intellectually and politically.

 
10:45am - 12:15pm#Poster: Poster Session
Grand Ballroom Foyer A, Marriott City Center 
 

Novels in the News: The Reprinting of Fiction in Nineteenth-Century Newspapers

Avery Blankenship, Ryan Cordell

Northeastern University, United States of America

The Viral Texts project (http://viraltexts.org) addresses the practice of cutting and pasting in the nineteenth-century press broadly, uncovering a range of widely-circulated news, “information literature,” poetry, and miscellany that constituted the bulk of newspaper content during the nineteenth century. However, our broad lens has obscured the circulation of fiction, though the newspaper was an essential vehicle for this genre. Whether through the publication of short stories, serialized novels, or excerpts, or through the extended quotations that appeared in reviews, the newspaper was many nineteenth-century readers’ primary fiction medium. This paper describes a new Viral Texts experiment using existing collections of nineteenth-century fiction as seed corpora, enabling us to identify newspaper reprints drawn specifically from this genre. We outline initial experiments using the Wright American Fiction archive (http://webapp1.dlib.indiana.edu/TEIgeneral/welcome.do?brand=wright) of nearly 3,000 American novels and story collections published 1851-1875 (also the period of greatest strength in our collected newspaper corpora) to identify the stories or novel chapters that circulated most widely in newspapers, and to analyze how these viral fictions remap the intersections of literary culture and mass media in the period.

The approach we outline here accounts for what fiction suffused everyday reading in the period as well as how those texts were used. Sometimes entire stories or novels would be reprinted in the press, but more often particular chapters or excerpts circulated unmoored from their source. Newspapers sometimes embedded segments from fiction within other kinds of texts (e.g. sermons, speeches) or used them to emphasize particular editorial points. Such uses are particularly salient from a literary-historical perspective, as they offer insight into which characters, scenes, or themes most resonated with editors and readers, or what kinds of cultural narratives emerged around texts: sometimes aligned with their canonized perception but sometimes not. For instance, preliminary experiments indicate that while chapters twelve, thirteen, and sixteen of Harriet Beecher Stowe’s novel, Uncle Tom’s Cabin, were reprinted thirty to thirty-one times, other chapters might only be reprinted once or twice. The most reprinted chapters of Uncle Tom’s Cabin include what would become the novel’s most iconic scenes—George and Eliza Harris’ armed stand against slave catchers, for instance, or Uncle Tom’s horrific beating—but also shorter scenes of slavery’s horrors, such as a child taken from his mother at auction. A close examination of these and similarly “viral” chapters illuminates the popular interests of the general public—both the elements of fiction they found compelling and the issues they used fiction to address. Analyzing canonical texts also points us toward similar patterns that emerge in non-canonical texts—Female Life Among the Mormons, for example, or A Crown from the Spear—which early experiments show were reprinted far more often than more familiar literature.
To think across our corpus, we close by describing topic modeling and vector space analyses of reprinted chapters’ content that help surface the themes and textual features that drove readerly interest in popular newspaper fiction.
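The reprint identification described above depends on detecting text reuse between seed texts and newspaper pages. A minimal, hypothetical sketch of one common approach (word n-gram "shingling" with Jaccard overlap); this is an illustration of the general technique, not the project's actual pipeline:

```python
def shingles(text, n=5):
    """Return the set of word n-grams ('shingles') in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def reuse_score(seed, candidate, n=5):
    """Jaccard overlap of n-gram sets: a crude signal of reprinting."""
    a, b = shingles(seed, n), shingles(candidate, n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)
```

In practice, OCR noise in newspaper scans means exact-match shingling must be supplemented with fuzzier alignment, which is one reason reuse detection at this scale is a hard problem.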



The Odors of Mediation: A Case Study in 3D Printing

Jeffrey Moro

University of Maryland, College Park, United States of America

Digital media studies has long relied on techniques of vision, from its roots in cinema studies to contemporary strategies of data visualization. However, the field's recent orientation toward big data, infrastructural systems, and planetary ecology has challenged this focus on vision, as scholars grapple with the project of rendering such large scales within the human perceptual field. How might digital media studies tuned to an infrastructural or environmental key benefit from expanding its sensory toolkit, particularly by attending to the "lowest" or most "chemical" sense: smell? Smell is massively understudied and under-theorized in the media studies tradition, even as it's unavoidably present in media writ large, from the fetishized smell of old books to the ozone outgas of an overheated motherboard. In this paper, I sketch a methodology of smell for media studies and apply it to a single digital technology: the 3D printer. In doing so, I ask how smell enriches and refines the analysis of technical objects and digital cultures. 3D printers heat and extrude filament according to digital shapefiles, forming objects ranging from small trinkets to prosthetic limbs. Even more so than its actual functionality, the 3D printer has become one of the more potent cultural symbols of techno-innovation and the "maker" movement writ large. In operation, 3D printers release gases ranging from the pleasant-smelling to the absolutely wretched. I argue that these smells serve a dual function, both as toxicities that require management with ventilation and as material signifiers of participation in a broader techno-culture. In particular, smell reveals this culture as not solely confined to affectively "cool" Western maker culture, but as extending to the racialized manufacturing practices displaced into the developing world and now returning to the West in miniature through the 3D printer.
Attending to smell then connects material specificity to affective experience, in turn becoming a useful sense for engaging questions of pollution and toxicity that dog research into media manufacturing.



Mapping an Archipelago of Influence

Corey D Clawson

Rutgers University, United States of America

Archivepelago is a project visualizing the transmission and translation of notions of sexuality and gender by mapping networks of queer writers and artists (and early sexologists), bringing into relief the communities that developed through these networks. The project draws upon finding aids and biographic data, charting connections between these figures ranging from their correspondence to the works dedicated to and translated by one another. The project is intended to act as a resource for the public to understand the forces underpinning queer diaspora while encouraging scholars to rethink our conceptions of artistic influence beyond the misogynistic, heteronormative notions presented 35 years ago in Harold Bloom's The Anxiety of Influence.

In addition to drawing upon scholarly work on archipelagoes (Manuel Guzman, Michelle Stephens, Island Studies Journal), this project draws upon Digital Humanities projects visualizing and thinking about archived correspondence such as the Republic of Letters and Olive Schreiner Letters Online as well as conversations about recovering and amplifying history via Digital Humanities projects. This presentation will serve as an overview of the project and reflect upon its current phase (development of a graph database and prototyping) while inviting audience discussion regarding methods, theory, and how scholars can better understand and model queer literary influence.

Ultimately, Archivepelago’s main features would be 1) interactive network maps demonstrating relationships in terms of correspondence, translations of one another’s works, and shared demographic or psychographic characteristics such as religious affiliations; 2) interactive geographic maps depicting the migration of individual writers and artists, demonstrating emerging communities in metropolitan centers such as Paris, NYC, Algiers, and Mexico City; and 3) an online exhibit outlining key concepts drawing on well-known moments and figures (e.g., Langston Hughes, Gertrude Stein).



From Document to Data: Prosopography and Topography in the Tax Rolls of Medieval Paris

Nathan A. Daniels

Johns Hopkins University, United States of America

This poster presents the experimental stages of a project to create a digital edition of the tax rolls of medieval Paris. Levied on the city by King Philip IV between 1292 and 1300, and again in 1313, these documents provide a wealth of information about the individual people, demographics, and topography of the city. However, despite their importance, they have never been systematically published or digitized, and any analysis across them must currently be done by hand. This project, inspired by the Henry III Fine Rolls Project (finerollshenry3.org.uk) and the Map of Early Modern London (mapoflondon.uvic.ca), proposes to render the seven extant rolls in TEI/XML, where personal, occupational, topographical, and financial data are dynamically brought together and opened up for searching, cross-referencing, mapping, tabulating, and exporting. Although the entries in the rolls themselves seem simple at first glance—organized by parish and consisting at most of a name, occupation, and amount of tax owed—actually encoding them prompts a variety of questions for the inexperienced editor. Many of these relate to the conventions of digital editions: whether the edition should reflect the layout of the manuscript; how to handle corrections, errors, and abbreviations; and how to approach uncertainty—especially when identifying individuals across multiple tax rolls. Others relate to underlying technologies and frameworks: including detailed relationships (familial, occupational, geographical) is desirable and adds interoperability within the semantic web, but also means working with complex ontologies and data models. This poster explores these questions and others, and engages with the early concerns and pitfalls of creating a digital edition.
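A tax-roll entry of the kind described (a name, an occupation, and an amount owed) might be encoded and then parsed back into structured data roughly as follows. The element names, attributes, and the sample entry itself are illustrative assumptions, not the project's actual TEI schema or data:

```python
import xml.etree.ElementTree as ET

# Hypothetical TEI-style encoding of a single tax-roll entry;
# tags, attributes, and values are invented for illustration.
ENTRY = """
<item>
  <persName ref="#p0412">Jehan le Fevre</persName>
  <occupation>fevre</occupation>
  <measure type="tax" unit="sous">12</measure>
</item>
"""

def parse_entry(xml_text):
    """Extract (name, occupation, tax in sous) from one encoded entry."""
    root = ET.fromstring(xml_text)
    return (
        root.findtext("persName"),
        root.findtext("occupation"),
        int(root.findtext("measure")),
    )

print(parse_entry(ENTRY))
```

Once entries are structured this way, the searching, cross-referencing, and tabulating the abstract envisions become straightforward queries over the encoded data.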



Reimagining Romance: "The Legend of Korra" Critical Fandom Practices

Cara Marta Messina

Northeastern University, United States of America

Current fan studies research merges sociopolitical arguments and rhetoric studies, complicating how fans engage with texts and the values of these engagements. Scholars have articulated what “critical fandom” practices (Lothian) look like in terms of identity and representation (Ebony Elizabeth Thomas & Amy Stornaiuolo; Paul Booth; andre carrington). Texts like Fifty Shades of Grey, originally a Twilight fanfiction, demonstrate fans’ desires to explore diverse types of intimacies and relationships; the published Fifty Shades of Grey novels, as other fanfictions have done, reinscribe heteronormative notions of intimacy, sexuality, and gender. This presentation examines how fandom literacy practices can resist harmful hegemonic narratives around race, sexuality, and gender.

Online fan communities such as Archive of Our Own (AO3)–a fanfiction database–prioritize fan interpretations of a text over the text itself, which makes these spaces important to study and celebrate when thinking about critical fandom practices. I focus on one fandom in particular: The Legend of Korra (TLoK), a popular Nickelodeon children/young adult television show. What makes TLoK a crucial cultural text is that the main character, Korra, is a bisexual woman of color, which already resists mainstream narratives around race, sexuality, and gender. I have collected and scraped over 8,000 TLoK fanfictions published on AO3 with approval from the Organization for Transformative Works. Using this corpus, I will examine fan representations of intimacy, sexuality, and gender by cross-referencing computational text analysis with the “Ratings” and “Relationship” metadata. For example, how is intimacy portrayed in General Audience as opposed to Mature? In the TLoK fan community, the average General Audience fanfic word count is around 3,000, while the average Mature fanfic word count is around 20,000; this discrepancy suggests that there are generic patterns that fanfiction authors follow. By pairing these Ratings with the Relationship tags chosen for the texts, I will also examine what patterns occur in Mature texts that center around particular relationships and identities; for example, how is the relationship between Korra/Asami (two women) portrayed as opposed to Korra/Mako (a woman and man)? By doing this work, I hope to create a methodology that centralizes fan literacy practices in order to demonstrate how critical fandoms reimagine intimacy and romance.
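The rating-based comparison described here amounts to grouping fanfiction metadata by a field and averaging word counts. A minimal sketch with invented records (the field names and values below are illustrative, not drawn from the actual AO3 corpus):

```python
from collections import defaultdict

# Illustrative records only; not actual AO3 data.
fics = [
    {"rating": "General Audiences", "relationship": "Korra/Asami", "words": 2800},
    {"rating": "General Audiences", "relationship": "Korra/Mako", "words": 3200},
    {"rating": "Mature", "relationship": "Korra/Asami", "words": 21000},
    {"rating": "Mature", "relationship": "Korra/Mako", "words": 19000},
]

def mean_words_by(fics, key):
    """Average word count of fics grouped by a metadata field."""
    totals = defaultdict(lambda: [0, 0])  # field value -> [sum, count]
    for fic in fics:
        entry = totals[fic[key]]
        entry[0] += fic["words"]
        entry[1] += 1
    return {k: s / c for k, (s, c) in totals.items()}

print(mean_words_by(fics, "rating"))
```

Grouping by `"relationship"` instead of `"rating"` supports the pairing comparison (Korra/Asami versus Korra/Mako) the abstract proposes.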



What’s in a Face? Examining Historical Trends Through the Faces of a Mass Media Publication

Kathleen Brennan1, Ana Jofre1, Vincent Berardi2, Aisha Cornejo2

1SUNY Polytechnic Institute, United States of America; 2Chapman University, United States of America

The role of popular print media in broader social, political, and cultural landscapes has been widely studied in the field of digital humanities. Many of the existing, large-scale studies, however, have focused on text analysis rather than image analysis. This paper describes an interdisciplinary project that analyses images of human faces from the complete Time magazine archive. Our project combines quantitative and qualitative methods to study the images, as well as design methods to make our results broadly accessible. Our research team includes two faculty members, a postdoctoral researcher, and undergraduate and graduate students.

The first section of the paper provides an overview of our project: our goals, a description of our methodology, and results to date. In the second section, we discuss our rationale for focusing on Time magazine: it has been a mainstay publication for many decades, and has a well-documented corporate history. While it is widely held in libraries across the US, these holdings are primarily in the form of microfiche, which limits public accessibility. The final section outlines the unique aspects of using human faces as a conduit into the Time archive as well as into the broader social-historical context. For example, exploring changes in representations of gendered or racialized faces can capture both Time’s coverage and how it may have been a driver of events.

We attempt to shed new light on how and why to study this kind of archive, and what such research can offer both academics and the general public in terms of understanding the role of a publication like Time magazine in broader socio-political contexts.



Designing a Community Based Digital Archives Project and FYE Course Module

Sally A. Everson, Juliet Glenn-Callender, Levette Morris, Ohmar Morris

University of The Bahamas, Bahamas, The

I plan to present the early planning stages of developing a community-based digital archival project, parts of which are embedded in a first-year required composition course. In each semester of the Writing and Rhetoric course, research projects were assigned that aligned with themes being targeted for possible development of digital archives: Junkanoo and sites of cultural memory. The community digital archives project team includes an English professor (PI), Librarian (Co-PI), Assistant Librarian, Technology support staff (2), and a local practitioner (member of the Junkanoo group) who is also a professor of Chemistry. Community stakeholders have been identified and met with, including the owner of a defunct Junkanoo museum (and its contents), the leader of a Junkanoo group and one of the originators of the festival on the island, and the chair of the local Junkanoo Committee. The local newspaper (which does not have digital archives) has also been identified as a possible partner. Community members and leaders of Junkanoo have expressed an interest and desire to preserve the history of the Junkanoo festival on the island and to help educate the youth and general community about its value as a cultural tradition, given the recent decline in interest and participation among the younger generation. Fall 2018 saw the first iteration of student research projects to determine the interest and viability of developing archives in these thematic areas and to identify possible collaborators. Students were required to use primary sources held in the university library special collections, the public library, the local newspaper (paper) archives, or to conduct interviews with relevant persons in the community. These research papers were then used as sources for students to publish blog entries on Wordpress websites for each topic/theme. Models, resources, and input for proceeding with this project are sought, as members of the project team are new to digital humanities approaches.
In addition, although both the PI and Co-PI are Caribbeanists, both are foreigners, which presents additional challenges for a community-based project and for planning its sustainability.



The Digital Studies 101 Website: Developing and Using An Un-Textbook

Lee Skallerup Bessette1, Zach Whalen2, Brenta Blevins2

1Georgetown University, United States of America; 2University of Mary Washington, United States of America

When designing a course, one early decision instructors make is what text or texts to assign. Whether textbooks, individual essays, videos, or other materials, assigned texts provide students with a content introduction before class discussion, present an additional perspective on a topic, and offer sources for later reference, among other potential benefits. However, textbooks come with other limitations and deleterious effects, including cost (Moxley; Munson) and ideologies (Welch; Johnson-Eilola). In an era of rising tuition, fees, and student debt, educators (Smith & Casserly; Morris-Babb & Henderson) and states have argued in favor of open-source textbooks as one means of mitigating higher education financial impacts to students. Long before our own state developed a law addressing textbook costs and encouraging the adoption of low or no-cost open education resources, our department created a common website resource for all Digital Studies (DGST) 101 instructors and students. We found using the digital medium to communicate about digital studies topics not only a cost-savings measure, but an ethical approach to digital education for instructors, students, and the subject matter itself.

Using ethics as our touchstone, our panel discusses the historical and on-going development of our DGST 101 web resource, the benefits and challenges of an institutionally-created resource for the instructor-content developers, the advantages and concerns for students, and discuss how instructors and students use the website within the context of DGST 101 courses. Throughout the presentations, the panelists present a discussion of ethical impacts of balancing academic freedom across multiple instructors, developing sustainable content attuned to a rapidly changing digital context while negotiating issues such as supporting commercial, ad-based resources, long-term maintenance questions, issues of digital permanence/digital ephemera, and web labor issues.

Speaker 1 addresses the history of, intent, and development of the website. The speaker describes how the website matches the programmatic ethos.

Speaker 2 addresses the development of resources for the website and an instructor perspective on using the website.

Speaker 3 describes student uses of the website and the ethics behind those uses surrounding student choice, digital accessibility, and opportunities for discussion about fair use. While the term “academic freedom” generally appears in discussion of faculty research inquiry and instructional content development and presentation, we find the Digital Studies 101 website reconfigures academic freedom for students. Rather than students all focusing on the same materials, students choose from a curated list of web materials to develop their own expertise and to develop their own projects.

As Digital Humanities begins to interrogate DH pedagogy more critically, we hope that this model can provide inspiration and a model to adapt in a variety of formats and digital approaches.



Minority Representation in the Foreign Language Classroom: Teaching Languages Through Digital Engagement

Sumor Ziva Sheppard

Huston-Tillotson University, United States of America

In the texts used for Spanish Basic Language programs, cultural conversations often reinforce negative stereotypes and/or further marginalize minority groups. These texts alienate students while also giving a skewed, Eurocentric/mestizo, inaccurate, and monolithic view of the Spanish-speaking world. In my classes, I have changed this by bringing monolingual voices of protest and change into the classroom. We studied Costa Rica, for example, but through the lens of Afro-Costa Ricans fighting to be recognized on their census and exploring their daily reality in their native country.

My proposed 15-minute digital presentation would discuss methods for foreign language instructors to use primary sources effectively in their classes to build interactive digital databases their students can use to engage the culture of the target language through the minority lenses most textbooks lack. This is imperative, as students can then recognize the similarities of the struggles and landscapes across all the Americas.



Creating a Spatial History of London’s Public Drinking Fountains

Lisa Spiro

Rice University, United States of America

As Emma Jones notes, “Everyday demands for water in the city have greatly shaped the design, use and experience of our built environment” (78). In 1859, temperance advocates and social reformers began constructing public drinking fountains across London, hoping to offer the poor unpolluted water and a free alternative to alcohol. Soon they also built cattle, dog and sheep troughs to address animal welfare needs. Drawing from the detailed records of the Metropolitan Drinking Fountains and Cattle Trough Association (MDFCTA), I analyze how a private charity developed an important part of the public health infrastructure in Victorian London. By mapping the locations and attributes of these fountains and troughs, and by combining that information with other data sources such as economic and health records, we can examine what factors influenced their placement, including population size, level of poverty, rates of disease, and proximity to churches and parks. The location of fountains and troughs seems to reflect the MDFCTA’s desire to reach large numbers of people and animals, its pragmatic decisions in negotiating with government officials, the influence of donors, and its associations with the temperance, parks, and animal welfare movements. This project demonstrates how researchers can employ Geographical Information Systems (GIS) and data visualization along with archival research to discern patterns in the development and decay of urban infrastructure.

This work is supported by the Rice University Humanities Research Center’s Spatial Humanities Initiative.

Works Cited

Jones, Emma M. Parched City. John Hunt Publishing, 2013.



A Journey in Search of Knowledge: From the R Facet to the R4 Facets of the FAIR Principles for Identifying Digital Cultural Heritage

Nicola Barbuti, Stefano Ferilli

University of Bari Aldo Moro, Italy

The identification of Digital Cultural Heritage, officially recognized by the EU in 2014, necessarily requires resolving the significant critical issues concerning the preservation, stability, sustainability, usability, and reusability of digital resources across space and time.

It is urgent and essential to evolve the current approach to the digital, which today is still understood exclusively as a mediator for the valorization of analog cultural heritage, toward its redefinition as a cultural facies of contemporary identity.

To this end, a first-order issue to be addressed urgently is the reliability of digital resources and, in particular, of metadata.

Almost all the digital collections available today, whether digital libraries, digital archives, or audio-video collections, are heavily compromised by unreliability: they are incoherent, non-interoperable, and non-preservable precisely because the resources were generated with little attention to metadata and their descriptive content.

To make digital data more reliable, we propose that the R (Re-usable) of the FAIR Principles be redefined by quadrupling it into R4: Re-usable, Relevant, Reliable, and Resistant. These requirements would confer on digital data the value of Cultural Heritage, in that they would make them sustainable and permanent.

The methodological approach outlined above was applied experimentally in the project to digitize the historical archive of the publishing house G. Laterza & Figli, being published in the Puglia Digital Library of the Puglia Region[1]. The results show that the distinction between cultural computational artifacts and “consumer” digital products lies in the descriptive metadata, and in particular in the correct balance between:

- quantitative configuration: the balanced relationship between the choice and quantity of the elements and attributes of the metadata structure and the exhaustiveness of the information/knowledge about the resource and its life cycle to be provided in the descriptions;

- qualitative configuration: the balanced choice of the informational/cognitive level to be conferred on each description, and on the set of descriptions representing the resource and its life cycle, mediated in relation to the possible variations in the knowledge needs of both contemporary and future users.

The metadata schema was co-created with reference to the METS-SAN standard, integrated with metadata drawn from other standards based on ontologies and on both semantic and conceptual languages to complete the final structure.



Topics and Terms: Differential Vocabularies in Composition/Rhetoric Doctoral Dissertations

Benjamin Miller

University of Pittsburgh, United States of America

In this poster presentation, I identify vocabulary used to discuss related subjects in several thousand doctoral dissertations in Rhetoric, Composition and Writing Studies (RCWS) – a field whose hybrid and shifting names illustrate the risks of assuming easy transparency of terms. Drawing on LDA topic models of full-text dissertations in the field, followed by similarity clustering among topics, I examine both divergent associations of terms shared across content clusters and synonyms that associate distinctively with different topics within clusters. Consideration will be given to the role of institutional histories and methodologies in contributing to the way these vocabularies are distributed.
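The topic-similarity clustering step can be illustrated with a minimal sketch. The fragment below is not the author's pipeline: it assumes topic-word distributions have already been estimated (e.g., by LDA) and groups topics whose Jensen-Shannon divergence falls below an invented threshold, using toy distributions over a four-word vocabulary.

```python
import math
from itertools import combinations

def jensen_shannon(p, q):
    """Jensen-Shannon divergence between two topic-word distributions."""
    m = [(a + b) / 2 for a, b in zip(p, q)]
    def kl(x, y):
        return sum(a * math.log2(a / b) for a, b in zip(x, y) if a > 0)
    return (kl(p, m) + kl(q, m)) / 2

def cluster_topics(topics, threshold=0.1):
    """Greedy single-link grouping: topics closer than `threshold` share a cluster."""
    clusters = {i: {i} for i in range(len(topics))}
    for i, j in combinations(range(len(topics)), 2):
        if jensen_shannon(topics[i], topics[j]) < threshold:
            merged = clusters[i] | clusters[j]
            for k in merged:
                clusters[k] = merged
    return {frozenset(c) for c in clusters.values()}

# Toy topic-word distributions over a 4-word vocabulary (invented numbers).
topics = [
    [0.7, 0.1, 0.1, 0.1],   # topic 0: dominated by word 0
    [0.6, 0.2, 0.1, 0.1],   # topic 1: very similar to topic 0
    [0.1, 0.1, 0.1, 0.7],   # topic 2: dominated by word 3
]
print(cluster_topics(topics))
```

Topics 0 and 1 end up in one cluster; topic 2 stays alone, mirroring how "divergent associations of terms shared across content clusters" can only be examined once such groupings exist.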

This project follows on calls for research into graduate student identity and training in RCWS (Brown et al.) and into more empirical grounding for claims about the nature of the field (Haswell; Phelps and Ackerman), including more grounding in digital research (Johnson).



College Your Way: The Language of Marketing in Contemporary Commerce and Higher Education

Erik Simpson, Megan Tcheng

Grinnell College, United States of America

This poster comes from a collaborative team: one of us is a Professor of English with an interest in quantitative analysis of textual data; the other is an undergraduate English major and Neuroscience concentrator with work experience in our institution’s offices of Communications and Admissions.
We have compiled a dataset of text from college and university websites in five institutional categories: community colleges, regional colleges and universities, and national colleges and universities. Our investigation explores relationships between institutional type and public-facing marketing language. This approach reveals how concepts such as exclusivity, success, and choice migrate across market sectors. How, we ask, does the great menu of higher education in the United States present its options to prospective students? Moreover, how does the language used by these institutions present their core values?
Some institutions speak of quality; some speak of excellence. Across the board, the websites present a narrow vision of the academic fields doing research. The institutions of two sectors are much more likely than the others to speak of what “you,” the student, will do as an undergraduate, and the schools of one sector are markedly more likely to speak of diversity. Drawing on these and other cases, our poster presentation will address patterns in the marketing of higher education that have been difficult to perceive from our positions within the academy.



Towards Computational Analysis of Survivor Interviews about Holocaust and Massacre

Lu Xiao1, Steven High2, Liangqin Jiang3, Hao Yu4, Robert Mercer4, Jumayel Islam4, Wenchao Zhai1, Jianyi Liu1, Qingyao Yu1, Yuyu Ko1

1Syracuse University, United States of America; 2Concordia University, Canada; 3Nanjing University, China; 4University of Western Ontario, Canada

Eyewitness accounts of survivors of past mass violence are valued for their ability to educate and commemorate, to bring perpetrators to justice or counter denial, for societal reconciliation, and to contribute to social or intergenerational regeneration (Field, 2006). Personal stories are also valued for their moral or ethical force. According to Kay Schaffer and Sidonie Smith, this retelling is “one of the most potent vehicles for advancing human rights claims.” (Schaffer & Smith, 2004, p.1). Often, these stories become known to the public in the form of survivor interviews conducted by historians.

These survivor interviews are also valuable research data. Traditionally, qualitative content analysis and hermeneutics have been the main methods for analyzing these interview data, with a focus on better understanding the historical period and on discovering the impact of mass violence on survivors and their families. With the rapid development of computer and information technologies, digital environments are now central conduits for the global circulation of these stories, which allows first-person testimony to be increasingly used in human rights research and advocacy. The amount of interview data is increasing significantly, which may make it impractical to rely solely on traditional qualitative methods to analyze it. In addition, information technologies may be leveraged to foster new research questions that require computational analysis of the interview data and to contribute to the relevant research communities by bringing in new research perspectives and theoretical insights. To illustrate this aspect, we will discuss two ongoing projects in this panel that aim at developing information technologies to afford new research perspectives on survivor interviews about the Rwanda Genocide and the Nanking Massacre.

Rwanda Genocide Survivor Interviews –These interviews were collected for the Montreal Life Stories project (2006-12), for which Prof. Steven High was Principal Investigator. Five hundred Montrealers displaced by large-scale violence were interviewed using the “life story” approach, resulting in more than 2500 hours of video-recorded interviews. We have been developing tools to foster information retrieval with the data, identify interviews of similar topics, and analyze tensions in the interview process. We leverage machine learning, natural language processing, and information visualization techniques in these activities.
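As an illustration of the "interviews of similar topics" task, the following sketch (not the project's actual tooling; the transcript snippets are invented) scores pairwise similarity between tokenized transcripts with a simple TF-IDF cosine measure:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Compute smoothed TF-IDF vectors for a list of tokenized documents."""
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    n = len(docs)
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({w: (c / len(doc)) * math.log((1 + n) / (1 + df[w]))
                     for w, c in tf.items()})
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse vectors (dicts)."""
    dot = sum(u[w] * v.get(w, 0.0) for w in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Invented stand-ins for tokenized interview transcripts.
interviews = [
    "we fled the village at night and walked for days".split(),
    "at night we fled and walked across the border".split(),
    "the archive preserves recordings for future researchers".split(),
]
vecs = tfidf_vectors(interviews)
# The first two transcripts share vocabulary, so they should score highest.
```

Real systems would add stemming, stop-word removal, and richer models (topic models or embeddings), but the retrieval principle — rank interviews by vector similarity — is the same.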

Nanking Massacre Survivor Interviews – Survivors of the Nanking Massacre were interviewed multiple times over the last eighty years: in 1946, 1986, 2000, and 2017. In this project, we explore new research angles in the analysis, such as how interview topics changed over time to reflect different historical periods, and the effects of translation and language-analysis tools on understanding the interviews.

In this panel, we discuss these two survivor interview research initiatives and their progress, focusing on how the data characteristics and research contexts interact with the choice and development of computational technologies. In other words, the primary goal of this panel is to discuss with the audience issues related to leveraging computational techniques in analyzing sensitive interview data such as survivor interviews. The secondary goal is to report our progress on the two interview projects and to seek feedback from the audience.



Visualizing Uplift: Women of the Early Harlem Renaissance (1900-1922)

Amardeep Singh

Lehigh University, United States of America

“Women of the Early Harlem Renaissance: African American Women Writers, 1900–1922,” aims to digitize and annotate a limited array of primary texts, mainly poetry, and present these materials as a digital archive in the Scalar platform. The project aligns with what Kim Gallon has referred to as a “technology of recovery,” which is one of the core principles bridging African American literary studies and the digital humanities. The project uses Scalar’s visualization and tagging structures to explore stylistic, thematic, and social relationships among a small group of writers, as well as to explore the conversations these writers were having with established writers and editors. Several key themes have begun to emerge as the project has developed. The first of these is the confrontation with American racism, which impacted African American communities intensely in the 1910s; these poets document the emergence of racial violence and the response to that violence. Second, these poets show how the role of African American motherhood was evolving in the early years of the twentieth century, in part because of the stresses of raising children in a racist society. Third, the black Church became strongly connected to the movement for social justice at this time; and several, if not most, of the writers in this archive explored Christian themes in their anti-racist writing.

For this poster session, I will demonstrate the site I have been developing and give attendees an overview of the site's use of semantic tags and visualizations built around those tags (focusing on the three themes mentioned: racism, motherhood, and the black Church). I will also demonstrate features I am developing, such as geotemporal representations of this community of writers.



Documents to Data: The Evolution of Approaches to a Library Archive

Rebecca Sutton Koeser, Rebecca Munson, Joshua Kotin, Elspeth Green

Princeton University, United States of America

In Digital Humanities we speak of moving from “documents to data.” In many projects, this is literal, a process of extracting information or turning text into tokens suitable for computational analysis. For the Shakespeare and Company Project, it entailed a conceptual shift from thinking of archival materials as texts to be encoded and described, to thinking of them as data to be managed in a relational database.

This project is based on the Sylvia Beach papers, held at Princeton University, which document the privately owned lending library in Paris frequented by notable writers of the Lost Generation. Materials include logbooks with membership information and lending cards for a subset of members with addresses and borrowing histories.

This poster will present the history of a multi-year project in three phases, each with benefits, difficulties, and stakes. The evolution of the project demonstrates the development of our thinking as a team as we moved toward a public-facing site designed for a broad audience. In the first phase, we encoded content from the library using TEI/XML, an approach commonly employed for documentary editing. The choice of TEI/XML fit the initial aims of the project, but even rich transcription did not offer the opportunity to fully connect the people, places, and books referenced. Consequently, the second phase was dedicated to designing a custom relational database to model the world of the library by explicitly surfacing different types of connections. The third phase required migrating data from the TEI/XML to the relational database, a lengthy process that exposed inconsistencies in the encoding, but also gave us an opportunity to eliminate redundant and unsynchronized information. The conversion process highlighted the benefits and the difficulties of both systems in pursuing similar research questions. A TEI corpus and a relational database both support querying and making connections, but a database is designed for explicit connections, which makes it easier to identify and group member activities with individual people across multiple data sources. Both approaches require technical expertise, resulting in barriers to non-technical team members working with the data. We found the relational database to be more inclusive for project members: we built and progressively refined a web-based interface that was easier to use than oXygen XML editor, and provided on-the-fly data exports in familiar formats such as CSV. Team members could then do their own analysis (without learning query languages such as SQL or XQuery) and, as a result, had more meaningful engagements with the data.
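The TEI-to-database migration described above can be suggested in miniature. The sketch below is a deliberate simplification, with hypothetical markup and schema rather than the project's actual TEI or data model: it parses a toy "lending card" and loads it into a SQLite database, where explicit joins group borrowing events by person.

```python
import sqlite3
import xml.etree.ElementTree as ET

# A hypothetical, highly simplified stand-in for a TEI-encoded lending card.
tei = """<card>
  <member>Ernest Hemingway</member>
  <borrow title="Ulysses" date="1925-03-01"/>
  <borrow title="Dubliners" date="1925-04-12"/>
</card>"""

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE member (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
CREATE TABLE borrow (member_id INTEGER REFERENCES member(id),
                     title TEXT, date TEXT);
""")

# Migrate: pull values out of the markup and store them as relational rows.
root = ET.fromstring(tei)
name = root.findtext("member")
cur = conn.execute("INSERT INTO member (name) VALUES (?)", (name,))
member_id = cur.lastrowid
for b in root.iter("borrow"):
    conn.execute("INSERT INTO borrow VALUES (?, ?, ?)",
                 (member_id, b.get("title"), b.get("date")))

# Explicit connections: count each member's borrowing events via a join.
rows = conn.execute("""SELECT m.name, COUNT(*) FROM member m
                       JOIN borrow b ON b.member_id = m.id
                       GROUP BY m.id""").fetchall()
print(rows)   # [('Ernest Hemingway', 2)]
```

Even at this scale, the trade-off the poster describes is visible: the XML preserves the document's shape, while the database makes person-to-event links first-class and queryable.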

To illustrate the history of the project, this poster will include sample images of the archival material. It will provide a diagram that maps the transition of the data from its location across multiple XML documents to a relational database. The poster will present examples of the data work enabled by and insights gained since conversion to a relational database. Finally, it will include visuals from the public-facing web application now in development which will eventually provide researchers and the public access to the world of this library.



Models of Influence: Analyzing Choreography, Geography, and Reach Through the Performances of Katherine Dunham

Antonio Jimenez-Mavillard1, Kate Elswit1, Harmony Bench2

1University of London, Royal Central School of Speech and Drama, United Kingdom; 2The Ohio State University, United States of America

This poster session focuses on the ways in which we model and interpret influence in the project Dunham’s Data: Katherine Dunham and Digital Methods for Dance Historical Inquiry. The overarching project explores the kinds of questions and problems that make the analysis and visualization of data meaningful for dance history, pursued through the case study of 20th-century African American choreographer Katherine Dunham. We have thus far manually curated datasets representing 10 years of her performing career from undigitized archival documents, with the goal of representing over 30 years of her touring and travel. At ACH, we focus on models of what we describe as traces of “influence” in and around dance touring, and reflect on the development of scalable digital analytical methods that are shaped by approaches to embodiment from dance, critical race theory, and digital cultures. Modeling influence offers means to elaborate ephemeral practices of cultural transmission in dance, and the dynamic relations of people and places through which Dunham’s diasporic aesthetic developed and circulated, in dance gestures, forms, and practices.

The poster session will address three core challenges in our work to date on representing and better understanding influence around Dunham’s extensive career: 1) how tours build on each other over the years to open up new touring destinations, thus expanding Dunham’s geographic reach; 2) how Dunham’s travels inform the content of her repertoire in terms of movement vocabulary and style; and 3) the impact of performers coming into and out of the company, and the dance information they bring with them and take when they leave. The poster session focuses on the choices made in statistical, spatial, and network analysis to develop historical arguments regarding the patterns and implications of Dunham’s company repertoire and travels. In so doing, this presentation offers an important set of tools for demonstrating a choreographer’s legacy in an ephemeral medium.



Visualizing Citations in Digital Humanities Quarterly's Biblio

Gregory Palermo

Northeastern University, United States of America

In 2017, a team of researchers and developers at Northeastern University secured a Digital Humanities Advancement Grant from the NEH to continue the development of Biblio, a centralized bibliographic resource for Digital Humanities Quarterly, ADHO's international, open-access journal. In addition to streamlining the process of encoding article bibliographies for DHQ’s authors and editorial staff, the project intends to open up DHQ’s archive for citation analysis, via an interface on the journal’s website. As it is currently being developed, this interface will query dynamic citation data from Biblio’s BaseX database via an XQuery API, rendering it in XHTML for in-browser viewing, as well as making the data available for export in formats that include XML and JSON. Moreover, I am working on a part of the interface that will allow users to visualize, using d3, the field landscape of digital humanities as represented (or not) in the journal’s networks of citation.

The proposed poster reports on preliminary citation analysis research I am conducting, as part of the Biblio grant, towards imagining the possibilities for this interface to map the epistemic “geographies” of digital humanities. This research builds on work in digital humanities that attempts to visualize networks of digital humanities research and scholars in the field’s publication record. It also imports methods from a number of fields — including bibliometrics/scientometrics, science and technology studies, and writing and rhetoric — used to cluster and visualize knowledge domains and the borders drawn between them. Most importantly, it draws on the recent work of other digital humanists using data visualization with a feminist orientation, or for the purposes of social justice, in order to imagine possibilities for what these scholarly networks could be, in addition to describing them as they currently are.

The visualizations I will present will compare networks of co-citation in the DHQ corpus to the corpus of citations of DH journals indexed in the Web of Science (WOS). I will draw these networks using bibliometric parsing and network analysis packages in Python, detecting communities within them with techniques that include Louvain modularity clustering and HDBSCAN clustering. I seek to represent, here, some of the traditions of DH scholarship that we narrate in our field’s many historiographies; I further hope to surface potentially absent traditions and attend to the representational shortcomings in the work that DHQ authors cite, which tends to be Western- and even North American-centric. The eventual goal of my dissertation project is to show how visualization can be used to represent citation and citation analysis as methods we use to do what Julie Thompson Klein, in her book Interdisciplining Digital Humanities, calls epistemic “boundary work.” The interface we are building will be one means of performing that work, with the goal of transforming the field’s landscape. On behalf of the Biblio team, I will be looking for feedback from conference attendees on what they would like to be able to see and do with the interface we are building.
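The co-citation step underlying such networks can be sketched in a few lines; the real pipeline uses dedicated bibliometric parsing and clustering packages, and the bibliographies below are invented. Two works are co-cited each time they appear together in one article's works-cited list, and the resulting pair counts become weighted edges of the network:

```python
from collections import Counter
from itertools import combinations

def cocitation_counts(bibliographies):
    """Count how often each pair of works is cited together in one bibliography."""
    pairs = Counter()
    for refs in bibliographies:
        # sorted() gives each pair a canonical order, so (a, b) == (b, a).
        for a, b in combinations(sorted(set(refs)), 2):
            pairs[(a, b)] += 1
    return pairs

# Hypothetical works-cited lists of three articles (labels invented).
bibs = [
    ["Moretti 2005", "McGann 2001", "Ramsay 2011"],
    ["Moretti 2005", "McGann 2001"],
    ["Drucker 2011", "McGann 2001"],
]
edges = cocitation_counts(bibs)
print(edges.most_common(1))   # the strongest co-citation pair
```

Community-detection algorithms such as Louvain then partition the weighted graph these counts define; the counting itself is the part that turns bibliographies into a network.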



Between Advocacy, Research, and Praxis: A Critical Reassessment of the Open Access Discourse

Setsuko Yokoyama

University of Maryland, United States of America

Within the field of digital humanities, open access is often considered a virtue that obliges practitioners to strive to make primary resources, scholarly monographs, and other educational materials publicly available. A case for adopting open access is often justified by the fact that research may be publicly funded in the first place, or by an effort to keep the humanities enterprise relevant beyond academia. What is seldom discussed, however, is that open access is a historical concept bearing particular ideological biases. Kimberly Christen, for instance, critiques the access discourse in light of the culturally appropriate access protocols she adopted for the content management system Mukurtu. In her “Does Information Really Want to be Free?” (2012), Christen argues that even a seemingly benign concept such as “public domain” has been used to exploit indigenous communities, further perpetuating the legacy of global colonialism. Taking up Christen's call to critically investigate the open access discourse, my poster examines how concepts such as “piracy” and “gatekeeping,” too, are not as black-and-white as open access advocates might have one believe. Using the Digital Frost Project as a case study, the poster invites my fellow special collections librarians, editors of electronic editions, digital humanities project coordinators, grant officers, and others to consider how the otherwise prolific open access discourse may be ill-equipped to foster the collaboration necessary for the development of an online platform. In addition to the unattended exhibition, I will deliver multiple 10-minute walkthroughs of my argument during the poster presentation session.



Limits of the Syuzhet Package and Its Lexicons

Hoyeol Kim

Texas A&M University, United States of America

There is always a risk when conducting digital analyses, since programming tools often contain flaws. It is therefore important to consider the limits and proper uses of different programming tools to avoid faulty methods. Sentiment analysis has historically focused on product reviews, such as those of movies, hotels, cars, books, and restaurants, rather than on sentiment in literature. The Syuzhet package, however, aims to provide a proper tool for sentiment analysis in literature. Syuzhet is dictionary-based, drawing upon the Syuzhet, Bing, Afinn, and NRC lexicons. In my poster, I display several example analyses of literary texts using Syuzhet in order to reveal the limits of the package and its lexicons. For instance, Syuzhet draws upon simplified vectors of sentiment words in order to analyze the emotional flow of texts, creating subjectivity problems with the four lexicons employed in the package. In addition, Syuzhet does not properly handle negators, amplifiers, de-amplifiers, or adversative conjunctions/transitions. Lastly, I demonstrate how to use the settings in Syuzhet in order to avoid producing faulty results. By showing the limits and uses of Syuzhet and the lexicons it draws upon, I can detail what digital humanists should consider when conducting sentiment analysis and suggest possible improvements to the package and lexicons.
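The negator problem can be made concrete with a minimal sketch. This stands in for (and is not) Syuzhet's actual implementation, which is an R package; the mini-lexicon and its scores are invented for illustration:

```python
# Tiny illustrative lexicon (hypothetical scores, not from any real lexicon).
LEXICON = {"happy": 1.0, "sad": -1.0, "terrible": -2.0}
NEGATORS = {"not", "never", "no"}

def naive_score(tokens):
    """Sum lexicon values, the way a plain dictionary lookup would."""
    return sum(LEXICON.get(t, 0.0) for t in tokens)

def shifted_score(tokens):
    """Flip a word's polarity when the preceding token is a negator."""
    total = 0.0
    for i, t in enumerate(tokens):
        v = LEXICON.get(t, 0.0)
        if i > 0 and tokens[i - 1] in NEGATORS:
            v = -v
        total += v
    return total

sentence = "i am not happy".split()
print(naive_score(sentence), shifted_score(sentence))   # 1.0 -1.0
```

A purely dictionary-based scorer rates "i am not happy" as positive; handling the valence shifter reverses the sign, which is exactly the class of error the poster examines (amplifiers and adversative conjunctions pose analogous, harder cases).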

Keywords: TextMining, OpinionMining, SentimentAnalysis, Syuzhet, Lexicons



"Make an Idol of the Hoe": Tools in 19th-Century American Garden Literature

Emilia Anna Porubcin

Stanford University, United States of America

“Personal garden writing” (PGW) evolved in the 19th century as a niche genre of literature that intimately connected the individual reader to instructional horticulture. PGW intertwined a great deal of opinion with its instruction, offering readers easy insight into a writer’s worldview. Often this worldview advocated community in the garden, as PGW writers prided themselves on making the garden accessible to laypeople, but their reliance on small gardening tools might have either served or counteracted that purpose: while tools barred economically and physically disadvantaged classes from the garden, they also united common home gardeners in their attempts to find “pleasure or profit or health.” Using digital humanities tools to analyze language about tools in PGW could help quantify the role that gardening tools served in American community-building in the 19th century.

A 5-work sample of PGW was used to understand the valence and frequency of language surrounding tools in this genre. After the texts were digitized, they were searched to identify the tools mentioned. The texts were then cleaned with various tm_map functions in R, and 214 Key Words in Context (KWICs) were collected to identify all “tool mentions” across the texts. Two kinds of sentiment analysis were performed on these KWICs, using the bing and nrc lexicons, which capture the positive/negative valence and more detailed emotions of input text, respectively.
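A keyword-in-context step of this kind can be sketched as follows; this Python fragment is an illustrative stand-in for the R workflow described above, with invented example text:

```python
def kwic(tokens, keywords, window=3):
    """Return each keyword occurrence with `window` tokens of context on each side."""
    hits = []
    for i, t in enumerate(tokens):
        if t in keywords:
            left = tokens[max(0, i - window):i]
            right = tokens[i + 1:i + 1 + window]
            hits.append((" ".join(left), t, " ".join(right)))
    return hits

# Invented sentence echoing the genre's phrasing.
text = "make an idol of the hoe and keep the spade bright".split()
for left, kw, right in kwic(text, {"hoe", "spade"}):
    print(f"{left:>20} | {kw} | {right}")
```

Sentiment lexicons are then applied to each context window rather than to the whole text, which is what lets the analysis attribute valence to tool mentions specifically.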

Bing sentiment analysis shows that the language about tools in personal garden writing is more often and more strongly positive than negative, with the most positive KWIC having a Bing score of 4 and the most negative a Bing score of -3. The positive scores of KWICs for “tool mentions” suggest both PGW writers’ embrace of tools in the garden and their tendency toward positive language in general. NRC sentiment analysis shows that the sentiment most strongly associated with tools in PGW is trust. This result speaks to the compacts of the garden: between workers and plants, or writers and readers, exist covenants to protect the land and its laborers.

PGW demonstrates how, as symbols of trust, tools built community in American gardens. In the 19th-century search for kinship, gardeners were right to “make an idol of the hoe.”



Linguistic Infinitesimals

Jonathan Scott Enderle

University of Pennsylvania, United States of America

Vector-space representations of meaning have become central to modern natural language processing techniques. But there is still not much theoretical justification for their success. Why can some aspects of meaning be captured in these linear structures? Perhaps more importantly, what aspects of meaning cannot be so captured?

This poster presentation will summarize some preliminary research that may help answer these questions. Rather than regarding word vectors merely as useful outputs of a black box learning algorithm, we can regard them as measurements of the possible infinitesimal change in meaning that a word could undergo. This way of thinking about vector-space semantics can be translated into a precise mathematical form, which can be used to derive new algorithms for generating word vectors. After introducing this theoretical framework, I will briefly describe a simple algorithm based on it, and show its performance as compared to other familiar word embedding models.
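For background, the standard distributional intuition behind word vectors (a generic count-based sketch, not the infinitesimal-based algorithm the poster proposes) can be shown by counting co-occurrence contexts; the toy corpus is invented:

```python
import math
from collections import defaultdict

def cooccurrence_vectors(sentences, window=2):
    """Represent each word by counts of its nearby words (a distributional vector)."""
    vecs = defaultdict(lambda: defaultdict(int))
    for sent in sentences:
        for i, w in enumerate(sent):
            for j in range(max(0, i - window), min(len(sent), i + window + 1)):
                if i != j:
                    vecs[w][sent[j]] += 1
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v.get(k, 0) for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

corpus = [
    "the cat chased the mouse".split(),
    "the dog chased the mouse".split(),
    "the mouse ate the cheese".split(),
]
vecs = cooccurrence_vectors(corpus)
# "cat" and "dog" occur in near-identical contexts, so their vectors align.
```

The open theoretical question the poster raises is why linear operations on vectors like these track aspects of meaning at all, and which aspects they must miss.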

Finally, I will offer a few reflections (and invite feedback) on the broader implications of these ideas. In addition to providing a richer theoretical basis for understanding the behavior of word vectors, this framework may suggest new connections between quantitative and intuitive modes of reading. To show one possible direction for future work, I will discuss a way to reframe the concept that Jacques Derrida calls “iterability” in terms of linguistic infinitesimals, repetition, and nonlinearity (in the strictly mathematical sense). This reframing may not be perfectly true to Derrida’s ideas, but will, I hope, show unexpected overlaps between these seemingly very different paradigms of reading.



Using Webscraper.io to Scrape Digital Humanities Websites

Michael Roth, Heather Froehlich, Cynthia Vitale

Penn State University

Proposed Topic

Omeka CMS is a system that allows users to organize a wide range of digitized artifacts and related metadata on the web and present them in collections, exhibits, or a combination of both. While Omeka is beneficial for creating digital projects, it lacks a robust technical infrastructure for exporting public data in an accessible way, whether through APIs or metadata exports. This project involved methods of scraping information from Digital Humanities projects built in Omeka. The information harvested included the title of the project and its overall description, in addition to the names and descriptions of any collections or exhibits. Other pertinent information about the creation of the project was also extracted, including organizational affiliation and contributors. In this poster presentation I will discuss Webscraper.io, a Chrome extension used primarily for extracting information from e-commerce sites but applicable to any published website. This browser extension uses the underlying HTML code generated by Omeka and the browser’s developer tools to select different elements on the web page and copy the information into a CSV file. This allows Digital Humanities scholars to curate a metadata collection of web-based projects. This type of web scraping does not require any administrative access to the projects, allowing users to scrape websites for which such access is no longer available.
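The point-and-click selectors one builds in Webscraper.io can be approximated in code: select elements by CSS class, collect their text, and write the values out as CSV rows. The sketch below uses only the Python standard library and a hypothetical Omeka-like HTML fragment; the class names are assumptions for illustration, not actual Omeka markup:

```python
import csv
import io
from html.parser import HTMLParser

class ClassTextParser(HTMLParser):
    """Collect the text of every element carrying a target CSS class,
    roughly what a Webscraper.io text selector does."""

    def __init__(self, target_class):
        super().__init__()
        self.target_class = target_class
        self.depth = 0          # >0 while inside a matching element
        self.values = []

    def handle_starttag(self, tag, attrs):
        classes = (dict(attrs).get("class") or "").split()
        if self.depth:
            self.depth += 1     # nested tag inside a match
        elif self.target_class in classes:
            self.depth = 1
            self.values.append("")

    def handle_endtag(self, tag):
        if self.depth:
            self.depth -= 1

    def handle_data(self, data):
        if self.depth:
            self.values[-1] += data.strip()

def scrape_class(html, target_class):
    parser = ClassTextParser(target_class)
    parser.feed(html)
    return parser.values

# Hypothetical Omeka-like markup for illustration.
PAGE = """
<div class="collection"><h2 class="title">Maps</h2>
<p class="description">Historic maps of the region.</p></div>
<div class="collection"><h2 class="title">Letters</h2>
<p class="description">Civil War correspondence.</p></div>
"""

out = io.StringIO()
writer = csv.writer(out)
writer.writerow(["title", "description"])
for row in zip(scrape_class(PAGE, "title"), scrape_class(PAGE, "description")):
    writer.writerow(row)
print(out.getvalue())
```

As with the browser extension, nothing here requires administrative access to the site, only its public HTML.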

Time Requested

20-30 minute poster presentation

Audience

The most relevant groups are those within the Digital Humanities community who use or administer online content management systems. Librarians may find the scraping methodology useful for curating online content.



“Not Exceeding Ten Miles Square”: A History of Washington DC’s Rectangular, Nondescript, Document Boxes

Kyle Jon Bickoff

University of Maryland, United States of America

In this poster proposal, I exhibit ongoing research from my dissertation on early document storage containers used at libraries and archives. My current work focuses on knowledge infrastructures (an ACH 2019 topic of interest)—my research sits at the intersection of media archaeology and critical information studies. I draw on theorists writing on media storage including Wendy Chun, Shannon Mattern, and John Durham Peters. In my project, I present an untold history of cardboard storage containers adopted in the World War II period at US libraries and archives, including the Library of Congress and National Archives. Specifically, I focus on those units that originate in the Washington DC area, which were the most successful box designs and were adopted rapidly at nearby federal records centers. I build on my previous research, which traces the rise of paper storage boxes as a replacement for steel archival boxes (due to wartime steel shortages and high costs). I visually present patent drawings of the Woodruff File container, designs for the paper Hollinger document box, and screen-captured images of the Library of Congress-developed BagIt digital container. Across these three containers, I unpack the affordances of each design that led to its success. I argue that above all, a more economical storage container was needed in each case, and that the box manufacturing sites’ proximity to the seat of federal government and an abundance of records centers only further advanced each box’s adoption.



Technological Histories in Software Studies

Joshua L. Comer

University of Louisiana at Monroe, United States of America

Scholars in software studies tell varied and wide-ranging technological backstories to explain the impact of computing on society. Since 2009, the MIT Press Software Studies series has provided a reasonable, though not necessarily representative, sample of the field with eight single- and multiple-author books that address different topics in software studies from a variety of approaches. While the series' stated focus is on “today’s software culture,” its authors cite technologies from various periods of time to illustrate differences and relationships that characterize our current cultural situation. In this poster, I analyze and visualize the chronology of technologies represented throughout the series to identify how that body of scholarship crafts technological histories of computing. My analysis and visualization demonstrate several patterns in the chronologies of the series. The authors assign definite dates to certain technologies, often properly named examples of hardware and software. The authors more ambiguously position other technologies, like television, as predecessors or successors to the properly named technologies or exemplars of periods of technological development. By overlaying the specific dates and less specific chronological relationships assigned to technologies across the series, my visualizations show differences in the amount and specificity of consideration given to different technologies, the number of technologies clustered in certain periods of time, and the historical relationships between technologies in the books. In identifying how technological history is constructed over the course of the series, I argue that software studies can focus on technological influences on computation that receive inconsistent scholarly consideration in the humanities.

 
12:15pm - 1:45pm#Lunch1: Lunch Break
Those with posters should remove them by 1:45. All leftover posters will be discarded.
 
1:45pm - 3:15pm#SB1: Food and Games Paper Session
Session Chair: Arun Jacob
Marquis A, Marriott City Center 
 

On Essay Games: The Diary, The Documentary, and the Satire in Independent Video Games

Nicholas O'Brien

Stevens Institute of Technology, United States of America

As video game communities continue to mature and nurture pockets of avant-garde practice, I propose an emergent practice of game development and design called essay games.

Similar to the essay film and video genres of avant-garde cinema, essay games explore how the medium can tell critical, research-based narratives about political, cultural, or interpersonal themes. I will discuss how essay games have three primary design strategies that distinguish their production from traditional independent games: the diary, the documentary, and the satire.

The diarist approach uses personal narrative as a gateway to openly discuss fraught political or societal inequities, or tensions between an individual’s experience and the way that experience is misrepresented in culture at large. Games by Nina Freeman, Auntie Pixelante, and Ryan and Amy Green provide clear examples of diarist essay games that use personal narrative so that their stories speak to issues of larger scope.

The documentary is perhaps the most widely used approach in essay video games and has the most immediate connection to traditional approaches in film and video. Artists Chris Marker (a mainstay within essay film studies), Harun Farocki, and Hito Steyerl are prime examples of documentary approaches in essay film and video production. Likewise, the work of Navid Khonsari (and iNK stories), Studio Oleomingus, and Molleindustria (Paolo Pedercini) applies documentary approaches to essay game making.

Satire is a powerful essayistic approach for alleviating the cynical side of daunting political and social forces. Games like Universal Paperclips by Frank Lantz, Layoff by Tiltfactor Games, The Game: The Game by Angela Washko, and the work of Robert Yang all use satire to create essay games with a touch of biting humor.

By comparing contemporary essay game designers to examples in more traditional essayistic mediums, I will establish how these works (and their developers), differentiate their practices from other experimental voices. These voices use similar interactive-storytelling methodologies, but their subtle differences require further examination and discussion.

Far from attempting to define a new genre, my paper is instead an attempt at shaping an emergent practice within an evolving medium. By outlining the shape of these emergent practices, my intention is to differentiate them from more “generic” independent gaming communities that merely use the medium for commercial or entertainment purposes.

Highlighting these games and their makers will hopefully further conversations around the so-called “power of videogames” to immerse players in unfamiliar, challenging, and thought-provoking environments. To that end, essay games of this kind often use these three approaches in order to confront mainstream “standards” and “conventions.” Their efforts, and my paper, outline the ways that this medium can intervene and contribute nuanced discussion in current discourses around the most pressing political, social, and cultural issues of our time.



Octodad: Dadliest Catch and Cultural Impairment Through Game Spaces

Andrea Marie Medina

University of Florida, United States of America

The cultural significance of videogames is undoubtedly growing at an unprecedented rate. The rapid development of visual-interactive technologies, such as augmented reality and virtual reality, is at least in part due to the rising interest in the unique immersive experiences that videogames offer. This paper focuses on the immersive qualities of videogames as a vehicle for what I call imagined embodiment. I place emphasis on the player-avatar relationship to examine the potential for players to experience a sense of cultural impairment vis-à-vis the game space.

In this paper I focus on the videogame Octodad: Dadliest Catch to demonstrate that the disabled body has been culturally established as an object of impairment. I examine the avatar, Octodad, who possesses bodily capabilities different from the culturally normative human body, to illustrate that it is not the avatar but the game space which represents the challenge in gameplay. Ultimately, I show that spaces privilege able-bodied people and perpetuate social disability. Among other scholarship, I refer to Rosemarie Garland-Thomson’s work on feminist disability studies to complicate the relationship between player and game space by highlighting the fundamental obstacle of the game, which suggests that the main character, Octodad, does not belong in socially normalized spaces. Based on Garland-Thomson’s understanding of disability as “a cultural interpretation of human variation rather than an inherent inferiority, a pathology to cure, or an undesirable trait to eliminate,” I employ Octodad to showcase how social spaces are constructed in ways that are disadvantageous to disabled citizens.



Del sabor Pacífico y sus tradiciones migratorias (On Pacific Flavor and Its Migratory Traditions)

Alejandro Rojas

Banco de la República, Colombia

This paper analyzes the current state of Colombian Pacific cuisine in the city of Bogotá. It takes as its object of study a group of six restaurants, located at Carrera 4a and Calle 20, that have become a reference point for coastal cuisine in the city center. The research compares the menus of these restaurants with the recipes described in the Ministry of Culture study Saberes y sabores del Pacífico colombiano, and thereby establishes the actual percentage of traditional dishes on these restaurants’ menus in order to answer the research question: How close are the traditional recipes of the Colombian Pacific to the menus of the restaurants located in downtown Bogotá? Data collection was carried out through visits and fieldwork, not only in the study area described but in other parts of Bogotá as well as in the city of Buenaventura, and through government sources. In this way, the analysis covered not only the Pacific cuisine on offer but also a phenomenon of migration toward Bogotá driven by violence and a lack of opportunity. The final results yielded figures below expectations for the number of original recipes offered in the restaurants analyzed.



Cooking While Black: What Do Food Blogs Tell Us About Our Racist Past and Present?

Molly Mann

St. John's University, United States of America

Emma Dunham Kelley-Hawkins’ novel Four Girls at Cottage City (1898), Malinda Russell’s Domestic Cook Book: Containing a Careful Selection of Useful Receipts for the Kitchen (1866), and Erika Council’s food blog Southern Soufflé (2012-present) are texts that differ in form, genre, purpose, and period. Read together, however, these works, all three of which have received relatively little critical attention, help piece together a historical and cultural framework for contemporary views of Black women, food, and professionalized labor, a subject that has itself received less critical attention than white women and the professionalization of their domestic labors. By reading works that are historically and generically different, and that therefore fall outside traditional literary studies of canonical works and discrete time periods, we can begin to understand the works that have always fallen outside of those categories, and that, indeed, defy category altogether. Russell’s cookbook, the first attributed to an African-American woman in the United States, and Council’s food blog belong to genres that are just coming into critical attention within the fields of archival studies and media studies. I situate my readings of these more overtly food-related texts in relationship to a literary work to show that the literary culture and domestic-culinary culture of the U.S. from the nineteenth century to our current moment share concerns about bodies, their differences, what they consume, and what kinds of spaces they occupy. These three exemplary texts are centrally concerned with questions of how to resist an embodied racial logic that seeks to categorize and value various forms of women’s domestic labor according to the bodies that perform it. Kelley-Hawkins, Russell, and Council all address, in their works, questions of who consumes and who is consumed within the context of U.S. cultural history and its long-held, violently deployed misunderstanding of race.

 
1:45pm - 3:15pm#SB2: Expanding Communities of Practice Pedagogy Showcase
Session Chair: Lisa Marie Rhody
Salon 2 & 3, Grand Ballroom, Marriott City Center 
 

Expanding Communities of Practice Through Digital Humanities Research Institutes

Lisa Marie Rhody1, Kalle Westerling1, Rico Chapman2, Andrea Davis3, Dianne Fallon4, Amy Gay5, Daniel Johnson6, Rafia Mirza7, Sarah Noonan8, Alicia Peaker9, Rosin Torres-Medina10, Jojo Karlin1, Patrick Smyth1, Stephen Zweibel1

1The Graduate Center, CUNY, United States of America; 2Clark Atlanta University; 3Arkansas State University; 4York County Community College; 5Binghamton University, SUNY; 6University of Notre Dame; 7Southern Methodist University; 8St. Mary's College; 9Bryn Mawr College; 10Evangelical Seminary of Puerto Rico

This alternative format session takes an intentionally interactive approach to sharing the Expanding Communities of Practice (dhinstitutes.org) project, an NEH-funded institute and scalable model for improving access to learning DH research methods. We will present our open-access curriculum and offer recommendations for replicating DHRI institutes across organizational and geographic contexts. Ideally 75 minutes long, the session is divided into 3 parts: an overview [20 minutes], demonstrations [35 minutes], and a Q&A [20 minutes].

DHRI is a community-driven network of DH institutes designed to meet humanists where they are. Recognizing that austerity measures have constricted travel and budgets for faculty / staff development, DHRI invited applications from those tasked with DH community-building to participate in a 10-day “train the trainer” institute, providing participants with training, a model curriculum, dedicated support, and funding. We received 134 applications for 16 available seats. Our session will make desired resources more readily available to those who could not attend.

In June 2018, participants (dhinstitutes.org/participants.html) from various humanities organizations (community colleges, HBCUs, museums, libraries, research universities, and liberal arts colleges) attended the DHRI at the Graduate Center, CUNY (dhinstitutes.org/schedule.html). The curriculum (dhinstitutes.org/june_2018_curriculum.html), based on the GC Digital Initiatives Digital Research Institute (cuny.is/gcdri) and largely developed by the GC Digital Fellows (dhinstitutes.org/faculty.html), includes an open-source guide (available on GitHub) that foregrounds foundational technical knowledge translatable to many DH research projects and represents our belief that equitable, inclusive DH learning environments assume no prior technical knowledge and value all participants’ experience and domain expertise.

During 2018-19, participants will lead local institutes based on the DHRI model and prepare white papers with lessons learned. Our ACH session will make available two years’ culminating effort: a revised curriculum, along with guides and recommendations for implementation based on participants’ local institutes and feedback.

In part one of the session, the Project Director and Institute Coordinator will give an overview of DHRI’s pedagogical values of building community-centered approaches to learning. They will introduce DHRI’s core faculty and fellows and the local institute leaders.

In part two, DHRI participants will give interactive demonstrations of their local institutes, presenting sample curriculum modifications, required resources, staffing, and logistics through show and tell, reflections, video interviews, and qualitative surveys. DHRI faculty and fellows will demonstrate the curriculum, reflect on the “train the trainer” approach, and share documentation. Conference goers will circulate the room, interact with faculty and participants, and submit questions.

Part three’s moderated discussion will be framed by questions collected from ACH attendees and include DHRI participants, faculty, and the audience. The session addresses needs of administrators who want to offer DH research training, emerging DH scholars who seek to learn new skills, and experienced DH practitioners seeking to participate in our growing network of institute leaders.

 
1:45pm - 3:15pm#SB3: Minimal Computing Roundtable
Session Chair: Purdom Lindblad
Salons 4 & 5, Grand Ballroom, Marriott City Center 
 

Minimal Computing: On the Borders of Speculative Archives

Alexander Gil1, Purdom Lindblad2, Toniesha Taylor3, Marisa Parham4, Setsuko Yokoyama2

1Columbia University Libraries; 2University of Maryland; 3Prairie View A&M; 4Amherst College

The need to bear the weight of the Anthropocene on the same shoulders as we bear the weight of racial and colonial/capitalist violence requires an interrogation and fundamental transformation of how we work. Minimal computing as a banner has attracted a group of diverse thinkers and practitioners to think through and design within material and ethical constraints. Broadly defined, minimal computing is the interpretation and implementation of systems that reduce computation to address needs and costs around social and environmental justice.

At their best, archives and digital humanities center voices that have been obscured through negligence or violently silenced from mainstream narratives. In the face of increased criminalization of and violence towards people of color, immigrants, our planet, and many other horrors of our time, we feel a renewed sense of urgency to surface, highlight, and empower narratives from marginalized groups as a tool for social and environmental justice. These narratives themselves cannot be divorced from the material realities of their vehicles, and we are called upon to generate new modes of understanding around production itself that do not recreate exploitative power dynamics: speculative archives through and through. What then are these tools, approaches, and best practices for this kind of work? How do we think through them?

This roundtable takes as its point of departure both Alex Gil’s assertion that workers in the humanities share the goal of renewal, dissemination, and preservation of the scholarly record within increasingly hybrid and global futures, and Anne Gilliland and Michelle Caswell’s work on the archival imaginary, meaning attention to absent records (perhaps missing, destroyed, or theorized and wished-for) as resistance to dominant notions of evidence. Minimal computing at the borders of speculative archives offers a model for imagining and implementing world(s) otherwise, and takes seriously the need for creators to understand and own the tools and practices of liberatory work.

The first portion of the proposed one-hour roundtable will ask panelists to discuss how minimal computing in pursuit of speculation enables us to mark out sites of disjuncture within our analytical and methodological approaches, and how reclaiming the means of production and dissemination can disrupt the standard research cycle. For the second half hour, the discussion will open up to leverage the collective expertise and experience in the room through a modified version of the fishbowl model (no changing seats) in order to account for time.

 
1:45pm - 3:15pm#SB4: Overcoming Challenges and Breaking Down Barriers Roundtable
Session Chair: Rachel Lynne Starry
Marquis B, Marriott City Center 
 

Overcoming Challenges and Breaking Down Barriers: Digital Scholarship Support Within and Beyond the University

Rachel L. Starry1, Jennifer Isasi2, Heidi Dodson1, Chris J. Young3, Alex Wermer-Colan4, Emma Slayton5

1University at Buffalo; 2University of Texas at Austin; 3University of Toronto; 4Temple University; 5Carnegie Mellon University

This session takes the following questions as its focus: What are the major challenges to building and sustaining digital scholarship (DS) support infrastructures across and beyond universities? How do factors such as institutional hierarchies, multiple campuses, and library/departmental resources impact the needs and effectiveness of DS networks? Are there barriers to supporting digital research and pedagogy that transcend institutional context, and how are we engaging audiences beyond the walls of the university?

The session follows a roundtable format, beginning with brief introductions by six panelists who will share some challenges and opportunities they have witnessed for different types of institutional infrastructures supporting DS in North America, before opening the conversation to the audience. Presenters include early career digital scholars whose perspectives as recent PhDs inform their experiences across a wide range of institutions, from academic libraries to humanities institutes and non-profits. The majority of presenters are currently based at research universities where scaling support for DS poses a major challenge, and all bring a concern for labor equity and social justice to this discussion, as public humanities and contingent labor have become critical issues in this field.

The conversation will draw on the diverse experiences of participants in supporting digital work in different institutional contexts and will be enhanced by the contribution of audience members’ perspectives. The session’s primary goal is to highlight strategies for overcoming the challenges inherent to the work of building networks, communicating with diverse audiences, and fostering interdisciplinary collaboration. We anticipate that issues such as researcher silos, diversity and accessibility, and working across multiple physical and disciplinary spaces arise regardless of institutional context, and we hope this conversation will raise awareness of potential resources for supporting DS across higher education.

The first presenter will address supporting DS in a library with rare, multilingual primary sources that are generally unavailable to the communities whose history they contain; alongside digitization and data curation, the team is developing pedagogical materials to increase collection discoverability and community access. A second panelist will discuss some challenges related to connecting project-based digital work and DS practitioners across multiple campuses, libraries, schools, and departments at a university where individual departments have historically initiated digital humanities (DH) scholarship. The third presenter will address public engagement by comparing DS experiences at a community non-profit and a research library; both institutional contexts share the challenges of leveraging resources, building networks, and communicating the value of DS work. A fourth speaker will describe the politics of running a decentralized DH network across multiple campuses, dozens of departments, and several research libraries while leveraging pre-existing strengths for future scholarship and capacity-building. The fifth panelist will discuss the challenges of building networks between DS centers, special collections, digital initiatives, and subject specialists, focusing on issues of contingent labor and hierarchy in sustaining DS. The final presenter will address building support around digital scholarship within a university where departments and research are siloed, and consider how promoting GIS and data visualization can connect different departments and researchers.

 
1:45pm - 3:15pm#SB5: Visual Culture Paper Session
Session Chair: Lauren Tilton
Marquis C, Marriott City Center 
 

Exploring the Correlations Between Graphic Elements in Picasso's Poetry

Luis Meneses1, José Calvo Tello2, Enrique Mallen3

1Electronic Textual Cultures Lab, University of Victoria, Canada; 2University of Würzburg; 3Sam Houston State University

Picasso started writing poems in April 1935 during a period of personal crisis. However, even before this period of turmoil took place, Picasso had already been fascinated by language from the time of his cubist experiments. In fact, his poetry is not only fascinating as a form of communication from someone who is primarily known for his plastic output, it is also puzzling for anyone researching the interconnection between language and writing, i.e. verbal and graphic signs.

In his poems, Picasso tried to expand the expressive power of language by concatenating words in unordered strings. Another interesting linguistic feature in Picasso’s poetic manuscripts is the addition of numerous bracketed strings which are purposely left differentiated as footnoted “afterthoughts” to the text. They are presented as blurbs which the reader may then choose to read at the point where they are inserted or may continue reading the original text, ignoring these later additions. Other graphic elements we find are hyphens, blotches, area coloring, underlining, etc.

Given the nature of the graphic elements, we characterized Picasso’s poetry as a Visually Complex Document, which, as in the different “planes of consistency” of collages, presents distinct layers of text and images that constitute integral parts of the document’s representation (Audenaert, 2008). Following this premise, at the TEI 2018 conference we presented our solution for linking TEI encoding, digital facsimiles, and specific zones in Scalable Vector Graphics. In this abstract we propose to take this approach one step further and explore the impact that the graphic elements in Picasso’s poetry may have on the text.

Our study will address three research questions: First, do these graphic elements occur in similar contexts in the text? Second, do these graphic elements interact with each other? And third, is there a correlation between the type of graphic element and the positions in which they occur? For instance, do blotches occur primarily before or after specific lexical categories (verbs, nouns, or adjectives)? Our research will address these questions as we continue to explore the correlations between Picasso's poetry and his plastic output.

References

Audenaert, N. (2008). Patterns of Analysis: Supporting Exploratory Analysis and Understanding of Visually Complex Documents. IEEE Technical Committee on Digital Libraries. http://www.ieee-tcdl.org/Bulletin/v4n2/audenaert/audenaert.html (accessed 26 March 2018).



Computational Analysis of Digitized Images from the Roman de la Rose Digital Library

Kristen Mapes

Michigan State University, United States of America

The Roman de la Rose Digital Library (RDL) (https://dlmm.library.jhu.edu/en/romandelarose/) serves as a resource for the study of the most popular secular work of the European Middle Ages by providing access to 146 digitized manuscripts in IIIF format and with additional datasets describing the manuscripts in the corpus. I have been working with RDL data to create an interactive visualization platform for exploring codicological and location information. This next phase of the project employs computational image analysis to explore the digitized images themselves in conjunction with and in comparison to the codicological data provided by the RDL. Applying new methods to this corpus will open new avenues of research for medieval studies scholars interested in the history of the book, illustration transmission, and more.

I will use ImagePlot software (http://lab.softwarestudies.com/p/imageplot.html) to explore issues of image saturation and brightness. These calculations will be compared with the codicological dataset provided by the RDL. One would expect manuscripts with a higher median brightness to have larger borders and/or few to no illustrations, for example. Finding evidence to challenge or support codicological analysis will further scholarly understanding of this well-studied corpus.
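The brightness measurement itself is simple to sketch. The function below uses the standard ITU-R BT.601 luminance weighting over RGB pixels; whether ImagePlot uses exactly these weights is an assumption, and a real run would read pixel values from the digitized page images rather than from inline lists:

```python
from statistics import median

def luminance(pixel):
    """Perceptual brightness (0-255) of one (R, G, B) pixel,
    using ITU-R BT.601 weights."""
    r, g, b = pixel
    return 0.299 * r + 0.587 * g + 0.114 * b

def median_brightness(pixels):
    """Median luminance over all pixels of one digitized image."""
    return median(luminance(p) for p in pixels)

# A mostly blank page (white with a little ink) scores high;
# a heavily illustrated dark page scores low.
blank_page = [(255, 255, 255)] * 9 + [(0, 0, 0)]
dark_page = [(40, 30, 20)] * 9 + [(255, 255, 255)]
print(median_brightness(blank_page), median_brightness(dark_page))
```

Computing this per manuscript page and taking the median across a codex gives one value per manuscript to set against the RDL's border and illustration data.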

The Distant Viewing Toolkit (https://www.distantviewing.org/) (DVT) will allow analysis of facial and object detection across the digitized manuscripts in the corpus that have at least one illustration. The RDL has datasets of Illustration Titles and Narrative Sections, providing insight into characters and their frequency across the many variations of the corpus. Bringing the RDL datasets into comparison with algorithmic analysis of characters may reveal new understandings of the consistency of figures in illustrations.

On the whole, this work seeks to bring a robust dataset of digitized manuscript images into contact with multiple image analysis approaches to test their use in medieval manuscript analysis. I hope to spark new conversations among medieval studies scholars, historians of the book, art historians, and scholars interested in computer vision and computational image analysis.



The Chinese Iconography Thesaurus: A Digital Art History Project

Hongxing Zhang1, Yi-Hsin Lin1, Jin Gao1,2

1Victoria and Albert Museum, United Kingdom; 2UCL Centre for Digital Humanities, United Kingdom

Context

The ever-growing volume of digital images made available online by cultural institutions has fuelled the increasing need to implement metadata strategies that can optimise access to digital content. Traditionally considered a methodology rooted in European art history, iconography has been historically employed in taxonomies to index and access images related to European art. Because of the lack of alternative models for documenting non-European artefacts, Chinese art objects housed in European and North American collections have often been catalogued according to Eurocentric classification principles and categories.

The Chinese Iconography Thesaurus (CIT) project presents a unique opportunity to create an alternative classification scheme deeply rooted in the specificity of Chinese art, with the potential to foster dialogue between the studies of Chinese art and European art. It is a multidisciplinary pilot project that brings together sinology, art history, and digital humanities to create the first thesaurus of Chinese iconography. CIT will be a valuable research tool that enhances the accessibility and understanding of Chinese art.

Aim and Goal

With team members based in the Asian Department of the V&A Museum, the CIT project was launched in 2016. It aims to create indexing standards that will facilitate access and interoperability of Chinese digital images across collections. On the one hand, it will provide professionals in museums, libraries, and image archives with a controlled vocabulary to improve cataloguing practices for Chinese collections. On the other, an online database of Chinese art images indexed with the CIT terminology will deliver a dynamic and open-ended platform to explore the Chinese conceptual world. Conceived as an intuitive and user-friendly interface, underpinned by academic rigour, the online image database will enable a wide spectrum of users to effectively retrieve subject information across collections.

The project output will be a structured bilingual (Chinese-English) thesaurus with a core vocabulary comprising ca. 8,000-10,000 Chinese concepts. The main body of terms will be extracted from key pre-1900 Chinese sources, especially from the titles inscribed on the religious and secular paintings in the imperial collection formed by the Qianlong Emperor (r. 1735-1796). The list of sources also includes widely referenced taxonomies, dictionaries, and encyclopaedias, such as Iconclass, the Art and Architecture Thesaurus, and the National Palace Museum Taipei Subject Codes.

The project data is planned to be released open access in October 2019 (provisionally), along with a searchable image database containing images of selected objects from the V&A’s Chinese collections as well as from other institutional partners, e.g. the Metropolitan Museum of Art and the National Palace Museum in Taipei. Both outcomes will be made available through a dedicated website.



Lightning Talk: A 3D Model and Exemplum of a Fifteenth-century Italo-Byzantine Reliquary

Justin Garrett Greenlee, Victoria Valdes

University of Virginia, United States of America

This 5-minute lightning talk, delivered by two collaborators, concerns the creation of a 3D print of an Italo-Byzantine staurothēkē (a work of art that is a reliquary, or container, for relics of the True Cross from the crucifixion of Christ). Our model is based on photographs taken in the Gallerie dell’Accademia in Venice, Italy, which we then extrapolated to three dimensions using topographical features and hand-rendering in Rhinoceros. The talk is directed to digital humanists interested in the software, skills, and machinery needed to bring a 3D print to completion. It will also be of interest to those who would like to construct a 3D model but lack direct access to their object of interest and/or a 3D scanner.

Our talk begins with a historical introduction to the work of art, focusing on the reliquary as a layered object created in the fifteenth century in the city of Constantinople. It was there that the core components, a gold cross and a surrounding wooden tablet, were produced; subsequent interventions took place in Italy, where the object was outfitted with ornamentations around a secondary frame and attached to an elaborate silver processional handle. The logic that guided successive renovations to the reliquary is one of accumulation, the massing of sacred material, and we chose to tell the complex history of the object by rendering it as a 3D print in five parts. These components can be disassembled and reassembled by a potential handler, and in five minutes we will discuss how these revisions and other acts of interpretation make our model an exemplum, an object about the reliquary, more than a replica.

Keywords: 3D modeling, 3D printing, Rhinoceros

 
3:15pm - 3:30pm#Break2: Break
Grand Ballroom Foyer A, Marriott City Center 
3:30pm - 4:30pm#SC1: Infrastructure and Capacity Building Roundtable
Session Chair: Leah Weinryb Grohsgal
Salon 2 & 3, Grand Ballroom, Marriott City Center 
 

Infrastructure and Capacity Building for Sustainable Digital Projects

Leah Weinryb Grohsgal1, Karen Cariani2, Sarah Kansa3, Alison Langmead4, Katherine Walter5, Sarah Lepinski1

1National Endowment for the Humanities, United States of America; 2WGBH Media Library and Archives; 3Alexandria Archive Institute (Open Context); 4University of Pittsburgh; 5University of Nebraska-Lincoln

For decades, technology has enabled the production of digital scholarship and the preservation of cultural heritage collections. At increasing rates, researchers, librarians, archivists, and technical experts have developed groundbreaking projects, techniques, tools, and collections to answer new questions and advance entirely new fields of inquiry and access. Now, because of technical and social changes and the sheer volume of digital work being produced, the durability and sustainability of these projects and products are of critical concern. This ACH 2019 roundtable will address key challenges and questions in building digital infrastructure, capacity, and sustainability.

The panel will be composed of experts in the humanities, libraries and archives, and technical development. Their experience in the humanities ranges from archaeology to literature to history to material culture, with a wide array of methodological specializations including data editing, digital publishing, archives and special collections, metadata, and digital humanities. The panel’s chairs represent a national funding agency interested in the needs and opportunities in the field of digital infrastructure and capacity building.

The session will address issues, questions, and strategies associated with building both long-term infrastructure and shorter term capacity for doing digital projects. Discussion topics will encompass the preservation and access of data and digital collections; sustainability with and beyond grant funding; collaboration between subject and methodology experts; responsibility for maintaining digital projects as they mature; upgrading aging code and infrastructure; and scalability to different institution sizes. All of the above topics—and likely any solutions or plans for digital infrastructure and capacity—involve both technical and socio-cultural questions, which we will explore in the session.

The session chairs will moderate the panel, posing questions to the panelists and facilitating a roundtable discussion. The audience will also be invited to ask questions and share their experiences. The overarching aim of this session is to inspire conversation and exchange about major issues and challenges in this field. Since the panel is being convened by a major funding body in the humanities, we hope to gain insight into priorities, challenges, and paths forward in digital infrastructure and capacity building. Finally, we see this as an important opportunity to explore how digital sustainability specifically pertains to the humanities. While some issues in the larger field of digital sustainability will overlap with information technology, STEM, and social sciences concerns, others will be unique to the humanities and our institutions of cultural heritage and scholarship.

 
3:30pm - 4:30pm#SC2: DH at SLACs Roundtable
Session Chair: Jonathan David Fitzgerald
Salons 4 & 5, Grand Ballroom, Marriott City Center 
 

The Obstacles & Opportunities of Doing DH at Small Liberal Arts Colleges

Jonathan D. Fitzgerald1, Deborah Breen1, Mackenzie Brooks2, Maria Sachiko Cecire3, Nabil Kashyap4, Rachel N. Schnepper5, Anelise Hanson Shrout6

1Regis College, United States of America; 2Washington and Lee University; 3Bard College; 4Swarthmore College; 5Wesleyan University; 6Bates College

Historically, large research universities have played host to the most visible work in the Digital Humanities, yet small liberal arts colleges have in recent years seen an increase in both interest and participation in the field. It has been six years since Bryan Alexander and Rebecca Frost Davis raised the question “Should Liberal Arts Campuses Do Digital Humanities?” in the 2012 edition of Debates in the Digital Humanities, and this year--at the inaugural Association for Computers and the Humanities conference--is a good time to revisit that question and to affirm Alexander and Davis’ argument in favor of such engagement. The obstacles that they identify remain, but so too do opportunities to further integrate digital methods into undergraduate pedagogy and research. The “products” created by digital humanists at small liberal arts colleges may not be as high profile as those of larger institutions, but the dynamic processes that result from the expertise and interests of faculty and staff, as well as collaboration with students, facilitate an educational impact that hews closely to the heart of liberal arts education.

This proposed roundtable invites faculty and staff members from small liberal arts colleges to come together to discuss the work that is taking place on their campuses, as well as to share suggestions and strategies for doing digital humanities beyond the research university. We propose a 1-hour roundtable session featuring two facilitators and five presenters who will each offer opening remarks of approximately 5 minutes each, followed by an open discussion among speakers and the audience. Among the topics that will be discussed by the presenters are: undergraduate-focused research, teaching, and curriculum design, including the development of courses and minors across different disciplines and located in different parts of the institution; collaboration, including beyond the classroom and institution to explore ways of refracting engaged scholarship; and infrastructure and staffing, including the varied roles of faculty and staff and how their labor interacts with both resourcing and intellectual issues within an institution’s particular culture. We envision an audience made up mainly of faculty, staff, and students from liberal arts colleges as well as job seekers interested in working at small liberal arts colleges.

 
3:30pm - 4:30pm#SC3: Visual Resources to the Rescue Panel
Session Chair: Jasmine Burns
Marquis A, Marriott City Center 
 

Visual Resources to the Rescue: Supporting the Humanities with Digital Tools

Jasmine Burns1, Carolyn Lucarelli2, Ann Graf3

1Cornell University, United States of America; 2Penn State University, United States of America; 3Simmons University, United States of America

Although the role of visual resources professionals has shifted with the emergence of digital technologies, the core values have remained intact. As the slide library becomes a relic of the past, these spaces and services remain central to supporting research in the humanities. Images are an important source of knowledge, and visual resources professionals are uniquely positioned to provide guidance in finding, cataloging, and using visual materials. This session will present three aspects of the visual resources profession that reflect these values: providing research support, participating in departmental collaboration, and analyzing digital/visual collections.

Speaker 1 will present a broad discussion of the framing of images as research data, and the ways in which information professionals can support this practice. The terminology that defines “research data” in STEM does not readily apply to the humanities. As a result, humanities researchers have a tendency to resist the term, claiming that they do not produce research data and therefore have no need for data management strategies or workflows. The goal of this presentation is twofold: to define “research data” in a humanities context through a discussion of the ways in which humanities researchers create and aggregate image collections, and to address the processes by which information professionals can play an active role in shifting the treatment and perception of images as research data.

Speaker 2 will focus on her participation in a graduate seminar in Digital Art History for the Penn State Department of Art History and her subsequent efforts to transition the department’s Visual Resources Center (VRC) into a Digital Art History and Scholarship Lab. Speaker 2 will describe the ways in which she plans to position the VRC as a hub where faculty, staff, and students can explore digital resources, consider the possibilities and challenges of new technology, consult with others about project development and digital research, and collaborate on innovative research and teaching projects. Key to this initiative is the establishment of a graduate assistantship in Digital Art History. Speaker 2 will discuss the role of the DAH assistant and share the pilot project that resulted from this new program.

Speaker 3 has examined about 250 websites dedicated to the documentation and sharing of graffiti art/street art, looking specifically at how the curators of such sites organize their photographic collections into groupings for different facets (places, dates, artists, styles, etc.). Speaker 3 also interviewed a number of curators about their organizational methods and the labels they use to describe works. Additionally, she has compared graffiti art style language in graffiti zines with the graffiti and street art-related terminology available in the Art and Architecture Thesaurus (AAT). Speaker 3 will present this work, including the lack of overlap her 2016 research found between the AAT and the language used by the graffiti and street art community, and the subsequent addition of nearly all the missing terminology two years later.

 
3:30pm - 4:30pm#SC4: The SpokenWeb Panel
Session Chair: Tanya E. Clement
Marquis B, Marriott City Center 
 

The SpokenWeb: Collaborative Approaches to Literary Historical Study and Digital Development

Jason Camlot4, Tanya E. Clement1, Jonathan Dick2, Adam Hammond2, Sean Luyk3

1University of Texas at Austin; 2University of Toronto; 3University of Alberta; 4Concordia University

Since the introduction of sound recording technologies in the 1890s, and of portable tape recording in the 1950s, writers and artists have been documenting their performances. Yet most of these audio archives remain inaccessible or in peril of decay, or, if digitized, are still largely disconnected from each other. The SpokenWeb partnership is developing a coordinated and collaborative approach to literary historical study, digital development, and critical and pedagogical engagement with diverse collections of spoken recordings from across Canada and beyond. These approaches include 1) new forms of historical and critical scholarly engagement; 2) digital preservation and aggregation techniques, asset management and infrastructure to support sustainable access; 3) techniques and tools for searching, visualizing, analyzing and enhancing critical engagement; and 4) innovative ways of mobilizing digitized spoken and literary recordings within pedagogical, performative and public contexts. This panel comprises four SpokenWeb collaborators who will speak for 10 minutes each on these four approaches.

Speaker 1: A rationale of audio texts that considers how spoken word recordings can be understood in terms of feminist approaches to developing information infrastructures can help us reimagine the role audio can play in feminist DH scholarship. Using extant recordings in the Anne Sexton Collection at the Harry Ransom Center, including personal tapes of Sexton's therapy sessions, Speaker 1 reconsiders how a rationale of audio textuality helps us understand the nature of feminist DH in textual and literary studies more broadly.

Speakers 2 & 3: Critics of modernist literature have long noted the divergent uses of dialect, employed by some modernists (generally, white) as a liberating means of challenging linguistic norms, while confronting others (generally, non-white) as a constraining hindrance. In this presentation, Speakers 2 & 3 analyze recent audio readings of T. S. Eliot’s The Waste Land and Jean Toomer’s Cane to explore what tools of computational sound analysis such as Gentle and Drift can tell us about contemporary performance of modernist dialect.

Speaker 4: Digital humanities researchers who engage with digital literary audio have unique requirements for access and preservation systems. Available systems are ill-equipped at meeting their needs for metadata and content aggregation, digital asset management, and digital preservation. In this presentation, Speaker 4 discusses the development of a metadata scheme and repository system for preserving, presenting, and engaging with digital literary audio as part of the SpokenWeb project.

Speaker 5: This paper approaches literary audio collections in terms of the relationship between sound and signal, and how digital, visual sound signals may invite us to focus on the sounds we are trained not to hear. In curating a selection of audio from the visible signal, this paper asks where the digital sound signal takes us in listening to a documented literary event, and hypothesizes that it leads us past semantics, past speech, to a new audible and conceptual encounter with voice.

 
3:30pm - 4:30pm#SC5: Analysis of Visual Corpora with Deep Learning Panel
Session Chair: Laure Thompson
Marquis C, Marriott City Center 
 

Analysis of Visual Corpora with Deep Learning

Laure Thompson1, Taylor Arnold2, Peter Leonard3, David Mimno1, Lauren Tilton2

1Cornell University; 2University of Richmond; 3Yale University

Neural networks have revolutionized computer vision, and are beginning to be applied in humanities contexts. There are significant practical difficulties in working with these methods, but also exciting opportunities. Can we repurpose tools developed in other contexts to answer humanities questions in creative new ways? What can we do with these new technologies?

We can now access massive new digitized image collections, but it is difficult to analyze them. Unlike text, which can be broken into independently meaningful words, pixels are only meaningful in their original context. Deep learning models, specifically convolutional neural networks, have started to bridge this gap. Neural networks are not a panacea, however, and come with computational and interpretive challenges.

This panel presents three case studies, in which scholars use neural networks to investigate large corpora of visual materials. In addition to showing how these methods are being used to address humanities questions, we also discuss the computational and explanatory challenges of working with neural networks. These case studies are:

1. Formal elements of moving images. This paper shows how face detection and recognition algorithms, applied to frames extracted from a corpus of moving images, are able to capture many formal elements present in the media. Locating and identifying faces makes it possible to algorithmically extract time-coded labels that directly correspond to concepts and taxonomies established within film theory. Knowing the size of detected faces, for example, provides a direct link to the concept of shot framing. The blocking of a scene can similarly be deduced knowing the relative positions of identified characters within a series of shots.

2. Machine-reading the avant garde. We used neural networks to transform page images from modernist journals into numerical representations ("computational cut-ups"). We then used those cut-ups as input for classifiers to answer two separate questions: which pages contain music, and which pages are from a Dadaist journal? The successes and failures of these computational cut-ups illustrate the workings of the neural network, and allow us to question the boundaries between established categories.

3. Visual clustering for collection-scale analysis. Scholarship often proceeds by finding surprising connections between apparently different materials, but searching for connections between images has been difficult. In this paper we present a method that uses pre-trained convolutional neural networks and new methods of dimensionality reduction to create a navigable space of visual similarity for large image collections. Advanced WebGL programming allows the user to explore and interact with these semantic clusters among hundreds of thousands of images in a web browser.

The panel is for anyone interested in computational approaches to visual and material culture. While this panel is not a hands-on tutorial, it will be accessible to those unfamiliar with image processing. We will highlight open-source code and tutorials for applying all three of these approaches to new corpora.

 
4:30pm - 4:45pm#Break3: Break
Grand Ballroom Foyer A, Marriott City Center 
4:45pm - 6:15pm#SD1: Pedagogy Paper Session 1
Session Chair: Matthew Gold
Marquis A, Marriott City Center 
 

Genesis: General Education, Pedagogical Experiment, and Institutional Change

Vika Zafrin, Jason Prentice

Boston University, United States of America

In fall 2018, Boston University began implementing a new general education curriculum across its schools and colleges. Rather than prescribe a particular slate of courses, this curriculum requires students to develop certain essential intellectual capacities. Several of these capacities — including digital/multimedia expression, quantitative reasoning, and a five-part intellectual toolkit — can be fostered by bringing digital humanities tools and methods into the classroom.
In our presentation, we will report on a two-part project designed to fulfill the digital/multimedia expression requirement and the learning outcomes of the university’s Core Curriculum, a decades-old program in which students explore classic works in the humanities, social sciences, and natural sciences. Part one of our project is an instance of a new course being piloted in Spring 2019, in which students engage deeply with a single text from the Core Reading List and create a digital remediation. Part two is a website on which this remediation -- as well as projects from other sections focusing on other Core texts -- will be collected as the beginning of an online library of student work, with current and future students as the primary audience.
These intersecting projects have raised pedagogical, organizational, technical, and ethical questions new to our institution, if not the field. Briefly describing the local history and current state of DH support infrastructure building, we will outline:
* how a senior lecturer and digital scholarship librarian went about creating and co-teaching the course, building on and departing from similar teaching done elsewhere;
* our work with staff, students, and colleagues at a neighboring institution to introduce basic DH pedagogy into a large foundational program with a daunting variety of academic interests represented;
* and our collaboration with the IT division and individuals from outside the university to create the bones of a website for housing spring semester projects and to plan the larger web development project with an eye toward sustainability.
This work coincides with large-scale changes at BU Libraries (new leadership, a strategic planning process getting under way) and with conversations about what a solid DH support infrastructure might look like. Both efforts are relatively new in practice, though informed by years of advocacy. We’ll touch on how we’ve approached working in the midst of significant institutional change rife with both possibility and uncertainty, offering some tricks we’ve found that have helped us keep going.
We will weave into our presentation the ways in which recent field discussions of pedagogy, DH infrastructure building, and librarianship have informed our work. We’ll discuss the relationship between student work and digital scholarly editions, present the results of this multifold experiment, and describe the support and the challenges we’ve encountered. Regarding challenges in particular, we will address our approach to presenting a canonical Western text — in our case, the Book of Genesis — while being mindful of its ideological uses and interpretations over time. We hope to get feedback from a variety of conference attendees: undergraduate instructors, librarians who teach, students, and higher ed administrators.



The Risky Mediation of Archivists: Teaching DH on Digitized Archives

Jewon Woo

Lorain County Community College, United States of America

My presentation demonstrates the political presence of archivists, in both past and present, in preserving historic documents, by introducing my teaching experience with DH tools. In my African American Literature class over the last two years, students have explored the history of Black Ohioans through first-hand research on archives, interviews, digitized documents, and DH tools. Their outcomes reveal that they not only see (supposedly invisible and nameless) archivists’ intervention in preserving the past but also find themselves archivists who attempt to (re)invent the past through DH tools. Without naively upholding the unachievable claim of “objectivity,” the students could interpret archives critically by visualizing the archivist who gave textual authority to preserved materials. In addition, in the process of publishing their studies online, the students envisioned themselves as digital archivists who are visible in investigating and shaping their findings, and further in reproducing them digitally.

By taking the students’ digital projects as case studies, this presentation aims at discussing the politics of digitizing the past as capta, especially with a focus on race and gender. For example, how do we let a silenced runaway female slave speak of her story by visualizing her presence in the white-male-dominant document? What if our expectation for breaking her forced silence replaces her unheard voice in the end, so that a digital project about her in fact legitimizes her silence once again? How does a digital archivist engage in activism against racism and sexism by not only preserving but also vitalizing the slave from the obscure records? This discussion can allow us to rethink how to conduct humanistic research on the political aspects of digitized materials, which often reveal archivists’ neglect of racist and sexist practices in archives.



"So Near While Apart": Correspondence Editions as Critical Library Pedagogy and Digital Humanities Methodology

Francesca Giannetti

Rutgers University–New Brunswick

This paper describes two library-led text encoding projects involving correspondence collections. The first, a documentary edition of personal papers held by Peter Still, a former slave, was conceived as an independent research project involving the participation of two undergraduate research assistants; the second, based upon letters to and from the Rutgers College War Service Bureau (1917-1919), has been designed as a two-week text encoding unit in a proposed undergraduate course on digital humanities. These two projects, both featuring the letter as their object of study, are compared and contrasted as models of data and process, affording reflections on the overlapping concerns of the library instruction and digital humanities communities of practice. I propose viewing text encoding projects, particularly those that focus on lesser known creators or on life documents such as letters, as a means of accessing both critical library pedagogy and digital humanities methodology. By developing such projects, librarians address a number of collection and instruction related objectives of the library, while offering a valuable introduction to a set of methods that are of increasing importance to undergraduate education. Furthermore, these projects may be conducted at smaller scales, by reusing and adapting methods and software shared by the digital humanities community, thereby limiting reliance on institutional partners for technology and infrastructure support, which may not be forthcoming in under-resourced institutional contexts.



Lightning Talk: Centering Black DH Pedagogy in a First Year Seminar Course

Tatiana Bryant

Adelphi University, United States of America

This lightning talk presents a brief case study centered on designing and teaching a first year university seminar course on Black digital humanities at a PWI. Teaching traditional first year students with little-to-no exposure to Black Studies (the majority are STEM majors interested in the technology aspects of the course) required that this digital humanities course serve as a gateway into Black Studies for students interested in it who lacked the opportunity to study it previously (as well as students with no interest at the outset). This course integrates humanities curricula into STEM education, so that issues of diversity, equity, and inclusion and engendering empathy (for example, consider current conversations around combatting algorithmic bias) are tackled by exposing students to diverse histories and stories.



Encoding Working Lives: Modelling an Undergraduate DH Research Project on Archival Moravian Ego-Documents

Katherine Faull, Diane Jakacki, Carly Masonheimer, Jess Hom, Marleina Cohen

Bucknell University, United States of America

Funded by an institutional grant from the Mellon Foundation, a team of undergraduate students, faculty, and staff is researching the relationship between the pre- and early industrial revolution in Great Britain and the lives and belief systems of the working class populations who were members of the Moravian Church. The primary sources used are archival ego-documents from collections in London and Fulneck, Yorkshire.

Using a custom-built interactive digital platform that allows for the integrative searching of document metadata, the display of the digital image, and a transcription desk to create a digital text of the document, viewers can read the transcribed memoirs online and also create custom corpora to be further marked up. This collaborative research project significantly develops students’ intellectual training and investment in a DH project. The methodology employed for this investigation of Working Moravian Lives is based on TEI-encoding of specific entities within these ego-documents (people, places, institutions, emotions) in order to build up a personography, a gazetteer, and a sentiment dictionary of Early Modern English Evangelism. Such an encoding of entities allows the team to ask questions of the texts about the relationships between sentiment and work, sentiment and place, sentiment and people.

This paper explores two aspects of the DH project. First, how does changing the role of undergraduate students from hourly paid research assistants to collaborative researchers change motivation, investment, and critical DH inquiry into both the method and substance of the project? Initially, all three students were being paid hourly to do individual work (transcription, data analysis) vital to the overall project. However, with the institution of a more comprehensive research-team model, the students’ roles as researchers are redefined: they now have direct input into the project’s design, planning, discussion, and execution. Each student focuses on a particular aspect of the project in the expectation that she will ultimately develop her own research questions that can be pursued as part of the larger project. Through close mentorship, student collaborators learn new skills, such as TEI-compliant P5 XML entity markup, taxonomic and ontological development, and metadata management, as well as further transcription and project design.

Second, the entities marked up and extracted by the students are used to examine the relationship between work and emotion in the Moravian congregations of the West Riding of Yorkshire in the 18th century. Although the Moravian Church in Great Britain, like the Methodist Church of the Wesleys, has historically been seen as consisting primarily of members of the working classes whose artisan and laborer skills were fundamentally transformed by the advent of large-scale production in the North of England (Yorkshire and Lancashire), there has been very little work to date on the English-language memoir collections in the UK. Thus, access to these Yorkshire “ego-documents” provides the research team with a treasure trove of new material, written by the working-class members of the church.

 
4:45pm - 6:15pm#SD2: Designing Inclusive Information Systems Roundtable
Session Chair: Cara Marta Messina
Salons 4 & 5, Grand Ballroom, Marriott City Center 
 

Designing Inclusive Information Systems: From Theory to Change

Amanda Rust1, Cara Marta Messina2, Dorothy Berry3, Emily Drabinski4, Karen Li-Lun Hwang5, Raffaele Viglianti6, Scott Young7

1Northeastern University Libraries, United States of America; 2Northeastern University, United States of America; 3Harvard University, Houghton Library, United States of America; 4Long Island University-Brooklyn, United States of America; 5Metropolitan New York Library Council (METRO), United States of America; 6Maryland Institute for Technology in the Humanities (MITH), United States of America; 7Montana State University Library, United States of America

Most current systems for the acquisition, housing, and care of cultural objects have deeply colonial implications. Technology tends to embody existing inequities and enforce differential hierarchies based on power. In addition, digital information workers across many areas increasingly seek to preserve and provide access to the voices of disenfranchised and marginalized communities -- communities that have been at best ignored by the cultural heritage and higher education fields and, more likely, actively harmed.

Providing digital access to the collections of these groups foremost requires genuine, responsive partnership. It also requires that the technical and information systems through which we engage community contributors and participants be equally responsive to diverse cultural circumstances and needs. To fulfill the promise of community archives partnerships, cultural heritage practitioners -- be they solo librarians at a historical society, scholars focusing on recuperative digital history, archivists in a large public library, or digital exhibit curators at a museum -- must also work towards more inclusive information systems.

This roundtable will facilitate a lively discussion on the nuts and bolts of developing inclusive information systems, with a particular focus on partnerships with underrepresented groups. By centering information systems, we focus on issues such as the harm caused by cataloging standards that classify living, breathing people as “illegal aliens”, or data models that enforce strict gender binaries such as “woman or man” when human experience is much broader. The roundtable moderators have explored the strategies and resources needed to create more inclusive information systems through a two-year project focused on technical systems in libraries, archives, and museums. Moderators will first frame the discussion by sharing the results of that project, including areas for future research.

One of the core outcomes of that project was a series of case studies focused on real-world scenarios in the use and design of information systems. A group of these case study authors will then dig into the concrete details of developing inclusive systems by presenting and analyzing their case studies, which cover topics such as metadata creation and aggregation for African American archives, user experience design with Native American Youth, marked and unmarked knowledge in library catalogs, linked open data for Asian American art history, and minimal computing for sustainability and expanded access.

Through these case studies, we will map out areas of commonality in both what makes creating more inclusive systems possible, and what makes it difficult, leading us to a better understanding of the potential points of impact in cultural heritage practice. Attendees will leave with a greater understanding of theoretical work across disciplines such as library and information science, digital humanities, archival science, and computer science. Attendees will also leave with a greater understanding of the design decisions that are made in the everyday practice of creating inclusive information systems, and how they might create more inclusive information systems in their own work.

 
4:45pm - 6:15pm#SD3: EcoCritical Digital Humanities Panel
Session Chair: Ted Dawson
Salon 2 & 3, Grand Ballroom, Marriott City Center 
 

EcoCritical Digital Humanities, Or How to Save the Planet

Amanda Starling Gould1, Max Symuleski2, Ted Dawson3, Craig Dietrich4, libi rose striegl5

1Duke University, United States of America; 2Duke University, United States of America; 3University of Maryland, College Park; 4Occidental College; 5UC Boulder

We in digital humanities and media studies like to use environmental metaphors. We talk of “media ecologies,” “atmospheric media,” and “cloud computing” without ever actually rooting their referents in their fleshy (and dirty) materials. Maxwell, Raundalen, and Vestberg have suggested that such metaphors of the environment obscure the relationship of digital media to the material world, enabling utopian discussions about virtual environments at the precise moment in which the real environment is in crisis. The emerging field of Ecocritical DH (EcoDH) seeks to maintain a focus on the material world within the digital humanities. Located at the nexus of environmental humanities and digital humanities, EcoDH mobilizes a range of tools and critical constructs, using digital methods to investigate environmental issues and ecocritical frameworks to re-materialize digital issues. EcoDH thus offers new horizons for digital work while challenging digital humanities to investigate its own practices and metaphors.

Our panel features scholars and practitioners from various institutions and backgrounds discussing the existing place of, and future possibilities for, EcoDH. Speaker 1 will present on immersive experiences of infrastructure and how these can both reveal and obscure the materiality of the digital. Speaker 2 dirties the digital humanities through metabolic thinking and outlines the urgency of cross-disciplinary EcoDH as a model for enacting deeper engagements with our digital-environmental entanglements. Speaker 3 will discuss his work combining permaculture with network culture by creating software that drives non-hierarchical systems such as Scalar and ThoughtMesh. Speaker 4 will explore the political ecology of maintenance as it relates to digital objects, digital infrastructures, and life-cycles of computational hardware. libi will address media archaeological and techno-revitalization practices in relation to obsolescence and convenience culture.

To demonstrate EcoDH at its best, our presentation will be a hybrid intervention that puts into practice our theoretical intentions.

 
4:45pm - 6:15pm#SD4: Digital Textuality Paper Session 1
Session Chair: Alexander Gil
Marquis B, Marriott City Center 
 

Deploying a Digital Edition Using Minimal Computing Principles

Avery Jacob Wiscomb, Steven Gotzler, Daniel Evans

Carnegie Mellon University, United States of America

This talk presents a new online digital edition of Karl Marx’s Capital Vol. 1 built using the lightweight markup language Markdown and “Ed,” a Jekyll theme developed by Alex Gil and others. Called MARXdown, our edition of the text incorporates group annotations using hypothes.is, which fosters community building by enabling sentence-level note-taking and discussions layered on top of an easy-to-read interface. Specifically, MARXdown supports asynchronous group readings and crowdsources contributions from students and faculty at other institutions. Beautifully rendered, the edition brings together the original English translation of Marx’s text from 1887 with extant scholarly sources and external media to create a multi-layered annotated edition of the text.

We will demo the edition, discuss implementation and workflow, and offer suggestions for ways to build similar “Ed.”-style projects in the classroom or with the public. We argue that the use of lightweight digital technologies to write, publish, and review scholarship, especially when based on minimal computing principles, empowers oftentimes marginalized voices to contribute to scholarly or reading editions of texts meant to last. We suggest that others deploy critical open-source editions to the public, especially for key texts in decolonial, indigenous, Black studies, cultural and critical ethnic studies, and intersectional feminist interventions.



Lightning Talk: Using Smartphone Architecture as Narrative Structure. Franz Friedrich's "Zeitreiseführer"

Katharina Tiwald

University of Applied Arts, Vienna, Austria

In 2015, the renowned German publishing house Fischer made an app available on its website: "25052015. Der letzte Montag im Mai. Ein Zeitreiseführer" by Franz Friedrich ("The last Monday in May. A Time Travel Guide"). Friedrich's app is a novel and still just that: an app. No hard copy edition of it is scheduled to appear. Friedrich's novel makes unique use of smartphone architecture as a backbone for the story line, thereby marrying form and content: the app, following a first-person narrator walking through Berlin on a spring day in 2015, is targeted at a tourist arriving from the future.

In this lightning talk, I will highlight in what way "novels-as-apps" differ from hyperfiction and more or less collaborative forms of writing connected to the digital age, such as chat fiction apps and "facebook novels" like "Zwirbler", initiated, coordinated and lastly published as a book by Vienna-based author Gergely Teglasy. Authorial control and reader guidance through narrative sequencing are key concepts in defining that difference, further leading to considerations regarding editorial processes and placement on the literary market (implying economic considerations).

I hope to be able to look into Friedrich's creative and practical path towards publishing this work (an interview with him and his editors is scheduled for early 2019) and outline further narrative possibilities in novels-as-apps. In European literary tradition, novels are regarded as the single most open genre for experiment. Smartphone architecture with its rhizomic structure should provide an excellent playground for just such experiments.

Since this project is a dissertation at a University of Applied Arts, if time allows I will present plans for a novel-as-app in progress that will take a different approach towards using app structure as background to its story line. I chose a historical personality as my main character to demonstrate how a new medium can illustrate a future-oriented mindset without the main character necessarily hailing from the future (or even the present). This character's approach towards political and literary innovation will allow me to make use of narrative (and maybe cross-genre) possibilities as outlined in the first section of my talk - again, if time allows.



SF Nexus: A Comprehensive Corpus of Speculative Fiction for Non-Consumptive Research

Rikk Mulligan1, Alex Wermer-Colan2

1Carnegie Mellon University, United States of America; 2Temple University, United States of America

Literary studies uses canon to limit a corpus. The SF Nexus seeks to be as inclusive as possible in curating speculative fiction, encompassing a range of science fiction, fantasy, horror, and related subgenres. Although critics and fans dispute the historical or literary merit of specific works, scholarship in general continues to skew Anglocentric, white, male, and heteronormative. Paul Kincaid’s “On the Origins of Genre” (2003) disrupts more traditional definitions of genre and canon, particularly those that align SF with the works of the Golden Age era as white, male, college-educated, and, by implication, straight. John Rieder’s “On Defining SF, Or Not: Genre Theory, SF, and History” (2013) and Gary K. Wolfe’s Evaporating Genres (2011) further question both canon and genre, treating them as tools for reading rather than absolutes. These critics emphasize that SF has never been easily defined and has always had writers who were not white men.

Performing genre analysis at scale, using quantitative methods enhanced with computational tools for textual and cultural analytics, offers a more inclusive approach. However, those who apply digital methods to contemporary literature confront two further obstacles: 1) copyright restrictions, and 2) what Margaret Cohen called “the great unread” (a term borrowed by Franco Moretti). These challenges limit access to copyrighted works, especially out-of-print, mass-market materials, even in HathiTrust’s corpus. While the Google Books project has created an extensive digital research corpus, Ted Underwood and others warn that it has significant gaps, which limits research. Studying SF as a genre requires quantitative methods, as the number of published works increased exponentially after the mid-twentieth century. Enlarging the available SF corpus will help bring into relief works and authors that were underappreciated when first released and create a more robust field for scholars.

The two authors of this paper propose a twenty-minute presentation to explain our development of a comprehensive corpus of copyrighted SF for non-consumptive research using online tools for data analysis. The first stage of this project involved creating a reproducible workflow within a university library to grow a long-term digitization project of over 1000 duplicate copies of special collection materials. We are working to expand the project by building a network to digitize special collections containing pulp genre fiction. Texts will be added by web-scraping the Internet Archive, Project Gutenberg, and Luminist.org. We will discuss our efforts to create standard practices and policies for the exchange of copyrighted materials between researchers for various purposes of disaggregated data analysis and exhibition.

Currently available digital resources for science fiction trend towards online exhibits for genre analysis. We seek to link siloed projects together to comprehensively supplement such databases as the Corpus of Historical American English. While our materials will be ingested into HathiTrust, we are also curating datasets for computational generation of textual qualities as metadata, from n-grams to word embeddings. By bringing together a diverse, under-explored SF corpus, applying tools for genre analysis, and creating interactive points of engagement, we provide new ways for users to discover this material.
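As a rough illustration of the kind of derived dataset mentioned above, the sketch below (the sample line is invented, and this is not the project's actual pipeline) computes n-gram counts: a non-consumptive feature set that can be shared with researchers without redistributing a copyrighted full text.

```python
import re
from collections import Counter

def ngram_counts(text, n=2):
    """Derive n-gram counts -- a non-consumptive feature set that can be
    shared without redistributing the copyrighted full text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

# Invented sample line standing in for a digitized pulp paperback.
page = "The ship fell out of the sky and the ship burned."
counts = ngram_counts(page, n=2)
print(counts[("the", "ship")])  # -> 2
```

Word embeddings trained on such a corpus would be a further, denser derived representation serving the same non-consumptive purpose.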

 
4:45pm - 6:15pm#SD5: Promoting Digital Humanists Roundtable
Session Chair: Seth Denbo
Marquis C, Marriott City Center 
 

Promoting Digital Humanists: Scholarly Societies and Academic Careers

Seth Denbo1, Sarah Levine2, Elizabeth Losh3

1American Historical Association; 2American Academy of Religion; 3William and Mary, Modern Language Association

Scholarly societies in the humanities have set about creating guidelines for how digital scholarship should be evaluated for hiring, promotion, and tenure. By encouraging openness to new methods and formats, and promoting intentionality in both individual and departmental responses to innovation, these guidelines have been developed to ensure that scholars in the digital humanities are able to obtain the appropriate credit and recognition for their work.

With greater numbers of scholars in our disciplines doing research that derives conclusions through digital methods and publishing their outcomes in non-traditional formats (whether through preference or necessity), it is time to explore and assess the efficacy of these guidelines. Are the efforts of these societies making a difference, and if so, how? This roundtable will include representatives from several large humanities associations — American Academy of Religion, American Historical Association, College Art Association, and Modern Language Association — talking about their guidelines and how they are being used by individual scholars to support their job, tenure, and promotion applications. Associations have written into these documents recommendations for departments as well as, in some cases, institutions. Asking questions about the effect these recommendations are having, and how we can increase their uptake, will be vital to the continued development of digital scholarship in humanities disciplines.

 
6:30pm - 8:00pm#WalkingTour: Downtown Public Art Walking Tour with the Office of Public Art
This event is a reserved event. Those who have registered will be confirmed at the start of the walking tour.
 
Date: Thursday, 25/Jul/2019
8:00am - 4:00pm#Reg2: Registration/Check-In
Grand Ballroom Foyer A, Marriott City Center 
8:00am - 5:00pm#BookExhibit2: Book Exhibit 2
City Center A, Marriott City Center 
9:00am - 10:30am#SE1: Race and Data Paper Session
Session Chair: Carolina Villarroel
Marquis C, Marriott City Center 
 

What Historians Can Learn from Machine Learning and Vice Versa: The Case of the Civil Rights Movement

Nico Slate

Carnegie Mellon University, United States of America

This paper will examine the history of the civil rights movement as a case study at the intersection of history and machine learning. My key question is this: what would it mean to understand a social movement as a process akin to machine learning? I will begin by asking a more traditional question (from the perspective of academic history): what role did learning play in the civil rights movement? From Brown v. Board to the Little Rock Nine to the University of Mississippi, efforts to integrate educational facilities produced many of the most famous crises of the Civil Rights / Black Power era. In addition to chronicling such conflicts, historians have explored radical approaches to antiracist education, such as the Highlander Folk School, the Citizenship Schools, the Freedom Schools, and the community schools run by the Black Panthers. What remains unclear is the relationship between the goal of increased access to education and the methods of movement activists. What does it mean to understand the civil rights movement as itself a form of education? How can such a lens help us rethink where and how the movement was taught and learned?

At the core of my paper is the educational impact of the civil rights movement on activists themselves, what Bernard Lee called “the university of the movement.” While some scholars have tracked the transmission across generations of what sociologist Charles Payne calls the “organizing tradition,” others have recognized the disjunctures between different generations of activists (the work of Tomiko Brown-Nagin is exemplary in this regard). My paper will contribute to these debates by bringing original archival research on the civil rights movement into conversation with recent developments in the field of machine learning and related disciplines in the psychology and cognitive science of education. While the public persists in seeing the civil rights movement as the work of Martin Luther King, most scholars see a multifaceted struggle involving a variety of actors and organizations. Yet it remains unclear how ideas and knowledge flowed within the movement, and between the movement, its adversaries, and the general public. How can advances in machine learning provide novel approaches to the role of learning within the civil rights movement? And vice versa, how can a close study of a social movement offer a new vantage point on machine learning?



Validating Machine Learning Systems in the Humanities: Bayesian Explorations of the Encyclopedia Britannica from 1768-2010

Aaron Mauro

Penn State, United States of America

In March of 2012, the Encyclopedia Britannica ceased printing paper editions of its handsomely bound reference books. The Encyclopedia Britannica, first published in Edinburgh, Scotland in 1768, remains the oldest English-language encyclopedia in continuous production, but it will henceforth be updated only through its online offering. In the era of community-based online encyclopedias like Wikipedia, now is an interesting time to reflect on the content of the complete print run of the Encyclopedia Britannica. This also represents an interesting moment to examine how past systems of defining “general knowledge” have worked to shape societal prejudices, beliefs, and assumptions. This paper will demonstrate how Natural Language Processing with Python can be used to track the evolution of popular conceptions of race and racialization across all 15 editions of the Encyclopedia released over its 244-year history. In particular, the presentation will describe the development of a Bayesian racial classifier and how it has been used to discover racialized passages relating to numerous topics, including music, agriculture, and religion. A critical step in using machine learning techniques is validating the results for users by offering a glimpse into the underlying system. Machine learning techniques are often misunderstood by audiences as either a panacea and deliverer of plain truth or a highly suspect and untrustworthy method for understanding data; the project site, generalknowledgeproject.network, works to counter this perception by hosting a racial classification tool that lets its audience gain first-hand experience of the classifier, with the goal of validating the methods used. In addition to hosting interactive visualizations, the site contains descriptions of processing methods, a companion bibliography, as well as the source code for our analysis.
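The project's own classifier is not reproduced here; as a generic illustration of how a Bayesian text classifier can flag passages, the following minimal multinomial Naive Bayes sketch (training passages and labels are invented) scores a new passage against a handful of hand-labeled examples.

```python
import math
import re
from collections import Counter

def tokenize(s):
    return re.findall(r"[a-z]+", s.lower())

# Toy training set (invented): passages hand-labeled by whether they
# use racialized language, standing in for the project's training data.
train = [
    ("the native races are described as primitive", "racialized"),
    ("the savage tribes of the region", "racialized"),
    ("the chemical properties of sulphur", "neutral"),
    ("the rotation of crops in agriculture", "neutral"),
]

# Per-class word counts and class priors.
word_counts = {"racialized": Counter(), "neutral": Counter()}
class_totals = Counter()
for text, label in train:
    word_counts[label].update(tokenize(text))
    class_totals[label] += 1

vocab = set().union(*word_counts.values())

def classify(text):
    """Multinomial Naive Bayes with add-one (Laplace) smoothing."""
    scores = {}
    for label, counts in word_counts.items():
        total = sum(counts.values())
        score = math.log(class_totals[label] / sum(class_totals.values()))
        for w in tokenize(text):
            score += math.log((counts[w] + 1) / (total + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

print(classify("an account of the savage races"))  # -> 'racialized'
```

A real classifier would train on far more passages and expose the per-word log-likelihood ratios, which is exactly the kind of glimpse into the underlying system that the project site offers for validation.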



AI Interpretation of Violence Against Women in 20th Century Border Fictions

Francesca Vera, Michaela Coleman

Stanford University, United States of America

The challenges that machines face when interpreting figurative language are a significant impediment to applying computational techniques to many significant themes in literature. Figurative language is commonly used, for instance, in depictions of violence. To overcome this obstacle, this project aims to utilize artificial intelligence techniques so that we can better understand the ways in which depictions of violence towards women in literature have changed over the 20th and 21st centuries. Our team is focused on narratives centered around the US-Mexico border, although there is great potential to apply this study to other types of literature and other time periods. Our first step is to build a comprehensive corpus of border literature within our chosen time period, including authors such as Roberto Bolaño, Carlos Fuentes, Sandra Cisneros, and others. This corpus serves as the body of study for our experimentation. The text is accurately labeled with details that help one discern emerging relationships between time and salient features based on the model’s performance. After finalizing the data source, the project adopts existing technical methods commonly used to understand nuances and qualities in bodies of text: Bayesian analysis of underlying character features; latent Dirichlet allocation in topic modeling (for which we might look at topics like “Women” and “Violence,” among others); and natural language inference techniques for text comprehension. Through this complex textual analysis, we derive inputs for a machine learning model that identifies and highlights instances of violence towards women in literary works as outputs. A feature analysis of the model provides insights into patterns of this type of violence, especially how they are represented in text.
By incorporating state-of-the-art technology in socially relevant humanistic inquiry, our research encourages immediate application of AI for Digital Humanities analysis and strives to achieve a model more attuned to varying levels of subtextuality.
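Of the methods listed above, latent Dirichlet allocation can be sketched compactly. The toy collapsed Gibbs sampler below (corpus, topic count, and hyperparameters are all invented, and this is not the project's implementation) infers per-document topic mixtures of the kind that could feed a downstream model.

```python
import random
from collections import defaultdict

# Toy corpus (invented) standing in for labeled passages of border fiction.
docs = [
    "border crossing desert night border".split(),
    "women violence fear violence night".split(),
    "desert crossing river border desert".split(),
    "violence women silence fear women".split(),
]

K = 2                  # number of topics (assumed)
ALPHA, BETA = 0.1, 0.1  # Dirichlet hyperparameters (assumed)
vocab = sorted({w for d in docs for w in d})
V = len(vocab)

random.seed(0)
# Randomly initialise a topic for every token.
z = [[random.randrange(K) for _ in d] for d in docs]
doc_topic = [[0] * K for _ in docs]
topic_word = [defaultdict(int) for _ in range(K)]
topic_total = [0] * K
for di, d in enumerate(docs):
    for wi, w in enumerate(d):
        t = z[di][wi]
        doc_topic[di][t] += 1
        topic_word[t][w] += 1
        topic_total[t] += 1

# Collapsed Gibbs sampling: resample each token's topic from its
# conditional distribution given all other assignments.
for _ in range(200):
    for di, d in enumerate(docs):
        for wi, w in enumerate(d):
            t = z[di][wi]
            doc_topic[di][t] -= 1; topic_word[t][w] -= 1; topic_total[t] -= 1
            weights = [
                (doc_topic[di][k] + ALPHA)
                * (topic_word[k][w] + BETA) / (topic_total[k] + V * BETA)
                for k in range(K)
            ]
            t = random.choices(range(K), weights)[0]
            z[di][wi] = t
            doc_topic[di][t] += 1; topic_word[t][w] += 1; topic_total[t] += 1

# Per-document topic mixtures (theta), one probability row per document.
theta = [[(c + ALPHA) / (sum(row) + K * ALPHA) for c in row] for row in doc_topic]
```

In practice a library implementation would be used on a full corpus; the per-document mixtures in `theta` are the features a topic such as "Women" or "Violence" would be read off from.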



Creating Models of Influence at the Intersection of Dance and Digital Humanities: Embodied Transmission in the Performances of Katherine Dunham

Kate Elswit1, Harmony Bench2

1University of London, Royal Central School of Speech and Drama; 2The Ohio State University

This presentation comes from Dunham’s Data: Katherine Dunham and Digital Methods for Dance Historical Inquiry. The overarching project explores the kinds of questions and problems that make the analysis and visualization of data meaningful for dance history, pursued through the case study of 20th century African American choreographer Katherine Dunham. Dunham is an exemplary figure to pursue such an interdisciplinary inquiry into dance history and digital modes of analysis, due to her own model of research inquiry, which combined theoretical and print modalities across multiple fields, from anthropology to dance pedagogy. Dance is transmitted from body to body in communities, training and rehearsal studios, and theatres, and as such, it moves across transnational cultural and artistic networks. Here, dance studies functions as an interlocutor with imperatives of digital humanities to “bring back the bodies” in digital research (D’Ignazio and Klein 2019).

At ACH, we focus on modeling what we describe as traces of “influence” in and around dance touring, and reflect on the development of scalable digital analytical methods for studying influence that are shaped by approaches to embodiment from dance, critical race theory, and digital cultures. We further consider how to represent embodiment digitally, without reducing lived experience to data, as we bring our training as dance scholars to bear on those experiences that both underpin and haunt the data we have manually curated from Dunham’s archives.

Evaluating Dunham’s influence includes analyzing the direct and indirect circulation of dance gestures, forms, and practices spanning an 80-year career across six continents. This presentation focuses on the period 1947-1960. We employ spatial analysis to demonstrate how Dunham’s choreography materializes the influence of the many geographic places that infused her diasporic imagination. We also trace the flows of performers working together over time and space as a dynamic collective, and the embodied transmission of Dunham’s dance repertory. We further engage other computational methods to examine the relationship between touring and mobility, for example, how locations influenced one another as particular cities and theatres opened onto future sites of travel. Creating models of influence in these ways offers means to visually elaborate ephemeral corporeal practices of cultural transmission in dance. Such analysis enables deeper consideration of the dynamic relations of people and places through which Dunham’s diasporic aesthetic developed and circulated, in dance gestures, forms, and practices.

 
9:00am - 10:30am#SE2: New Media Paper Session 1
Session Chair: Elizabeth Losh
Marquis B, Marriott City Center 
 

The Essay that Broke the Internet

John Jones

The Ohio State University, United States of America

This talk analyzes O’Reilly’s 2005 essay “What Is Web 2.0: Design Patterns and Business Models for the Next Generation of Software” and its effects on the development of the Internet since the late 2000s. While this highly influential essay purported to offer an analysis of, and prognostications about, the future of the web—describing potentially successful, cutting-edge web businesses and identifying how those businesses could overtake competitors—its predictions have become self-fulfilling prophecies, influencing the values and goals of technology companies for over a decade. Following Castells’s arguments on the potential for network design decisions to “program” outcomes in those networks, I show how O’Reilly’s vision of Web 2.0 privileged data collection, dark patterns, and walled gardens, setting the commercial internet on a path that has greatly contributed to issues such as the growth of online surveillance; social media trolling and abuse; and, perhaps most importantly, the role internet companies play in these and other issues as both gatekeepers and arbiters of thought.

As with Ankerson’s Dot-Com Design and Tufekci’s Twitter and Tear Gas, this talk applies humanistic analysis and interventions to the products of digital culture, identifying the present-day impacts of Web 2.0 “design patterns” on the Internet in order to aid scholars and technologists in addressing the unintended consequences of those patterns, an integral step in making the web more hospitable to the goals of social justice.



Incels, The Red Pill, and Three Waves of the Manosphere

Chloe Perry

Carnegie Mellon University, United States of America

In 2014, Elliot Rodger went on a shooting spree in Isla Vista, California, killing seven people, including himself, and injuring fourteen more. Rodger’s manifesto claimed that websites such as “PUAhate” confirmed for him “how wicked and degenerate women really are.” His internet history linked him to communities of men that identify as involuntary celibates, or “Incels.” Similar attacks since have spurred interest in websites with nihilistic content associated with the Incel movement. Incels have often been contrasted to self-help and pick-up artist groups such as “The Red Pill” and “men going their own way” (MGTOW), which are collectively known as the “manosphere.” Positive aspects of the manosphere such as motivation and self-help are accompanied by sexist world-views and an obsession with dominance, the control of women, and one’s position in a social hierarchy.

Kate Manne’s book Down Girl (2017) observed that most public responses to Rodger’s Isla Vista attack refused to recognize his actions as misogynistic. Manne believes Rodger’s case of misogyny, and others, are often dismissed rather than researched because vague definitions such as “woman hating” render the term meaningless or seemingly inapplicable. Manne argues for revised definitions of misogyny and sexism, which we support and expand on in our research.

This presentation reports recent findings from a combination of close reading and large-scale computational analysis of online forums including Reddit’s “Braincels” and “TheRedPill” subreddits, the men’s blogs “SoSuave,” “Chateau Heartiste,” and “Dalrock,” as well as from pick-up manuals dating back to the 1970s. We identify three “waves” in the manosphere. By isolating terms from books and posts, we track the succession of the waves as they correspond to techniques, advice, and ideologies that men adopt in navigating what they term the “sexual marketplace.”

We have found the major transitions between these waves to include: (1) the shift from the enumeration of practical seduction techniques for use in a bar or nightclub to the development of theories on what women find innately and universally attractive; (2) the adoption of popularized versions of microeconomics and evolutionary psychology to provide a pseudoscientific basis for these theories; and (3) the use of these theories to represent women as amoral, biological machines, lacking the freedom of will that men naturally possess, and thus unsuited to holding political power. Current manosphere ideology leads to a niche subset of the group feeling justified in retribution against women for their lack of attention, affection, and sex.



Engagement Through Collaboration and Digital Curation: The Headline News Project

Marietta Carr, Kim Lenahan

Cuyahoga Community College, United States of America

This paper discusses the conception, creation, and outcomes of, and reflections on, a digital public humanities project used to stimulate campus-wide engagement. A collaboration between community college resource divisions (Archives, Library, Instructional Design, Instructional Specialists, Studio, Creative Arts and Media, Diversity, etc.), faculty, students, and community experts created a ‘digital podium’ around an exploration of the college’s historic student newspapers.*

The Headline News project centered on the creation of a digital video loop and the curation of materials to accompany both curricular and co-curricular exhibits and events that reflected interdisciplinary scholarship, critical thinking, debate, and engagement through the use of social media and digital tools. Beyond the academic component, this experiment stimulated new relationships, generated new learning tools, and broadened resource awareness and engagement both within the college and extending into the wider city. The ultimate goal of the project is to create an annual, iterative examination of humanities topics through a variety of lenses and encourage thinking in new and creative ways.

Headline News involved students in questioning the role of the news within the broader questions of social issues, activism, community involvement, and power structures. Students’ understanding of how news is identified, produced, and distributed is at the heart of the project. This paper will explain the project, analyze outcomes, and propose new iterations. This project will appeal to faculty, librarians, instructional designers, and administrators seeking to entice their institutions to expand digital literacy, promote cross-unit collaboration, and explore new avenues of faculty, student and community engagement.

*implementing scholarship such as Calder & Scheinfeldt Uncoverage (Pedagogy); Sam Weinberg Historical Thinking & Rafael C Alvarado, Debates (Digital Humanities); Dan Cohen, Digital History, Jeffrey Pomerantz, Metadata (Information Science), Mariet Westermann & Donald Waters (Mellon Initiative Results)



Webs: An Ethnography of a Wikipedia Talk Page

Jarah Moesch

independent scholar, United States of America

On May 2, 2001, user Erdem Tuzen, a physician from Istanbul, Turkey, created the myasthenia gravis Wikipedia page. It was written as a simple paragraph describing the disease’s main symptoms, how the disease works, and how it is treated.

Zooming through time shows that it has been edited 1023 times by 521 users. It has 10 main sections, with another 15 subsections. There are 14 links to other Wikipedia pages in the introductory paragraph alone. The amount of ‘information’ has multiplied. What does it mean to be an embodied being within systems of data? What happens when standards and practices rely on particular forms of information as being more worthy than others, causing gaps in the sharing of knowledge?

When people want to gain knowledge about their health, a majority go online for information, resulting in almost 200 million views of medical articles on Wikipedia per month; yet Wikipedia is designed so that, instead of providing useful, accessible information, it actually functions as a barrier to knowledge.

This is due to a number of reasons: first, the writing itself is above the recommended reading level for average US literacy; second, this happens partially because of WikiProject Medicine, an international group of health professionals dedicated to making sure the medical information on Wikipedia is technically correct; and third, the ‘Five Pillars’ for creating, editing, and maintaining Wikipedia are themselves ideologies that reinforce a particular white, Western, Christian set of already-established knowledges.

I will analyze this through an ‘ethnography’ of the myasthenia gravis Wikipedia page to come to know how already embedded knowledge practices and assumptions structurally co-create the environment for ways of knowing to be present and absent. I will query the policies, guidelines and structures of Wikipedia itself as content, coming to terms with the ways bodies are directed through a combination of histories and the (current, online) practices of ‘access’ in knowledge production.

 
9:00am - 10:30am#SE3: Connecting the Dots Roundtable
Session Chair: Meghan Ferriter
Salon 2 & 3, Grand Ballroom, Marriott City Center 
 

Connecting the Dots: Closing Gaps & Using Collaboratively Created Cultural Heritage Data

Meghan Ferriter1, Effie Kapsalis2

1Library of Congress, United States of America; 2Smithsonian Institution, United States of America

In recent years cultural heritage institutions have hosted projects that generate textual data and metadata to further support use of library, museum, and archival resources, specifically via collaborative knowledge production and crowdsourcing workflows. Examples include large-scale text transcription projects, image identification, improved representation of collections in Wikimedia projects, and emerging cross-institutional collaborations that extend collections-as-data approaches. Public participants relate to these projects as empowering and seek to understand the goals of researchers who would use the data. In the process, these projects cultivate critical reflection and action upon gaps in data and representation. They have generated questions from participants about why particular collections are accessioned and how they are described. They also teach participants about the types of data they are helping to create, and then demonstrate improved access to collections materials to extend understanding of how data feeds into systems that serve collections. As such, they may serve as public humanities efforts, as well as educational programming.

The cultural heritage + knowledge repository projects described above rarely begin oriented around a specific research question. This framing contrasts with researcher-led crowdsourcing and data transformation approaches. Yet the data outcomes for these projects may be quite similar: informed by the physical arrangement of cultural heritage collections; the systems in which their metadata are stored; the interfaces through which they are accessed; and the resulting constraints researchers and general audiences may encounter in using them. Further work remains to articulate connections between the needs of those who steward data, those who augment and expand data, and those who might use these forms of data.

During this roundtable session, speakers will reflect on the design, outcomes, and positioning of projects including the Library of Congress’ By the People, the Smithsonian Institution’s American Women’s History Initiative, and other participatory projects. Key to the discussion: comparison of resulting data formats, the ways people describe their participation, and current uses of resulting data.

For this roundtable, we wish to welcome participants from many levels of skill, expertise, and resources, including newcomers, alt-ac, junior and senior scholars, and public participants. Together, we may discuss practices that may support continued use and access to metadata, collections, and research goals, as well as their connection to the motivations of public participants in these projects. A core goal of the session will be to engage with researchers and practitioners to surface possibilities, practical needs, and examples of barriers to using these data.

Key objectives for the session are to (1) make these projects and their data more intelligible to those who may wish to engage with them; and (2) create a space in which session participants can apply interdisciplinary considerations, so that these projects may include the needs of all users in their evolving design(s). Attendees who participate in this roundtable will develop shared understandings of ways to improve access to and use of a range of data, while identifying opportunities to seed more user-centered futures and potential for collaboration.

 
9:00am - 10:30am#SE4: Digital Textuality Paper Session 2
Session Chair: Rebecca Munson
Marquis A, Marriott City Center 
 

Combative Collaboration: Readers, Literary Influence, and The Little Review

William Reed Quinn

Northeastern University, United States of America

Recent digital scholarship has attended to the pace of historical change (Underwood) and the dynamics of textual influence (Jockers, Barron, et al.). At the same time, periodical studies has conceptualized the significance of seriality and the ways that time functioned in periodical print culture: as “one issue displaces another, a publication’s editor must avoid too much difference while supplying just the right amount of the same” (Mussell, 345). The seriality of magazines and newspapers has given rise to new questions about literary taste, influence, and causality amongst cultural works, questions that digital tools are well-equipped to explore.

This paper will focus on the influence of readers and their letters to the editors, which had a significant, yet overlooked role in constructing literary taste. I examine the extent of readers’ influence over other genres by topic modeling a corpus of modernist magazines. More specifically, I measure the cosine similarities of topic distributions to determine if any one genre set the tone—or topic—for the following year. This method highlights the various roles in literary production and begins to analyze which role had influence at different stages of cultural creation. Even the avant-garde Little Review, which prided itself for “Making No Compromise with the Public Taste,” regularly acknowledged and corresponded with its readers. Early twentieth century magazines are a site where the contests of literary taste are explicit and continuous. This is of particular importance to literary studies because it begins to show the complex and diverse activities that shaped literary production.
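
At its core, the genre-to-genre comparison described here reduces to cosine similarity between topic-distribution vectors. A minimal sketch follows; the three-topic distributions and genre labels are invented for illustration, not drawn from the author's corpus:

```python
from math import sqrt

def cosine_similarity(p, q):
    """Cosine similarity between two topic-distribution vectors."""
    dot = sum(a * b for a, b in zip(p, q))
    norm_p = sqrt(sum(a * a for a in p))
    norm_q = sqrt(sum(b * b for b in q))
    return dot / (norm_p * norm_q)

# Hypothetical topic distributions for two genres in consecutive years:
letters_1918 = [0.6, 0.3, 0.1]   # readers' letters, year t
fiction_1919 = [0.5, 0.4, 0.1]   # fiction, year t + 1
print(round(cosine_similarity(letters_1918, fiction_1919), 3))
```

A high similarity between one genre's topics in year t and another's in year t + 1 is the kind of signal the paper uses to ask which role "set the tone" for the following year.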



TEI, Transformation, and Text Analysis: Building a Markup-based Toolkit for Word Embedding Models

Sarah Connell

Northeastern University, United States of America

This paper will share insights gained from building a toolkit that uses text encoding to improve corpus creation for text analysis, with a web interface that is designed for theoretically-grounded experimentation in algorithmic text analysis. The Women Writers Project is currently developing the Women Writers Vector Toolkit (beta link at https://wwp.northeastern.edu/wwo/lab/wwvt/, final version to be published in December 2018), an interface that will allow users to explore several different word embedding models trained on texts from Text Encoding Initiative (TEI) corpora that include Women Writers Online, the Victorian Women Writers Project, and Early English Books Online. Word embedding models are a powerful method for studying relationships between words in large corpora, but training and querying them requires knowledge of a computer programming language, such as Python or R.
This project has two important foci. First, we investigate advanced methods of transforming TEI-encoded texts to improve results for both precision and semantic nuance in text analysis. We are using markup to remove elements that tend to distort results (such as speaker labels in drama), to improve tokenization based on encoding of named entities, and to enhance regularization by preferring elements such as those that mark expansions and corrections. We are also investigating methods of using the semantic distinctions instantiated in the markup to extract subcorpora based on generic features and document structures, enabling comparative analysis of prose and verse, paratextual and textual materials, quoted and non-quoted materials, and so on.

Second, the project is developing a web interface to allow for experimentation on pre-trained models, supported by a wide range of contextual materials, including glossaries and explanations, suggested searches and case studies, and class assignments and activities. The site is designed to open up word embedding models to research and teaching that is grounded in a thorough understanding of how the models operate, without requiring computer programming knowledge.

This project thus tackles two key challenges in digital humanities research: how to integrate text encoding and text analysis and how to make command-line technologies more accessible to novice users and in the classroom.
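
As an illustration of the markup-driven cleaning step described above (removing speaker labels before tokenization), here is a minimal sketch using Python's standard library; the TEI fragment is invented and far simpler than actual Women Writers Project encoding:

```python
import xml.etree.ElementTree as ET

TEI_NS = "{http://www.tei-c.org/ns/1.0}"

# Toy TEI fragment (hypothetical; real WWP encoding is much richer):
tei = """<TEI xmlns="http://www.tei-c.org/ns/1.0"><text><body>
<sp><speaker>QUEEN.</speaker><l>What have I done?</l></sp>
</body></text></TEI>"""

def strip_speakers(root):
    # Remove <speaker> children so labels don't skew word frequencies.
    for parent in root.iter():
        for child in list(parent):
            if child.tag == TEI_NS + "speaker":
                parent.remove(child)

root = ET.fromstring(tei)
strip_speakers(root)
tokens = " ".join(root.itertext()).split()
print(tokens)  # the speaker label is gone; only the spoken line remains
```

In a pipeline like the one described, the cleaned token stream would then feed a word embedding trainer rather than being printed.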



“Everyone is Gay”: The Presentation of Queer Relationships in Fanfiction

Michael Miller Yoder, Luke Breitfeller, Carolyn Penstein Rosé

Carnegie Mellon University, United States of America

Authors of fanfiction often explore same-sex relationships and gender expressions outside societal norms, leading researchers to label fanfiction as a “queer space” (Lothian et al., 2007). However, critiques of fanfiction culture posit that despite the commonality of queer relationships in fanfiction, these stories can still further existing heteronormative or cisnormative narratives (Walton, 2018). Does fanfiction as a “queer space” frame queer relationships as the norm? We process thousands of fanfiction stories with techniques from natural language processing to address this question on a large scale.

To analyze framing of queer identity terms throughout fanfiction, we use Word2Vec, a machine learning technique that projects words as vectors into geometric space based on the context words appear in. Analyzing these vectors across specific semantic axes allows a visualization of shifts in semantic associations for identity terms across corpora (An et al., 2018). We train word vectors on fanfiction from the website ArchiveOfOurOwn and compare to a corpus representing "mainstream" fiction, drawn from COCA, Hathi Trust, or other available sources. We plot identity labels on semantic axes from antonym pairs such as same/different, fake/real, and good/bad, based on three dimensions of identity representation in discourse (Bucholtz and Hall, 2004). Preliminary results when compared with a mainstream corpus of Google News vectors show fanfiction vectors for trans, gay and queer projected closer to real than mainstream vectors, which fits fanfiction as a “queer space.” However, we also find surprising associations--for instance, fanfiction vectors for LGBTQ terms such as 'gay' projected closer to 'bad'.
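
The SemAxis-style projection described here scores a word vector along an axis defined by an antonym pair. A toy sketch with invented three-dimensional vectors (real models use hundreds of dimensions trained on the corpora named above):

```python
def project_on_axis(word_vec, pole_a, pole_b):
    """Score a word along a semantic axis running from pole_a to pole_b;
    a positive score means the word leans toward pole_b."""
    axis = [b - a for a, b in zip(pole_a, pole_b)]
    norm = sum(x * x for x in axis) ** 0.5
    return sum(w * x for w, x in zip(word_vec, axis)) / norm

# Toy vectors (hypothetical values, for the fake/real axis only):
fake = [1.0, 0.0, 0.0]
real = [0.0, 1.0, 0.0]
queer = [0.1, 0.8, 0.2]
print(project_on_axis(queer, fake, real) > 0)  # leans toward 'real'
```

Comparing such scores for the same identity term across a fanfiction-trained model and a mainstream-trained model is the kind of contrast the paper reports.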

We plan on exploring these findings to determine whether they point to hetero- and cisnormativity lingering in a queer space or just to limitations in surface text analysis. Vectors for explicit mentions of gender and sexual identity miss implicit framing of same-sex relationships where identity labels are not mentioned. ArchiveOfOurOwn metadata tags help identify these cases; we plan on training vectors for metadata using the paragraph2vec technique of Le and Mikolov (2014). We also plan on splitting our fanfiction corpora using this metadata into fics that represent “real-world” challenges to LGBTQ acceptance and stories that present an “aspirational” world in which being queer is already accepted. These techniques will nuance our findings and point to examples of fics that may represent queer relationships in expected or surprising ways.

An, Jisun, Haewoon Kwak, and Yong-Yeol Ahn. “SemAxis: A Lightweight Framework to Characterize Domain-Specific Word Semantics Beyond Sentiment.” Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. 2018. 2450–2461.

Bucholtz, Mary, and Kira Hall. “Theorizing Identity in Language and Sexuality Research.” Language in Society 33 (2004): 469–515.

Le, Quoc V., and Tomas Mikolov. “Distributed Representations of Sentences and Documents.” Proceedings of the 31st International Conference on Machine Learning. 2014. 1188-1196.

Lothian, Alexis, Kristina Busse, and Robin Anne Reid. “Yearning Void and Infinite Potential: Online Slash Fandom as Queer Female Space.” English Language Notes 45.2 (2007): 103–111.

Walton, S. S. “The Leaky Canon: Constructing and Policing Heteronormativity in the Harry Potter Fandom.” Participations: Journal of Audience and Reception Studies 15.1 (2018).

 
9:00am - 10:30am#SE5: Digital Reading Paper Session
Session Chair: Ryan Cordell
Salons 4 & 5, Grand Ballroom, Marriott City Center 
 

Reading Against Models: Approaches to Algorithmic Criticism with Poetry

Lisa Marie Rhody

The Graduate Center, CUNY, United States of America

Our reading ecologies--ecosystems of searching, aggregating, selecting, recommending, sharing, and interpreting texts--are increasingly governed by computation; nevertheless, stories play a pivotal role in how computer and data scientists develop computational methods. When data scientists talk about the need to "tell a story" with their data, they demonstrate the power of narrative to explicate spatial and quantitative reasoning. This talk considers the role of the literary scholar in reading ecologies continually transforming in the face of machine learning. It takes as its point of departure the long-standing history of remediation between language and images and extends a humanistic tradition to the treatment of specific instances of computational modeling with poetry. Through an exploration of stories used to explicate the formalized, procedural rhetoric of algorithms that underlie machine learning, this talk proposes a heuristic for reading against computational models and argues that such skills are increasingly necessary for humanities scholars.

I came to topic modeling not because I thought it could be prescriptive or replace human interpretation but because it efficiently organizes words in a corpus according to the likelihood that they share vocabularies. In my study of “ekphrasis” (poetry about the visual arts), I wanted to question the assumption that the genre “turn[s] on the gendered antagonism” between poets and painters--a dynamic signaled by the ubiquitous use of words such as silent, mute, and still in the genre. If topic modeling could identify clusters of ekphrastic language over thousands of texts, it might help to locate more examples of the genre by women. Conversely, “ekphrastic” poems that did not appear in the same topics could provide clues to alternative approaches to the genre. This paper demonstrates how “communities” of topics may point to alternative genre conventions in ekphrasis, while at the same time demonstrating a new heuristic for reading against models that can extend productively to other areas in literary studies.

Drawing connections to recent publications that explore the potential cultural biases and violence of algorithms by scholars such as Cathy O’Neil, Virginia Eubanks, Meredith Brussard, Safiya Umoja Noble, Ed Finn, and Hannah Fry, this talk will explore topic modeling as an activation of what Heidegger calls the hermeneutic circle and demonstrating how the complementary and interrelated activities of human and computer as readers can expose implicit bias in both literary and algorithmic models.



Intellectual History from Below? Applying Corpus Linguistic Tools to the 19th Century British Periodical Press

Hugo Bonin

Université du Québec à Montréal, Canada

The history of political thought tends to focus on “great” authors and certain key concepts, thereby analysing the same small handful of texts over and over. However, with the recent digitization of numerous 19th century corpora, other possibilities emerge. The accessibility of those sources enables researchers to grasp the more popular and common political languages used by a great variety of actors. The question then becomes: how to study those corpora?

Drawing on an on-going thesis on the history of the word “democracy” in Britain, this presentation will argue that corpus linguistics offers crucial insights on how to handle and analyse such sources. More precisely, by bringing together corpus linguistics and the history of political thought, one is able to offer a “history from below” that differs from the interpretations obtained through classical methods.

As an example, an analysis of the uses of the word “democracy” in the British periodical press with corpus linguistics tools (AntConc, #LancsBox, Meaning Fluctuation Analysis (MFA)) will be presented. This exercise underlines several elements, such as the polarized uses of the term, the influence of the American example, and the disconnection between “democracy” and “people”. For most of the period, “democracy” was mainly used by conservatives to signify the rule of the “populace”, and it is only with the acceptance of the United States as a stable and legitimate power that the term became synonymous with popular government. Such results both confirm previous historical insights and challenge mainstream interpretations of the rise of “democracy” in Britain.
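
The concordance queries behind such an analysis resemble the keyword-in-context (KWIC) displays that tools like AntConc produce. A minimal sketch (the sample sentence is invented, not a quotation from the corpus):

```python
import re

def kwic(text, keyword, window=3):
    """Keyword-in-context lines, as a concordancer would display them."""
    tokens = re.findall(r"\w+", text.lower())
    hits = []
    for i, tok in enumerate(tokens):
        if tok == keyword:
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            hits.append(f"{left} [{keyword}] {right}")
    return hits

sample = ("The rule of democracy is the rule of the populace, "
          "say critics of democracy.")
for line in kwic(sample, "democracy"):
    print(line)
```

Reading many such lines, and the collocates they surface, is what lets the historian track shifts like the one from “rule of the populace” to “popular government”.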



“I Am Not Enjoying This Book”: Oppositional Auto/biographical Writing in Online Infinite Jest Reading Group Blogs

Philip Miletic

University of Waterloo, Canada

This paper considers the auto/biographical writing of online reading groups dedicated to David Foster Wallace’s Infinite Jest. For this paper, I draw from my close reading of the most successful online Infinite Jest reading group, 2009’s Infinite Summer, as well as my own experiences moderating an online Infinite Jest reading group which I ran in 2016 as part of my Doctoral research. This paper focuses on the auto/biographical writing within these groups that critically intervenes in public conversations surrounding Wallace that ignore or suppress critiques of Wallace’s work while praising his “genius” and justifying his canonization. The critiques within the blog posts and comments of these online reading groups are consonant with recent expressions of discontent with Wallace in blogs, Twitter, online journalism, academic discussions, and independent bookstores. These auto/biographical disclosures of “dislike” are “social and political act[s]” (Fuller and Sedo 41) that communicate private reactions to Wallace’s work or experiences with Wallace’s fans/scholars through a public platform in order to expose the uncritical adoption of Wallace into the American literary canon.

Drawing from auto/biography studies and Mass Reading Event studies, I identify two modes of auto/biographical writing: affirmational and oppositional. Affirmational autobiographical writing identifies with Infinite Jest, containing discussions primarily about mental health and addiction. Oppositional autobiographical writing challenges and/or is unable to identify with Infinite Jest, containing personal disclosures about disliking the novel and the novel’s poor representations of race and gender. While affirmational auto/biographies contribute to the public discussions of mental illness and addiction, they also have a tendency to be uncritical of Wallace’s work and dismiss any critiques. My paper traces the “compositional strategies” (Morrison) of oppositional auto/biographies that negotiate the power structures of “liking” or “admiring” the book. These strategies, I argue, maintain a space for further articulation of critique and remind fellow participants about the lives of other participants who are not represented within the novel’s narrative, which primarily focuses on white, cishetero male characters.

 
9:00am - 4:00pm#Install2: Installations: A City for Humans, Data Beyond Vision, The Cybernetics Library
City Center A, Marriott City Center 
 

A City for Humans

Everest Pipkin, Loren Schmidt

Withering Systems

This installation proposal is for an interactive digital diorama installed into a terminal or arcade machine at the conference. Titled “A City For Humans”, this project allows the public to collaboratively build a dynamic, shifting landscape together by typing single words on a keyboard. This text is then parsed for 3000+ common nouns and verbs, which are then immediately populated into the city as visual objects, represented by hand-drawn tiles.
For example, if a person types the words tree and rain, each would translate to an individual tile that is placed in the world. Writing a word like ‘tree’ makes one. Verbs, like rain, cause an action, such as a small rainstorm forming over part of the city. The visitor to the space can simply type these single words, or can choose to tell more narrative stories (e.g., ‘a pine tree is standing in a rainstorm’), which will be parsed similarly. This way, each contribution is given living form in the diorama.
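
The word-to-tile matching described here could be sketched as a simple vocabulary lookup; the word lists below are invented stand-ins for the installation's roughly 3000-word vocabulary, and the event tuples are a hypothetical interface:

```python
# Hypothetical vocabulary; the installation matches ~3000 common words.
NOUNS = {"tree", "well", "flower", "fence"}
VERBS = {"rain", "stand", "grow"}

def parse_contribution(text):
    """Map each recognized word in a visitor's input to a tile placement
    ('place') or a world action ('act'); unknown words are ignored."""
    events = []
    for word in text.lower().split():
        word = word.strip(".,!?")
        if word in NOUNS:
            events.append(("place", word))
        elif word in VERBS:
            events.append(("act", word))
    return events

print(parse_contribution("tree and rain"))
# → a tile is placed for 'tree' and a rainstorm action fires for 'rain'
```

A real implementation would also need lemmatization (so that 'standing' matches 'stand') and the behavior rules for each tile, which this sketch omits.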
Each type of object also has specific rules that govern its behavior. For example, groups of related plants like to form around one another, and will populate particular regions rather than scattering randomly across the landscape. Roads, sidewalks, fences, hedges, and aqueducts form in linear rows, while plants, buildings, and detritus clump together more organically.
Furthermore, intelligent things like people and animals have sets of shifting needs and desires. A person may be thirsty or hungry, but may also desire more abstract things, like excitement, or beauty. These entities have freedom of movement, and will seek out objects that meet these needs, like a well (for thirst) or a field of flowers (for beauty). In this way, these placed people, animals, and plants will go about their daily business, reacting to one another and the world around them as it changes.
Rather than attempting to distill, abstract, or pare down community data, “A City for Humans” is a 1:1 interaction with those who choose to engage with it. However, this engagement is not temporary: much like real investment in place, the changes that are made to the digital city persist over time, influencing the digital world for the indefinite future.
The project’s central goal is to foster a sense of community by providing a quiet but responsive platform to collaboratively build a beautiful space together.
We are also dedicated to producing visualization systems that prioritize ‘small data’: in a world of big-data visualization, we also need generous and playful networked systems that respond to the individual, the hyper-local, and the immediate. This project maintains that data visualization is not inherently an abstraction, a reduction, or an illustration. Rather, it can be a specific and responsive exchange that facilitates play, experimentation, joy, and a sense of place.



Data Beyond Vision

Rebecca Sutton Koeser1, Xinyi Li2, Nick Budak1, Gissoo Doroudian1

1Princeton University, United States of America; 2Pratt Institute, United States of America

Data visualization is frequently used in Digital Humanities for exploration, analysis, making an argument, or grappling with large-scale data. Increasing access to off-the-shelf data visualization tools is beneficial to the field, but it can lead to homogenized visualizations.

Data physicalization has potential to defamiliarize and refresh the insight that data visualizations initially brought to DH. Proliferation in 3D modeling software and relatively affordable 3D printing technology makes iterative, computer-generated data physicalization more feasible. Working in three dimensions gives additional affordances: parallel data series can be seen next to each other, rather than color-coded, overlapped, or staggered; and physical objects can be viewed from multiple angles, allowing for changing perspective.

Data visualization necessarily privileges sight. Participants can experience data through sensing — feel, touch, hear. Touch is particularly significant, since, like sight, it is a meta-sense and because it affords intimacy, as feminist philosopher Luce Irigaray has discussed. By foregrounding sensory experience and embodiment, we will challenge conference participants to consider other approaches for engaging with and representing humanistic data. Multimodal data explorations incorporating touch and sound can offer new possibilities of accessibility to those with low vision (for example, see the #DataViz4theBlind project). Spatial, acoustic, and temporal dimensions of data representation can generate rich narratives, invite the audience to explore new relationships, and turn passive consumption into a sensory experience that encourages interpretation. In addition, creating data physicalizations is a form of critical making; the iterative and reflective process requires more time to engage with the data, including the human aspects represented.

The final multimedia installation will display descriptions of the methods and processes alongside the finished data physicalization objects and dynamic displays, offering conference participants an opportunity to explore humanities data through these alternative modes. We are inspired by the work of Lauren Klein and Catherine D’Ignazio, who encourage a reorientation toward the emotional and affective qualities in our engagement with data. In employing physicalization as a technique to corporealize and “re-humanize” humanities data, we follow the ethical principles articulated by the Colored Conventions Project to “contextualize and narrate the conditions of the people who appear as ‘data’ and to name them when possible.”

Pieces in the installation will utilize space, time, and/or interaction to provide new ways of engaging with a dataset and the arguments and narratives behind it, in order to challenge the dominant paradigms of conventional screen-based data visualization.

Provisional list of pieces:

  • 3D printed model of library member activity over time from the Shakespeare and Company Project, juxtaposing documented activities from two sets of archival materials

  • Folded paper models for individual membership timelines from the Shakespeare and Company Project: attendees can select a library member and fold a model based on their data, supporting the recovery of women and non-famous members.

  • A weaving representing intertextuality based on references in Jacques Derrida’s de la Grammatologie from Derrida’s Margins



The Cybernetics Library: Revealing Systems of Exclusion

Sarah Hamerman1,2,3, Melanie Hoff1,4, Charles Eppley1,2,6, Sam Hart1,2, David Isaac Hecht1,5, Dan Taeyoung1,5,7

1Cybernetics Library; 2Avant.org; 3Princeton University Libraries; 4School for Poetic Computation; 5Prime Produce Apprenticeship Cooperative; 6Fordham University; 7Columbia University GSAPP

We propose a 4-day installation comprising a physical library collection, a digital interface, and a software simulation system. We are a research/practice collective that explores, examines, and critiques the history and legacy of cybernetic thought via the reciprocal embeddedness of techno-social systems and contemporary society. The installation aims to examine and expose to users patterns of systemic bias latent within those systems and their use. The collection will be housed in custom-built, secure furniture and made accessible to all attendees of the conference.

Our collective is comprised of members from a diverse set of backgrounds and practices, including art, architecture, technology, publication, librarianship, gender studies, media/cultural studies, cooperatives, fabrication, design, simulation, queer studies, and more. We work on the project independent of institutional affiliations, but have had numerous successful collaborations, and were the organizers of an independent but highly successful conference, from which our ongoing project emerged.

From this outsider position, our project seeks to refigure and make more accessible the relationships between people, technologies, and society. The project has been manifested through activities such as community-oriented artistic installations, reading groups, workshops, and other public programs. The project also incorporates ongoing development of tools, platforms, and systems for enhancing, deepening, and extending engagement with the knowledge it organizes and to which it provides access. The project aspires to support its collaborators and users by serving as a connecting node for disparate communities that share intellectual or activist goals for exploring and advancing art, technology, and society.

The first version of the software simulation system used cataloging data to form associations between the usage histories of users of the library system, as well as linking content from works accessed during the initial conference to the topics presented by the speakers (in the context of a multi-layered visual representation). Another system, part of an installation at a program around the theme of "uncomputability", prompted users to participate in the construction of a collective poem by scanning in books from the collection which had meaningful associations for them. Another highly interactive implementation allowed users to engage their practices of sharing knowledge through metaphors of gardening: cultivation, care, attention, and community.

Our installations have been featured by The Queens Museum, The Distributed Web Summit by The Internet Archive, The School For Poetic Computation, Prime Produce, The Current Museum, vTaiwan, and Storefront for the Commons.

While the specific implementation of the installation for the ACH conference is still in the preliminary stages of development, we are building on the themes of direct engagement and collective, emergent explorations of structures of knowledge that can reveal hidden assumptions and biases latent in our approaches to technology and society. Based on our history of successful, memorable installations and collaborations, we are confident that this installation will contribute a valuable critical, conceptual, and technological resource to the conference. We hope to produce an ecology for new collaborations, unexpected encounters, and deeper explorations of the themes and methods of the conference, and would be happy to provide more detail soon.

 
9:00am - 4:00pm#Install3: Installations: Muybridge 1 and Museum of Forbidden Technologies
Salon 6, Grand Ballroom, Marriott City Center 
 

Muybridge 1

Stephen Ramsay, Brian Pytlik-Zillig

University of Nebraska-Lincoln, United States of America

We -- a composer and an animation artist -- would like to propose an installation to be run throughout the conference.
Our work is representational, but non-narratival. We use found sounds and images to create short (continuously looping) pieces that attempt to engage the viewer/listener through the ludic interplay of grand gestures and quotidian obsessions.
Our work is loosely "algorithmic," in the sense that certain elements of the animations are driven by audio events in a separately produced score. We avoid, however, using standard multimedia frameworks and notations (MIDI, Max/MSP, Adobe After Effects, etc.), in favor of a bespoke system written in XSLT and SVG combined with an audio analysis tool (the open source Sonic Visualiser) that allows for the annotation of audio waveforms.
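The artists' bespoke XSLT/SVG pipeline is not described in detail in the abstract, but the underlying idea (driving animation timings from annotated audio events) can be sketched. A Sonic Visualiser annotation layer can be exported as time-stamped rows; the tab-separated sample data and the shapes below are illustrative assumptions, not the actual system:

```python
import csv
import io

# Hypothetical Sonic Visualiser annotation-layer export: one
# "time<TAB>label" row per audio event (illustrative data).
ANNOTATIONS = "0.500\tkick\n1.250\tkick\n2.000\tswell\n"

def events_to_svg(annotations):
    """Map each annotated audio event to an SVG shape whose
    animation begins at the event's timestamp."""
    shapes = []
    for time_s, label in csv.reader(io.StringIO(annotations), delimiter="\t"):
        shapes.append(
            f'<circle class="{label}" r="0" cx="50" cy="50">'
            f'<animate attributeName="r" from="0" to="40" '
            f'begin="{float(time_s)}s" dur="0.5s"/>'
            f'</circle>'
        )
    return ('<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 100 100">'
            + "".join(shapes) + "</svg>")

svg = events_to_svg(ANNOTATIONS)
```

Mapping each annotated event to an SVG `begin` attribute is enough to keep visuals synchronized with a separately produced score without any multimedia framework.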
We began working with this complex framework in the context of conventional DH work; over the last several years, it has become a separate artistic practice for both of us, one that has been presented in several venues around the world. As such, we think it's a particularly good fit for the theme of ACH 2019, and we'd be honored to present it.
The work can be displayed using any A/V system that can be hooked to a laptop or other small computer system -- a conventional flat-screen television, a projector casting the image on a neutral wall, the cyc screen of a conventional theater space, or (with some modification) a large multi-screen video wall. We would be happy to work with the conference organizers on customizing the work for whatever space/technology they think would work best, and we can mix the audio for the specific characteristics of the space as well.



Birding the Future

Krista Caballero1, Frank Ekeberg2

1Bard College, United States of America; 2Independent Artist

Birding the Future is an artwork that explores current extinction rates by focusing specifically on the warning abilities of birds as bioindicators of environmental change. The installation invites visitors to listen to endangered and extinct bird calls and to view visionary avian landscapes through stereographs, sculpture, and video.

Birds provide a unique window into the entanglements of our time. Unrestricted by human-imposed borders, approximately 5 billion birds migrate every year thereby linking cultures, countries, and ecologies and revealing issues collectively shared. It is also estimated that a third of all bird species will have disappeared by the end of this century. Declining bird populations in practically all habitat types signal profound changes over our entire planet, changes that affect our ecologically-bound cultural identities. Birding the Future poses three questions in response to this crisis: What does it mean that we can only see and hear extinct species through technology? What might happen as the messages of birds are increasingly being silenced? How can traditional ecological knowledge be combined with technological advances to surpass what any one way of knowing can offer?

Calls of endangered birds are extracted to create Morse code messages based upon tales, stories, and poetry in which birds speak to humans. These messages are combined with unmodified calls of extinct birds, which act as a memory of the past and underscore technological reproduction as the only means to hear certain species. Using a real-time control algorithm (Pd), the projected extinction rate is scaled to the duration of the exhibition by decreasing the density and diversity of bird calls.
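The Pd patch itself is not reproduced in the abstract, but the scaling it describes (compressing a projected century-scale loss into the exhibition's running time by thinning playback) is simple to sketch. The function below is an illustrative assumption, not the artists' actual patch:

```python
def call_probability(elapsed_min, total_min, projected_loss=1/3):
    """Chance that any given bird call plays at a given moment:
    a century's projected species loss (here, one third) is
    compressed linearly into the exhibition's running time, so
    the soundscape thins as the day goes on. Illustrative only;
    not the artists' actual Pd patch."""
    return 1.0 - projected_loss * (elapsed_min / total_min)

# Over an eight-hour (480-minute) day: full density at opening,
# two-thirds density at closing.
densities = [call_probability(m, 480) for m in (0, 240, 480)]
```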

A series of stereographs offer a loose narration through the soundscape. Popular from the mid-nineteenth century through the early twentieth century, the stereoscope has been chosen as the viewing instrument for its potential to heighten perceptual awareness and provide a historical link to human impact on the environment. The viewer’s gaze wanders back and forth between foreground and background, and by doing so continuously shifts one’s point of view within the frame. In this way the stereograph plays with the act of looking and the viewer is challenged to consider how the filters through which one looks then translate into ways knowledge is constructed. Composite photographs of real and imagined environments connect regional issues with a global perspective. On the back, textual analysis explores the complexities of our more-than-human world via poetry, data and other relevant habitat and behavioral information. To date there are six series: Queensland Australia, Arabian Peninsula, Norway, Mid-Atlantic USA, Frankfurt, and a series focused on laboratory birds.

For ACH 2019 we propose an interactive day-long installation that will include stereographs from each series and the sound installation described above. Birding the Future is highly adaptable and has been installed nationally and internationally in multiple types of configurations based upon particularities of the site. Artists will supply technical equipment and work with conference organizers to determine location.

To view artwork, please visit: https://www.birdingthefuture.net

Dependent upon space and interest, the following video could also be installed: https://vimeo.com/238204874

 
10:30am - 11:00am#Break4: Break
Grand Ballroom Foyer A, Marriott City Center 
11:00am - 12:30pm#SF1: Lived Experiences: Gender and DH Roundtable
Session Chair: Quinn Dombrowski
Marquis C, Marriott City Center 
 

Lived Experiences: Gender and DH

Quinn Dombrowski1, Jennifer Guiliano2, Andie Silva3, Tassie Gniady4, Anne Cong-Huyen5

1Stanford University; 2IUPUI; 3York College/CUNY; 4Indiana University; 5University of Michigan

The role of gender and intersectional identities in digital humanities remains an urgent topic of conversation. Despite this, precious few spaces exist where open, safe, and inclusive discussions around intersectional gender can happen. Digital spaces like the Crunk Feminist Collective (http://www.crunkfeministcollective.com/), FemTechNet (https://femtechnet.org/), and FemBot Collective (https://fembot.adanewmedia.org/) provide blogs, resources, and opportunities for public writing on issues that matter to female-identified researchers. Yet despite these spaces, digital humanities as expressed in conferences, publications, and projects struggles to strike a balance between public and private discourse. The narratives of digital spaces and the blogosphere prioritize sharing and making visible the labor of feminist activism within academia. Nevertheless, despite the emphasis on visibility, individuals in precarious, contingent labor conditions need protective shielding, as speaking about gendered experiences in DH can result in personal and professional consequences. Safety is even less assured in conference venues and the purview of anonymous peer review of proposals, papers, and grant applications, where institutional affiliation and long-established projects and reputations regularly prevail.

During fall 2018, a loosely organized working group formed around lived experiences of gender in the digital humanities. This group aims to provide a space for an open discussion about embodied experiences and intersectional gender identities in digital humanities. The working group aims not only to raise awareness but also, pragmatically, to enact change in the larger digital humanities community, pursuing strategies of resistance and survival for women and gender non-conforming digital humanists. Between January and June 2019, individual volunteers are organizing a series of monthly virtual meetings, each around a specific topic (e.g. credit, authority, (lack of) infrastructure, emotional and invisible labor, gender equity at panels, gender disparities in technical work, gender and leadership in digital humanities centers, etc.). We anticipate that these discussions will lead to the production of documents such as white papers that will be made available for anyone to use when advocating for change at their institution, for conferences they are organizing, etc. We expect at least two of these documents to be ready for dissemination by the ACH conference. We intend to release these as fodder for discussion within the roundtable and by the audience generally.

The ACH conference would provide an opportunity to expand the community of this working group by presenting the work done to date, soliciting input and volunteers for the next series of discussions starting in the fall, and considering what organizational steps (e.g. incorporation as an ADHO SIG) should be taken to maximize the impact this group can have on the culture of digital humanities. We anticipate that the audience for this roundtable will be largely composed of female-identified and genderqueer individuals, but we welcome anyone interested in issues of gender, intersectional identities, and DH, with particular attention to issues of policy and infrastructure.

 
11:00am - 12:30pm#SF2: The South Paper Session
Session Chair: Robin Kear
Marquis B, Marriott City Center 
 

Trans Voices of the South

Alli Crandell, Tripthi Pillai, Shonte Clement, Joshua Parsons

Coastal Carolina University, United States of America

This presentation will showcase the process, discussions, and products of a student-centered project on transgender voices of the rural Carolinas. Emerging from a publishing lab at a regional university, the tentatively titled Trans Voices of the South project highlights gender expression in the American South(s). This multimedia project aims to redefine communities and encourage empathy for the voices it highlights, as a starting place for changing policies and for disrupting the monolith of trans identity and southern politics present within popular discourse.

The products of this year-long project, to be launched in early summer 2019, will be a multimedia project that combines print and digital mediums to highlight the voices and experiences of trans individuals. We will discuss the publishing lab (now in its 7th year) and the application of collaborative design to this project. We will describe the lab's process and its previous projects, from museum exhibitions to CDs, and what this experiential learning opportunity offers students, faculty and local community members.

This presentation will look at how physical and digital interfaces can both translate and inhabit trans spaces, focusing on the micro-performances of expressing gender in southern locations. We will showcase excerpts from the final project while investigating questions such as: How do you engage community members in critical making around political and contemporary topics? What are the lines between general readership and trans theories and bodies? How can we combine stories of location and resources for trans individuals? How can we inhabit the spaces of silence in thinking about gender expression in the South?

We will end with recommendations and points of collaboration for the project and the larger digital publishing lab at a public liberal arts institution.



Telling Hampton's Stories: Design and Production of Virtual Hampton's Spatial Vignettes

Susan Jean Bergeron, Alli Crandell

Coastal Carolina University, United States of America

This paper discusses the development of spatialized multimedia content for Virtual Hampton, an immersive virtual landscape exploration platform for historic Hampton Plantation, one of a complex of well-known rice plantations along the South Santee River and now a South Carolina State Park and Historic Site. As a case study, Hampton Plantation offers unique opportunities to implement an immersive virtual landscape. During the eighteenth and nineteenth centuries, Hampton was a prime example of plantation culture and the plantation economy in South Carolina. Because the plantation remained in private family ownership for over 200 years and was then acquired by the State of South Carolina, its cultural landscapes have not been impacted by modern development and still retain readily identifiable features from past activities, such as the remnants of dikes and water-control features in the former rice fields. In addition, recent archaeological work in the undisturbed areas of the former slave village is yielding new clues about that portion of Hampton’s past cultural landscape, clues that can be illuminated through the virtual reconstructions. The completed first-phase prototype of the immersive landscape platform was developed in the Unity3D development environment and includes the virtual recreation of the early 19th-century topography, plantation structures, and rice fields, as well as proof-of-concept examples of embedded media content that provides historical, cultural, and physical geographic information about select features within the recreated landscape.

The second phase of the project has built on this prototype and focuses on the development of the spatial narrative elements that present the intertwined stories of the people who lived and worked at Hampton Plantation and the natural and cultural landscape they inhabited. Through design workshops and early testing, project staff and stakeholders have developed an organizational structure for the narrative content of Virtual Hampton that consists of short, themed multimedia spatial stories that are being termed “spatial vignettes.” These vignettes will enhance a user’s explorative experience with the historical, archaeological, ethnographic, and other scholarly information that guide our knowledge of Hampton’s past cultural landscapes. The development of the first spatial vignettes is being completed through a close collaboration with Coastal Carolina University’s Athenaeum Press, which is a student-driven publishing lab that offers students the opportunity to gain unique, professional-level experience in developing and producing innovative digital stories. This presentation will detail the design and production of these first vignettes and provide a live demonstration of the Virtual Hampton platform with the spatial vignettes embedded in the virtual landscape.
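The organizational structure the project describes suggests a simple data model for its narrative content. A minimal sketch of what a "spatial vignette" record might hold (field names are illustrative guesses, not the project's actual Unity schema):

```python
from dataclasses import dataclass, field

@dataclass
class SpatialVignette:
    """A short, themed multimedia story anchored to a spot in the
    recreated landscape. Field names are illustrative guesses,
    not the project's actual Unity schema."""
    title: str
    theme: str                       # e.g. "plantation economy"
    position: tuple                  # (x, y, z) in the virtual scene
    trigger_radius_m: float = 10.0   # how close a visitor must come
    media: list = field(default_factory=list)  # audio/image/text assets

vignette = SpatialVignette(
    title="The Rice Fields at Harvest",
    theme="plantation economy",
    position=(120.0, 3.5, -45.0),
    media=["narration.ogg", "dike_remnants.jpg"],
)
```

Keeping each vignette's media and location together in one record is what lets the stories be embedded at, and triggered from, specific points in the virtual landscape.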



Harlem in Lynchburg: Anne Spencer's House and Garden Project

Alison Booth

University of Virginia, United States of America

One of the few museums dedicated to an African American woman is the Lynchburg residence of the Harlem Renaissance poet Anne Spencer (1882-1975), a gem of vernacular architecture and garden design. University of Virginia holds the Spencer papers; Virginia Humanities hosts a virtual tour; and the Scholars’ Lab is launching digital enhancements of the site’s resources and history, in order to increase educational and public access and interest in Spencer’s archive and “salon” (as in Revolutionary France). Key visitors to the site include W.E.B. du Bois, James Weldon Johnson, and Langston Hughes as well as the pygmy Ota Benga, a subject of racist research who was sent to Virginia Theological Seminary. The house is associated with the founding of the local NAACP. Spencer’s few published poems should be complemented by her unpublished writings and her arrangements and graffiti in the house, studio, and garden. This paper considers the context of Virginia’s racist history, much in the current news, and addresses the risk of appropriating historically Black archives and institutions, while it features potential R1 university collaboration with a family-owned heritage site. The Scholars’ Lab’s project, a pilot before applying for grants, draws upon Alison Booth’s 2016 monograph on literary house museums in the UK and US, and applies varied tools to samples of materials. One approach is textual: based on the curator’s, Spencer biographer’s, and UVA faculty expertise, students and the public can participate in digitizing and annotating selections from the archive, as in recent community events to transcribe the papers of Julian Bond. Another approach unites mapping, network analysis, and timelines: a Neatline project, When Harlem Came to Lynchburg, will feature snippet text (poems, letters), itineraries, and portrait-profiles of guests and regulars at the house, 1920-1940. 
In addition, using digital approaches to spatial data and modeling, we can amplify the museum's educational outreach, to attract more school visits and scholarly interest, and to make educational materials available at a distance from Lynchburg. VR/AR experts in Scholars’ Lab will work with Booth and curator Shaun Spencer Hester to augment the existing virtual tour of the Spencer site with audio and images or text (e.g. explaining the portrait of the poet's white grandfather in one of the bedrooms). This AR tour will also help document and preserve the data of the current design and display of rooms, objects, and garden as well as the family memories of the curator. Artifacts in the museum will be modeled and 3D printed so that visitors can handle them without harming them, or so that they can be seen in the round online or printed for school study. This talk will offer an example of reflective intersectional DH that tries to bridge gaps between advanced research using the latest technology, historic preservation, and shared learning about an African American woman poet and a phase of cosmopolitan culture in a Jim Crow town.

 
11:00am - 12:30pm#SF3: Collaboration Paper Session
Session Chair: Vika Zafrin
Salon 2 & 3, Grand Ballroom, Marriott City Center 
 

Navigating the DH Center - Library Divide

Laurie Allen, Stewart Varner

University of Pennsylvania, United States of America

Institutions often struggle with the question of where technical support for the humanities should be located. Libraries have long been supporters of digital humanities, and institutions like the University of Virginia have provided inspiring leadership for some time. On the other hand, independent DH centers like the ones at George Mason and Stanford have produced some useful and popular tools as well as solid scholarship.

Both models have their own benefits and drawbacks. Library based support for DH can often tap into expertise in scholarly communication and digital preservation and benefit from existing organizational structures (and corresponding budgets) designed to support research. However, some libraries can seem rigid and unwilling to experiment. Conversely, independent centers are often quite nimble and able to act on exciting, if experimental, ideas. That being said, these centers are frequently constrained by a reliance on soft money that prioritizes innovation over maintenance and can sometimes feel like exclusive clubs built for a select few.

We are two friends, each with 10 years of experience, who have taken leadership positions on opposite sides of the library/center divide at the same institution, the University of Pennsylvania. Laurie Allen is the Director of the Digital Scholarship Department in the Penn Libraries, while Stewart Varner is the Managing Director of the Price Lab for Digital Humanities in the School of Arts and Sciences. Working with a small but devoted group of colleagues, we have been trying to build a truly collaborative DH program that attempts to merge the best of both worlds, and create a model that supports experimentation and is welcoming to a broad range of people.

In this talk, we will share what we have learned from building and administering a relatively large and new Digital Humanities program, and from attempting to build a new kind of community at our institution. Our experience has involved constant balancing between capacity-building structure and the creative chaos necessary for learning how to do new things, across two organizational structures. We’ve developed some very helpful models for staffing and planning while continuously tinkering with our project management workflow in an ongoing effort to create something capable of surviving reality.

We will also discuss how our own values, and the values of our colleagues and teams have helped shape the direction of Digital Humanities at our institution. Taking very seriously Miriam Posner’s call to focus on people rather than projects (http://miriamposner.com/blog/commit-to-dh-people-not-dh-projects/), we have attempted to build a wide network of faculty, staff, librarians, and students who are thoughtful users, creators, and critics of technology. Prioritizing support for staff and colleagues over particular projects can raise challenges, and while not all of the process has been successful, the deep partnership between the library and the DH Center continues to grow, and we hope that by sharing the lessons learned, the wider community can benefit from our experience.



Building Political Will for Inter-Sector Collaboration in Support of Digital Preservation

Josh Shepperd

Catholic University of America, United States of America

How does one organize a federal big-data project that supports media preservation, utilizing the service time of professors? This paper discusses the work of the Library of Congress Radio Preservation Task Force, a project of the Library of Congress National Recording Preservation Board. I discuss how the final product of the task force - our metadata interface - is representative of a holistic process of organization that includes intense negotiation, parallel planning, labor organizing, and awareness-raising strategies regarding the role of sound history in broader preservation discourses.

Our interface, designed by Mark Matienzo (Stanford) and William Vanden Dries (Indiana), provides collection-level descriptions and will eventually act as a hub for educational, curatorial, and curricular projects. I argue that the content embedded in the interface also reflects thousands of hours of accumulated work put forward by multiple academic research divisions (Network, Communications, Development, Grantwriting, and Research Content) in the development of a public humanities resource. To accumulate the collection-level data presented for public access, the task force is structured for flexible labor and input from over 40 federal, public, and academic conference partnerships.

I close by discussing strategies to design a national project so that it emphasizes ways that preservation and access initiatives can help with deficiencies in curriculum, especially related to alterity experience and associated primary sources.



Theorizing and Re-theorizing Collaboration in the (Digital) Humanities

Paige Morgan

University of Miami, United States of America

In Debates in the Digital Humanities (2012), Tom Scheinfeldt explained that “the fact that nearly all digital humanities is collaborative accounts for much of its congeniality—you have to get along to get anything accomplished.” Scheinfeldt is only one of many commentators to emphasize the importance of collaboration in DH. DH initiative charters, likewise, may note that “the most innovative scholarship in the digital humanities is collaborative.” At the same time, many people have observed repeatedly that collaboration is not the norm in the humanities; and essays like the Collaborators’ Bill of Rights and the Student Collaborators’ Bill of Rights have worked to outline best practices to make collaboration more equitable for all involved.

Why does collaboration remain challenging? In this paper, I will argue that many of the difficulties of collaboration arise from an enduring focus within the humanities on originality. I will examine how this focus shapes humanities scholars’ sense of both creation and collaboration; and how it influences our interactions with existing DH projects. I will use this examination to reflect on common assumptions about collaboration, and to present a series of specific questions and lenses that could be used to discuss and theorize collaboration in the humanities. My goal in this presentation is not to singlehandedly solve the problem of collaboration. Instead, I hope to illuminate how undertheorizing collaboration contributes to and exacerbates ongoing structural inequity and precarity in DH, as well as contributing to recurring challenges around sustainability. Greater clarity around these issues will in turn allow members of the community to engage in dialogue about how we might intervene.



Revealing Voices: Establishing Meaningful Outreach Strategies for Project Vox

Meredith Claire Graham

Duke University, United States of America

In an effort to disseminate announcements to our scholarly audience for Project Vox, a digital humanities project dedicated to introducing women into the philosophical canon, we launched a blog on our website in 2017. The Project Vox team not only shares content about women philosophers, but also publishes transcriptions of documents written by these philosophers, provides teaching materials for undergraduate courses, and curates a gallery of images related to the content on the website. In my own form of online activism as the Outreach and Assessment Coordinator, I created our leading blog series, Revealing Voices, which reveals the voices of formerly forgotten women philosophers and the scholarly voices of those in the field. Through this series, we are able to provide contributors at all different points in academic life with a place to communicate with others about feminist research and publishing. For example, we had one contributor write about translating the works of women philosophers into Lithuanian in order to combat Soviet-era censorship imposed on philosophical writing. Another contributor discussed her struggles with rejection in academia when writing grant proposals to examine gender and feminism in philosophy. By engaging with our social media users and reaching out to them to write a post for our blog series, we are not only creating stronger relationships with our readers, but we are giving them an outlet to speak to a community working to correct the exclusion of women from writing and academic study. I believe that we must prioritize outreach for digital humanities projects to go beyond social media and news articles to create a global community that trusts and recognizes our work.

 
11:00am - 12:30pm#SF4: Embodied Data Paper Session 2
Session Chair: Élika Ortega
Marquis A, Marriott City Center 
 

Mining for the Implications of the Changing Landscape of Digital Humanities Blogging

Laura Morgan Crossley

George Mason University, United States of America

Over the last several years, there has been a noticeable decline in the number of blogs and blog posts on the digital humanities. While the growing popularity of Twitter and the expanding opportunities to publish peer-reviewed digital humanities scholarship help explain this trend, it is less clear if or how the changing modes of scholarly communication have shaped the content of scholarship and the composition of the field. Using the text, author statistics, and other metadata associated with the more than 4,000 posts that have been republished on the online publication Digital Humanities Now since 2011, this paper shows how the volume of digital humanities blog posts has changed over time and how the forms and content of those posts have changed. To help gather the data for quantitative and text analysis, I make use of the statistics API built into the PressForward WordPress plugin, which facilitates the publication’s editorial process. Though community-driven, Digital Humanities Now is still an edited publication and not a perfect representation of all digital humanities blogging. Placed in conversation with other qualitative and quantitative examinations of digital humanities blogs, tweets, conferences, and journals, however, this analysis provides a new perspective, drawn in part from data on the post-publication community review process. By helping to approximate the perceived value of posts, it allows us to consider the shifting priorities of the field.
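The exact fields exposed by the PressForward statistics API are not documented in this abstract, so the sketch below uses hypothetical post records to show the kind of per-year volume count the paper describes:

```python
from collections import Counter
from datetime import date

# Hypothetical post metadata of the kind the PressForward
# statistics API might export (fields are illustrative, not the
# plugin's actual schema).
posts = [
    {"title": "On Topic Modeling", "published": date(2012, 3, 1)},
    {"title": "DH Syllabi Roundup", "published": date(2012, 9, 15)},
    {"title": "Leaving the Blog Behind", "published": date(2017, 6, 2)},
]

# Posts per year: the simplest volume-over-time measure
# underlying the trend analysis.
posts_per_year = Counter(p["published"].year for p in posts)
```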



Challenges in Producing DH Tutorials in Spanish-Speaking Contexts

Jennifer Isasi1, María José Afanador-Llach2, Antonio Rojas Castro3

1University of Texas at Austin, United States of America; 2Universidad de los Andes, Bogotá, Colombia; 3Universität zu Köln, Cologne, Germany

The Programming Historian began publishing open-access, peer-reviewed tutorials in English in 2011, aimed at humanists who want to learn computational techniques for their research and teaching. Five years later, in 2016, a new team of editors began translating those tutorials to make them accessible to the Spanish-speaking world, while also proposing editorial changes such as contextualizing materials for a global audience. More recently, this team has moved to encourage the production of original tutorials in Spanish through a call for proposals and a tutorial-writing workshop.

While the outreach strategies for the translated tutorials have been a success (as shown, for example, by a total of 13,167 visits in the most recent month, October 2018), the flow of proposals for original tutorials in Spanish has not kept pace. This paper explores some of the economic, technological, cultural, and institutional factors that help explain the incentives and barriers for Spanish-speaking researchers to become involved in producing tutorials in Spanish. The questions guiding this analysis are: What barriers keep Spanish-speaking researchers working in Latin America, the United States, or Spain from writing and publishing tutorials? More broadly, how do digital divides in access, use, skills, and the benefits associated with ICTs, together with the still-incipient state of digital research infrastructures, affect the development of the digital humanities in Spanish? What role do cultural and political barriers play, such as the lack of academic recognition or the absence of an interdisciplinary tradition in the field of digital humanities?

This presentation reflects on the challenges of producing content about digital tools and methods in the humanities in Spanish, within a project whose principal focus has been an English-speaking audience. On the one hand, we will present the strategies the editorial team has used to promote access, diversity, and writing for global audiences. We analyze examples of translated tutorials on methodologies for data extraction and analysis (Topic Modeling, R) as well as on publishing digitized material (Omeka), and the importance of adapting them from English to Spanish with Spanish-speaking audiences and contexts in mind. On the other hand, and principally, we reflect on how inequality of resources, diversity of interests, and the consolidation of DH programs create, in our experience, a different relationship with digital tools and methodologies in Spanish-language research contexts. To that end, we analyze the results of the tutorial-writing workshop for the Programming Historian held in Bogotá in August 2018, in which 22 researchers from across the Americas participated with proposals for original tutorials in Spanish.



#FemaleFollowFriday: Making Feminism Outreach’s Bitch

Vanessa Hannesschlaeger

Austrian Academy of Sciences, Austria

The provocative title of this contribution expresses the core problem: instrumentalizing academic outreach activities for the empowerment of marginalized groups can never empower all of these groups simultaneously and equally. In my presentation, I will argue this through the example of a specific empowerment and outreach activity I developed for my research institute.

As a DH research facility, we consider structured and active outreach and network activities one of the cornerstones of the new approaches that the digital humanities have developed. This is why the institute has its own “Networks, Knowledge Transfer and Outreach” department. This department runs the institute’s Twitter account, is led by a female department head, and all of its members are women (if I consider myself to be one, that is). This fact might have shaped the outreach activity I will describe in this paper: a Twitter series we have been running for 1.5 years under the hashtags #FemaleFollowFriday and #womeninDH.

I will introduce the concept and discuss the manifold types of reactions by the people who have been included in the series so far as well as by other followers. The main focus of the presentation will be the potential pitfalls of doing “white female” feminism (Loza 2014) and failing to “capture the experience of feminist activists that might identify differently” than as “women” (Lane 2015). Finally, I will offer a critical self-reflection on how this institutional outreach activity might label and instrumentalize people as females and put them in the spotlight as such without taking their self-perception and personal feelings into account.

References

Lane, Liz. “Feminist Rhetoric in the Digital Sphere: Digital Interventions & the Subversion of Gendered Cultural Scripts.” Ada: A Journal of Gender, New Media, and Technology 8 (2015). Web. doi:10.7264/N3CC0XZW

Loza, Susana. “Hashtag Feminism, #SolidarityIsForWhiteWomen, and the Other #FemFuture.” Ada: A Journal of Gender, New Media and Technology 5 (2014). Web. doi:10.7264/N337770V



Terms and Conditions: Examining the Role of Transparency Documentation in Humanities Data Application Development

Grace Afsari-Mamagani

New York University, United States of America

In cases where digital humanities, humanities data, and educational technology projects include "Terms and Conditions," they are typically concerned with clarifying data collection, privacy, and usage protocols on the part of the platform and its operator(s) in order to comply with international standards and clarify legal obligations. But how might reimagining the “terms and conditions” of DH projects and platforms enable their developers to promote a radically situated and self-reflexive praxis?

In this paper, I draw on my role in this collaborative app development project at NYU and as a frequent visitor to web-based digital humanities projects in order to propose (and enthusiastically invite ideas about) a genre of documentation that extends beyond the legal, technical, and tutorial. Building upon the input offered by my wonderful colleagues at DHSI in the summer of 2018, I investigate the potential role of transparency documentation as a means of epistemological self-critique that permits project or application designers to confront the discourses and modes of knowledge production in which their projects participate — for example, as a site to both acknowledge and critique the decision to utilize cartographic visualization tools that privilege Mercator base maps and therefore belong to the same discursive matrix as colonial exploration, the Global North/South divide, and GPS surveillance. By tracking the decision-making process that informs a “final” digital product, transparency documentation at once serves as a testament to the procedural nature of the application, centers rather than obscures the project team and the contingency of its knowledge, and provides users with a vital critical framework for their own research. In other words, using my own project as a speculative test case, I propose a discussion focused on a “terms and conditions” model that lays bare the project’s intellectual and political terms and conditions, both to itself as a critical undertaking and to potential users and interlocutors.

 
11:00am - 12:30pm#SF5: The Politics of Probability Panel
Session Chair: Justin Joque
Salons 4 & 5, Grand Ballroom, Marriott City Center 
 

The Politics of Probability and Uncertainty in the Digital Humanities and Society

Justin Joque1, Theodora Dryer2, Bradley Fidler3, Cengiz Salman4

1University of Michigan; 2University of California San Diego; 3Stevens Institute of Technology; 4University of Michigan

As computational, algorithmic, and machine-learning methods are increasingly used in the humanities and across society, they bring with them deep philosophical, epistemological, and metaphysical claims about the relevance of probabilistic thinking to the world. While the use of statistical methods and probabilistic explorations of massive datasets have allowed the humanities and other fields to investigate structures and patterns at scales that would be otherwise inaccessible, the ways in which we deal with uncertainty, especially at scale, have massive political, social and economic implications. From historical commitments to eugenics by early statisticians such as Ronald Fisher to contemporary realizations that many algorithmic systems simply repackage extant social discrimination, these modes of thinking and processing data have never been neutral.

This panel will consist of four speakers, who will each provide short position statements/presentations on the history and implications of probability for the digital humanities, computation, and society writ large, followed by a discussion among the panelists and attendees. We will encourage attendees to think through and share how the politics of uncertainty and probability intersect with their own work. The short presentations will focus on:

  • The shift from frequentist to Bayesian understandings of probability; how this change has influenced and been influenced by the rise of computational methods for science, social sciences and the humanities; and the implications of this shift for our work and society

  • The history of pre-digital probabilistic thinking in action: how data economies were generated through powerful uncertainty-management regimes in various global trade programs, presented through new visual mapping methods that are part and parcel of the growing statistical mapping methods in the digital humanities.

  • The history of probabilistic information coding: as the use of such coding to optimize communication channels reduced information to syntax, and thereby made it amenable to large-scale procedural analysis, the ability to understand the content of that information was at once impoverished and redeployed. Now emptied of semantics and computable, the new information systems became sources of metadata, a quasi-semantic value for a world of coded data.

  • The potential of machine learning, AI and related technologies of probabilistic management to further automate labor and expel workers from the value relation in contemporary capitalism; Silicon Valley technologists’ advocacy for progressive policies like a Universal Basic Income as a means to mitigate the effects of unemployment that these technologies produce; and the implications of these proposals for a society that becomes increasingly dependent on corporate beneficence to manage uncertainty.

While the panelists' backgrounds are largely in science and technology studies, the philosophy of technology, and media studies, our focus will be on how these historical and metaphysical developments underwrite the computational work being done in the digital humanities and larger systems of knowledge production--along with the ways in which work in the digital humanities can help reveal the larger political stakes of these changes, focusing especially on their sociopolitical implications.

 
12:30pm - 2:00pm#Lunch2: Lunch Break
 
2:00pm - 3:30pm#SG1: Pedagogical Approaches to Teaching Digital Humanities Roundtable
Session Chair: Helene Williams
Salon 2 & 3, Grand Ballroom, Marriott City Center 
 

Pedagogical Approaches to Teaching Digital Humanities and Digital Scholarship: Common Ground and Collaborative Models for Faculty and Librarians

Helene Williams, Sarah Ketchley

University of Washington, United States of America

The discipline of digital humanities, and digital scholarship more broadly, is fluidly defined. Frequently considered non-traditional in nature, it resists straightforward approaches to pedagogy and learning. Centers for digital scholarship are often housed in the library, staffed by librarians expected to have skills ranging from traditional librarianship and technical expertise to digital project development and management, while also providing training, workshop development, and education. Humanities faculty teaching with or about DH tools and methodologies may draw on librarians for support in developing their digital humanities curricula, but more often develop course materials without taking advantage of overlapping expertise and experience.

This roundtable will consider whether it is possible to determine a core set of skills needed to work and research successfully in the field of digital humanities. Can we identify a pathway for educators to present these skills to their students and in doing so define best pedagogical practices for classroom instruction, and for learning? These questions will be considered from the perspective of faculty-led DH courses situated in humanities departments, and librarian-led digital librarianship and digital scholarship courses in an MLIS program.

While there are frequently intersections between the content and technology required to work successfully in the fields of academic and library digital scholarship, there are also subtle differences in library vs. faculty roles. Digital librarians need to understand both analog and digital worlds, and the different relationships they will have with both the technology and its end users. They need to be conversant in digital scholarship writ large, to facilitate and support digital scholars in multiple disciplines. Librarians may know five mapping tools and have a good grasp of print and digital resources across the humanities, but they are usually not experts in specific subfields or tools, focusing more on resource awareness and facilitating the setup and operation of tools than the process of research as such. By contrast, faculty often choose one tool to work with to support their research goals and go deeply into their subject and the workings of this tool. Training in digital librarianship emphasizes breadth, while faculty research and teaching focus on depth. Another key difference is that librarianship research methods emphasize the ‘meta’ aspect of data: where does it come from? What are the ethics and values of data-driven research?

Roundtable participants will consider the pedagogy underpinning digital humanities and digital scholarship programs across a variety of classroom situations including in-person, virtual and hybrid offerings. We’ll highlight pedagogical practices that have been successful in the panelists’ experience, as well as those that have been less so. We will explore the value of identifying common pedagogical ground in an effort to develop curricular materials and teaching strategies that are relevant for digital scholars and librarians, and taught by specialists from both fields, with the goal of making DH pedagogy an engaging experience. We’ll also discuss the question of what department(s) are the best homes for DH/DS education in both undergraduate and graduate settings, addressing the issue of balance between technology and subject matter expertise.

 
2:00pm - 3:30pm#SG2: Keywords in Asian/Am DH Roundtable
Session Chair: Anne Cong-Huyen
Marquis A, Marriott City Center 
 

Keywords in Asian/Am DH

Anne Cong-Huyen1, Dhanashree Thorat2, Amardeep Singh3, Danielle Wong4, Lia Wolock5

1University of Michigan; 2University of Kansas; 3Lehigh University; 4University of British Columbia; 5University of Wisconsin-Milwaukee

In recent years, ethnic studies scholars have carved an increasingly visible space for digital scholarship informed by and attuned to ethnic and critical race studies. Perhaps less visible amongst these interventions has been the work done in Asian and Asian American Studies. As many would contend, however, before digital humanities emerged as the field we recognize today, Asian Americanists such as Lisa Nakamura, Rachel C. Lee, Mimi Thi Nguyen, Radhika Gajjala and other scholars in Asian American Studies were researching and publishing on issues of digital labor, digital diasporas, online communities, archives, infrastructure, and networks. We situate this scholarship from Asian American Studies in the genealogy of Digital Humanities.

This roundtable brings together Asian Americanist faculty, librarians, and postdoctoral scholars to unpack and frame research, pedagogy, and praxis at the intersection of Asian American Studies, media studies, Science and Technology Studies, and digital humanities. We will address questions often encountered in both digital humanities circles and Asian American studies, where there is seemingly little overlap: Is there an Asian/Am digital humanities? What does Asian/Am DH look like? How do you do Asian/Am DH? At the same time that we address these questions, we will foreground how Asian/Am DH, like other ethnic studies informed DH praxis, centers concerns of race, social justice, transnationalism, and community.

Borrowing the structure of keyword collections, each panelist will be given 5 minutes to illuminate one keyword related to themes, methods, and approaches in Asian/American Digital Humanities:

  • Networks will focus on the larger Asian American literary movement through a quantitative networked study of Asian American periodical culture, and whether it supports the idea that 1974 is a starting point for such a movement;

  • Activism explores connections between the radical tradition of activism in Asian American Studies and the resurgent interest in social and racial justice in digital humanities communities;

  • Connectivity will be addressed as not only a technological feat, but also a cultural practice and process, as revealed by studying the labor of Asian diasporic communities. Contingent and always at risk of breaking down, the maintenance of connectivity demands constant affective, imaginative, and technical effort.

  • Collaboration will highlight the importance, but also the challenges in building and sustaining long-term and equitable collaborations across units and institutions with particular attention paid to precarious labor and power asymmetries;

  • Interface will examine how the historically contested, produced, and performed site of the “inscrutable” Asian face is reimagined on and as digital interfaces, particularly in recent discussions around the selfie politics of surveillance under the travel ban and of the wellness industry.

A substantial amount of time will be reserved to engage with audience members, who we anticipate to be a more general digital humanities audience. We hope that this panel can also provide space to build a community of practice around Asian/Am digital humanities, as many of us are the “lonely only” on our campuses and departments.

 
2:00pm - 3:30pm#SG3: Meta DH Paper Session
Session Chair: Kathleen Fitzpatrick
Marquis B, Marriott City Center 
 

Lightning Talk: Humanities Projects and Google Summer of Code

Patrick J Burns

Classical Language Toolkit

Since 2005, Google has offered summer stipends designed to introduce students to open-source software development through a program called Google Summer of Code (GSoC). While several academically oriented projects participated in the most recent iteration of GSoC, these tended largely to represent STEM fields. In contrast, I have served in recent years as a mentor (and former student) for an open-source humanities project and would like to raise awareness of the program in the Digital Humanities community. This lightning talk sets out to accomplish the following: 1. introduce GSoC to a Digital Humanities audience; 2. give a brief overview of my experience participating both as a student and as a mentor in GSoC; and 3. encourage open-source Digital Humanities projects to consider applying for GSoC 2020.



On the Feedback Loop Between Digital-Pedagogy Research and Digital-Humanities Researchers in DH Tool Building Practices

Kalani Craig, Joshua Danish, Cindy Hmelo-Silver

Indiana University, United States of America

The line between digital-humanities research and digital-humanities pedagogy often seems impermeable. From edited collections to conference submissions, research and pedagogy are structurally separate (Eichmann-Kalwara, Jorgensen, Weingart, 2017), and the tools we use to enact digital analysis are bifurcated along similar lines. This presentation tracks the design and functionality choices that shaped a network-analysis tool, Net.Create, over several cycles of use by both digital-history and digital-pedagogy research teams. The tool was initially developed by the first presenter to support a team of 5 researchers engaged in simultaneous synchronous entry of network-analysis capta (Drucker, 2009) from open-prose text for a digital-history project. Its next iteration supported video and screen-capture data of undergraduate students as a research team explored how network analysis can support history reading comprehension in large active-learning undergraduate classrooms. Two more feedback cycles between these environments resulted in further changes to the tool. The presentation will detail some of the design changes that brought the tool into line with 8 of the 11 features of an ideal network analysis tool that Scott Weingart recently proposed at GHI (Weingart, 2018). We found that research-driven feature choices requested by the digital-history teams fostered more robust student learning. More surprisingly, analysis of the pedagogy-research videos identified several network-theoretical and digital-history-methods supports that directly improved the capta process and coding-schema clarity for the digital-history research teams. Simultaneous entry and live visualization in particular were fostered by the tool’s movement between research and teaching environments. 
These indirect interactions between digital-history researchers and undergraduate history learners make a case for better integration of disciplinary-expert research and rigorous in-classroom pedagogy research in tool-building practices for digital humanists.



Quantifying the Degree of Planned Obsolescence in Online Digital Humanities Projects

Luis Meneses1, Jonathan Martin2, Richard Furuta3, Ray Siemens1

1Electronic Textual Cultures Lab, University of Victoria, Canada; 2King’s College London; 3Center for the Study of Digital Libraries, Texas A&M University

Many online projects in the digital humanities have an implied planned obsolescence: they degrade over time once they cease to receive updates to their content and tools. We presented papers at Digital Humanities 2017 and 2018 that explored the abandonment and average lifespan of online projects in the digital humanities and contrasted how things changed over the course of a year. However, we believe that managing and characterizing the degradation of online digital humanities projects is a complex problem that demands further analysis.

In this proposal, we dive deeper into the distinctive signs of abandonment in order to quantify the planned obsolescence of online digital humanities projects. Our workflow covers each project included in the Book of Abstracts published after each Digital Humanities conference from 2006 to 2018. We periodically create a set of WARC files for each project, which are processed and analyzed using Python (Rossum, van, 1995) and Apache Spark (Apache Software Foundation, 2017) to extract the analytics used in our statistical analysis. More specifically, our analysis incorporates the retrieved HTTP response codes, the number of redirects, DNS metadata, and a detailed examination of the contents and links returned by traversing the base node. This combination of metrics and techniques allows us to assess the degree of change of a project over time.
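The way response codes and redirect counts combine into an abandonment signal can be sketched as a toy classifier. The thresholds and labels below are illustrative assumptions, not the authors' actual metrics or pipeline:

```python
# Hypothetical sketch (not the authors' actual pipeline): classify the
# link-health of an archived project from two of the signals mentioned
# above, the final HTTP response code and the number of redirects.

def classify_link_health(status: int, redirects: int) -> str:
    """Map an HTTP status code and redirect count to a coarse label."""
    if status in (404, 410):
        return "gone"            # page removed or never re-hosted
    if status >= 500:
        return "server-error"    # host still resolves but is failing
    if 200 <= status < 300:
        # Many redirects before a 200 often mean the project moved,
        # e.g. to an institutional archive or a parked domain.
        return "moved" if redirects >= 3 else "live"
    return "unknown"

signals = [(200, 0), (200, 4), (404, 1), (503, 0)]
labels = [classify_link_health(s, r) for s, r in signals]
# labels == ["live", "moved", "gone", "server-error"]
```

In a real pipeline these labels would be derived from the crawled WARC records rather than hand-supplied tuples, and aggregated per project across crawl dates to measure change over time.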

Finally, this study aims to answer three questions. First, can we identify the signals of abandoned projects using computational methods? Second, can the degree of abandonment be quantified? And third, which features are more relevant than others when identifying instances of abandonment? In the end, we intend this study to be a step toward better preservation strategies addressing the planned obsolescence of digital humanities projects.

References

Apache Software Foundation (2017). Apache Spark: Lightning-fast cluster computing. http://spark.apache.org (accessed 11 April 2017).

Rossum, G. van (1995). Python Tutorial, Technical Report CS-R9526. Amsterdam: Centrum voor Wiskunde en Informatica (CWI). https://ir.cwi.nl/pub/5007/05007D.pdf.



Paratexts from the Early English Book Trade: A Work-in-Progress Database

Andie Silva

York College/CUNY, United States of America

By the seventeenth century, the print marketplace was a ubiquitous presence in the lives of both authors and readers, influencing the ways they understood and produced new works. Printers, publishers, and booksellers quickly understood how to properly frame books so that readers would not only know where to buy a book, but would learn to identify categories such as genre and authorship as measures of quality and good taste. These agents of print were responsible for translating this new technology into a recognizable format, and using it to establish relationships with new and returning readers through prefaces, dedications, and even in more structural elements like tables of contents and errata. While Leah Marcus notes that “the printer and the publisher play a striking part [in creating] a strong authorial presence” (193) for printed books, many paratexts play a much larger role in setting up a connection between the stationer as a textual gatekeeper and the reader as a potential customer.

We currently have access to a wide range of digital projects for the study of early modern literature—e.g. Early English Books Online (EEBO), Database of Early English Books (DEEP), and English Broadside Ballads Archive (EBBA), to name a few—as well as to a growing number of projects that aid in the study of the book trade—e.g. The London Book Trades Database, the British Book Trade Index, and the (not publicly available) Stationers’ Register Online. While larger repositories tend to make the study of paratexts difficult (either by inconsistently cataloguing information or omitting it altogether), historical databases typically limit themselves to the already-daunting task of tracing individuals’ biographies and social networks. Book history scholars still need tools that consider how social and labor networks played out not only historically and legally, but also textually through the authorship of dedications, epistles to the reader, and errata. This 15-minute presentation will introduce attendees to a new database for researching paratextual materials authored by early modern stationers. This database will demonstrate the value of considering paratexts from the early book trade as a unique genre, encourage researchers to find new research questions, and invite users to contribute to growing the existing dataset.

Following a discussion of the project’s rationale and an overview of the database in its current iteration, this presentation will conclude by reflecting on the challenges and lessons of working on digital humanities projects without much institutional support or collaborators. Why should early career researchers undertake digital projects? What are the realities of pursuing such projects in light of other institutional demands, and what kinds of realistic timelines and setbacks should be taken into consideration? The audience for this presentation should include researchers interested in book history, project management, and database development, particularly but not limited to those studying early modern England.



Jupyter Notebooks and Reproducible Research in DH

Jeri Wieringa

George Mason University, United States of America

With the use of computational methods and digital sources in the humanities, there is growing concern regarding the need for increased transparency and reproducibility of workflows and code as part of digital scholarly activity. From Alan Liu’s work on reproducible workflows as part of the WhatEvery1Says project to Matthew Burton’s analysis of the digital humanities landscape, scholars are increasingly drawing attention to the need for systematic transparency about data handling and computational methods. Fortunately, the digital humanities are not alone in this endeavor. Researchers in the sciences have been developing processes and technologies around the production and dissemination of code as part of the scientific publishing process. As a result, there are multiple tools and communities of practice already in development which humanities scholars can adapt to document, execute, and publish their computational analysis.
In this presentation, I approach the challenge of reproducible research from the perspective of an individual scholar. Drawing on examples from my dissertation, I highlight strategies for and advantages of integrating narrative, code, and visualizations within interactive documents such as Jupyter notebooks. I will discuss ways scholars can adapt the existing tools and processes around reproducible research to the context of individual digital humanities projects, with attention to the different platforms available and current research on reproducible research in scholarly communication and the sciences. By showing how code can be integrated with visualizations and written analysis using existing tools and software, this presentation argues for the importance of documented code and methods as part of the scholarly output of computational analysis in the digital humanities and the necessity of expanding the paradigm of humanities publishing to support such work.
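One reason notebooks suit this kind of transparency is that a Jupyter notebook file is plain JSON, so interleaved narrative and code can be assembled and inspected with the standard library alone. The following minimal sketch (an illustration, not part of the presentation itself) builds a two-cell notebook in the nbformat 4 schema:

```python
# A Jupyter notebook (.ipynb) is JSON: a list of cells plus metadata.
# This sketch interleaves a markdown cell (narrative) with a code cell,
# following the nbformat version 4 schema.
import json

notebook = {
    "cells": [
        {"cell_type": "markdown", "metadata": {},
         "source": ["## Data preparation\n",
                    "Documenting why each cleaning step was taken."]},
        {"cell_type": "code", "execution_count": None, "metadata": {},
         "outputs": [],
         "source": ["counts = {'a': 3, 'b': 1}\n", "sorted(counts)"]},
    ],
    "metadata": {},
    "nbformat": 4,
    "nbformat_minor": 5,
}

serialized = json.dumps(notebook, indent=1)
# Writing `serialized` to a .ipynb file yields a notebook Jupyter can open.
```

In practice one would use the `nbformat` library rather than raw JSON, but the point stands: because the document format is open and textual, the full record of narrative, code, and output can be versioned and published alongside the scholarship.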

 
2:00pm - 3:30pm#SG4: Embodied Archives Paper Session 1
Session Chair: Tassie Gniady
Marquis C, Marriott City Center 
 

Transforming Archives in Indigenous Languages into Language-Learning Software

Alexa Little

7000 Languages, United States of America

43% of the world’s languages are endangered, and hundreds are no longer spoken. For Indigenous communities, access to archival materials is an important element of promoting and reviving their languages. However, many archival materials are in a format incompatible with language-learning or even user-friendly access.

We are a nonprofit that helps Indigenous communities sustain and revive their languages by creating digital courses.

In this presentation, we will demonstrate our simple process for converting archival materials in Indigenous languages into free language-learning courses. Our program is free and requires minimal technical ability. We will show how to structure a .csv or .xml file so that it can be processed by our software. We will also discuss some of the challenges of translating archival data into language courses, as well as the complicated issues of copyright, ownership, and access that can occur with Indigenous language projects. (In our case, copyright over the archival materials remains with the copyright holder, although copies of the courses are posted to our nonprofit website and the website of our donor.) Finally, we will display examples of courses and language-learning activities that could result from converting archival materials.
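The exact column layout required by the organization's software is not specified here; purely as an illustration, a lesson spreadsheet of the kind described above might group vocabulary rows by lesson, which any tooling can then read with a standard CSV parser:

```python
# Illustrative only: the real column layout expected by the conversion
# software is the organization's own. This sketch shows the general shape
# such a .csv might take: one row per vocabulary item, grouped by lesson,
# with <target-language word> standing in for the actual Indigenous-language
# entry and a pointer to its archival audio recording.
import csv
import io

sample = """lesson,source_phrase,translation,audio_file
Greetings,hello,<target-language word>,hello.mp3
Greetings,goodbye,<target-language word>,goodbye.mp3
Numbers,one,<target-language word>,one.mp3
"""

lessons = {}
for row in csv.DictReader(io.StringIO(sample)):
    lessons.setdefault(row["lesson"], []).append(row["source_phrase"])
# lessons == {"Greetings": ["hello", "goodbye"], "Numbers": ["one"]}
```

The value of a tabular intermediate format like this is that archivists can produce it from almost any source (scanned wordlists, field recordings, legacy databases) without programming expertise.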

Our goal is to reach archivists and institutions who may have collections of Indigenous language materials, and to encourage them to make the materials accessible to community members who want to learn their heritage language.



Participatory Methods and Knowledge Generation to Support Decision Making under Uncertainties

Eveline Wandl-Vogt1, Enric Senabre2, Amelie Dorn1

1Austrian Academy of Sciences, Austrian Centre for Digital Humanities-ÖAW, Austria; 2Open University Catalonia, Internet Interdisciplinary Institute, Dimmons, Spain

This presentation introduces participatory methods and knowledge generation to support decision making under uncertainty, exemplified by an ongoing DH project (three years, four European partners; a multidisciplinary, international European project with a work package on open innovation).

The project aims to develop a multimodal, collaborative platform and share knowledge with the broad range of actors and stakeholders beyond the research community.

Community-based research, where communities can be involved as full partners in proposing and designing projects and solutions along with researchers, represents a challenge in modalities of interaction and as a relationship-building process. With its roots in the action-research tradition of the social sciences, community-based research has recently integrated design thinking, with co-design as its more collaborative dimension. In this way, collaborative creativity can combine visual and conversational modalities for the definition and solving of problems based on design, generating different types and forms of cross-organizational knowledge.

A key principle of co-design is to involve community participants in two types of moments or stages that complement and feed each other: moments of divergence (group dynamics to generate the maximum number of options, ideas, and possible variants) and moments of convergence (participatory techniques to select from among the options generated, focusing on them in a consensual and argued manner).

For this study, based on that key distinction, the following three multimodal methods of co-design in community-based research were tested:

1) Dotmocracy (dot-voting): a collaborative convergence technique for finding consensus

2) Conceptual prototyping: usually based on paper, or on very basic digital interfaces, it allows building just enough to test ideas

3) Toolkits and canvases: a key mode of co-design, consisting of a set of canvas materials prepared by experts in participatory design dynamics

In this presentation the authors focus on the interaction with the communities and offer insights into first experiments based on a range of actors in all societal areas against the background of the Quintuple Helix model (1. University, Research, Science; 2. Government, Politics; 3. Media, Society; 4. Business, Industry; 5. Environment).

The results are described based on participant observation as well as interviews.



Digital Archives from Below: Notes from Alternative Toronto

Lilian Radovac

University of Toronto, Canada

Alternative Toronto is a pilot digital humanities project that is bringing the spirit of the History Workshop movement into the digital realm. A loose collective of historians, archivists, artists and activists, we're building a community-contributed archive of Toronto’s alternative cultures, scenes and spaces of the 1980s and 90s, with a special emphasis on pre-Internet antiauthoritarian, antiracist, and trans*/feminist/queer movements. Our goal is to document radical and countercultural microhistories of this period in order to facilitate intergenerational conversations about how local social and political change happens, while bridging the widening gap between scholarly and public research. As we grow and share our collection, we encourage contributors and visitors to envision what a permanent archive from below might look like.

In this presentation, I’ll discuss the project's rationale and working process and position it in relation to the wider social justice turn in DH, as exemplified by Documenting Ferguson, A People's Archive of Police Violence in Cleveland and Torn Apart/Separados, among other projects. In particular, I want to highlight the importance of interdisciplinary collaborations that combine data collection and visualization with critical historical and ethnographic research, with an eye toward helping contemporary activists to learn, connect and organize in the context of current social crises. I conclude by arguing that this approach is especially necessary in Canada, where social movement history is still under-documented in comparison with the U.S. and U.K., which in turn undermines our ability to resist the injustices of the present.



A Canadian Utopia: A Communal Approach to Digital Scholarship

Lydia Zvyagintseva

University of Alberta Libraries, Canada

This talk operates on the assumption that critique is important, but that acts of imagination and possibility are necessary now more than ever, both in the academy and in society more broadly. Inspired by Frederic Jameson's reimagining of utopia, I respond to Gaudry and Lorenz’s call to envision a socially just Canadian academy beyond mechanisms of inclusion (2018). Recognizing contemporary debates over the indigenization of academic spaces and programs in Canada, I am interested in adopting the ideas of a resurgence-based decolonial indigenization as an opportunity to extend the benefits of balanced power relations to all learners.

From this starting point, I explore the digital scholarship centre as a site for putting into practice Ranciere’s theories of radical intellectual equality and a commitment to intellectual liberation. I also draw on Leanne Betasamosake Simpson’s idea of land as pedagogy as a frame to reconsider knowledge creation and dissemination. My goal with this presentation is to create a space to ask the following questions: What should be the role of the academy in a society where the material conditions of its members have been met and the fundamental relationship is not based on exchange? Can the digital scholarship centre model non-oppressive organization approaches in the context of a research and learning institution?

As such, I focus on several issues as critical components of an imaginary digital scholarship and meaningfully inefficient approaches to knowledge production using technology. These include the importance of responsibilities as well as rights, learning by doing and community-based research, collaborative publication, digital citizenship, the tensions of openness and gatekeeping.

Digital scholarship centres, much like makerspaces in public libraries, have the potential to embody a commitment to public humanities. However, the very definitions of the public good and disciplinarity will require an epistemological reframing in such a proposed utopian context.

 
2:00pm - 3:30pm#SG5: Quantitative Textual Analysis Paper Session 1
Session Chair: Heather Froehlich
Salons 4 & 5, Grand Ballroom, Marriott City Center 
 

Topic Modeling and Textual Analysis of American Scientific Journals, 1818 – 1922

Shawn Martin

Indiana University, United States of America

A distant reading of some of the most prominent American scientific publications of the nineteenth century reveals some very clear patterns. Applying LDA topic modeling and textual analysis methods to over one hundred years of the American Journal of Science (AJS), the Proceedings of the American Association for the Advancement of Science (PAAAS), and the Journal of the American Chemical Society (JACS) between 1818 and 1922 helps historians understand how these journals reflect the larger social context of nineteenth-century American science.

Through much of the early nineteenth century AJS served as a news source for American scientists; in the mid-nineteenth century it began to publish more original research in a variety of different fields, and by the twentieth century was dedicated almost entirely to geology. PAAAS was fairly similar to AJS, but by the twentieth century was almost entirely dedicated to news of the society and also discussed theory and method of science somewhat more often than AJS. JACS combined elements of both AJS and PAAAS. Early in its publication, JACS published research, but it is not until the 1890s that JACS began to serve as a news source and a space for discussion of theory and method in chemistry.

Overall this analysis shows that there was an increase in discussion of business and professional issues and a shift in the journals that scientists used to discuss these issues. This shift happened during a very specific period, 1870–1890, the very same time that specialized scientific societies, particularly the American Chemical Society, split from the more generalized American Association for the Advancement of Science. Therefore, at least in the United States, there is a clear shift in the professional identity of American scientists in the nineteenth century from generalists to specialists, and topic modeling allows historians of science to clearly identify this evolution of professional identity.



Using HathiTrust to Explore Librarianship’s Past

Eric Novotny

Penn State University, United States of America

Using tools developed by HathiTrust, I have analyzed hundreds of volumes of library science journals published between 1876 and 1920, a formative period for American librarianship. In 1876 the American Library Association was founded and the Library Journal was established. Over the next several decades, more specialized journals appeared. These early journals provided a forum for librarians to discuss common concerns.

My research explores a collection of library journals I created in the HathiTrust Digital Library. The collection includes national library publications such as the Library Journal as well as many state publications like the Bulletin of the Iowa Library Commission, and the Wisconsin Library Commission Bulletin. The collection expands the canon and provides historians a more complete picture of early library conversations taking place across the country.

I will share findings generated from the computer analysis and discuss how these results compare with studies conducted using more traditional methods. Additionally, in the spirit of the early library journals, which emphasized the exchange of practical information, I will discuss the process of creating a corpus for text analysis in HathiTrust and the challenges and opportunities afforded by the HathiTrust research tools.



Best Practices in Authorship Attribution in Greek

Sean Vinsick, David Berdik, Patrick Juola

Duquesne University, United States of America

Authorship attribution is a key task, not only in humanities scholarship, but also in applications such as the resolution of legal disputes. As with other forensic disciplines, accuracy is key to fairness and justice. Recent scholarship has focused substantial effort on finding the best and most accurate techniques for determining the author of a document, but much of this effort has been focused on English. Other languages, such as Greek, have not received as much attention.

Using the Java Graphical Authorship Attribution Program, we ran hundreds of thousands of experiments testing the ability to determine the correct author on a corpus of 50 Greek-language blogs. Hundreds of thousands of additional experiments tested authorship attribution on a corpus of Greek-language tweets by 161 authors. Each experiment uses a different combination of canonicizers, event drivers, and distance measures. Using a leave-one-out analysis driver, each author's document set is trained with a single document held out as if from an unknown author; the held-out document is then compared against each of the known documents, which are ranked in order of similarity. This method allows us to grade the correctness of each experiment by checking the held-out document's author against the highest-ranked author. After grading, we can quantifiably demonstrate which experimental options provide the most accurate authorship attribution.
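The leave-one-out procedure described above can be sketched from scratch. This is an illustration, not the JGAAP implementation: the "event driver" here is assumed to be character trigram frequency and the distance measure a normalized histogram (Manhattan) distance.

```python
# Toy leave-one-out authorship attribution: hold out one document,
# rank all remaining documents by distance to it, and check whether
# the nearest neighbor shares the held-out document's author.
from collections import Counter

def trigram_profile(text):
    """Relative frequencies of character trigrams (the 'event driver')."""
    grams = [text[i:i + 3] for i in range(len(text) - 2)]
    counts = Counter(grams)
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

def distance(p, q):
    """Normalized histogram (Manhattan) distance between two profiles."""
    keys = set(p) | set(q)
    return sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

def leave_one_out_accuracy(corpus):
    """corpus: list of (author, text) pairs. Returns the fraction of
    held-out documents whose nearest neighbor has the same author."""
    correct = 0
    for i, (author, text) in enumerate(corpus):
        held = trigram_profile(text)
        rest = [(a, trigram_profile(t))
                for j, (a, t) in enumerate(corpus) if j != i]
        best_author = min(rest, key=lambda at: distance(held, at[1]))[0]
        correct += (best_author == author)
    return correct / len(corpus)
```

In the actual experiments, swapping in different canonicizers, event drivers, and distance measures at each of these steps is what generates the hundreds of thousands of graded configurations.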



Parallel Lines: Modeling Event Modality and the Possible Worlds of Fiction

Matthew Sims, David Bamman

UC Berkeley, United States of America

In his essay “The Art of Fiction,” Henry James harshly criticizes the English novelist Anthony Trollope. “[Trollope] admits,” James writes, “that the events he narrates have not really happened, and that he can give his narrative any turn the reader may like best. Such a betrayal of a sacred office seems to me, I confess, a terrible crime.” James is making a strong claim here for the sanctity of representational authenticity, for the novelist’s responsibility to truth. Truth, however, is a slippery concept, in narrative fiction as well as in life. Rather than trying to account for the verisimilitude of fiction then, we propose a far more straightforward question: within the context of a novel, how might we go about determining those events that are depicted as actually occurring as opposed to all those events that could have occurred based on the expectations, assumptions, and imaginations of the characters and the narrator?

Although this may seem like a difficult problem, there is in fact a direct way to address it. Computational work in event detection in natural language processing (including datasets released under the ACE, ERE and TAC-KBP programs) represents events in part through their linguistic modality: specifically, whether an event is asserted as occurring (“He opened the door”) or not (“He wanted to open the door”; “He would be in trouble if he opened the door”). By adopting this theoretical framework, we can use a powerful form of representation for distinguishing between actual occurrences and possible occurrences (including beliefs, hypotheticals, commands, threats, desires, and promises).
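The asserted/unasserted distinction can be illustrated with a crude heuristic: flag an event verb as unasserted when a modal auxiliary or an irrealis cue (a desire or attempt verb, a conditional marker) appears before it. This is a toy stand-in for the authors' annotation scheme and neural model, and the cue word lists are illustrative assumptions.

```python
# Toy modality heuristic: an event is treated as unasserted when a
# modal or irrealis cue precedes the event verb in the sentence.
import re

MODALS = {"would", "could", "might", "should", "may", "must", "will"}
IRREALIS = {"wanted", "wants", "hoped", "hopes", "tried", "tries",
            "wished", "wishes", "intended", "if"}

def is_asserted(sentence, event_verb):
    """Return True if no modal/irrealis cue precedes the event verb."""
    tokens = re.findall(r"[a-z']+", sentence.lower())
    verb = event_verb.lower()
    if verb not in tokens:
        raise ValueError(f"{event_verb!r} not found in sentence")
    before = tokens[:tokens.index(verb)]
    return not any(t in MODALS | IRREALIS for t in before)
```

On the paper's own examples, "He opened the door" comes out asserted, while "He wanted to open the door" and "He would be in trouble if he opened the door" do not; a trained model replaces this brittle cue-matching with learned contextual judgments.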

Our current work in this space involves annotating events in 200,000 words from 100 different literary texts (creating a new dataset with very different qualities than the news data examined in previous work). Using a neural model trained to distinguish event modality in this annotated dataset, we will discuss the empirical distinctions we find when applying the model to a large collection of novels.

We believe quantifying asserted and unasserted events is useful not only for comparing genres and charting historical shifts in novelistic events, but also for exploring how novels construct possible worlds in parallel to those specific worlds that are realized by the plot. In fact, for some novels, what may have happened or could have happened is arguably as compelling as what actually occurred, indicating in turn the richness of the novelistic imagination.

 
3:30pm - 3:45pm#Break5: Break
Grand Ballroom Foyer A, Marriott City Center 
3:30pm - 4:30pm#NimbleTents: Solidarity Archive: A Nimble Tents Intervention
Session Chair: Alexander Gil
Today we will be mounting a nimble tent to help colleagues at UPR Rio Piedras collect coverage of the protests in US mainstream media, focusing on newspapers. All welcome!
Salon 1, Grand Ballroom, Marriott City Center 
3:45pm - 4:45pm#SH1: Feminist Digital Humanities Paper Session
Session Chair: Emily Esten
Marquis A, Marriott City Center 
 

DH-Mapping "Comfort Women Statues" as Transnational Dissent Opposing the Denial of War Crimes

Nan KIM

History Dept, UW-Milwaukee, United States of America

This presentation introduces a DH project that seeks to document the recent transnational activist movement to memorialize female survivors and victims of wartime violence, a campaign that has frequently faced controversy around the creation of public memorial statues. These statues are mostly replicas or counterparts to an earlier statue that can be found in Seoul, at the site in front of the Japanese embassy. There, a bronze statue depicts a young teenage girl, seated with a placid expression on her face and wearing chin-length hair in the manner of a schoolchild. Originally placed there by a non-governmental women’s advocacy council in 2011, “Peace Statue of a Girl” memorializes victims of the so-called “comfort system” of sexual slavery during WWII when an estimated 200,000 girls and women from several Asian countries and also the Netherlands were abducted and forced into sexual slavery by the Japanese imperial military. Women and girls were captured, coerced, or deceived with promises of employment and sent to military “comfort stations,” including on the frontlines. The remaining survivors and their advocates have long pressed for that system of sexual violence and institutionalized rape to be legally recognized and prosecuted as war crimes, an effort that continues nearly 75 years after WWII ended. In the absence of an official recognition of state responsibility regarding those human-rights violations, replicas of the Statue of a Girl have been erected in various sites internationally – including California, Australia, and Germany – despite objections by the Japanese government. This presentation will introduce a DH project that seeks to document the multi-sited material practices of memorialization as an embodied strategy and transnational discourse of protest. 
An aim of the project is to make more accessible various interpretations of this decentralized movement, with over 50 statues in South Korea and 13 in other countries, as a transnational strategy for sustaining dissent surrounding the denial of war crimes.



Posthumanities: Interrogating Identities in Digital Fourth-Wave Feminisms in the South

Narayanamoorthy Nanditha

York University, Toronto, Canada

In recent years, India has become a battleground for gender wars and feminist activism both on the streets and on digital media platforms. Digital and technoscientific affordances have enormous stakes, not merely in the redefinition and reimagination of digital feminist movements but in enabling a specific functionality for digital feminism to subvert dominant narratives and structures and challenge the status quo of existing pre-ordained fixities in the understanding of traditional gender roles and identities.

This paper analyzes the #MeToo movement in the Southern Hemisphere, particularly in the context of India, through Twitter hashtags, to redevelop theories of digital feminist activism or fourth-wave feminism in the quest for alternative forms of feminist embodiment and humanity than those offered by traditional, dominant and masculinist models of identity delineation. I posit this inquiry outside the realm of Digital Humanities and humanist identitarian politics and locate it in posthumanist thought to counter decentralizing, hegemonic and monopolizing human value systems to eventually arrive at varied possibilities, affirmations and imaginations of female bodies and identities and that in Donna Haraway’s words does not foreclose an understanding of the entangled, relational and processual nature of identity (1991). In other words, I embark on a quest for interrogating identity construction in the context of digital feminism and the dismantling of these utopian structures to enter into new modes of thought using digital spaces that I call digital democracies. The online becomes a site for a complex posthumanistic inquiry; a space of mutuality and broken barriers in identity construction that redefines feminist freedom.

Delving into the particular case study of the #MeToo movement in India enables an exploration of its evolution, its migration to the South and an examination of how women take control of their bodies and sexualities using digital affordances and ultimately reimagine their histories and their futures outside of patriarchy.

I use Twitter Web APIs to extract big data from online digital platforms, Twitter in this case, for a posthumanist close reading of social media discourses, drawing on scholars such as Judith Butler and Donna Haraway, in an attempt to gauge patterns of resistance and subversion and to understand the role of hashtag movements in the Fourth Wave of Indian feminism in the ‘liberation’ of women’s bodies and sexualities. Finally, I argue for a positivist evolution and transformation of Digital Humanities into Posthumanities or Post Digital-Humanities for a critical adoption of new knowledge systems and imaginations of self-definition.



Bechdel.io: The Future of Film and Feminism

Laurel Anne Carlson1, Joseph Stephen Carlson2

1University of Iowa; 2Independent

This collaborative digital humanities project is the product of a shared passion for film, feminism, and the creative potential of technology. By combining the talents and interests of an American Studies scholar and an independent software engineer, we’ve created an innovative data mining tool for feminist film analysis.

The Bechdel Test asks whether a film meets the following criteria: 1) It includes at least two women, 2) who have at least one conversation, 3) about something other than a man or men. The bechdel.io film script parsing tool automatically tests film scripts to determine whether or not they pass the Bechdel Test in just a few seconds.
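The three criteria can be sketched over an already-parsed script, represented here as a list of conversations, each a list of (speaker, gender, line) tuples. The real bechdel.io tool parses raw screenplay text; this data model and the male-reference word list are illustrative assumptions.

```python
# Hypothetical sketch of the three Bechdel checks on parsed dialogue.
import re

MALE_WORDS = {"he", "him", "his", "man", "men", "boyfriend", "husband"}

def passes_bechdel(conversations):
    """conversations: list of conversations, each a list of
    (speaker_name, gender, line) tuples. True if any conversation
    satisfies all three Bechdel criteria."""
    for convo in conversations:
        speakers = {name for name, gender, _ in convo if gender == "F"}
        if len(speakers) < 2:
            continue  # rules 1-2: at least two women in conversation
        text = " ".join(line for _, gender, line in convo if gender == "F")
        words = set(re.findall(r"[a-z']+", text.lower()))
        if not words & MALE_WORDS:
            return True  # rule 3: conversation not about a man or men
    return False
```

Running a check like this over thousands of machine-parsed scripts is what turns a notebook-and-pencil exercise into macro-level data.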

The importance of this tool lies in its ability to analyze films on the macro-level. While anyone can sit through a film with a notebook and pencil in order to determine if it passes the Bechdel Test, this is a slow and cumbersome process. With the tool we’ve created, the process is automated, which allows massive amounts of data to be generated with ease. Thus, data can be produced for large bodies of film, i.e. a certain director’s filmography, a certain actress’ body of work, or for the films released in a specific year.

We view this tool as a form of feminist activism. As such, the software is open source and available for use by anyone and everyone.

 
3:45pm - 4:45pm#SH2: Pittsburgh Paper Session Yinz
Session Chair: Anelise Shrout
Salon 2 & 3, Grand Ballroom, Marriott City Center 
 

Mapping Jazz Venues in Pittsburgh’s Hill District

Marc Patti

Duquesne University, United States of America

The speaker will present a 5-minute Lightning Talk on his ongoing digital mapping project which visualizes the dense concentration of entertainment infrastructure of the Hill District, a predominately-African American neighborhood situated next to downtown Pittsburgh.

From the 1920s to the 1970s, the density of the neighborhood’s entertainment infrastructure resulted in a thriving cultural hub where an abundance of home-grown jazz artists could gain access to opportunities for professional experience and invaluable networking. The history of Pittsburgh’s Hill District consists of a black experience riddled with radiant triumph and bitter tragedy. Despite the diverse experiences of its black residents, most contemporary conceptions of the neighborhood’s history suffer from a strong focus on what Laurence Glasco calls “the Narrative”: the tendency of black history written since the civil rights movement to focus primarily on black struggle and white oppression, often at the expense of appreciating the scope of black achievement. The goal of this visualization project is to present a direct challenge to “the Narrative’s” continued hold on current perceptions of the Hill District.

The project, currently in development for a final project assignment in a Digital Humanities and the Historian course, will utilize Northwestern University’s Knight Lab StoryMapJS program to visualize the dense concentration of nightclubs, theatres, and bars that made up the Hill District’s entertainment infrastructure. Each venue will be plotted at its original location, complete with original photos and a description linking to further reading. These points will be plotted on a reconstructed map depicting the area as it existed before Pittsburgh’s Urban Renewal strategy decimated the Hill’s cultural district. This process will produce a visualization depicting the high density of entertainment infrastructure, communicating the neighborhood’s cultural significance in a way that written narratives can’t. The use of photographs for each plot point will also contextualize vast quantities of digitally available primary sources in one centralized location, which currently does not exist. And ultimately, this visualization will seek to distill and synthesize current scholarship on the neighborhood’s jazz venues into smaller, more transparent narratives while providing references that can point interested readers toward more detailed sources should they want to know more.



Mashers and Street Harassment in Progressive Era Pittsburgh, 1880s to 1930s

Lauren Churilla

Carnegie Mellon University/Saint Vincent College, United States of America

My proposed paper focuses on issues of street harassment between 1880 and 1940 in the city of Pittsburgh. During this time, police targeted offenders known as mashers and arrested them under disorderly conduct laws. While I do not suggest that all disorderly conduct cases are mashing cases, street harassment did fall within the accepted definition of disorderly conduct. This project looks at the demographic and spatial patterns regarding street harassment in respect to the stereotypical male masher found within urban public space in Pittsburgh from 1884 until 1939. Other studies have defined the male masher in purely cultural terms and have argued that men who bothered women on the streets came from a background of affluence. They have concluded that these fashionable men were the primary threats to female mobility within urban space. Using police records, my study adds to this picture an entire class of men previously unrecognized in the scholarship: white, working-class men. These men made up 94% of the men arrested in Pittsburgh for disorderly conduct, the offense under which mashing was prosecuted. This discovery adds to previous studies’ conclusions about the identity of the masher and thereby reveals that the annoyance of women was an action shared across class lines and thus much more widely practiced and experienced by women than previously recognized.

Additionally, the use of a geographic information system (GIS) mapping framework allows for the investigation of spatial patterns of disorderly conduct arrest within the city of Pittsburgh. Using a QGIS model, I am in the process of exploring visual patterns that look at when men were arrested and uncover popular locations of arrest or where concentrations of disorderly conduct occurred. Such an analysis will help to explore patterns of behavior and reveal places within the city that police identified as “problem areas” for offensive behavior against women. While my project is historical in nature, current day law enforcement use GIS to examine many of the same issues about the spatial relationships of crime. Digital maps “are the quickest means of visualizing the entire crime scenario” and provide a simple means to convey a multiplicity of information about a crime.[1] Additionally, GIS allows for the integration of community characteristics that can be used to interpret relationships between urban environment and crimes. Examining street-harassment and trying to account for patterns of behavior among groups of men is critical to understanding how Progressive Era women experienced being in public space and exercised autonomy within the metropolis. In order to understand what mashing meant for women’s experiences, we must discern how widespread it was.

[1] C.P. Johnson, “Crime Mapping and Analysis Using GIS,” Geomatics 2000: Conference on Geomatics in Electronic Governance, January 2000, Pune, accessed February 15, 2018, http://fac.ksu.edu.sa/sites/default/files/crim_mapping.pdf.



Building a Digital Public Space with (not for) Pittsburgh's Music Communities

Kelly Hiser2, Toby Greenwalt1

1Carnegie Library of Pittsburgh; 2Rabble LLC

Join us for a conversation between a public librarian working at the intersection of digital learning, collections, and engagement, and a humanities scholar turned software executive. We're working together to build STACKS, an ongoing document of the Pittsburgh region’s vital, evolving music scene. Powered by the open-source platform MUSICat, STACKS shares artist profiles, streaming, and downloads with the community. Carnegie Library of Pittsburgh uses MUSICat to run open submission rounds, license albums from local artists, and work with community leaders to curate the STACKS collection.

Built outside of academia with contemporary rather than historic material, local music collections like STACKS are outstanding examples of how public librarians are engaging with critical issues facing the field of digital humanities, including copyright (Alan Liu, 2011), critical stances toward digital tools (Jamie Skye Bianco, 2012), and ethical engagement with communities (Deb Verhoeven, 2015). Collections like STACKS—built "with not for" communities (Laurenellen McCann, 2015)—in public and civic digital spaces offer digital humanists productive frameworks for building ethical and just relationships in many kinds of projects.

Together, we'll walk through how we apply this "with not for" ethic to our work, sharing how the library collaborates with its community and how the MUSICat team works with librarians to build digital public spaces that center the concerns of musicians and communities.

For the library, STACKS is a critical component in a feedback loop of community engagement that cycles through inspiration (the spark that leads to creative activity), interaction (person-to-person connections that foster learning and discovery), transaction (resources being shared and used), and, finally, reflection (when community members share their work with the library and community). Built in collaboration with leaders from local music communities, STACKS draws energy and inspiration from existing library practices, programs, and places that have community engagement at their core. By creating connective tissue between all four of these stages, this loop serves as a virtuous cycle in facilitating creative community output.

The developers building the MUSICat platform strive to work with the librarians and musicians who use the platform to create ethical digital tools that respect privacy and enable inclusive and fair licensing and curating practices. Our track record is not perfect; we will speak to the real-world process of making mistakes and compromises, educating our less technical partners and ourselves, and continually working toward better practices and tools.

Initiatives like STACKS break down the boundaries between library collections and library communities by involving users in the aggregation of materials they themselves create. Empowerment is at the heart of our work: we aim to amplify work by local creatives that is often unheard, while ensuring everyone involved can make meaningful contributions and reap concrete benefits. This is the process of working “with not for” in practice.

 
3:45pm - 4:45pm#SH4: Embodied Archives Paper Session 2
Session Chair: Spencer Keralis
Marquis B, Marriott City Center 
 

Critical Digital Archives

Hannah Alpert-Abrams

Brown University, United States of America

Critical digital archives are digitized collections of vulnerable archival materials, often produced under conditions of political or environmental urgency. Because of the urgency of their creation and the vulnerability of the materials, these collections are often digitized with limited funding, short timelines, and under the supervision of individuals who lack formal archival training. Yet given these same conditions of urgency and vulnerability, the indefinite delay of digitization is often considered unacceptable. This is especially true for collections that reflect histories of marginalization, oppression, and violence. As with many justice-oriented projects, knowing circumstances are imperfect, we move forward anyway.

It is my contention that the spread of critical digital archives calls for a new paradigm of archival theory from within the humanities. As Michelle Caswell has argued persuasively, humanistic archival theory has had little interaction with the intellectual work of professional archivists, largely because we have treated archival practice as a feminized service profession rather than an intellectually rigorous pursuit. This leaves us ill-prepared to take on the work of digital archiving, to the detriment of our projects and the frustration of our archival collaborators. One solution might be to place digitization work exclusively in the hands of archivists. Yet in the context of U.S. universities, I have found that faculty leadership is essential to maintaining the long-term investment required for the creation of a sustainable collection. The community relationships and specialized knowledge of humanists are also vital to the ethical description and dissemination of digitized materials from vulnerable collections.

In this talk, I use the case of the Archivo Histórico de la Policía Nacional de Guatemala (AHPN), a digital collection hosted by the University of Texas at Austin, to articulate a role that faculty can play in advocating for critical digital archives, supporting sustainable archival practice, and educating for and with digital archives. Through these three forms of intervention, I argue for a more constrained role for faculty members in the digital archive. At the same time, I will show that this approach can enable more effective collaborations that better allow us to achieve the goals of preservation and dissemination which motivate the digitization of critical archives. This approach allows humanists to leverage our considerable theoretical training and institutional power towards a more collective archival good.



"We Demand Increased Exposure of All Documents": African American Student Protests and Increased Access to Archives via Digital Collections

Lauren Havens, Max Eckard, Elizabeth James, Thomas Vance, Ozi Uduma

University of Michigan, United States of America

Following a monumental Twitter campaign begun by students at the University of Michigan in the fall of 2013, students of the Black Student Union set forth to address major issues that were presented. Several members traveled to the university’s major archive to research the history of the Black Action Movements at U-M in order to obtain knowledge and inspiration. Upon visiting the archive, they encountered challenges in accessing materials. Although the archival materials had been, strictly speaking, accessible, some of the rules seemed restrictive compared to how they had been available previously. Materials were scattered across several different collections--making them hard to find--and located on the University’s North Campus after some had been removed from a more accessible location on central campus. Placing them on a more remote location of campus left many students feeling that they were increasingly difficult to access.

This ultimately led to a protest coinciding with Martin Luther King, Jr. Day in 2014, where members of the Black Student Union bore witness to racial issues on campus and called for more minority inclusion on campus. Due to their previous experiences, as one of the seven demands they made of the University, they called for easier access to materials considered critical to their identity at the University: “We demand for increased exposure of all documents within the Bentley (Historical) Library. There should be transparency about the University and its past dealings with race relations.”

In response to this demand, the Library digitized the records of the Department of Afroamerican and African Studies, which detailed the Department’s history, as well as archival material relating to black activism and organizations of interest to black students, faculty and staff. The Library Information Technology department within the University Library enables preservation and access to this digital collection along with over 250 other digital collections. All told, this student-initiated effort took more than eight months to complete, and today all University students, faculty, and staff may access digitized content from any location. In the process of responding to the protests and building the collection, the University and those working there gained knowledge and established ties that may not only facilitate better relations going forward but also aid in the development of similar digital collections as needed in the future.

Bringing together perspectives from those involved with the Black Student Union and the Department of Afroamerican and African Studies, as well as from those who helped to create and will maintain the digital collection, this presentation highlights the positive outcomes and collaborations that resulted from this process, the issues encountered along the way, and the limitations that restrict usage of the digital collection to authenticated users rather than opening it to the public.

Our story will be useful to organizations that wish to continue to improve access to information, especially information that sheds light on histories that may be uncomfortable or sensitive. It also illuminates the many issues involved in digitizing archival materials, including the privacy and copyright questions we encountered.



Sounding Spirit and Readux: Cultural Paratext and Augmented Facsimile in Digital Scholarly Editions

Jesse P. Karlsberg

Emory University, United States of America

Understanding significant gospel, spirituals, and shape-note music songbooks from the late nineteenth to early twentieth centuries involves looking beyond text and music to paratextual elements, ranging from music notation system to format and page dimensions to evidence of use, that are important markers of these works’ cultural context. Yet most recent approaches to digital music editions, and digital editions generally, erase these markers of bibliographic form, centering new digital renderings of encoded music and text in the user’s browsing experience. In this paper, I discuss Sounding Spirit, a forthcoming NEH-funded series of digital critical editions of vernacular sacred American music books from 1850–1925. Sounding Spirit employs Readux, a new platform for annotating and publishing digital scholarly editions, developed by Emory’s Center for Digital Scholarship (ECDS), that emphasizes books’ bibliographic forms in their digital expression. Readux is a Django/Python application that builds on the Mirador image viewer and the IIIF protocol to augment annotated digitized books with transparently rendered text in a seamless digital interface. By retaining bibliographic forms in digital (music) editions, new critical editions can better subject to analysis the technologies of print that meaningfully express these works’ contexts before and after the turn of the twentieth century, a time of dramatic demographic and cultural change that shaped intersections of race, religion, region, and music in the United States. This presentation articulates the value of an approach to digital editions that centers cultural paratext through augmented facsimile and describes the Readux platform that enables Sounding Spirit to adopt this approach to digital critical editing.
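
A IIIF-based viewer like the one Readux builds on retrieves page images through the Image API’s URL pattern ({identifier}/{region}/{size}/{rotation}/{quality}.{format}). A minimal sketch in Python; the server base URL and page identifier below are invented for illustration, not Readux’s actual endpoints:

```python
# Build a IIIF Image API request URL. A viewer asks the image server for
# arbitrary regions and sizes of a digitized page via this URL scheme.
BASE = "https://iiif.example.org/iiif"  # hypothetical image server

def iiif_url(identifier, region="full", size="max", rotation=0,
             quality="default", fmt="jpg"):
    """Assemble {identifier}/{region}/{size}/{rotation}/{quality}.{format}."""
    return f"{BASE}/{identifier}/{region}/{size}/{rotation}/{quality}.{fmt}"

# e.g. a 600x800-pixel crop of a (hypothetical) songbook page, scaled to 300px wide
url = iiif_url("songbook-p042", region="0,0,600,800", size="300,")
```

Because the crop and scale live in the URL, a facsimile interface can overlay transcribed text on exactly the page region it came from.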

 
3:45pm - 4:45pm#SH5: New Horizons in Network Analysis Panel
Session Chair: Scott Weingart
Marquis C, Marriott City Center 
 

New Horizons in Network Analysis

John Ladd1, Melanie Walsh1, Maeve Kane4, Matthew Erlin1, Matthew Lavin3, Scott Weingart2

1Washington University in St. Louis; 2Carnegie Mellon University; 3University of Pittsburgh; 4University at Albany-SUNY

In the past decade, network analysis in the humanities has grown from a niche community into a rich and active area of scholarship, large enough to sustain regular conferences, journals, and academic centers. Node-link diagrams, centrality measures, and the small world effect are now common features of digital humanities projects, but there is still much that network analysis has to offer. We propose an hour-long panel that explores productive emergent areas of network research in the humanities. While early trailblazing DH network projects introduced the affordances of network visualization and analysis to humanities scholars, more recent projects are borrowing network techniques that have matured in other fields, including sociology, physics, and epidemiology, among others. This new work positions network metrics alongside visualization as a primary site of analysis, and it applies quantitative network analysis to a wider range of critical objects and concerns.

Our panelists will present three projects that showcase possible new directions for humanities network analysis in 15-minute segments, followed by a discussion led by an additional panel member.

In the first presentation, a panelist will explore network morphology as a tool for examining colonial erasure of indigenous women in the settler colonial archive. In conversation with new work on erasures and silences in early American history, this project examines how document genre shaped network morphology and the visibility of indigenous women’s community influence to European observers. Rather than using network analysis for an empirical argument about indigenous women’s structural place in their community networks, this project suggests ways to read networks for European understandings of indigenous women’s roles.

In the second project, two panelists will examine the social and conceptual networks of the eighteenth-century German Enlightenment based on a computational network analysis of 66,000 journal articles from the Zeitschriften der Aufklärung electronic database. The analysis entails two distinct but related avenues of inquiry. The first is a consideration of authorial co-publication networks, undertaken with the aim of better understanding the collaborative and collective practices that shaped intellectual life in this period. The second investigates semantic networks derived from article titles. When constructed on the basis of a large number of source texts, these semantic networks can shed valuable light on the conceptual topography of a particular historical moment. One key question is how these approaches can be combined to illuminate the relationship between literary and scientific discourses in the period.
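
The semantic-network step described here can be illustrated with a small sketch: build a term co-occurrence network from article titles, where an edge’s weight counts how often two terms share a title. The titles and stopword list below are invented stand-ins, not drawn from the Zeitschriften der Aufklärung database:

```python
from collections import Counter
from itertools import combinations

# Invented sample titles; the real corpus is 66,000 journal-article titles.
titles = [
    "Ueber die Freiheit des Menschen",
    "Freiheit und Vernunft",
    "Vernunft in der Naturlehre",
]
stopwords = {"die", "des", "und", "in", "der", "ueber"}

def cooccurrence_edges(titles, stopwords):
    """Count how often two content terms appear in the same title."""
    edges = Counter()
    for t in titles:
        terms = sorted({w for w in t.lower().split() if w not in stopwords})
        edges.update(combinations(terms, 2))  # one edge per term pair
    return edges

edges = cooccurrence_edges(titles, stopwords)
```

At scale, the weighted edge list becomes the input to standard network tooling, and recurring heavy edges trace the "conceptual topography" the abstract describes.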

In the third presentation, two panelists will address a common question in humanities network analysis: how can we determine which nodes are structurally similar to others, whether intra-network (as in the 19th-century American print network) or inter-network (as in early modern dramatic text networks)? Using modern clustering and supervised classification (or "machine learning") techniques, it is possible to build models of structural similarity based on network metrics such as centrality measures and clustering coefficients. These models allow us to compare nodes more easily and work out questions of who within a large, complex network shares similar social fields.
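
One way to sketch this metric-based approach: compute a small feature vector per node (here just degree and local clustering coefficient, in plain Python) and treat nodes with similar vectors as structurally similar. A real analysis would feed richer features into clustering or supervised classification models; the toy graph below is hypothetical:

```python
from itertools import combinations

# A small undirected network as an adjacency dict (hypothetical).
graph = {
    "A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B", "D"},
    "D": {"C", "E"}, "E": {"D"},
}

def features(graph, node):
    """Per-node structural features: (degree, local clustering coefficient),
    i.e. the share of neighbour pairs that are themselves linked."""
    nbrs = graph[node]
    k = len(nbrs)
    if k < 2:
        return (k, 0.0)
    links = sum(1 for u, v in combinations(nbrs, 2) if v in graph[u])
    return (k, links / (k * (k - 1) / 2))

# Nodes with matching feature vectors occupy similar structural positions.
vectors = {n: features(graph, n) for n in graph}
```

Comparing vectors works both intra-network (which 19th-century authors occupy similar positions?) and inter-network (which characters in different plays do?), since the features abstract away from node identity.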

 
5:00pm - 6:00pm#NimbleTents2: Solidarity Archive: A Nimble Tents Intervention
Session Chair: Alexander Gil
Today we will be mounting a nimble tent to help colleagues at UPR Rio Piedras collect coverage of the protests in US mainstream media, focusing on newspapers. All welcome!
Marquis C, Marriott City Center 
5:30pm - 7:00pm#NightInOakland: Thursday Night in Oakland
 
5:30pm - 7:00pmKeystoneDH: Keystone DH Planning Meeting
Session Chair: Henry Alexander Wermer-Colan
Session Chair: Nabil Kashyap
Brief meet-up at Sharp Edge Bistro to plan next year's Keystone DH conference at Temple University. Anyone interested in or curious about helping plan next year's conference is welcome. Questions: email Alex.Wermer-colan@temple.edu, phone 7812641992, or @alexwermercolan on Twitter.
Sharp Edge Bistro 
7:30pm - 10:00pm#Newcomer'sDinner: ACH Newcomer's Art & Dinner/Thursday Night in Oakland
ACH Newcomer's Dinner is open to any attendee via our signup sheet. Attendees are expected to pay for their own meal. Shuttles will leave from the Marriott City Center at 5pm, with return trips from Oakland scheduled for 7:15, 8, 9, and 10pm.
 
Date: Friday, 26/Jul/2019
8:00am - 12:00pm#Reg3: Registration/Check-In
Grand Ballroom Foyer A, Marriott City Center 
8:00am - 5:00pm#BookExhibit3: Book Exhibit 3
City Center A, Marriott City Center 
9:00am - 10:30am#SI1: Networks Paper Session
Session Chair: Matthew Hannah
Marquis A, Marriott City Center 
 

Beyond Letter Networks

Ruth Ahnert1, Sebastian Ahnert2

1Queen Mary University of London; 2University of Cambridge

Archives of correspondence have provided a fruitful area of application for network analysis in recent years. Our own research has exploited an archive of over 132,000 letters digitized at State Papers Online from the Tudor period (1509-1603), using quantitative network analysis to uncover the social and textual organization of this vast archive. However, as Scott Weingart has warned "By only looking on one axis (letters), we get an inflated sense of the importance of spatial distance in early modern intellectual networks.... Distant letters were important, but our networks obscure the equally important local scholarly communities". This paper explores methods for recovering those local networks at scale from letter metadata and content description fields. By reconstructing the itineraries of all the letter authors from the location from which letters were sent, we can discover people who were in a given location at the same time. We can then cross-refer these findings with mentions of those individuals in the other's correspondence (using the letter content description), as well as estimate the significance of two people overlapping by the frequency with which that location was visited over the period of study. We will discuss both our methods for undertaking this process and our key findings.
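
The co-location method described above can be sketched roughly as follows; the letter records here are invented placeholders, not State Papers Online data, and real itineraries would need finer-grained and sometimes uncertain dates:

```python
from collections import defaultdict

# Hypothetical letter metadata: (author, place_sent_from, year, month).
letters = [
    ("W. Cecil", "London", 1570, 3),
    ("F. Walsingham", "London", 1570, 3),
    ("W. Cecil", "Hampton Court", 1570, 5),
    ("F. Walsingham", "Paris", 1570, 5),
]

def co_present(letters):
    """Group letter authors by (place, year, month) to find people who were
    plausibly in the same location at the same time."""
    at = defaultdict(set)
    for author, place, year, month in letters:
        at[(place, year, month)].add(author)
    # keep only place-times where two or more authors overlap
    return {k: v for k, v in at.items() if len(v) > 1}

overlaps = co_present(letters)
```

Each overlap is then a candidate local encounter, to be weighed against mentions in content descriptions and against how heavily trafficked the location was over the period.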



Picking Up Good Citations: Tracing Networked Ethos Across Rate Your Music

Thomas Lawson

University of Pittsburgh, United States of America

Within rhetoric and composition, digital rhetoric has become a burgeoning subfield in the last decade. In addition to a focus on digital methods for rhetorical analysis and the role of interfaces in rhetorical productions, analyses of review-based database cultures such as RateBeer have received a share of the subfield’s critical attention. For example, scholars such as Jeff Rice have demonstrated how users construct an ethos when reviewing databased content. Oftentimes, the marshaling of community-specific commonplaces with which ethos is constructed is considered to be mimetic, with certain keywords emerging from the repetition of terminology within the network, yielding a common vocabulary and taste.

For this paper, I argue that such scholarship has neglected how a shared vocabulary might derive from these sites’ organization and description of content, promoting an ethos of expertise that entails mastery of the database's digital archives. To illustrate this, I turn to Rate Your Music, where digital archives related to genre, compiling the canons and curios of an exhaustive list of styles, are central to one’s navigation of the database and thereby facilitate a thorough knowledge of popular music’s stylistics. Moreover, turning to Bernard Stiegler’s concept of mnemotechnologies, I show how these pages’ crowd-sourced genre descriptions, for which users comprehensively cite authoritative sources to supply information, enable users to circulate genre-related commonplaces based upon these bygone critics’ writings. That is, this mastery of the site’s archives results in heavily referential, genre-minded reviews of contemporary releases, signaling expertise by repeating the database’s narrow genre categories, its list of canonical and obscure albums, and past experts’ descriptions of these musical styles.



Accounting for Taste: Scientific Print by Subscription in Restoration England

Pierce Williams

Carnegie Mellon University, United States of America

The circulation of scientific print was a key process in the seventeenth and eighteenth centuries whereby scientific knowledge became widely accessible as a form of lay expertise and a source of symbolic capital. Yet little can be said with precision about the audience for scientific print or how it evolved over time. In an ongoing effort to characterize the emergence of public science, scholarship has focused predominantly on the textual record left by scientific elites, who originated experiments in laboratory spaces, and by experimental demonstrators, who re-staged laboratory experiments in social spaces. However, to understand fully when and by whom science came to be imagined as a public enterprise, a more textured portrait of the non-specialist audience for science is necessary.

This paper makes use of network statistics that were generated using an undervalued source of data concerning the non-specialist audience for scientific print. Subscription lists record the names, addresses, pedigrees and a variety of other information about individuals who paid for copies of books before they were printed. After a fitful emergence, publishing by subscription became a common practice in the eighteenth century whereby authors and publishers gauged the demand for risky publications and secured financial commitments in advance. Extant subscription lists record the commitments of every class of society, from bricklayers to Oxford dons and officers of state. They also indicate the kinds of learning that people found worthwhile and when. Moreover, the social scheme of things is almost invariably reproduced in miniature in the hierarchy of subscription lists.

Subscription lists have long been dismissed as a valid basis of inference for two reasons: first, they do not present reliable pictures of readership; second, the vast constellation of economic and sociological data they record are difficult for serial readers to analyze at scale. With respect to the latter concern, this paper demonstrates the power of network analysis to address the multimodal and multiplex data preserved in subscription lists. With respect to the former concern, I argue that while subscription lists may not present reliable pictures of readership, they do present reliable pictures of what Chris Warren has called “net work”: the material and social labor of making and sustaining associations—between communities and commitments, people and ideas, and practices and public judgments about their place in society.
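
The "net work" reading of subscription lists treats each list as one mode of a bipartite person-book network. A minimal sketch of the one-mode projection (subscriber-to-subscriber ties weighted by the number of shared subscriptions), with invented names and titles:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical subscription-list data: book -> set of subscribers.
subscriptions = {
    "Opticks (subscribers' ed.)": {"J. Smith", "M. Jones", "T. Brown"},
    "A Course of Experiments": {"M. Jones", "T. Brown"},
}

def project_subscribers(subscriptions):
    """One-mode projection of the bipartite network: the weight of a tie is
    the number of books two people both subscribed to."""
    weights = defaultdict(int)
    for subscribers in subscriptions.values():
        for pair in combinations(sorted(subscribers), 2):
            weights[pair] += 1
    return dict(weights)

ties = project_subscribers(subscriptions)
```

Attributes recorded on the lists (address, pedigree, occupation) can then be attached to nodes, making the multimodal data the paper describes tractable at scale.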

Rather than attempting to infer intellecutal influences on the public from the contents of their libraries, this study analyzes the evolving appeal of public science on the basis of the contours of its non-specialist constituency. In the process, I discuss the challenges of using what are in fact graphs of commercial data as a proxies for sociocultural change. I also consider the affordances and limitations of various methods for reducing graph complexity to facilitate interpretation.



You Are What You Watch: Mapping Cultural Difference via Media Consumption

Brendan Kredell

Oakland University, United States of America

This paper begins from the fundamental premise that our efforts to understand culture via the taste preferences that signal it are limited by the ways those preferences themselves are constrained. Perhaps the most conspicuous example of this is the binary of American electoral politics: the deep blue hues of Alabama's Black Belt, Texas' Rio Grande Valley, and the Main Line suburbs of Philadelphia all look the same to us on a map, their superficial similarity obscuring significant cultural differences at the local level.

With this research, I propose that we can achieve a much more nuanced portrait of cultural difference in the United States via another indicator: home movie rentals. Using ZIP code-level data representing rentals in twelve of America's largest metropolitan areas from 2009, I am able to capture preference with a degree of granularity that allows for neighborhood-level analysis of audiences. I identify a variety of patterns by looking across the viewing habits of residents in roughly 3000 separate geographies; what emerges is a portrait of what Daniel Dayan once memorably termed the "map of an American Babel." Inspired by Deb Verhoeven's notion of the "computational turn" in media studies, I augment traditional reception studies with GIS and data analysis; in so doing, I assert the importance of cultural specificity – and difference – when understanding media audiences. Following Bourdieu, my analysis explores the production of capital within "the economy of prestige" and how the geography of taste is reproduced through cultural distinction.

Here, I focus on a set of seven films with greater-than-expected levels of variation within the geographical distribution of their popularity. Few critics would think to speak of Milk, Rachel Getting Married, Frost/Nixon, Vicky Cristina Barcelona, Tyler Perry's The Family That Preys, Obsessed, and Paul Blart: Mall Cop as a coherent group; however, by tracing their cultural afterlives via the home video market, I demonstrate how we can see in their reception patterns evidence of the unevenness of the topography of media consumption. Distinct patterns emerge through this kind of spatial analysis, leading me to argue that home video allows us to better understand not only the degree to which media consumption is bound up in issues of race and class, but also the ways in which the residential segregation of America has influenced the strategies of its media industries.
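
One simple way to flag titles with greater-than-expected geographic variation is a dispersion measure such as the coefficient of variation of rentals across ZIP codes (ignoring population normalization). A sketch with invented counts; the paper's actual statistic may well differ:

```python
from statistics import mean, pstdev

# Hypothetical rental counts per ZIP code for one title.
rentals = {"15213": 40, "15232": 5, "15217": 35, "15206": 4}

def dispersion(counts):
    """Coefficient of variation of a title's popularity across geographies;
    higher values mean taste in the title is more geographically uneven."""
    values = list(counts.values())
    return pstdev(values) / mean(values)

cv = dispersion(rentals)
```

Ranking titles by such a score surfaces the films whose audiences cluster in particular neighborhoods, which is the selection criterion the paragraph above gestures at.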

 
9:00am - 10:30am#SI2: What Do We Teach When We Teach DH Across Disciplines Roundtable
Session Chair: Amanda Phillips
Salon 2 & 3, Grand Ballroom, Marriott City Center 
 

What Do We Teach When We Teach DH Across Disciplines?

Brian Croxall1, Diane Jakacki2, Zach Whalen3, Amanda Phillips4, Angel David Nieves5, Toniesha Taylor6

1Brigham Young University, United States of America; 2Bucknell University, United States of America; 3University of Mary Washington, United States of America; 4Georgetown University, United States of America; 5San Diego State University, United States of America; 6Prairie View A&M University, United States of America

Over the last decade as digital humanities research has flourished, disciplinary conferences have featured increasingly vigorous discussions about teaching digital humanities. We now find ourselves in a discipline that is not so new (acknowledging, of course, that DH is as old as the computer itself) and simultaneously at a moment when we need to talk formally about teaching and learning. As such, if the unacknowledged debate that sits at the heart of discussions about digital humanities is always, “What is digital humanities?”, it’s important to acknowledge how that question is always already related to the question of how we teach digital humanities.

This panel will feature four practitioners from different domains: communication studies, history and urban studies, game studies, and cultural studies. We have chosen these fields in part to carve out a space for digital humanities (and digital humanities pedagogy) against “traditional” literary studies, which is often strongly represented. Our history and urban studies panelist outlines the unique challenges of teaching inter-, multi-, and trans-disciplinary courses within a limited range of available course offerings, and how they require new ways of thinking about including digital methods and tools in pre-existing courses in a department’s course offerings. The panelists will discuss how their pedagogy works against this hegemonic narrative—or whether their praxis simply and artfully sidesteps the literary altogether. As the panelists discuss what teaching digital humanities means within their particular domain, they will also consider how (and when) they can strike a balance between complementary pedagogical modes. Our game studies panelist, for example, approaches teaching games, design, and social justice in terms of process rather than product, which embodies the hacking ethos of the digital humanities. Finally, they will discuss what, specifically, they teach when teaching “Introduction to Digital Humanities,” a course that has flourished over the last 10 years but that depends (as it should) entirely on the context of its teaching–the instructor, her/his department, and the level of the course. As our digital studies practitioner writes, “I use a modular approach in the introductory course where students select their entry points and outcomes, choosing their own intellectual adventures in cohorts of similarly-inclined peers.”

The panel organizers will ensure that the speakers do not exceed their allotted time by more than 60 seconds. We hope that the session will last for 60 or 75 minutes, providing ample time for questions and discussions among the panelists, organizers, and the audience.

 
9:00am - 10:30am#SI3: Embodied Data Paper Session 3
Session Chair: Heather Froehlich
Salons 4 & 5, Grand Ballroom, Marriott City Center 
 

Digital Curation for Social Justice: Strategic Approaches to Metadata as Nexus for Collaboration Between Archives and Digital Humanists

Arjun Sabharwal

The University of Toledo, United States of America

Research in public history and the digital humanities has relied extensively on rich collection- and item-level metadata to curate digital collections in institutional repositories. Well before the arrival of digital technologies, social historians had, since the 1960s, extensively perused archival collections related to labor movements, women's suffrage, disability history, and underrepresented and marginalized ethnic, racial, and gender groups. With the help of computers, they were also turning to computer-generated data as an additional tool for analysis, teaching, and writing. In the digital environment, the relationship of archivists, digital curators, and digital humanists has not only thrived but has led to crossover (transdisciplinary) work. While digital humanists began creating informal digital archives, professional archivists and digital curators began exploring the digital humanities for new perspectives on curatorial practice. Virtual exhibitions (thematic hypertextual representations) have emerged as a tool for curating public history, humanities, and eventually scientific topics.

Metadata, which had been the focus of cataloguing, has gained greater visibility and relevance to the emerging digital curation landscape and the information architectures evolving alongside these developments. The new role for metadata has become a vital component in the nexus for collaboration across institutions as well as data preservation, content migration, and knowledge organization. Via OAI-PMH and Linked Data, collection- and item-level metadata have become accessible through discovery layers and harvesters such as the Digital Public Library of America, ArchiveGrid, and Google Scholar. The same metadata has also become instrumental in visualizing historical, humanities, and other scientific data for computer-assisted analysis and hermeneutic work. This presentation will focus on selected digital curation projects focused on social justice-related topics and the role of metadata in supporting digital humanities scholarship.
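
Metadata exposed via OAI-PMH typically arrives as Dublin Core XML that downstream tools flatten into field-value pairs for discovery layers and harvesters. A minimal parsing sketch; the sample record is invented:

```python
import xml.etree.ElementTree as ET

# A minimal invented oai_dc record of the kind exposed via OAI-PMH.
record = """<oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
              xmlns:dc="http://purl.org/dc/elements/1.1/">
  <dc:title>Labor Movement Pamphlets</dc:title>
  <dc:subject>Social justice</dc:subject>
  <dc:subject>Labor unions</dc:subject>
</oai_dc:dc>"""

DC = "{http://purl.org/dc/elements/1.1/}"  # Dublin Core namespace

def dc_fields(xml_text):
    """Collect Dublin Core elements into a field -> list-of-values mapping."""
    root = ET.fromstring(xml_text)
    fields = {}
    for el in root:
        if el.tag.startswith(DC):
            fields.setdefault(el.tag[len(DC):], []).append(el.text)
    return fields

fields = dc_fields(record)
```

Once flattened this way, the same metadata can be re-serialized for aggregators like DPLA or fed into visualization and analysis tools.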



As Data, As Performance: Humanities Content Twice-Behaved

Susan Garfinkel

Library of Congress, United States of America

We are in a data moment: the quantified self; the computational turn; the data revolution; the datafication of everything. This paper explores the designation and use of humanities content “as data”—text as data, collections as data, and the like—as an interpretive, performative act. Viewing, thinking about, or using a something or set of somethings “as data” implies both an equivalence and a transformation, a swapping of conceptual overlays or frameworks that may or may not transpose our something(s) into something(s)-else. But can we swap the container, the wrapper, without changing its contents? Its contents’ meanings? How does “as data” change things?

Dramatist Richard Schechner makes a strikingly straightforward and useful distinction between “is performance” and “as performance”—two ways to think about performance in the larger world it inhabits. In an early version of his foundational textbook, for example, Schechner writes that “any event, action, item, or behavior may be examined ‘as’ performance. Anything at all may be studied ‘as’ performance.” Useful definitions of performance come from a variety of sources, most broadly “language in use” from Saussure and “discourse in practice” from Foucault. In the field of performance studies, which brings together theater and anthropology, Barbara Kirshenblatt-Gimblett identifies performance as “embodied practice and event” while Schechner’s definition is “behavior that is twice-behaved.” “Performance isn’t ‘in’ anything,” he writes, “but ‘between.’” All of these definitions acknowledge the meaning in the doing, in the making, in the sharing.

Here I use “as performance” to think about how “as data” changes things. Is humanities content already data by default, or does the act of framing it “as data” make it twice-behaved? While the title of the first “Collections as Data” IMLS grant, “Always Already Computational,” suggests that data-ness is inherent, the tendency of the Santa Barbara Statement it produced is more aspirational: “The concept of collections as data emerges at—and is grounded by—a particular moment in the recent history of cultural heritage institutions… [that] have rarely built digital collections or designed access with the aim to support computational use. Thinking about collections as data signals an intention to change that.” Surely in the context where such a conscious change is needed, “as data” content becomes mutable, and therefore contingent, transformative, and performative.

Thus, at this extended moment of the computational turn, in our current digital-scholarly ecosystem, this paper will take up an exploration of prepared humanities data as twice-behaved, analyzing some of the concrete changes that are rendered upon humanities content when it is transformed for computation, or potential computation, in research settings. Of necessity this includes the creation of item-specific metadata as well. Drawing inspiration from critical code studies as well as performance studies, examples for close readings of humanities content “as data, as performance” include Visualizing English Print, The American Philosophical Society’s Open Data initiative, The Library of Congress’s Chronicling America newspaper project, and Smithsonian X 3D.



Presence and Absence with Derived Historical Data: The Enslaved Community Owned and Sold by the Maryland Province Jesuits

Sharon Leon

Michigan State University, United States of America

In 1838 Thomas Mulledy, S.J. signed his name to an agreement selling the 275 enslaved persons who resided on Jesuit-owned estates in Southern Maryland to Louisiana. The sale served as the culmination of the Maryland Province of the Society of Jesus’s fraught experience with slaveholding in the colonial and early national period. While much historical work has been written on Jesuit slaveholding, that writing has primarily focused on the implications for the religious community and the moral universe in which these men made their decisions about slavery. Thus far, however, no scholar has studied the enslaved people themselves.

My work in the Jesuit Plantation Project <http://jesuitplantationproject.org> focuses on the lives and experiences of the enslaved, rather than on their Jesuit owners. Focusing on the enslaved community itself makes this project ideally suited for digital methods. With an eye to the events and relationships that formed the warp and woof of the daily lives of this enslaved community, I have worked to identify more than 1,000 individual enslaved people present in the documentary evidence and to situate them within their families and larger community. In processing and representing this archival research, I employ linked open data and an array of techniques to visualize the entire community of enslaved people and their relationships to one another across space and time. These approaches allow me both to focus on the distinct individuality of each enslaved person and to have the capacity to pull back to grasp the community in aggregate, noting trends and changes in their experiences and relationships during their time in Maryland.

Working with these digital methodologies opens up a host of important questions about their appropriate application to the history of enslavement and the representation of the enslaved. Linked open data principles demand that every individual be represented by a stable uniform resource identifier (URI)—a space on the web that is their own, where historians can gather information about their lives. Nonetheless, we are faced with the difficulty of providing a responsible point of entry that allows visitors to grapple with the representation of an individual and the hundreds of others who shared their lives and experiences. Social network analysis and visualization offers some promise, but also raises a host of difficulties that I will explore in this paper: What does it mean to apply social network analysis measures to a community that is bounded and has very little control over their inclusion/movement? With a significantly incomplete data set, what is the threshold at which social network analysis makes sense? What are the appropriate visualizations to provide an entry point to this medium-sized collection of data? How can we guard against erasing the significance of these individuals in an effort to provide an aggregated view of their community? How can historians best integrate these techniques with traditional narrative interpretation to provide users—both members of the interested public and scholars—with a rich understanding of the lives of an (this) enslaved community?
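
The linked-open-data principle mentioned above (a stable URI per individual, with statements gathered around it) can be sketched as simple N-Triples generation; the namespace, identifiers, and relation vocabulary below are hypothetical, not the Jesuit Plantation Project's actual scheme:

```python
# Hypothetical base namespaces; a real project would mint its own stable URIs.
BASE = "http://example.org/enslaved/person/"
VOCAB = "http://example.org/vocab/"

def person_uri(person_id):
    """Stable URI for one individual, wrapped for N-Triples syntax."""
    return f"<{BASE}{person_id}>"

def triple(subject_id, relation, object_id):
    """Emit one N-Triples statement linking two identified individuals."""
    return f"{person_uri(subject_id)} <{VOCAB}{relation}> {person_uri(object_id)} ."

# e.g. a (hypothetical) family relation between two identified individuals
t = triple("p0001", "parentOf", "p0002")
```

Because every statement points at stable URIs, family and community relations accumulate around each person's identifier rather than being locked inside any one document.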



What Are We Doing With Our Data?

Spencer Keralis1, Elizabeth Grumbach2, Sarah Potvin3

1Digital Frontiers, United States of America; 2Arizona State University, United States of America; 3Texas A&M University, United States of America

Despite the 2011 mandate by the National Endowment for the Humanities’ Office of Digital Humanities for funded projects to develop plans for sharing project data, the embrace of open data in the digital humanities has been uneven at best. A 2015 analysis of 400 successful proposals conducted by this project team found fewer than 6% indicated a commitment to open data. While cultural memory institutions increasingly look to Linked Open Data to make their data widely accessible, the reluctance on the part of digital humanists to adopt this technology has the potential to limit the discoverability and usability of DH data in the semantic web. This paper presents updated findings from our ongoing research into trends in open data praxis in digital humanities. Through a FOIA request, the project team obtained all of the Data Management Plans from NEH-ODH funded projects through the 2018 grant cycle, and conducted text analysis on this corpus to determine whether and how data from these projects is being preserved and shared. Our classification of funded projects according to their commitment to produce or apply linked and/or open data reveals a strikingly small subset. We identify projects which purport to produce linked open data, and determine whether they have fulfilled their promise, and to what effect. Finally, we identify potential barriers - social, institutional, and technological - to the implementation of linked and open data technologies, and we suggest next steps for research and programming to address a growing gap between projects situated in humanities departments and those in cultural heritage institutions. By examining trends in data management, preservation, and sharing as presented by Data Management Plans for funded projects, we offer a forecast of what a linked, open future for digital humanities might offer, and what hurdles we as a community must overcome to get there.
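
A keyword-heuristic classification of the kind that might flag commitments in Data Management Plans can be sketched as follows; the keyword lists are illustrative assumptions, not the study's actual coding criteria:

```python
# Hypothetical keyword heuristic for flagging data-management plans that
# commit to linked and/or open data; the real study's criteria may differ.
LINKED = {"linked data", "linked open data", "rdf", "sparql"}
OPEN = {"open data", "open access", "creative commons", "cc0"}

def classify_dmp(text):
    """Label one plan for linked-data and open-data commitments."""
    t = text.lower()
    return {
        "linked": any(k in t for k in LINKED),
        "open": any(k in t for k in OPEN),
    }

plan = "All datasets will be released as Linked Open Data under CC0."
labels = classify_dmp(plan)
```

Keyword flags are only a first pass; distinguishing a stated commitment from a fulfilled one still requires checking each project's published outputs, as the paper describes.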

 
9:00am - 10:30am#SI4: Digital Textuality Paper Session 3
Session Chair: Emily Sherwood
Marquis B, Marriott City Center 
 

Frankenstein Variorum: Finding Insights in Comparisons

Emma Ruth Slayton1, Elisa Beshero-Bondar2, Jack Quirk1, Scott Weingart1, Avery Wiscomb1

1Carnegie Mellon University, United States of America; 2University of Pittsburgh, United States of America

Frankenstein is arguably one of the most influential works of science fiction, but the novel exists in multiple versions, with considerable controversy over which is the "best" text. Mary Shelley re-wrote Frankenstein several times, leading to major changes that are difficult to track from the first manuscript through three print editions and one set of handwritten edits. To track the changes across these five versions, the Frankenstein Variorum project publishes all the editions in the standard markup language of the Text Encoding Initiative (TEI) and displays variant readings between editions, highlighting “hotspots” of variation. By enabling evaluation of these changes, the Variorum helps researchers come to new understandings of the text and Mary Shelley’s intentions during her writing and editing process. While past presentations of the Variorum at ADHO and Balisage have discussed our team's efforts in building the TEI of the project, this presentation concentrates on the design of an accessible visual interface, contextual framing of annotations, and geographic orientation applied to the Variorum edition.

Since the Variorum tracks changes in Frankenstein over time, it aims to provide a distinctive experience for its contextual annotations, including a GIS component that not only identifies locations mentioned in the novel but also follows Mary Shelley's travels in the years that she wrote and revised the text. The annotations, too, yield contextual insight into the most significant "hotspots" of variation—insight as yet undeveloped in previous print and digital editions of Frankenstein. The integration of an ESRI Story Map, which maps both the novel and the author’s travels, will enable researchers interested in the physical location of events in Frankenstein, as well as in changes in the mention of places between different versions of the novel, to more fully explore these and other issues. By comparing animated spatial journeys against the text, we add a new layer of context for those interested in exploring the complexities of the writing process in relation to space and place. Key technological challenges for our team involve connecting digital contextual work, made in the ESRI story map, to the TEI of the Variorum edition. We will also discuss how the use of an online interactive map can showcase the geographic context of the work, and we invite participation and response from the audience in interacting with the story map.



The Impact of Literature on Early AI Research

Avery Jacob Wiscomb, Daniel Evans

Carnegie Mellon University, United States of America

At a time when tech visionaries and engineers are calling for serious moral reflection on the future of machine learning and artificial intelligence (AI), we trace one thread in the history of thought about the sciences of the artificial. This text analytics project compares the complete papers of Herbert A. Simon (about 1.1GB of plain text) to 408 books taken from his personal library. Our work in progress attempts to identify some of the literary or philosophical sources present in Simon’s early discussions of AI and “the artificial brain” in the 1950s and 60s. Despite his later reputation as a computer scientist, Simon was trained as a student of political science at the University of Chicago, where he took courses in philosophy, biology, and economics. His research was acclaimed for its interdisciplinary nature, spanning many fields. So we were curious whether ideas from the books Simon read also appeared in his writings about AI and related technologies; there seems to be something distinctive in the way traditional literature colors Simon’s views of contemporaneous affairs, and his vision for the future of the machines we have in our pockets and homes today. Throughout his life, Simon also maintained that the computer, the organization, and even the individual human mind were each a “species of the genus information processor,” a position some have labeled fundamentally anti-humanist, at odds with a scholar who read Proust, Shakespeare, and Aristotle. To map the interplay of ideas in Simon's work, we use Word2Vec and other software to compare word embeddings in his writings against some of the books that were in his library and later donated to Carnegie Mellon University, where he taught for more than 50 years. We also report on our early exploration of Simon's corpus using the text-mining application Sifaka, which is built on top of the open-source search engine Lucene.
By tracing Simon's influences, we seek to inform essential problems concerning the future of machine learning and AI by turning to its literary and philosophical past. We argue that similar analyses could help situate and ground historical sources for the digital in the history of humanistic inquiry from which AI springs.
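The embedding comparison described above rests on one core operation: measuring how close two words' vectors are. The sketch below illustrates that operation only; the toy vectors, corpus names, and vocabulary are invented, and the project's actual Word2Vec pipeline is not reproduced here.

```python
from math import sqrt

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = sqrt(sum(a * a for a in u))
    norm_v = sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Invented toy embeddings; in practice these would come from models
# trained on Simon's papers and on his library books, respectively.
papers_vec = {"mind": [0.9, 0.1, 0.3], "machine": [0.8, 0.2, 0.4]}
library_vec = {"mind": [0.85, 0.15, 0.35]}

# How similarly does "mind" behave in the two corpora?
score = cosine_similarity(papers_vec["mind"], library_vec["mind"])
```

A high score for a word suggests it is used in similar contexts in both corpora, which is the kind of signal the project looks for when tracing literary influence.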



Institutional Challenge: Text Encoding Rare 19th Century Job Printer Volume

Lisa Hermsen, Rebekah Walker

Rochester Institute of Technology, United States of America

This paper will describe the use of TEI to offer text analysis for a rare category of print: products of industry or business printing, the jobbing printers that produced it, and the clients who used it. We have identified five objects from a job printer’s firm, dated 1885-1920, including a prices volume, an address book, a library catalogue, and Volume no. 3 (a work manual, and the focus of this project). The rarity of this collection and the messiness of its creation require text encoding to accurately analyze the contributions of this printing house and its place within a larger network of business practices of the time. TEI will be used as a way to manage collection transcription and to provide the most promising access to the original source material.

Volume no. 3 is a work manual overflowing with print job details regarding vellum binding, cat gut, durable paper, and much more. It also includes passages regarding the ethics of good work, assignment of duties, and the firm’s observation of Sundays and Holidays. The volume is particularly interesting in that it highlights the firm’s reputation for manufacturing accounting books in what the printer described as “the Bankers Way.” This volume describes special binding materials and methods necessary to throw the accounting book flat and recommends ruling for the pages suited for double-entry bookkeeping. While double-entry accounting had existed for centuries, this practice is thought to have transformed new industry, wage labor, and capital investment in England in the nineteenth century. This collection, therefore, includes valuable information about not only book and print history, but also how printing influenced and affected the history of finance and accounting.

The organization of Volume no. 3 presents a difficulty for transcription. It is loosely organized by an alphabetical index of topics and accompanying page numbers. Over time, topics were inserted, pages perhaps glued in place, and page numbers crossed out. Many topics appear with multiple page numbers that may or may not be accurate. Within the volume, there are sketches, receipts, and price lists. Often the script is interrupted by insertions noted with a later date and initialed.
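As a rough illustration of how TEI can capture this kind of messiness, the fragment below uses Python's standard library to build a tiny index record with the real TEI elements <del> (a crossed-out page number) and <add> (a dated, initialed insertion). The entry text, nesting, and attribute choices are invented for illustration and do not represent the project's actual encoding scheme.

```python
import xml.etree.ElementTree as ET

# One invented index entry: the original page reference "42" was
# struck through and replaced by "57" in a later, dated hand.
entry = ET.Element("entry")
term = ET.SubElement(entry, "term")
term.text = "Vellum binding"

ref = ET.SubElement(entry, "ref")
struck = ET.SubElement(ref, "del")   # TEI <del>: deleted text
struck.text = "42"
added = ET.SubElement(ref, "add")    # TEI <add>: a later insertion
added.set("when", "1902")            # dated insertion
added.text = "57"

xml_out = ET.tostring(entry, encoding="unicode")
```

Encoding both the deleted and the inserted page numbers preserves the volume's layered revision history rather than flattening it into a single "correct" reading.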

We will use text encoding to cull information from the collection, primarily working with undergraduate classes and capstone students starting in Spring 2019. We will use TEI not only to make the text machine-readable, but also provide a scheme for transcribing all five volumes in the collection. Our ultimate goal is to publish an indexed, searchable website containing full-text transcription and analysis for easier and quicker access to more people: the current scholars working on the project; students working with the materials; and future, specialized researchers.

This paper session, presented by the primary scholar working with the collection and the digital librarian helping shape and foster classroom engagement, will present work in progress for what is a seminal digital humanities project at our institution: initial analysis of encoded texts and student work inspired by the encoded text.



The Conglomerate Era

Dan Sinykin

University of Notre Dame, United States of America

I analyze how the conglomeration of US publishing changed literature. In this talk, I report on my findings based on a new corpus of book reviews and on computational modeling performed over my Random House and nonprofit press corpora in my HathiTrust data capsule.

With a collaborator, I developed a corpus that indicates where, out of more than five hundred possible publications, each of more than a million titles was reviewed between 1950 and 2000. We produced from this a corpus of the 1% most-reviewed US novels with metadata on the race and gender of the author and the publisher. Through social network analysis, I show how publishers compare in terms of the immediate reception of their novels in the period, with particular attention to disproportionate gender distributions.

In my computational modeling, I adapt, with another collaborator, the model he used in an essay in Cultural Analytics on genre in post-WWII US literature to discover latent patterns in Random House's list from 1950 to 2000. The model includes topic modeling, stylistic features, and extra-textual features including race and gender.

Additionally, I use the machine learning technique of text classification to analyze differences in literary form between conglomerate and nonprofit novels. This is work in progress on which I will report at the conference.
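To make the text-classification step concrete, here is a minimal nearest-centroid bag-of-words classifier in Python. The training snippets, labels, and test sentence are invented toy data; the study itself presumably works with full novel texts and a more sophisticated model.

```python
from collections import Counter

def bow(text):
    """Bag-of-words feature vector as a Counter of lowercase tokens."""
    return Counter(text.lower().split())

def centroid(docs):
    """Average word counts across a class's training documents."""
    total = Counter()
    for d in docs:
        total.update(bow(d))
    return {w: c / len(docs) for w, c in total.items()}

def similarity(vec, cent):
    """Dot product between a document vector and a class centroid."""
    return sum(count * cent.get(word, 0.0) for word, count in vec.items())

def classify(text, centroids):
    """Assign the label whose centroid best matches the document."""
    vec = bow(text)
    return max(centroids, key=lambda label: similarity(vec, centroids[label]))

# Invented toy snippets standing in for excerpts from each press type.
train = {
    "conglomerate": ["the thriller raced toward its verdict",
                     "a lawyer raced against the clock"],
    "nonprofit": ["quiet prose about memory and landscape",
                  "a meditation on memory and loss"],
}
centroids = {label: centroid(docs) for label, docs in train.items()}
label = classify("memory and landscape in quiet prose", centroids)
```

The interesting output of such a model for literary history is less the labels themselves than the features that drive them, which is what lets a classifier surface formal differences between the two kinds of presses.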

I bring together these two lines of analysis, social network analysis of review reception and computational modeling of full text, to illuminate how women and writers of color at Random House and nonprofit presses responded formally through autofiction to interpersonal misogyny, structural patriarchy, and racism in publishing.

 
9:00am - 10:30am#SI5: The State of Digital Humanities Software Development Roundtable
Session Chair: Matthew Lincoln
Marquis C, Marriott City Center 
 

The State of Digital Humanities Software Development

Matthew Lincoln1, Zoe LeBlanc2, Rebecca Sutton Koeser3, Jamie Folsom4

1Carnegie Mellon University; 2University of Virginia; 3Princeton University; 4Performant Software

In 2013, the University of Virginia Library Scholars’ Lab convened an NEH-funded symposium exploring the often-tacit intellectual labor in the practice of digital humanities software development. Called “Speaking in Code”, this event drew attention to issues faced by coding practitioners in the field. But where do we stand now, five years on? This roundtable will bring together DH software developers from a variety of different roles and professional stages, including in-house developers associated with institutional DH centers, independent consultants, and faculty who produce and publish their own code.

This range of perspectives is crucial for understanding how institutional contexts affect all aspects of DH development, from tool building, to research projects, to teaching programming. Some universities have adopted the “Virginia Model” pioneered by UVA and George Mason, establishing a centralized library DH shop. How does this structure compare to practices around individual faculty/grad-students-as-coders? Or to the lab-based PI organization with staff research programmers prominent in the sciences? Or programs that contract with independent consultants?

This roundtable will also grapple with the relationship between industry-based “best practices” and DH development. Taking advantage of tools and techniques such as unit testing, code coverage, and pair programming ought to result in better software products with fewer bugs that are easier to maintain. But how well does that correlate with better scholarship? While software developers often strive to develop minimal components that can be used in a variety of applications, many DH projects pride themselves on context-specificity. In which projects do these priorities complement each other, and in which do they conflict? Larger software engineering teams benefit from the ability to do things like code sprints, code review, etc. Can one-person or smaller DH developer shops do the same?

Different organizational contexts also complicate the rhythms of DH development. How do we determine when something is complete, in the absence of a publication deadline that requires us to stop refining it? Grant-driven development can be quite different from developing for industry; planning cycles are much longer, and projects can be “resurrected” when a former PI lands a new grant. Goalposts also change, as when a small product built for one article or class now needs to be deployed as a larger “bulletproof” service.

Our participants will also discuss the personal considerations of doing software engineering for the academy. Participants' training and backgrounds span a wide spectrum from industry and freelancing to humanities departments and self-taught programmers. How does this diversity shape DH development? As DH developer becomes a more established position, how does the academy treat software engineering time differently from researcher or teacher time? And what are the opportunities for professional advancement in software engineering in DH development?

We hope that this roundtable will be of particular interest to software developers in the DH community, broadly defined. We also believe this roundtable will be valuable for non-programmers who want to better understand the mindsets of their colleagues.

 
9:00am - 4:00pm#Install4: Installations: A City for Humans, Data Beyond Vision, The Cybernetics Library
City Center A, Marriott City Center 
 

A City for Humans

Everest Pipkin, Loren Schmidt

Withering Systems

This installation proposal is for an interactive digital diorama installed into a terminal or arcade machine at the conference. Titled “A City For Humans”, this project allows the public to collaboratively build a dynamic, shifting landscape together by typing single words on a keyboard. The text is parsed for 3000+ common nouns and verbs, which immediately appear in the city as visual objects, represented by hand-drawn tiles.
For example, if a person types the words tree and rain, each translates to an individual tile placed in the world: writing a noun like ‘tree’ makes one appear, while verbs, like rain, cause an action, such as a small rainstorm forming over part of the city. The visitor to the space can simply type these single words, or can choose to tell more narrative stories (e.g., ‘a pine tree is standing in a rainstorm’), which will be parsed similarly. This way, each contribution is given living form in the diorama.
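A minimal sketch of this parsing step might look like the following; the vocabularies and tile/action names here are invented stand-ins for the project's 3000+ recognized words and hand-drawn tiles.

```python
# Invented vocabularies: nouns map to tiles placed in the diorama,
# verbs map to actions triggered in the simulation.
NOUN_TILES = {"tree": "tile_tree", "well": "tile_well", "flower": "tile_flower"}
VERB_ACTIONS = {"rain": "spawn_rainstorm"}

def parse_input(text):
    """Scan free text for known words; return tiles to place and actions to run."""
    tiles, actions = [], []
    for word in text.lower().split():
        if word in NOUN_TILES:
            tiles.append(NOUN_TILES[word])
        elif word in VERB_ACTIONS:
            actions.append(VERB_ACTIONS[word])
    return tiles, actions

# Unrecognized words ("and") are simply ignored, so narrative
# sentences degrade gracefully to their recognizable parts.
tiles, actions = parse_input("tree and rain")
```

Because the parser only reacts to words it knows, both terse single-word input and longer narrative sentences flow through the same code path.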
Each type of object also has specific rules that govern its behavior. For example, groups of related plants like to form around one another, and will populate particular regions rather than scattering randomly across the landscape. Roads, sidewalks, fences, hedges, and aqueducts form in linear rows, while plants, buildings, and detritus clump together more organically.
Furthermore, intelligent things like people and animals have sets of shifting needs and desires. A person may be thirsty or hungry, but may also desire more abstract things, like excitement, or beauty. These entities have freedom of movement, and will seek out objects that meet these needs, like a well (for thirst) or a field of flowers (for beauty). In this way, these placed people, animals, and plants will go about their daily business, reacting to one another and the world around them as it changes.
Rather than attempting to distill, abstract, or pare down community data, “A City for Humans” is a 1:1 interaction with those who choose to engage with it. However, this engagement is not temporary: much like real investment in place, the changes made to the digital city persist over time, influencing the digital world for the indefinite future.
The project’s central goal is to foster a sense of community by providing a quiet but responsive platform to collaboratively build a beautiful space together.
We are also dedicated to producing visualization systems that prioritize ‘small data’: in a world of big-data visualization, we also need generous and playful networked systems that respond to the individual, the hyper-local, and the immediate. This project maintains that data visualization is not inherently an abstraction, a reduction, or an illustration. Rather, it can be a specific and responsive exchange that facilitates play, experimentation, joy, and a sense of place.



Data Beyond Vision

Rebecca Sutton Koeser1, Xinyi Li2, Nick Budak1, Gissoo Doroudian1

1Princeton University, United States of America; 2Pratt Institute, United States of America

Data visualization is frequently used in Digital Humanities for exploration, analysis, making an argument, or grappling with large-scale data. Increasing access to off-the-shelf data visualization tools is beneficial to the field, but it can lead to homogenized visualizations.

Data physicalization has the potential to defamiliarize and refresh the insight that data visualizations initially brought to DH. The proliferation of 3D modeling software and relatively affordable 3D printing technology makes iterative, computer-generated data physicalization more feasible. Working in three dimensions gives additional affordances: parallel data series can be seen next to each other, rather than color-coded, overlapped, or staggered; and physical objects can be viewed from multiple angles, allowing for changing perspective.

Data visualization necessarily privileges sight. Participants can experience data through sensing — feel, touch, hear. Touch is particularly significant, since, like sight, it is a meta-sense and because it affords intimacy, as feminist philosopher Luce Irigaray has discussed. By foregrounding sensory experience and embodiment, we will challenge conference participants to consider other approaches for engaging with and representing humanistic data. Multimodal data explorations incorporating touch and sound can offer new possibilities of accessibility to those with low vision (for example, see the #DataViz4theBlind project). Spatial, acoustic, and temporal dimensions of data representation can generate rich narratives, invite the audience to explore new relationships, and turn passive consumption into a sensory experience that encourages interpretation. In addition, creating data physicalizations is a form of critical making; the iterative and reflective process requires more time to engage with the data, including the human aspects represented.

The final multimedia installation will display descriptions of the methods and processes alongside the finished data physicalization objects and dynamic displays, offering conference participants a hands-on opportunity to explore humanities data through these other modes of engagement. We are inspired by the work of Lauren Klein and Catherine D’Ignazio, who encourage a reorientation toward the emotional and affective qualities in our engagement with data. In employing physicalization as a technique to corporealize and “re-humanize” humanities data, we follow the ethical principles articulated by the Colored Conventions Project to “contextualize and narrate the conditions of the people who appear as ‘data’ and to name them when possible.”

Pieces in the installation will utilize space, time, and/or interaction to provide new ways of engaging with a dataset and the arguments and narratives behind it, in order to challenge the dominant paradigms of conventional screen-based data visualization.

Provisional list of pieces:

  • 3D printed model of library member activity over time from the Shakespeare and Company Project, juxtaposing documented activities from two sets of archival materials

  • Folded paper models for individual membership timelines from the Shakespeare and Company Project, allowing attendees to select a library member and fold a model based on their data, supporting the recovery of women and non-famous members.

  • A weaving representing intertextuality based on references in Jacques Derrida’s de la Grammatologie from Derrida’s Margins



The Cybernetics Library: Revealing Systems of Exclusion

Sarah Hamerman1,2,3, Melanie Hoff1,4, Charles Eppley1,2,6, Sam Hart1,2, David Isaac Hecht1,5, Dan Taeyoung1,5,7

1Cybernetics Library; 2Avant.org; 3Princeton University Libraries; 4School for Poetic Computation; 5Prime Produce Apprenticeship Cooperative; 6Fordham University; 7Columbia University GSAPP

We propose a 4-day installation of a physical library collection, digital interface, and software simulation system. We are a research/practice collective that explores, examines, and critiques the history and legacy of cybernetic thought via the reciprocal embeddedness of techno-social systems and contemporary society. The installation aims to examine and expose to users patterns of systemic bias latent within those systems and their use. The collection will be housed in custom-built, secure furniture and made accessible to all attendees of the conference.

Our collective comprises members from a diverse set of backgrounds and practices, including art, architecture, technology, publication, librarianship, gender studies, media/cultural studies, cooperatives, fabrication, design, simulation, queer studies, and more. We work on the project independent of institutional affiliations, but have had numerous successful collaborations, and were the organizers of an independent but highly successful conference from which our ongoing project emerged.

From this outsider position, our project seeks to refigure and make more accessible the relationships between people, technologies, and society. The project has been manifested through activities such as community-oriented artistic installations, reading groups, workshops, and other public programs. The project also incorporates ongoing development of tools, platforms, and systems for enhancing, deepening, and extending engagement with the knowledge it organizes and to which it provides access. The project aspires to support its collaborators and users by serving as a connecting node for disparate communities that share intellectual or activist goals for exploring and advancing art, technology, and society.

The first version of the software simulation system used cataloging data to form associations between the usage histories of users of the library system, as well as linking content from works accessed during the initial conference to the topics presented by the speakers (in the context of a multi-layered visual representation). Another system, part of an installation at a program around the theme of "uncomputability", prompted users to participate in the construction of a collective poem by scanning in books from the collection which had meaningful associations for them. Another highly interactive implementation allowed users to engage their practices of sharing knowledge through metaphors of gardening: cultivation, care, attention, and community.

Our installations have been featured by The Queens Museum, The Distributed Web Summit by The Internet Archive, The School For Poetic Computation, Prime Produce, The Current Museum, vTaiwan, and Storefront for the Commons.

While the specific implementation of the installation for the ACH conference is still in the preliminary stages of development, we are building on the themes of direct engagement and collective, emergent explorations of structures of knowledge that can reveal hidden assumptions and biases latent in our approaches to technology and society. Based on our history of successful, memorable installations and collaborations, we are confident that this installation will contribute a valuable critical, conceptual, and technological resource to the conference. We hope to produce an ecology for new collaborations, unexpected encounters, and deeper explorations of the themes and methods of the conference, and would be happy to provide more detail soon.

 
9:00am - 4:00pmInstall4: Installations: Museum of Forbidden Technologies
The presenter(s) will be available from 1 pm to 3 pm to answer questions about this installation.
Salon 6, Grand Ballroom, Marriott City Center 
 

Museum of Forbidden Technologies

Emily Esten

N/A, United States of America

In the bimonthly podcast Welcome to Night Vale, a small desert community experiences a world where every conspiracy theory is true. From the Sheriff's Secret Police to mysterious lights in the night sky, the town's constant surveillance offers an absurd perspective on our own. One of several town attractions in the podcast, the Museum of Forbidden Technologies features numerous exhibits, all of which are covered in thick burlap at all times, with explanatory plaques completely blacked out with permanent marker. The exhibits—featuring time machines, lie detectors, and pollution-free energy—declare both what is forbidden to Night Vale residents and how they may experience it (which is to say, not at all).

As a one-day installation during ACH 2019, this submission brings a pop-up Museum of Forbidden Technologies for participants to experience. In this small, one-room installation (part fan-culture celebration, part public humanities project, part surveillance-studies intervention), visitors will consider the "forbidden technologies" within our real world. True to the Night Vale experience, it may take more effort to view these objects and learn more about them. But true to ACH, a participatory intervention into the exhibit will make a strong contribution to challenging the ethics and questions of our field. Participating in a one-of-a-kind experience, museum visitors will use this time to address complex themes around agency, detection, and technology.

These objects—such as the Drone Warriors' technology at Standing Rock, anti-surveillance facial camouflage, and community ordinances for input and oversight over the use of surveillance technologies—highlight efforts to counteract the unethical physical and digital surveillance methods at work in the world around us. Exhibit labels will address the histories of these objects and how they found their way into the museum. This installation has three goals for visitors: examine the way communities address and respond to surveillance; contemplate the language of "forbidden" and "technology"; and consider the role of science and technology museums in exploring this dilemma.

 
10:30am - 10:45am#Break6: Break
Grand Ballroom Foyer A, Marriott City Center 
10:45am - 12:15pm#SJ1: Reanimate Roundtable
Session Chair: Roopika Risam
Salon 2 & 3, Grand Ballroom, Marriott City Center 
 

Reanimate: Recovering, Reviewing, and Redistributing Lost Intersectional Histories of Media

Carol A. Stabile1, Gabriela Baeza Ventura2, Adrian Driscoll3, Sarah Kember3, Trevor Munoz4, Roopika Risam5, Carolina Villarroel2

1University of Maryland; 2University of Houston; 3Goldsmiths University; 4University of Maryland; 5Salem State University

For decades, literary scholars have effectively and successfully engaged in projects of recovery. That Zora Neale Hurston and Nella Larsen are now names that successive generations of high school and college students have encountered in their studies owes to the efforts of scholars who worked to recover, review, and redistribute texts that were out-of-print, overlooked, and in some cases, forgotten. But how can innovative approaches to publishing, made possible through digital humanities methods, accelerate this process?

This roundtable brings together partners in Reanimate, a collective of digital humanists working across institutions and disciplines to examine this question. Reanimate takes as its inspiration these earlier scholarly efforts, in its own work excavating bodies of work across the twentieth century by cultural producers creating counternarratives and alternatives to the xenophobia, racism, misogyny, and homophobia of mainstream mass media. This prolific body of work by people of color and women—much of it undigitized, unpublished, and scattered across archives and out-of-print publications—amounts to a counterfactual history of cultural production, born of a political commitment to recovering this writing and reanimating the histories of cultural and media studies with previously unheard voices. Understanding media in its broadest sense (as encompassing a wide range of texts, moving images, audio, performances, etc.), the Reanimate collective comprises projects that seek to make counter-archives available online and in open access format. The need for open access materials featuring such work is acute, especially for public institutions that lack the resources to purchase textbooks that are costly, conservative, and slow to incorporate new perspectives or ideas.

This panel convenes partners in the collective, from Goldsmiths University Press, the Maryland Institute for Technology in the Humanities, Recovering the U.S. Hispanic Literary Heritage at the University of Houston and Arte Público Press, Salem State University, and the University of Oregon. Organized as a roundtable, the panel will give partners the opportunity to deliver brief presentations describing their investments in the project and goals for the collaboration. Topics to be addressed by roundtable participants include: the ethical and practical dimensions of recovery work; constructing pipelines for publication; designing intersectional feminist workflows, financial models, and labor models; challenges of e-book production and distribution; decision-making with Text Encoding Initiative standards and web solutions; the affordances of multilingual textual recovery; and managing hosting and server-side administration for a distributed network of collaborators. Six panelists will speak for 10 minutes each on the topics above, followed by 60 minutes reserved for conversation with the audience about the collaboration and the overlap between the dimensions of the project identified by the speakers.

 
10:45am - 12:15pm#SJ2: Space and Place Paper Session 2
Session Chair: Francesca Giannetti
Marquis A, Marriott City Center 
 

Creative Capital: Collaborative Approaches to Developing and Supporting Digital Contexts for Hyperlocal Histories

Jim McGrath

Brown University, United States of America

This paper describes ongoing cultural initiatives in Providence, Rhode Island that focus on “hyperlocal histories” of the city and its communities: narratives, archival collections, and exhibits that take a close look at particular neighborhoods, buildings, parks, residents, political operatives, murals, artists, and activists (among other topics) over time. It will call particular attention to the ways these initiatives consider digital contexts for their work and what ideas of value seem to inform decisions to utilize particular tools, platforms, social media networks, forms of digitization, curation, and preservation. Providence is nicknamed “The Creative Capital” because it is home to an active community of artists and educators, many of whom focus their efforts on stories of the city. In 2019, a collaborative initiative titled “Year of The City” will document the varied ways local residents and cultural institutions think about Providence and the long histories of its twenty-five neighborhoods. I am particularly interested in when and how these images and perspectives on the city circulate online, materialize in archival records, and reach local publics as well as national or global audiences. In looking at the uses of particular technologies, methodologies, publication platforms, and avenues of outreach and dissemination, I will highlight the obstacles and challenges facing creators of hyperlocal histories within and beyond the contexts of higher ed, and I will document forms of collaboration, pedagogy, and institutional support that can aid efforts in these local contexts.

This paper will highlight particular digital initiatives invested in the hyperlocal: developing work tied to “Year of The City,” the popular podcast Crimetown, the mobile phone tours offered by Rhode Tour, and images and ideas of Providence on Wikipedia. It will also consider popular efforts that prioritize non-digital contexts: the Rhode Island Collection of materials at the Providence Public Library, the Dirt Palace feminist art space, physical tours of what was once the city’s Chinatown neighborhood, and artistic interventions at Mashapaug Pond by the Urban Pond Procession. By bringing digital contexts related to technologies of tours, exhibits, archives, and audio storytelling into conversation with non-digital projects invested in these same metaphors and methodologies, I will assess possibilities of remediation, alternate forms of dissemination and strategic re-use of digitized and born-digital materials, and possibilities for collaboration. I will also discuss the work of ethical and inclusive forms of collaboration, resource-sharing, and compensation that I have been part of in recent efforts to connect digital humanities practitioners at Brown University with community partners and practitioners. While Providence has its own particular challenges and contours, my hope is that a candid assessment of ongoing work related to hyperlocal histories will help conference attendees consider how their own research and resources might lead to generative partnerships with local practitioners and audiences.



Urban Panorama: t-SNE Street Feature Mapping Tool

Frederico Freitas, Todd Berreth

North Carolina State University, United States of America

In the last two decades, historians have increasingly employed GIS to understand urban and spatial change. However, GIS approaches landscapes from above, reproducing the point of view of the planner. The Urban Panorama Project aims to introduce a new dimension in the historical assessment of how cities change. By shifting the focus from geometric parcels, as seen from the air, to images of streetscapes, as seen at the street level, we intend to move closer to the perspective of the people experiencing change in the space of the city as they traverse its streets and avenues. The Urban Panorama Project, therefore, is testing different computer vision and machine learning techniques to assess historical and present-day images of streetscapes to investigate urban change. One of these techniques, and the object of this presentation, is exemplified in our t-SNE Street Feature Mapping Tool. The tool allows the visualization of clusters of tens of thousands of building facades, or other desired street-level visual features, scraped autonomously from geolocated street-level photographic corpora. The system leverages two popular machine-learning techniques and open-source software libraries. The first, YOLO, is a neural-network-based computer vision object detection system. We are able to train desired image classifiers (e.g., what a building facade, commercial sign, or mailbox looks like) by providing hundreds of annotated example images. Such classifiers are then used to autonomously mine photographic archives and extract these features (and their associated geolocation/temporal metadata), depositing the results in a database. The second, t-SNE (t-distributed stochastic neighbor embedding), is a machine learning technique for dimensionality reduction, useful for visualizing high-dimensional datasets. Unlike YOLO, it does not require human training—in our case we used the t-SNE technique to analyze all images in a particular feature corpus—i.e., all building facades, etc.—and autonomously cluster visually similar images together in a spatial plot. In this presentation, we will present the tool and its application to a corpus of geolocated historical (1920-1980) and present-day streetscape images of Raleigh, NC. The tool uses an intuitive thumbnail grid interface where features are selectable and visualized as a heatmap layer in an adjacent city map. The t-SNE Street Feature Mapping Tool will help researchers within the Urban Panorama Project use historical streetscape images to understand the spatial distribution of urban features across the space of the city. With the development of this tool, we can tap into corpora of street-level historical images to understand the spatial distribution of streetscape features in a city and compare different time periods. Phenomena such as gentrification, urban decay, the spread of architectural styles, the use of different materials, textual analysis of urban signs, social uses of public space, urban flora, etc. could be mapped with this technique.
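As an illustrative sketch of the clustering stage, the t-SNE step can be reproduced with scikit-learn; the random vectors below are hypothetical stand-ins for the image descriptors the project would extract from YOLO-detected facade crops (the library choice, dimensionality, and parameters here are assumptions, not the project's actual code):

```python
import numpy as np
from sklearn.manifold import TSNE

# Stand-ins for feature vectors of facade crops: in the real pipeline these
# would be descriptors computed for each YOLO-detected building facade.
rng = np.random.default_rng(0)
features = rng.normal(size=(100, 512))   # 100 facades, 512-D descriptors

# Reduce to 2-D so visually similar facades land near one another;
# the resulting coordinates can drive a thumbnail-grid layout.
tsne = TSNE(n_components=2, perplexity=20, random_state=0)
coords = tsne.fit_transform(features)    # one (x, y) point per facade
```

Each 2-D point can then be joined back to the facade's geolocation metadata, linking the cluster plot to the adjacent city map.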



Slave Streets, Free Streets: Mapping the Dispossessed and Un-Addressed in Early Baltimore

Anne Sarah Rubin, Dan Bailey

University of Maryland, Baltimore County, United States of America

Advances in historical GIS have made it possible to map the past in ways that would have seemed impossible a few years ago, georeferencing disparate maps in order to build deeply accurate visualizations and recreations. But what happens when we reach the limits of our information and sources? How do we map people who don’t have addresses?

This paper grows out of an effort to map the lives and locations of free blacks and enslaved workers in an immersive map of Baltimore, circa 1850. One of us is a historian of the 19th-century United States, and the other is a photographer and animator. This deeply researched, detailed site, Visualizing Early Baltimore (http://earlybaltimore.org), is like a Google Map for the 19th century and serves as our basemap. We want users and researchers to be able to walk down the streets of this virtual city and learn about the lives of people usually left out of historical narratives. We also believe that these maps will show the degree to which free blacks and enslaved workers lived in an integrated, rather than segregated, world, a marked contrast to the deeply segregated Baltimore of today.

This paper will discuss our efforts to solve this problem in ways that have implications for historical reconstructions of other cities in the years before the mid-nineteenth-century standardization of urban addresses. We have chosen to foreground our uncertainty, to show the dearth of information about black lives and spaces, even as we attempt to bring forward the names of individual enslaved people and free blacks so that their places on the historical landscape can be visualized and contextualized.



Mapping City-Scale Reading Events: Geography and Sentiment of "One Book One Chicago"

Ana Lucic, Mihaela Stoica, John Shanahan

DePaul University, United States of America

In The Bestseller Code (2016), Jodie Archer and Matthew Jockers argue that “while it does matter whether an author chooses a city or the wilderness, the specific city does not matter all that much when it comes to bestselling.” In this paper, researchers from the “Reading Chicago Reading” (RCR) DH project team will demo tools and methods for determining whether the settings of books do in fact have a measurable influence on reader popularity.

The RCR project studies the Chicago Public Library’s (CPL) ongoing city-wide “One Book One Chicago” (OBOC) program to capture correlations between circulation data and outreach programming, and to create tools and predictive models that would help librarians increase patron engagement. We will provide a walk-through of analysis and visualizations of CPL checkout data, associated social media, and community programming events since 2011. If literary representation of place and real geography have detectable links to one another, our RCR project data can capture the effect.

Using city-wide library branch data we have received from CPL, we first ask whether Chicago-themed OBOC selections check out at the same rate as non-Chicago-located OBOC selections. Initial results indicate a statistically significant difference in checkout numbers per branch even though CPL maintained roughly similar marketing efforts during each season.

Next, we will explain how real and imagined geography in texts do and do not directly relate, and how sentiment can link to place. Using Stanford NLP tools, we extracted Chicago locations in several recent OBOC works for which we have real branch-level circulation statistics, along with sentiment scores assigned using automated methods. Our visualization reveals pockets of city space that consistently or exclusively receive negative sentiment scores in the selected books and shows checkout effects.
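The aggregation behind such a map can be sketched as follows (a hedged illustration, not the project's code: the place names and sentiment scores are invented, and the extraction step is assumed to have already happened upstream):

```python
from collections import defaultdict
from statistics import mean

# Hypothetical output of an NER + sentiment pipeline: (place, score) pairs,
# where each score is the sentiment of a sentence mentioning the place
# (negative values = negative sentiment).
mentions = [
    ("Bronzeville", -0.6), ("Bronzeville", -0.4),
    ("Hyde Park", 0.3), ("Hyde Park", -0.1),
    ("The Loop", 0.5),
]

by_place = defaultdict(list)
for place, score in mentions:
    by_place[place].append(score)

# Mean sentiment per place, ready to join to a map layer by neighborhood.
place_sentiment = {p: mean(scores) for p, scores in by_place.items()}
```

Joining the per-place means to neighborhood polygons is what makes consistently negative pockets of city space visible on the map.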

Finally, we will present spatial correlations between location-based sentiments and socio-economic, demographic, and crime statistics using American Community Survey (2012-2016) and Chicago Police Department data. Initial analysis shows that most sentiment metrics, especially negative ones, are higher in areas with higher inequality (as measured by a Gini index of 45%+).

Our analysis suggests that in some cases, and despite Archer and Jockers’ provocative claim, the particular city does matter when readers are in that same city and can recognize place names.

 
10:45am - 12:15pm#SJ3: Mixed Methods Research Panel
Session Chair: Calvin Pollak
Marquis B, Marriott City Center 
 

Mixed Methods Research as Storytelling with Data: Making Sociohistorical Meaning from Digital Projects

Calvin Pollak1, Laura McCann1, Alicia Urquidi Díaz2, Timothy Brown3

1Carnegie Mellon University; 2University of British Columbia; 3University of Washington

Scholars are increasingly concerned with integrating quantitative and qualitative methods of analysis to study emerging technologies. Such scholarship often teaches us as much about people’s individual and collective narratives as it does about technological change. In this panel, we ask: how can we draw on the sociohistorical context provided by humanistic knowledge in order to tell stories with (and about) data?

Two of our panelists are studying how media discourse travels intertextually online, influencing people’s views about political issues related to the Internet. The first is analyzing blog posts about government surveillance that were published by two elite think-tanks: the American Civil Liberties Union and the Brookings Institution. This study compares the keywords and rhetorical strategies that these posts tended to feature before the 2013 NSA leaks of Edward Snowden to those they tended to feature after the leaks, thus empirically distinguishing the surveillance rhetorics of civil-libertarians and national-securitarians. The second panelist is considering how the New York Times’s coverage of the Equifax and Cambridge Analytica data breaches was recontextualized through public discourse. This project draws on rhetorical corpus analysis and natural language processing to ask how narratives develop intertextually after information is circulated by major media institutions and people read and react to their stories.

Our third panelist is concerned with the ways that traditional normative grammar, informed by (philological) corpus analysis, often reduces the complexity of language variation. Linguistic features tend to be classified into discrete categories: correct, preferred, vulgar, obsolete, regional, colloquial, etc. What this conceals is the underlying debate: who gets to decide what things mean? In this panelist’s Twitter dataset (derived from the hashtag #RAEconsultas), such a debate is documented in a network of discussions allowing factions to be identified: gender-inclusive language advocates, language purists, social conservatives, etc. and, in between them all, the normative voice of the Royal Spanish Academy (Real Academia Española, or RAE). Using qualitative content analysis and network analysis, this case study explores how changing notions of natural gender, propagated online, are challenging traditional grammar.

Lastly, our fourth panelist is investigating the moral implications of an emerging technology for treating people with Essential Tremor, a progressive motor disorder that causes uncontrollable tremors in the limbs. The technology, adaptive Deep-Brain Stimulation, enables users to control their symptoms with their brains, with the potential to profoundly change their sense of self-identity, feelings of self-control, or even their close relationships. Through phenomenological interviews with new users, electrical engineers, neuroscientists, and medical practitioners, this project provides humanistic, narrative context for understanding this technology. By taking seriously the stories of people with disabilities and respecting their voices, this project encourages designers of future medical technologies to incorporate their insights.

We seek to push conversations in the Digital Humanities beyond simply showcasing data-driven methods and towards a more critical reflection upon the meaning and purpose of our work. Who is represented in our datasets, what were / are they doing, and why? And what stories do digital datasets allow us to tell that we might otherwise overlook?

 
10:45am - 12:15pm#SJ4: New Media Paper Session 2
Session Chair: Caroline Schroeder
Salons 4 & 5, Grand Ballroom, Marriott City Center 
 

Making Up: The “Post-human” Bodies and Gender Disobeying at the Turn of the Millennium

Slavna Martinovic

Academy of Fine Arts Vienna, Austria

This paper traces the development of the “post-human” looks and lifestyles of artists, performers, and individuals better known through their Instagram pseudonyms, like @isshehungry, @salvjiia, and @ines_alpha, to name a few. I will analyze how their makeup styles are “shaping” their corporal and gender representation, creating something that surpasses male or female, beyond human for that matter. These artists, performers, and individuals, at one point called the “club kids,” are becoming celebrities and entrepreneurs through social media, with hundreds of thousands of followers, forming their own economies and influencing the mainstream, while at the same time recycling the mainstream into something larger than life.

Instagram, a perfect medium for the digital exposure of “public intimacy” (Thrift), allows an embodiment of a certain “allure” that Thrift defines as “the creation of worlds in which the boundaries between alive and not alive and material and immaterial have become increasingly blurred” (Thrift, 2010). Digital “immateriality” (Parikka) allows the construction of gender identity and corporality unconstrained by the physically possible, accelerating the mixing of different “specificities” and embodiments.

By briefly tracing the history of individuals who fought normative gender throughout the 20th century, my aim is to shed light on these crucial pioneers of “the necessity to transform” that “come(s) not to claim the rightness but to dismantle the system that meters our rightness and wrongness according to the dictates of various social orders” (Halberstam, 2018).

Secondly, I will unpack illustrations of the embodiment of early “digital materiality”: the simulation of self in the form of an avatar, constructed by juxtaposing high and low culture, motivated by the post-structuralist fragmentation of self, blurring the uncompromising differences between human and animal, and human and machine (Haraway).

Finally, by considering the notion of flow, compellingly interpreted by Boris Groys - the material flow being irreversible, and the Internet being founded on the possibility of return and reproduction - I will consider the effects of “zooming in” through outlets like “insta stories” or going “live”: a perpetual process, directed at both self and others, of interminably curating, creating, performing, and transforming one's corporality through the processing of images, data, information, knowledge, fashion, makeup, photo apps, and even personalized augmented face filters.

In conclusion, I will be offering answers to the following questions:

How did the medium of the Internet change the message (McLuhan)? How is it changing the way we send the message? How has this continuous window into “public privacy” enabled us never to end the message?

Bibliography

Groys, Boris (2016). In the Flow. London: Verso.

Halberstam, Jack (2018). “Unbuilding Gender.” Places Journal, October 2018. Available at: https://doi.org/10.22269/181003 (Accessed: 23/01/2019).

Haraway, Donna (2004). The Haraway Reader. New York: Routledge.

Thrift, Nigel (2010). “Understanding the Material Practices of Glamour.” In The Affect Theory Reader. Kindle edition.



Not Too Close-Reading, Not Too Distant-Reading: Mixed Methods for Social Media Analysis

Aimée Hope Morrison

University of Waterloo, Canada

Social media presents complex and multimodal sets of objects, practices, and meanings that require interdisciplinary competencies to understand, analyse, and interpret. The massive scope, scale, and variety of such texts further requires new methods for text selection: neither massive automated data scraping nor boutique-style artisanal close reading seem appropriate.

This paper describes how interdisciplinary fields and methods can be drawn together to understand the full action of social media life writing in the world. I propose a generalizable and customizable mixed methods process employing some combination of the following: data collection and analysis tools drawn from digital humanities; close reading and surface reading methodologies drawn from literary fields; grounded theory study design drawn from sociology; thin and thick description drawn from anthropology and ethnography; and the delineation and interpretation of the software and hardware affordances drawn from new media studies.

1-Explore and Engage: The first step involves wide exploration and reading among linked texts (images on Instagram tagged "#effyourbeautystandards," and those that link to them, for example) in order to get a sense of the scale and scope of a set of practices: who does it, how, and why? This reading practice is embedded, context-driven, and interpretive; it traverses a field of texts rhizomatically, across webs of connection, in order to discern emergent patterns from a diffuse set of instances.

2-Categorize: From this emergent sense of the contours of a given set of practices or a community, I next engage in a thin description of the practice: a main outcome of the research lies precisely in this work of meticulously describing what constitutes membership in the community, the goals of communication, and the boundaries of shared practices, as well as themes and content. This thin description gives a full contextual reading of the purpose of the communications and how they perform meaningful work in a given community. From this emerges a sense of rhetorical genre: a delineation of who comprises the community of practice, what goals they aim to accomplish through these communications, and how these goals are advanced through specific and describable compositional practices.

3-Select: In the third stage of research, both exemplary (unusually skilled or somehow noteworthy to the broader community of practice) instances of the determined genre and representative (typical of the larger class) ones are chosen to serve as target texts for analysis and interpretation. This work employs literary strategies of discernment and discrimination, modes of scholarly judgement that animate any choice of primary text in print or otherwise. Rhetorical genres at scale ("fat fashion selfies," or "the Kiki challenge") must be described at the more general level of purpose, strategy, and community as I suggest above.

4-Interpret: Fourth, these exemplary and representative instances are subjected to literary-inflected close reading practices that interpret the means by which each instance performs the work central to the genre described in Stage 2, and which characteristics mark it as an exemplary or representative instance.

 
10:45am - 12:15pm#SJ5: Print and Probability Panel
Session Chair: Christopher Warren
Marquis C, Marriott City Center 
 

Print and Probability: Computer Vision Approaches to Clandestine Publication

Christopher Warren, Stephen Wittek, Dan Evans, Matthew Lincoln, Shruti Rijhwani

Carnegie Mellon University, United States of America

Scholars frequently turn a blind eye to this remarkable fact, but there are over 100,000 early modern books and pamphlets whose printers remain unknown. Before the modern era, the book trade was often dangerous and secretive. For fear of persecution and punishment, printers between 1473 and 1800 declined to attach their names to about a fifth of all known books and pamphlets. However, now that over 130,000 books have been digitized by Early English Books Online (EEBO), anomalies and variations in the printing materials of this era may hold the key to identifying these printers. Painstaking, individual studies by historians, editors, and analytic bibliographers have found tell-tale clues in individual characters from this era, due to damaged type pieces. Speakers on this panel will offer four linked case studies based on their work identifying and aggregating such anomalies at scale. Tackling mysteries that in some cases have confounded scholars for centuries, panelists will present new printer ascriptions derived from computer vision and machine learning, amidst topics such as anonymization, piracy, distributed printing, and fictitious imprints.

 
12:15pm - 1:30pm#ACHAGM: ACH Annual General Meeting
Salon 2 & 3, Grand Ballroom, Marriott City Center 
12:15pm - 1:30pm#Lunch3: Lunch Break
 
1:40pm - 2:40pm#SK1: Experiencing the Self Panel
Session Chair: Kristen Lillvis
Marquis A, Marriott City Center 
 

Experiencing the Self Through Transduction, Distortion, and the Glitch

Kristen Lillvis1, Steven Smith2, Kristin Steele1

1Marshall University, United States of America; 2North Carolina State University, United States of America

In her study of “technogenesis,” Katherine Hayles highlights the “dynamic interplay between the kinds of environmental stimuli created in information-intensive environments and the adaptive potential of cognitive faculties in concert with them” (97). Drawing upon the work of evolutionary biologists, Hayles asserts that feedback loops intertwine humans with their techno-rich surroundings, and epigenetic changes result in environmental modifications that “favor the spread of these changes” (100). The movement between analog and digital, or physical and virtual, provides an opportunity for the subject to assess and potentially transform the experience of self. This panel draws on theories of transduction and the glitch to explore how digital writers and rhetors exploit the boundary between analog and digital to create occasions for self discovery. Specifically, each paper shows a different effect of technogenesis as related to the co-evolution of the environment and the subject.

Bringing posthumanism, glitch feminism, and queer digital humanities into conversation with theories of neuronal plasticity, Speaker 1 follows Hayles’s statement that digital media can “subvert and direct the dominant order” (83) to assert that glitches in works of electronic literature (such as Whitney Anne Trettien’s glitch lit webtext Gaffe/Stutter (2013)) position difference as fundamental to artistic creation and interpretation. Speaker 1 argues that the movement away from literary hegemony (an extension of Hayles’s argument) can perhaps best be seen when considering the self as writer and reader—the author/reader of electronic literature must reject the dominance of an established notion of “literariness” and create out of the destruction of the glitch.

Speaker 2 will explore technogenesis by considering how technology and teaching/communication practices have co-evolved through an increase of technology’s capabilities to track the human body. Speaker 2 argues that in today’s era of hands-free, natural-user interfaces, digital rhetors can contribute to a new approach to the digital canon of delivery—one that explores the ways the data associated with a user’s gestures and posture can be creatively transduced. With the help of depth cameras like Microsoft’s Kinect v2, users in front of the sensor are tracked as complex, three-dimensional skeletons. Each user-as-skeleton is divided among more than 20 joints, and each joint comprises three dimensions of real-time data. Showing concrete examples of technogenesis in the classroom, Speaker 2 asserts that motion-sensing technologies allow differently-abled bodies to participate in meaningful classroom activities that allow for unique experiences of self-reflection.
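A minimal sketch of what "transducing" joint data might look like (the joint names, coordinates, and mapping below are illustrative assumptions for exposition, not Kinect SDK code):

```python
# Each tracked body arrives as a map of named joints to (x, y, z)
# camera-space coordinates; a "transducer" turns raw joint data into an
# expressive parameter -- here, how high the right hand is raised
# relative to the span between hip and head, normalized to 0..1.
def hand_raise_level(joints):
    head_y = joints["head"][1]
    hip_y = joints["spine_base"][1]
    hand_y = joints["hand_right"][1]
    span = head_y - hip_y
    if span <= 0:                    # guard against degenerate frames
        return 0.0
    return max(0.0, min(1.0, (hand_y - hip_y) / span))

# One hypothetical frame of tracking data (meters, camera space).
frame = {"head": (0.0, 1.6, 2.0), "spine_base": (0.0, 0.9, 2.0),
         "hand_right": (0.3, 1.25, 1.9)}
level = hand_raise_level(frame)      # hand roughly halfway between hip and head
```

A parameter like this could then drive sound, visuals, or text in a classroom activity, which is the sense in which gesture data is "creatively transduced" rather than merely recorded.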

Speaker 3 explores how distortion in new media, particularly video essays, expresses difference from normative structures. Referencing Richard Misek’s "In Praise of Blur," Speaker 3 argues that as blur and distortion shape and reflect the embodied experiences of the viewer and the artist, these effects alter ideas of disembodiment as well. While distorted moving images defy normative structures of seeing, they call into question the subjective experience of vision itself, whether by seeing exterior spaces or by witnessing one’s own interior world. Using examples of artistic blur and distortion, Speaker 3 shows how these techniques—which can be achieved by digital and/or print technologies—offer an occasion to represent altered states of mind.

 
1:40pm - 2:40pm#SK2: Space and Place Paper Session 3
Session Chair: Ruth M. Mostern
Marquis B, Marriott City Center 
 

Linking Pasts with Place Names and Gazetteers

Ruth M. Mostern1, Karl Grossner1, Ryan Matthew Horne1, Tom Elliot2, Ethan Gruber3

1University of Pittsburgh, United States of America; 2ISAW, New York University; 3Nomisma.org

This roundtable introduces four interrelated topics in the spatial humanities: 1) how to find, model and use historical place names - the non-GIS-ready form in which spatial information generally appears in unstructured texts and maps; 2) gazetteer databases as resources for formalizing information about places, including names, relationships, and attributes; 3) the impact that linked open data methodologies are having in terms of connecting disparate specialist gazetteers and other resources replete with place name data; and 4) the World-Historical Gazetteer (WHG) initiative as an instantiation of these concepts.

Panelists will include members of the WHG team and representatives of related projects including Pelagios and PeriodO. A WHG advisory committee meeting is taking place just prior to the ACH meeting, bringing many specialists in this domain to Pittsburgh in July. The roundtable is intended to explore ontological, epistemological, and infrastructural questions, and should therefore appeal to people with either humanistic or technical orientations.

For instance, we will discuss how our linked gazetteer systems support research focused on cross-regional exchanges, connections, and comparisons by allowing users to contribute not only place data, but "trace" data – annotations of data records about historical entities of any kind with identifiers for places associated with them. Trace entities fall within broad categories of events, people, works, and artifacts. We will also briefly introduce two new data formats developed in collaboration with the Pelagios team and others: Linked Places format and Linked Traces annotation format. These developments are situated in the broader context of an emerging Linked Pasts initiative aimed at connecting not only places but all manner of historical data.

 
1:40pm - 2:40pm#SK3: Advancing Library Collections Data Panel
Session Chair: Tyrica Terry Kapral
Marquis C, Marriott City Center 
 

Advancing Library Collections Data: Scholar-Applied Data Layers, Humanistic Inquiry, and Reflective Practice

Kate Joranson, Tyrica Terry Kapral

University of Pittsburgh, United States of America

“Do you have any artists’ books by women of color?” A curator posed this question regarding the artist book collection at the Frick Fine Arts Library (FFAL) at the University of Pittsburgh (Pitt). This question has prompted three years of undergraduate humanities data work in the University Library System’s art library and is the foundation of our reflective practice of examining the intersection of discoverability, social justice, and ethical data practices. In these projects, undergraduate scholars in the Archival Scholar Research Awards (ASRA) program at Pitt have used the library’s archives, special collections, and primary sources to conduct original research, producing new metadata and scholarship for distinctive and diverse collections. Beginning with their own research questions, these scholars have generated metadata for materials that enable them to answer those questions. For example, former ASRA students have described the library’s holdings of the Black Panther publication to account for the presence of internationalism and issues regarding women’s health, women’s imprisonment, and the LGBT community in each issue. Currently, the work of these scholars is made available via library guides (LibGuides). This program has proven to be a robust avenue for engaging students in humanities data projects that explore the role of interpretive metadata in library collections data, and it demonstrates the great potential in collaboration between the library and humanities scholars. Building on the data of the traditional catalog records with scholar-applied data layers provides critical perspectives on materials that are not quite captured in the MARC record, which can enable scholars to ask different kinds of questions of collections and to pursue humanistic inquiry in new ways, including digital humanities methods.

Furthermore, the work of ASRA students extends the capacity of collections as data by generating descriptive metadata that can support computationally driven research. Often, this metadata is interpretive, stemming from cultural and critical research questions concerning issues such as identity and embodiment. Because these data layers are inherently subjective and not static, it is important to acknowledge them as such, which existing collection records cannot do. In fact, it is not appropriate to incorporate this kind of interpretive data into collection records, since authorship of this metadata is key. The metadata itself becomes an artifact of particular inquiries at a particular moment in time.

As a result of the humanities data work of ASRA scholars, we are working to standardize the metadata and processes for creating scholar-applied metadata. Another related goal is establishing a means for hosting scholar-applied data layers that are linked to catalog records. We will briefly reflect on our work in progress toward these goals, including continued work with ASRA students, collaboration between library units (e.g., FFAL, Archives & Special Collections, Digital Scholarship Services, Metadata & Discovery), and navigation of the library’s organizational infrastructure. Questions we would like to discuss regard the challenges of interpretive metadata, integrating/linking scholar-applied metadata with library catalog records, the differences between library data creation practices and that of humanists, and the library’s role in facilitating the sharing of scholar-applied metadata.

 
1:40pm - 2:40pm#SK4: The Life of a Lab Roundtable
Session Chair: Aaron L Brenner
Salon 2 & 3, Grand Ballroom, Marriott City Center 
 

The Life of a Lab

Aaron Brenner1, Sarah Connell2, Jennifer Grayburn3, Matthew Hannah4, Brad Rittenhouse5, Brandon Walsh6

1University of Pittsburgh, United States of America; 2Northeastern University, United States of America; 3Union College, United States of America; 4Purdue University, United States of America; 5Georgia Institute of Technology, United States of America; 6University of Virginia, United States of America

The “Life of a Lab” roundtable will bring together lab leadership from six different institutions to give a cross-sectional view of DH Labs and the salient and transient concerns they face as they develop from embryonic to established. The labs vary widely in many aspects including their lifespans, funding structures, physical spaces (or lack thereof), pedagogical models, intellectual and project foci, and target audiences. We hope this will allow all attendees to find something useful in the session.

One of the major issues we will discuss is funding. Some of the labs are internally funded, others are grant funded, and several are unfunded or provisionally funded as yet. We will discuss the ways different funding models affect the ability of the labs to define and accomplish goals, find both literal and metaphorical space on campus, and serve their communities. One of the lab leaders, whose lab largely runs through a grant, will think about the advantages and disadvantages of being grant-funded and, therefore, largely uninstitutionalized.

The represented labs developed out of and serve a variety of disciplinary and pedagogical formations. We will discuss the effect this has on the programming, research, and infrastructure of the different institutions, and how the varying concerns of different labs intersect with overarching strategies for funding, leadership, and space. Additionally, we will talk about the ways these labs incorporate workers (student and otherwise) into their missions, and how they seek to both replicate and break out of their traditional pedagogical and methodological discourses.

Of course, strategies that work at one school may not translate to another, so we have included centers from a diverse slate of institutions: technical and liberal arts, state and private, large and small, and of varying levels and different kinds of sociocultural diversity. Several of the labs exist, willingly or unwillingly, outside what might be considered traditional humanities lab models. A leader at a prominent lab will grapple with the ways the established physical space of his lab influences student outreach and development, while another participant, working to establish a brand new lab, challenges the necessity and desirability of a separate department and central space in the first place. Several panel participants from technical schools will think through the issues of developing a DH center at an institution dominated by STEM, while another will discuss some of the challenges of bringing together research communities that identify as pursuing digital humanities, digital scholarship, and computational social science. The final panelist will discuss opportunities for redefining his lab's space and identity through a variety of pop-up and pilot labs.

Ultimately, this roundtable will not be about providing answers for the attendees, though all of the participants have been involved in the implementation of a variety of solutions in their roles in lab leadership. Rather, we hope to have a conversation in which we and the audience can think through the infinite intricacies of developing sustainable, diverse, and intellectually robust spaces for DH in the academy.

 
1:40pm - 2:40pm#SK5: Quantitative Textual Analysis Paper Session 2
Session Chair: Patrick Juola
Salons 4 & 5, Grand Ballroom, Marriott City Center 
 

Named Entity Recognition in the Humanities: The Herodotos Project

Brian Daniel Joseph1, Christopher Gerard Brown1, Micha Elsner1, Marie-Catherine de Marneffe1, Alexander Erdmann1,2, James Wolfe1, Colleen Kron1, William Little1, Andrew Kessler1, Petra Ajaka1, Benjamin Allen1, Yukun Feng1, Morgan Amonett1, Amber Huskey1, Charles Woodrum1

1The Ohio State University, United States of America; 2New York University Abu Dhabi

Solutions to the computational linguistic problem known as Named Entity Recognition (NER) have much to offer humanists. In this presentation, we offer a “proof of concept” in support of this assertion by introducing the Herodotos Project and its development and use of NER systems for Classical languages.

NER is a machine learning technique that trains on large amounts of text in which humans have manually, accurately, and consistently identified the named entities of interest. Trained NER systems then learn to generalize, identifying and distinguishing the annotated classes of named entities in other, similar texts automatically. While high-quality NER systems are available for “larger” high-resource languages like English and for domains like newswire, no such systems are readily available for Classical Latin or Greek, both low-resource languages, or for the range of text-types available in them.
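A minimal sketch of the kind of annotation such training relies on: converting human-marked entity spans into token-level IOB tags, a standard interchange format for NER training data. The Latin example and the PERSON/GROUP labels below are illustrative assumptions, not the Herodotos Project's actual annotation scheme.

```python
def to_iob(tokens, spans):
    """Convert entity span annotations to token-level IOB tags.

    spans: list of (start, end, label) token-index ranges, end exclusive.
    Tokens outside any entity are tagged "O".
    """
    tags = ["O"] * len(tokens)
    for start, end, label in spans:
        tags[start] = "B-" + label            # B- marks an entity's first token
        for i in range(start + 1, end):
            tags[i] = "I-" + label            # I- marks continuation tokens
    return list(zip(tokens, tags))

# Hypothetical Latin sentence: "Caesar Belgas vicit" ("Caesar conquered the Belgae")
print(to_iob(["Caesar", "Belgas", "vicit"],
             [(0, 1, "PERSON"), (1, 2, "GROUP")]))
# [('Caesar', 'B-PERSON'), ('Belgas', 'B-GROUP'), ('vicit', 'O')]
```

A statistical tagger then trains on many such (token, tag) sequences and learns to predict the tags for unseen text.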

The Herodotos Project is an ethnohistory project that aims first to develop a catalogue of ancient peoples named in classical sources (i.e., groups, tribes, empires, etc.), and then to compile all known information about them and to map and describe their interactions. In this context, we have developed Latin-based and Greek-based NER systems, and with them we have been able to automatically identify, with a high degree of accuracy (over 90%), the names of ancient groups of peoples in classical texts as well as other relevant named entities, specifically persons and places.

We demonstrate here our systems and discuss the value that their development can offer to humanistic research. First and foremost, we outline how they can be generalized so as to be applicable to other languages and to other entities of interest, e.g. dates or quantities. Second, since the training of our NER systems has involved annotation of considerable amounts of Latin and Greek text, we detail other uses that our annotated Classical texts can be put to. Finally, we discuss several issues in the manual annotation process regarding the identification and distinction of person, group, and place names which hold implications of wider interest for research into onomastics, which we see as a basic humanistic enterprise given the importance of names and naming — think of historical figures, their often eponymous significant achievements or political movements, literary characters, etc. — to much of the humanities.



Hacking Multi-word Named Entity Recognition on HathiTrust Extracted Features Data

Patrick J Burns

New York University, United States of America

Multi-word named entity recognition (NER)—the automated process of extracting from texts the names of people, places, and other lexical objects consisting of more than one word—is a core natural language processing (NLP) task. Current popular methods for NER used in Digital Humanities projects, such as those included in the Natural Language Toolkit or Stanford CoreNLP, use statistical models trained on sequential text, meaning that the context created by adjacent words in a text determines whether a word is tagged as an entity or as belonging to a named entity (NE). This short paper looks at a situation where statistical models cannot be used because sequential text is unavailable and suggests a “hack” for approximating multi-word NER under this constraint. Specifically, it looks at attempts to extract multi-word NEs in the HathiTrust Extracted Features (HTEF) dataset. The HTEF dataset provides page-level token counts for nearly 16 million volumes in the HathiTrust collection as an effort to provide “non-consumptive” access to book contents. That is, HTEF data is provided in a pseudo-random manner—a scrambled list of token counts—and not as words in a consecutive, readable word order. Accordingly, statistical NER methods cannot be used with HTEF data.

In recent projects involving HTEF data, I have had initial success with extracting multi-word NEs with a different approach, namely by 1. creating permutations of page-level tokens provided in the HTEF that are likely to be NE constituents; and 2. querying these permutations against an open knowledge base, specifically Wikidata, in order to determine if they are valid NEs. The method, implemented in Python, can be summarized with the following example: 1. given the phrase “New York and New Jersey”, we construct a dictionary with the following word counts—{‘and’: 1, ‘Jersey’: 1, ‘New’: 2, ‘York’: 1}; 2. taking all of the permutations of potential NE constituents, here defined by capital letters, we construct the following list—[‘Jersey New’, ‘Jersey York’, ‘New Jersey’, ‘New York’, ‘York Jersey’, ‘York New’]; and, lastly, 3. querying Wikidata for all items in this list, we return only positive matches as valid entities, i.e. the following list—[‘New Jersey’, ‘New York’]. This method does not account for all possible multi-word NEs (for example, because of the capitalization constraint in step 2, “City of New York” would not be considered due to the lowercase ‘of’), but nevertheless represents a novel solution for performing a core NLP task on a pseudo-random text collection. Moreover, it represents a good example of using a general knowledge base, like Wikidata, as a validation mechanism in NLP tasks. Lastly, it represents an attempt to push the boundaries of what can be derived from the HTEF dataset, while also respecting its non-consumptive nature, i.e. it is only trying to extract information from the dataset based on the likelihood of adjacent tokens, not to reconstruct entire sentences, paragraphs, or pages. With these points considered, the paper should be a useful model for other text-focused Digital Humanities projects looking to extend NLP methods to non-consumptive text collections or similarly challenging text-as-data datasets.
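The three steps described above can be sketched as follows. Since the abstract does not give the actual query code, the Wikidata lookup here is an assumption, mocked with a local set of known entity labels:

```python
from itertools import permutations

def candidate_bigrams(token_counts):
    """Step 1-2: ordered two-word permutations of likely NE constituents.

    Following the abstract's heuristic, a token is a potential NE
    constituent if it is capitalized.
    """
    constituents = [t for t in token_counts if t[:1].isupper()]
    return [" ".join(p) for p in permutations(constituents, 2)]

def extract_entities(token_counts, knowledge_base):
    """Step 3: keep only permutations validated against a knowledge base.

    `knowledge_base` stands in for a Wikidata query; here it is a plain
    set of known entity labels (an illustrative assumption).
    """
    return sorted(c for c in candidate_bigrams(token_counts)
                  if c in knowledge_base)

# Worked example from the abstract: "New York and New Jersey"
counts = {"and": 1, "Jersey": 1, "New": 2, "York": 1}
kb = {"New York", "New Jersey"}          # mock Wikidata
print(extract_entities(counts, kb))      # ['New Jersey', 'New York']
```

In practice the set-membership test would be replaced by a live query against Wikidata, and longer entities would require permutations of more than two constituents.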



Analyzing the Effectiveness of Using Character N-grams to Perform Authorship Attribution in the English Language

David Gerald Berdik

Duquesne University, United States of America

Authorship attribution is a subfield of natural language processing which can be applied to practical issues such as copyright disputes. While there are many different methods that can be used to perform such an analysis, the effectiveness of these methods varies depending on the material that is being analyzed as well as the parameters chosen for the selected methods. One of these methods involves using groups of n consecutive characters, called n-grams, where n refers to the number of characters in the gram. It is expected that n-grams of different lengths will vary in their accuracy of attributing the correct author to a questioned document. Specifically, it is expected that as n-grams become larger, performance will improve, reach a peak, and then begin to degrade.

Using Patrick Juola’s Java Graphical Authorship Attribution Program (JGAAP), we performed an analysis on Koppel and Schler’s blog corpus by taking all corpus entries with at least 300 sentences, separating their first 100 sentences and last 100 sentences into separate entries, and running n-gram tests from 1 to 50 to determine what an ideal size would be for performing authorship attribution using character n-grams. Based on the results of testing, we showed that contrary to the anticipated bell curve-like performance, n-gram accuracy peaks fairly early before beginning its decline. Future work will include character n-gram analysis on different languages to determine how much variance, if any, is present between languages.
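JGAAP itself is a Java application; the sketch below only illustrates the underlying idea of character n-gram attribution, using cosine similarity between frequency profiles as the comparison method (a common choice assumed here, not necessarily the one used in the study):

```python
from collections import Counter
import math

def char_ngrams(text, n):
    """Character n-gram frequency profile of a text."""
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def cosine(p, q):
    """Cosine similarity between two n-gram frequency profiles."""
    dot = sum(p[g] * q[g] for g in p if g in q)
    norm = (math.sqrt(sum(v * v for v in p.values()))
            * math.sqrt(sum(v * v for v in q.values())))
    return dot / norm if norm else 0.0

def attribute(questioned, candidates, n=3):
    """Attribute a questioned text to the candidate author whose
    known writing has the most similar character n-gram profile.

    candidates: dict mapping author name -> known text.
    """
    profile = char_ngrams(questioned, n)
    return max(candidates,
               key=lambda a: cosine(profile, char_ngrams(candidates[a], n)))
```

Varying `n` from 1 to 50, as the study did, simply means rerunning such an attribution at each n-gram length and recording the accuracy.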

 
2:40pm - 2:50pm#Break7: Break
Grand Ballroom Foyer A, Marriott City Center 
2:50pm - 3:50pm#SL1: Pedagogy Paper Session 2
Session Chair: Rachel Schnepper
Marquis A, Marriott City Center 
 

Comics in the Archive: Digital Approaches to the April 1956 Newsstand

Daniel Worden, Rebekah Walker

Rochester Institute of Technology, United States of America

This paper will present the results of a spring 2019 project course entitled “Comics in the Archive”, in which students will participate in the creation of a digital archive as well as data visualizations and analyses of that archive. The course will engage students with a particular collection, the 202 comics on American newsstands in April 1956. This unique collection offers a snapshot of the comics industry during a crucial moment in the medium’s history. The “Comics Code Authority” had only recently been put into effect in 1956, so comics publishers were learning how to function within a newly censored industry. And, since the history of popular comic books has largely privileged singular, collectible moments in comics publishing -- such as the first appearance of major superhero characters, or the work of a particular artist -- it is rare to have a relatively comprehensive view of comics periodicals from a particular moment. We hope that this archive will give students and scholars access to comics history in a different way than has been typical in comics studies, where particular artists or characters tend to be emphasized rather than the broad swath of comics periodicals across genres and publishers.

In our “Comics in the Archive” project course, students will help produce a robust digital finding aid for the collection, engaging them with archival practice and asking them to consider how future researchers or fans will interact with the materials. Then, they will create prototype data visualizations and analyses of these unique visual and textual materials. The students will begin to analyze the comics with digital tools that will count, for example, the number of ads in each comic, the general length of each narrative in each comic, the number of panels per page, the number of words per page and per word balloon, and the prominence of certain genres in the comics. In so doing, we will seek to offer quantitative data and new historical analysis of a refined sample of comics publishing. We will be able to determine if the data lead us to new or surprising conclusions about comics history and the comics medium. For example, are page layouts and panel designs uniform and standardized, or is there a wide degree of variation in page layout and panel construction? How many pages of these 202 comics feature superhero narratives, and how many feature, for example, romance stories? By “drilling down” into this archive, we will be able to produce a robust snapshot of comics history.

The presenters will discuss the course structure and pedagogical methods, lessons learned, student work, and will present the working prototype of a digital edition which summarizes and collates findings. The intended audience is those who have vested interest in undergraduate learning outcomes, scaffolding of digital methods and assignments, and those interested in comics as unique print artifacts. Presenters are a comics studies scholar and a digital humanities librarian, both of whom collaborated on the creation and implementation of the project-based course.



Teaching the Digital Through the Ephemeral

Nora Christine Benedict

Princeton University, United States of America

How do we ethically teach the digital humanities when technologies are constantly changing, shifting, and leaving environmentally harmful footprints? What are the best practices for incorporating both theory and praxis into the classroom while also cultivating a sense of responsibility around the material realities of our digital culture? And more broadly, how do we present the digital humanities to students in non-Anglocentric contexts? In this paper I will provide a series of examples—and reflections—from my Latin American digital humanities course to help address these questions. More specifically, I will show how gathering data from Latin American ephemeral materials to use with various platforms, programs, and software allowed students to engage in meaningful conversations about the fragility of these rare documents, the regions that they emerge from, and the complex technological issues that plague much of the Global South. In this way, students acquired insight from a variety of materials ranging from Bolivian pamphlets about water conservation and indigenous land rights to Argentine flyers about legalizing abortion and Cuban “paquetes semanales” with the latest installments of news and popular culture. They then used the knowledge they gained from these documents to think deeply about the best ways to represent ephemeral materials with digital tools in ways that account for differing worldviews and sensitive content. For instance, while using TEI to mark up legal documents from El Salvador regarding human rights violations, students reflected on the value of semantics for capturing nuanced meaning while also questioning their unsettling position of power that arose with every single decision to identify and tag certain people or places in these documents.
Above all, these material and digital juxtapositions help us think critically about the problems that digital research interventions do and do not resolve in regions of the Global South, and possible ways for rethinking our angles of approach.



Promoting Undergraduate Research with Digital Technology

Song Chen

Bucknell University, United States of America

Many educators have emphasized the pedagogical value of undergraduate research experiences. They describe undergraduate research as a high-impact educational practice, a key means of engaging students in academic work, and an effective way of developing their intellectual skills. Yet engaging undergraduates in research is no easy task. Among other obstacles, language has been a formidable hurdle for teacher-scholars who want to engage students in research activities on pre-modern, non-Western history. Drawing on the author’s own experience designing and teaching a DH-enabled research-centered undergraduate course in pre-modern East Asian history, this paper explores the way in which digital technology may be utilized to promote undergraduate research and advance student-centered learning. It discusses in detail some key decisions that went into the design of the course and ensured its success: how tools and topics were chosen, how critical reflections on method and history are infused with technological instruction, and how assignments and activities are scaffolded to guide research novices. This paper also invites its reader to rethink the goal of undergraduate research. It argues that the primary objective of undergraduate research is not to train students into budding scholars in a specific academic discipline, but rather to engage them directly in the process of knowledge creation so as to help them become lifelong critical consumers of knowledge.

 
2:50pm - 3:50pm#SL2: Whose Infrastructure Is It Anyway Roundtable
Session Chair: Abigail Potter
Marquis B, Marriott City Center 
 

Whose Infrastructure Is It, Anyway: Cross-Atlantic Collaboration with DARIAH

Zoe Borovsky4, Quinn Dombrowski1, Thea Lindquist3, Abigail Potter2, Glen Worthey1

1Stanford University; 2Library of Congress; 3University of Colorado Boulder; 4University of California Los Angeles

Although the European and US-based digital humanities communities have long been joined under the ADHO umbrella, significant differences remain in their funding models and scale(s) of collective action. EADH, the organizational equivalent to ACH within ADHO, itself serves as an umbrella for five nation- or region-based associate organizations, each with its own website, branding, and membership structures (including, in some cases, the option of joining the local organization but not EADH).

This divergent structure of scholarly society organization is paralleled in the infrastructure efforts that have emerged (or not) on the two continents, most notably DARIAH (Digital Research Infrastructure for the Arts and Humanities) in Europe, and the defunct Project Bamboo in the US. DARIAH exists as one of the pan-European research infrastructures (known as ERICs), eligible for central EU funding, along with peers that include infrastructures for materials science, biotechnology, and ecosystem research. It also receives contributions (cash and in-kind) from its members and partners. Currently, DARIAH membership is structured around nations or intergovernmental organisations, and each participating country has its own national coordinator, projects, priorities, and resources. In situations where DARIAH works with a particular institution in a country that is not a DARIAH member, it lists that country as a “partner”.

With an eye towards sustainability, DARIAH has recently funded a set of outreach activities, DARIAH Beyond Europe. Through workshops held on the west and east coasts of the United States (fall 2018), and in Australia (winter 2019), DARIAH has facilitated knowledge exchanges between each workshop’s host region and DARIAH’s technical and social infrastructure (in the form of tools like TextGrid, a course registry, working groups, and pedagogical materials).

The presenters in this roundtable have accepted DARIAH’s invitation by participating in working groups and making use of DARIAH’s technical and pedagogical resources in their research and instruction. Nonetheless, we remain limited in our ability to shape the directions or priorities of DARIAH, due to our inability to “join” DARIAH under a nation-based membership structure. Bluntly put, with what US governmental agency would DARIAH be affiliated? DARIAH itself is open to exploring more flexible membership options, and roundtable participants are working towards reaching an agreement with DARIAH about what sort of organization(s) might be eligible for DARIAH membership.

We envision less a traditional roundtable, and more a discussion and debate with the audience following a brief, context-setting presentation on DARIAH. Is the ACH community interested in serving as a hub for DARIAH membership? Acknowledging US-European differences in culture, governance, values and priorities, would a US-European partnership with money and resources at stake lead to unproductive tension? How valuable is shared, supported, maintained infrastructure -- providing actual services, rather than simplistic “open-source-as-infrastructure” promises -- for US-based digital humanities? Should we as a community (within or outside of ACH) engage in strategic planning about whether and how to invest in infrastructure? What values should inform the decision to undertake our own infrastructure initiative, and/or to partner with European infrastructure?

 
2:50pm - 3:50pm#SL3: Latinxs in Digital Humanities Roundtable
Session Chair: Joel Zapata
Marquis C, Marriott City Center 
 

Manos a la obra: Latinxs in Digital Humanities

Joel Zapata1, María Esther Hammack2, Alexander Gil3, Carolina Villarroel4

1Southern Methodist University, Dallas, Texas, United States of America; 2University of Texas at Austin, Austin, Texas, United States of America; 3Columbia University, USA; 4University of Houston, USA

Borderlands are understood as intersections that share common land through historical, cultural, political and transnational systems. Digital humanists have begun to build projects that visualize, create alternate spaces and resources, and serve as counter-discourse interventions to negative representations of the US-Mexico border region, its communities and cultures. These projects highlight histories and stories that have been excluded from conventional border histories. In this roundtable, Latinas/os from various disciplines will present in Spanish and English. Each will discuss how the integration of DH practices in their humanities research has led them to dialogue and understand how “postcolonial digital humanities offers a language to ask of digital humanities important questions such as who is speaking, who is being spoken of, who is spoken for, which languages are being used, and what assumptions subtend its productions, distribution, and consumption” (Risam 346). Each speaker has 10 minutes, with 20 minutes open for discussion with the audience.

The first speaker will address the first initiative designed toward the creation of a network of scholars working in borderlands/border digital humanities. This collaborative project, “United Fronteras,” brings together scholars from various disciplines and universities across North America. Through visualizations and a digital map, it brings together works that use digital components to document the U.S.-Mexico Borderlands from multiple perspectives, from colonial times to the twenty-first century. Next, the second presenter will overview the design and development of a research center focused on multilingual technology innovation on the Mexico/U.S. borderland. Sites of Translation User-Experience Research Center is an interdisciplinary, community- and university-driven resource that facilitates the design of multilingual websites, software, and applications for a wide range of organizations. By collaborating with local organizations and training students to design, test, and disseminate technologies in multiple languages, this research center is a site of multilingual technology innovation.

The third speaker will examine how the project Chicana/o Activism in the Southern Plains Through Time and Space helps reveal an understudied portion of the Chicana/o Movement: the way it unfolded on the Southern Plains. This project centers on an approachable interactive map and timeline, and a curated collection of materials. It adds to scholarly and socially significant conversations, showing that the region was home to a burgeoning wing of the Chicana/o Movement and that instances of police brutality largely spurred this wing of the social justice movement and united the plains’ Mexican population across political ideology. The final speaker will discuss a forthcoming project tied to their dissertation. South of Slavery is a bilingual platform that traces the journeys and lived experiences of Black people who left the United States for Mexico seeking freedom and opportunity. This resource reconceptualizes the Mexico-US borderlands as a site that played an important role in the history of US slavery and freedom in the long nineteenth century. These speakers articulate a vision of Latinas/os engaging in DH practices and their efforts to find ways to fill gaps, deconstruct discourses, and resist coloniality within and beyond the US-Mexico borderlands.

 
2:50pm - 3:50pm#SL4: Digital Humanities and the Art Museum Roundtable
Session Chair: Benjamin Zweig
Salon 2 & 3, Grand Ballroom, Marriott City Center 
 

Digital Humanities and the Art Museum: Perspectives, Challenges, and Opportunities

Benjamin Zweig1, Ellen Prokop2, David Newbury3, Jane Alexander4

1National Gallery of Art, United States of America; 2Frick Art Reference Library; 3J. Paul Getty Trust; 4Cleveland Museum of Art

It is widely accepted among museum staff, from directors to docents, that a deep engagement with digital initiatives is crucial for American cultural heritage institutions to maintain their contemporary relevance and actively participate in society. However, there are diverse practices both between and within museums in regards to how investment in digital infrastructure, outreach, and research can be effectively deployed. In response to these problems, this proposed one-hour roundtable seeks to generate a conversation about how both digitization and computational methods are transforming art museums and public galleries.

The panel will consist of representatives leading major digital initiatives at the National Gallery of Art, Washington, the Frick Collection, the J. Paul Getty Trust, and the Cleveland Museum of Art. With the intention of generating a lively and audience-focused debate and discussion, this roundtable will explore the opportunities and challenges art museums face when balancing the needs and interests of internal and external constituencies in regards to digital practices.

The roundtable will focus on a few key practical and ethical, rather than technical, considerations, including: How is digital research undertaken at museums valued differently than that undertaken at universities? How are museums (or how are they not) encouraging, supporting, and disseminating digital art history methods and practices? Similarly, what can museums do to transform their holdings into usable data for computational research? Like universities, would art museums benefit from having centralized centers or labs for digital experimentation? How might technologies such as machine vision learning affect how art museums collect, organize, and disseminate their holdings? What are the benefits or drawbacks of museum collaboration with non-traditional cultural heritage and Open GLAM partners, such as the Wikimedia Foundation or Google? What efforts should be made by museum digitization efforts to raise the profile of underrepresented artists and subjects and to engage with underrepresented communities?

What we hope to achieve through this roundtable discussion are strategies for dealing with the multiple tensions inherent when introducing the digital into the art museum ecosystem. We further want to push the discussion beyond the notion of art museums as creators of digitized repositories upon which “real” digital humanities scholarship is produced. Instead, the art museum – with its dual focus on serving both specialized research interests and public engagement, combined with its role in creating and maintaining knowledge bases – can serve as a uniquely generative space for advances in the digital humanities.

 
2:50pm - 3:50pm#SL5: Environmental Justice and the Digital Humanities Roundtable
Session Chair: Jeffrey Moro
Salons 4 & 5, Grand Ballroom, Marriott City Center 
 

Environmental Justice and the Digital Humanities

Jeffrey Moro1, Purdom Lindblad1, Gabriela Baeza Ventura2, Kimberly Bain3

1University of Maryland, College Park; 2University of Houston; 3Princeton University

In her acclaimed 2014 keynote at Digital Humanities in Lausanne, Bethany Nowviskie called on digital humanists to “dwell with extinction,” and in doing so, to center the material and ethical realities of the global climate crises in their work. Four years on, a time frame that feels both far too fast and achingly slow, DH has still struggled to heed Nowviskie's call. Major challenges to a critically engaged environmental digital humanities practice still remain, from the massive project of transforming the institutional realities that limit such work, to the difficulty of grappling with planetary scale, to the need to develop ecologies that simultaneously decenter humans while engaging antiracist and decolonial calls to protect our most vulnerable populations, spaces, and cultures. This roundtable proposes to navigate these seemingly insurmountable obstacles with a deceptively minimal approach: one that centers insurgent and transdisciplinary practices of engaging environmental justice across a range of DH work.

We convene scholars and practitioners from across the DH community to answer the question of what an environmental DH centered on practices of justice might achieve. What does DH scholarship, broadly construed, look like in the face of planetary extinction? How might our individual scholarship work to produce the collective organization and belonging we need to intervene on structural scales? In particular, we return to what we see as Nowviskie’s useful pessimism: how placing extinction at the heart of scholarly practice transforms its attachment to the racist and colonial infrastructures that characterize the contemporary academy. Finally, how can we navigate the practical, quotidian, and affective challenges of environmental DH work? How do we center extinction while avoiding hopelessness and despair?

This roundtable has two ends. First, to generate possible approaches from a range of critical and creative practices among speakers and audience alike. Second, to stage a conversation within the ACH conference on the shared values that might help the DH community navigate these challenges. Each speaker will prepare a brief response to these central questions, which will then set up thirty minutes of audience conversation and participation.

Wary as we are that conference panels are generative spaces for thought that can quickly dissolve upon the conference's completion, we will record notes, ideas, questions, and future projects from our roundtable for ongoing public participation. An example of such a “seed site,” as we term it, is the “Critical Infrastructure Studies” HumCommons site from the MLA panel of the same name: https://criticalinfrastructure.hcommons.org/.

Topics that speakers will engage in their preliminary remarks include but are not limited to:

- How attending to atmospheric materialities in media research opens up new avenues for work in affect, racialization, and toxicity in infrastructure and manufacturing studies

- Speculative archives and what the Anthropocene means for archival practices that center liberation and justice

- Transnational critiques of border ecologies; considerations of extinction and climate crisis on migration and refugee populations

Roundtable participants will share preliminary remarks in English and Spanish.

 
3:50pm - 4:00pm#Break8: Break
Grand Ballroom Foyer A, Marriott City Center 
4:00pm - 5:00pm#ClosingPlenary: Closing Plenary
The closing plenary features a series of invited lightning talks representing the broad range of scholarship presented at ACH2019.
Speakers:
Tatiana Bryant, Adelphi University, "Centering Black DH Pedagogy in a First Year Seminar Course"
Gregory Palermo, Northeastern University, "Visualizing Citations in Digital Humanities Quarterly's Biblio"
Arjun Sabharwal, University of Toledo, "Digital Curation for Social Justice"
Anna Perricci, Rhizome, "Ethics & Archiving the Web"
Marc Patti, Duquesne University, "Mapping Jazz Venues in Pittsburgh's Hill District"
Withering Systems (Loren Schmidt and Everest Pipkin), independent artists, "A City for Humans"
Salons 2-5, Grand Ballroom, Marriott City Center