What Historians Can Learn from Machine Learning and Vice Versa: The Case of the Civil Rights Movement
Carnegie Mellon University, United States of America
This paper will examine the history of the civil rights movement as a case study at the intersection of history and machine learning. My key question is this: what would it mean to understand a social movement as a process akin to machine learning? I will begin by asking a more traditional question (from the perspective of academic history): what role did learning play in the civil rights movement? From Brown v. Board to the Little Rock Nine to the University of Mississippi, efforts to integrate educational facilities produced many of the most famous crises of the Civil Rights / Black Power era. In addition to chronicling such conflicts, historians have explored radical approaches to antiracist education, such as the Highlander Folk School, the Citizenship Schools, the Freedom Schools, and the community schools run by the Black Panthers. What remains unclear is the relationship between the goal of increased access to education and the methods of movement activists. What does it mean to understand the civil rights movement as itself a form of education? How can such a lens help us rethink where and how the movement was taught and learned?
At the core of my paper is the educational impact of the civil rights movement on activists themselves, what Bernard Lee called “the university of the movement.” While some scholars have tracked the transmission across generations of what sociologist Charles Payne calls the “organizing tradition,” others have recognized the disjunctures between different generations of activists (the work of Tomiko Brown-Nagin is exemplary in this regard). My paper will contribute to these debates by bringing original archival research on the civil rights movement into conversation with recent developments in the field of machine learning and related disciplines in the psychology and cognitive science of education. While the public persists in seeing the civil rights movement as the work of Martin Luther King, most scholars see a multifaceted struggle involving a variety of actors and organizations. Yet it remains unclear how ideas and knowledge flowed within the movement, and between the movement, its adversaries, and the general public. How can advances in machine learning provide novel approaches to the role of learning within the civil rights movement? And vice versa, how can a close study of a social movement offer a new vantage point on machine learning?
Validating Machine Learning Systems in the Humanities: Bayesian Explorations of the Encyclopedia Britannica, 1768-2010
Penn State, United States of America
In March of 2012, the Encyclopedia Britannica ceased printing paper editions of its handsomely bound reference books. First published in Edinburgh, Scotland in 1768, the Encyclopedia Britannica remains the oldest English-language encyclopedia in continuous production, but it will henceforth be updated only through its online offering. In the era of community-based online encyclopedias like Wikipedia, now is an interesting time to reflect on the content of the complete print run of the Encyclopedia Britannica. It is also an interesting moment to examine how past systems of defining “general knowledge” have worked to shape societal prejudices, beliefs, and assumptions. This paper will demonstrate how Natural Language Processing with Python can be used to track the evolution of popular conceptions of race and racialization across all 15 editions of the Encyclopedia released over its 244-year history. In particular, the presentation will describe the development of a Bayesian racial classifier and how it has been used to discover racialized passages on numerous topics, including music, agriculture, and religion. A critical step in using machine learning techniques is validating the results for users by offering a glimpse into the underlying system. Machine learning techniques are often misunderstood by audiences as either a panacea that delivers plain truth or a suspect and untrustworthy method for understanding data; the project site, generalknowledgeproject.network, works to counter this perception by hosting a racial classification tool that lets its audience gain first-hand experience of the classifier, with the goal of validating the methods used. In addition to hosting interactive visualizations, the site contains descriptions of processing methods, a companion bibliography, and the source code for our analysis.
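As a rough illustration of the kind of Bayesian classification described above, the sketch below builds a multinomial naive Bayes text classifier with Laplace smoothing in standard-library Python. This is not the project's actual code: the toy passages, their labels, and the two-class setup are invented solely for demonstration.

```python
import math
from collections import Counter

# Hypothetical toy data: in the project, passages would come from
# Britannica entries hand-labeled as racialized (1) or not (0).
train = [
    ("the character and habits of the race are described", 1),
    ("the savage tribes of the northern region", 1),
    ("wheat is sown in the spring months", 0),
    ("the organ has pipes and a keyboard", 0),
]

# Count word frequencies per class (multinomial naive Bayes).
word_counts = {0: Counter(), 1: Counter()}
class_counts = Counter()
for text, label in train:
    class_counts[label] += 1
    word_counts[label].update(text.split())

vocab = {w for counts in word_counts.values() for w in counts}

def log_posterior(text, label):
    """Log P(label) + sum of log P(word | label), with Laplace smoothing."""
    total = sum(word_counts[label].values())
    lp = math.log(class_counts[label] / len(train))
    for w in text.split():
        lp += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
    return lp

def classify(text):
    return max((0, 1), key=lambda lbl: log_posterior(text, lbl))

print(classify("the tribes of the region"))  # expected: 1
print(classify("wheat in spring"))           # expected: 0
```

A real classifier would of course train on far more text and would expose the posterior probabilities themselves, which is what lets a project site show users *why* a passage was flagged rather than just the label.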
AI Interpretation of Violence Against Women in 20th Century Border Fictions
Stanford University, United States of America
The challenges that machines face when interpreting figurative language are a significant impediment to applying computational techniques to many significant themes in literature. Figurative language is commonly used, for instance, in depictions of violence. To overcome this obstacle, this project uses artificial intelligence techniques to better understand how depictions of violence toward women in literature have changed over the 20th and 21st centuries. Our team focuses on narratives centered on the US-Mexico border, although there is great potential to apply this study to other bodies of literature and other time periods. Our first step is to build a comprehensive corpus of border literature within our chosen time period, including authors such as Roberto Bolaño, Carlos Fuentes, Sandra Cisneros, and others. This corpus serves as the body of study for our experimentation. The text is labeled with details that help us discern emerging relationships between time and salient features based on the model's performance. After finalizing the data source, the project adopts existing technical methods commonly used to understand nuances and qualities in bodies of text: Bayesian analysis of underlying character features; latent Dirichlet allocation for topic modeling (for which we might examine topics like “Women” and “Violence,” among others); and natural language inference techniques for text comprehension. Through this textual analysis, we derive inputs for a machine learning model that identifies and highlights instances of violence toward women in literary works. A feature analysis of the model provides insights into patterns of this type of violence, especially how they are represented in text.
By incorporating state-of-the-art technology into socially relevant humanistic inquiry, our research encourages the immediate application of AI to digital humanities analysis and strives toward a model more attuned to varying levels of subtextuality.
Creating Models of Influence at the Intersection of Dance and Digital Humanities: Embodied Transmission in the Performances of Katherine Dunham
1University of London, Royal Central School of Speech and Drama; 2The Ohio State University
This presentation comes from Dunham’s Data: Katherine Dunham and Digital Methods for Dance Historical Inquiry. The overarching project explores the kinds of questions and problems that make the analysis and visualization of data meaningful for dance history, pursued through the case study of 20th century African American choreographer Katherine Dunham. Dunham is an exemplary figure for such an interdisciplinary inquiry into dance history and digital modes of analysis because her own model of research combined theoretical and print modalities across multiple fields, from anthropology to dance pedagogy. Dance is transmitted from body to body in communities, training and rehearsal studios, and theatres, and as such, it moves across transnational cultural and artistic networks. Here, dance studies functions as an interlocutor with imperatives of digital humanities to “bring back the bodies” in digital research (D’Ignazio and Klein 2019).
At ACH, we focus on modeling what we describe as traces of “influence” in and around dance touring, and reflect on the development of scalable digital analytical methods for studying influence that are shaped by approaches to embodiment from dance, critical race theory, and digital cultures. We further consider how to represent embodiment digitally, without reducing lived experience to data, as we bring our training as dance scholars to bear on those experiences that both underpin and haunt the data we have manually curated from Dunham’s archives.
Evaluating Dunham’s influence includes analyzing the direct and indirect circulation of dance gestures, forms, and practices spanning an 80-year career across six continents. This presentation focuses on the period 1947-1960. We employ spatial analysis to demonstrate how Dunham’s choreography materializes the influence of the many geographic places that infused her diasporic imagination. We also trace the flows of performers working together over time and space as a dynamic collective, and the embodied transmission of Dunham’s dance repertory. We further engage other computational methods to examine the relationship between touring and mobility, for example, how locations influenced one another as particular cities and theatres opened onto future sites of travel. Creating models of influence in these ways offers means to visually elaborate ephemeral corporeal practices of cultural transmission in dance. Such analysis enables deeper consideration of the dynamic relations of people and places through which Dunham’s diasporic aesthetic developed and circulated, in dance gestures, forms, and practices.
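As one possible illustration of how touring data of this kind might be modeled computationally (the itinerary below is hypothetical and not drawn from the Dunham archives), a directed city-to-city network can be derived from an ordered list of tour stops, with each stop “opening onto” the next:

```python
# Hypothetical sketch: deriving a weighted, directed transition network
# from a chronologically ordered tour itinerary. The cities listed are
# placeholders, not the project's curated archival data.
from collections import Counter

itinerary = ["Mexico City", "London", "Paris", "London", "Rome", "Paris"]

# Each consecutive pair of stops becomes a directed edge; repeated
# transitions accumulate weight.
edges = Counter(zip(itinerary, itinerary[1:]))

for (src, dst), weight in sorted(edges.items()):
    print(f"{src} -> {dst}: {weight}")
```

An edge list of this shape is a common starting point for the kinds of spatial and network analyses described above, since it can be handed directly to graph or mapping tools for visualization.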