Scholars are increasingly concerned with integrating quantitative and qualitative methods of analysis to study emerging technologies. Such scholarship often teaches us as much about people’s individual and collective narratives as it does about technological change. In this panel, we ask: how can we draw on the sociohistorical context provided by humanistic knowledge in order to tell stories with (and about) data?
Two of our panelists are studying how media discourse travels intertextually online, influencing people’s views on political issues related to the Internet. The first is analyzing blog posts about government surveillance published by two prominent policy organizations: the American Civil Liberties Union and the Brookings Institution. This study compares the keywords and rhetorical strategies that these posts tended to feature before Edward Snowden’s 2013 NSA leaks with those they tended to feature afterward, thus empirically distinguishing the surveillance rhetorics of civil libertarians and national securitarians. The second panelist is examining how the New York Times’s coverage of the Equifax data breach and the Cambridge Analytica scandal was recontextualized through public discourse. This project draws on rhetorical corpus analysis and natural language processing to ask how narratives develop intertextually after information is circulated by major media institutions and people read and react to their stories.
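For readers curious about the mechanics of such a pre/post keyword comparison, the sketch below shows one common approach: scoring each word’s “keyness” (here, a log-likelihood G2 statistic) in a post-leak corpus against a pre-leak reference corpus. The word counts are invented toy data, not the panelists’ corpora, and the panelists may well use a different statistic or toolkit.

```python
from collections import Counter
import math

def keyness(target_counts, ref_counts):
    """Log-likelihood (G2) keyness per word: how strongly a word is
    associated with the target corpus relative to the reference."""
    t_total = sum(target_counts.values())
    r_total = sum(ref_counts.values())
    scores = {}
    for word in set(target_counts) | set(ref_counts):
        a = target_counts.get(word, 0)
        b = ref_counts.get(word, 0)
        # Expected counts under the null hypothesis of no difference.
        e1 = t_total * (a + b) / (t_total + r_total)
        e2 = r_total * (a + b) / (t_total + r_total)
        g2 = 0.0
        if a:
            g2 += 2 * a * math.log(a / e1)
        if b:
            g2 += 2 * b * math.log(b / e2)
        scores[word] = g2
    return scores

# Toy pre- and post-leak word counts (hypothetical, for illustration only).
pre = Counter("privacy privacy policy court oversight".split())
post = Counter("surveillance surveillance metadata bulk privacy".split())

scores = keyness(post, pre)
top_keywords = sorted(scores, key=scores.get, reverse=True)
```

In this toy example, words like “surveillance” that appear only after the leaks score highest, which is exactly the kind of shift the study looks for at corpus scale.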
Our third panelist is concerned with the ways that traditional normative grammar, informed by (philological) corpus analysis, often reduces the complexity of language variation. Linguistic features tend to be classified into discrete categories: correct, preferred, vulgar, obsolete, regional, colloquial, and so on. What this conceals is the underlying debate: who gets to decide what things mean? In this panelist’s Twitter dataset (derived from the hashtag #RAEconsultas), such a debate is documented in a network of discussions that allows factions to be identified: advocates of gender-inclusive language, language purists, social conservatives, and others, with the normative voice of the Royal Spanish Academy (Real Academia Española, or RAE) positioned among them all. Using qualitative content analysis and network analysis, this case study explores how changing notions of natural gender, propagated online, are challenging traditional grammar.
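As a rough illustration of the network-analysis step, the sketch below groups users into connected components of a mention graph, a crude first cut before proper community detection (e.g., modularity-based methods). The handles and edges are invented for illustration and are not drawn from the #RAEconsultas dataset.

```python
from collections import defaultdict

def components(edges):
    """Return the connected components of an undirected mention network,
    each component a candidate 'faction' for closer qualitative reading."""
    adj = defaultdict(set)
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    seen, groups = set(), []
    for node in adj:
        if node in seen:
            continue
        stack, group = [node], set()
        while stack:  # iterative depth-first search
            n = stack.pop()
            if n in seen:
                continue
            seen.add(n)
            group.add(n)
            stack.extend(adj[n] - seen)
        groups.append(group)
    return groups

# Hypothetical mention edges between invented handles.
edges = [("inclusiva1", "inclusiva2"), ("inclusiva2", "RAEinforma"),
         ("purista1", "purista2")]
groups = components(edges)
```

In practice one would refine these components with a community-detection algorithm and then label the resulting clusters through qualitative content analysis, as the case study describes.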
Lastly, our fourth panelist is investigating the moral implications of an emerging technology for treating people with Essential Tremor, a progressive motor disorder that causes uncontrollable tremors in the limbs. The technology, adaptive Deep-Brain Stimulation, enables users to control their symptoms with their own brain activity, with the potential to profoundly change their sense of self-identity, their feelings of self-control, and even their close relationships. Through phenomenological interviews with new users, electrical engineers, neuroscientists, and medical practitioners, this project provides humanistic, narrative context for understanding this technology. By taking the stories of people with disabilities seriously and respecting their voices, this project encourages designers of future medical technologies to incorporate those users’ insights.
We seek to push conversations in the Digital Humanities beyond simply showcasing data-driven methods and towards a more critical reflection on the meaning and purpose of our work. Who is represented in our datasets, what were (or are) they doing, and why? And what stories do digital datasets allow us to tell that we might otherwise overlook?