1University of Michigan; 2University of California San Diego; 3Stevens Institute of Technology; 4University of Michigan
As computational, algorithmic and machine learning methods come into increasing use in the humanities and across society, they bring with them deep philosophical, epistemological and metaphysical claims about the relevance of probabilistic thinking to the world. While statistical methods and probabilistic explorations of massive datasets have allowed the humanities and other fields to investigate structures and patterns at scales that would otherwise be inaccessible, the ways in which we deal with uncertainty, especially at scale, have profound political, social and economic implications. From the historical commitments to eugenics of early statisticians such as Ronald Fisher to contemporary realizations that many algorithmic systems simply repackage extant social discrimination, these modes of thinking and processing data have never been neutral.
This panel will consist of four speakers, each of whom will provide a short position statement/presentation on the history and implications of probability for the digital humanities, computation and society writ large, followed by a discussion among the panelists and attendees. We will encourage attendees to think through and share how the politics of uncertainty and probability intersect with their own work. The short presentations will focus on:
The shift from frequentist to Bayesian understandings of probability; how this change has influenced and been influenced by the rise of computational methods in the sciences, social sciences and humanities; and the implications of this shift for our work and society.
The history of predigital probabilistic thinking in action: how data economies were generated through powerful uncertainty management regimes in various global trade programs. This will be represented through new visual mapping methods that are part and parcel of the growing statistical mapping methods in the digital humanities.
The history of probabilistic information coding: how its use to optimize communication channels reduced information to syntax, and thus made it amenable to large-scale procedural analysis, while the ability to understand the content of such information was at once impoverished and redeployed. Emptied of semantics and made computable, the new information systems became sources of metadata, a quasi-semantic value for a world of coded data.
The potential of machine learning, AI and related technologies of probabilistic management to further automate labor and expel workers from the value relation in contemporary capitalism; Silicon Valley technologists’ advocacy for progressive policies like a Universal Basic Income as a means to mitigate the effects of unemployment that these technologies produce; and the implications of these proposals for a society that becomes increasingly dependent on corporate beneficence to manage uncertainty.
While the panelists' backgrounds are largely in science and technology studies, philosophy of technology, and media studies, our focus will be on how these historical and metaphysical developments underwrite the computational work being done in the digital humanities and in larger systems of knowledge production, along with the ways in which work in the digital humanities can help reveal the larger sociopolitical stakes of these changes.