Demo: Demonstration Session
Quick Red Flag Check Tool
Museums, collectors, auction houses, dealers, and art historians have long expressed frustration with the difficulty of verifying the provenance of artwork against a large list of looted items, red-flagged individuals, victims, or keywords. This paper demonstrates how to use the free web-based document analysis software Voyant-Tools as a potential solution to this problem. A series of step-by-step slides and videos will be presented with the aim of introducing participants to the tool and enabling them to apply it to their own documents, databases, and special cases.
In a series of proof-of-concept tests conducted in 2017-18, publicly available data sets concerning auction sales and museum collections were loaded into Voyant-Tools and analyzed automatically against 1,000 last names drawn from the Art Looting Investigation Unit Final Report (1946) and saved in a reusable list. Data files were loaded “as is,” without any additional formatting, classifying, cleansing, or harmonization of format. Voyant-Tools immediately created an interactive word-cloud summary of red-flag names in the provenance as well as, at the granular level, the detailed text of each mention, essential for disambiguation and analysis. Flagged data were exportable and shareable. Furthermore, a whitelist, once created, could be named and shared for reuse by other users on other corpora.
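The matching step described above can be sketched in a few lines of Python. This is not Voyant's code, only a minimal illustration of the idea of checking provenance texts against a reusable name list and keeping each mention's surrounding text for later disambiguation; the names and records below are hypothetical placeholders, not entries from the ALIU report.

```python
# Minimal sketch: flag red-flag last names in provenance texts.
import re
from collections import Counter

def flag_names(documents, red_flag_names):
    """Return per-name match counts and the sentences that mention them."""
    flagged = Counter()
    contexts = []
    patterns = {name: re.compile(r"\b" + re.escape(name) + r"\b", re.IGNORECASE)
                for name in red_flag_names}
    for doc in documents:
        # Split on sentence/clause boundaries so each hit keeps its context.
        for sentence in re.split(r"(?<=[.;])\s+", doc):
            for name, pattern in patterns.items():
                if pattern.search(sentence):
                    flagged[name] += 1
                    contexts.append((name, sentence))  # text kept for disambiguation
    return flagged, contexts

# Hypothetical provenance entries:
records = [
    "Sold by the Mueller Gallery, Paris, 1941; acquired by Schmidt.",
    "Collection of J. Dupont; later with Mueller, Berlin.",
]
counts, mentions = flag_names(records, ["Mueller", "Schmidt"])
```

Counting at the sentence level mirrors the "granular" view the abstract describes: the summary counts support a word-cloud-style overview, while the saved contexts support item-by-item review.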
Successful tests with Voyant-Tools suggest that the identification of red flags in provenance can be performed quickly, inexpensively, and, most importantly, by anyone. This could remove an important resource barrier for cash-strapped museums and collectors: art historians need not be involved in this initial "gruntwork" but can be mobilized later in the process for knowledge-intensive steps such as disambiguation (identifying the individual referenced by a name), setting research priorities, and performing in-depth archival research on the flagged items. One of the side benefits observed was transparency, not only in the data but in the process itself.
The initial results were presented at the Art Crime Conference in Amelia, Italy in June 2018. Based on feedback, it appears that the main barrier to adopting the provenance red flag check tool is lack of familiarity and training. The purpose of the proposed workshop is to introduce the participants to Voyant and to ensure that anyone interested has sufficient knowledge to successfully use this powerful tool to check provenance texts, documents or data sets on their own.
Further information on the demo is available:
1) Video: How to Check for Red Flag Names with Voyant Whitelists: https://youtu.be/KhXGcLBTUqM
2) Slides for the Red Flag Check Demo: https://docs.google.com/presentation/d/1PjVR_GDJKCtTvFcc_MtkzqZWg3Shu_FyKf_wj7Q8eQk/edit?usp=sharing
3) Data Sources for the Red Flag Check Demo: https://docs.google.com/document/d/16amV18LHTs8gymdv3UGSnLk7JdivWdW21_hHP8R81zE/edit?usp=sharing
CoAuthOR: Collaborative Authorship with an Opinionated Robot
JPMorgan Chase & Co., United States of America
The authoring of text has always been - and will continue to be - mediated by technology. Recent advances in natural language processing and machine learning, however, have fundamentally altered the character of this mediation. Text messaging applications predict our next few words. Commercial “spell-checkers” provide nuanced advice on our style and tone. Email services even craft complete responses to our emails. Increasingly, these computational interventions in our writing are too complex for all but the most sophisticated users to fully understand.
To focus attention on these complex interventions, I present an interactive demonstration of an outrageously intrusive web-based text editor. The key feature of this editor is a sophisticated, opinionated, and obstinate AI that edits documents in real time without the consent of its human collaborator. Built from supervised and unsupervised machine learning technologies, this AI does not content itself with grammatical corrections. Indeed, it tweaks the document to conform to its own inscrutable stylistic preferences and generates new text via a pre-trained neural network. Before launching the text editor, would-be human co-authors can configure the intensity and character of the computational interventions (there will be AIs pre-trained to mimic Ernest Hemingway and other authors with famously distinctive styles), but once the writing begins, there is no turning back. The “undo” button is turned off.
I intend this interactive experience to be enlightening, amusing, and frustrating in equal parts. Furthermore, I hope it encourages participants to reflect on both the perils and opportunities of human-computer (collaborative?) text composition. For better or worse, our word processing tools are becoming cleverer and our bestselling authors (vis-à-vis Robin Sloan) more willing to dabble in natural language generation. If we are to be prepared for the future of composition, then we must meet these peculiar technologies on their own terms and inside their own text editors.
The Original Mobile Games: Playable Game History on Mobiles
Rochester Institute of Technology, United States of America
The Original Mobile Games is an app for hand-held mobile platforms that features digital emulations of up to 27 handheld puzzle/maze games originally produced from the late 1800s to the mid-1940s. The app is a co-production of The Strong National Museum of Play, the Rochester Institute of Technology, and the educational games studio Second Avenue Learning, Inc.
The games were selected from the collections of The Strong, and the app features brief historical profiles of each game and photographs of the actual items in the museum’s archives. The first game featured in the app, Pigs in Clover, was the “Angry Birds“ of its day: the factory produced 8,000 copies a day and was still 20 days behind on orders. An informal tournament for the best time held between five US senators resulted in newspaper articles and a political cartoon, and a satirical article in a Chicago paper claimed the game had brought life in the U.S. to a standstill. While life hadn’t stopped, a wave of knockoffs followed, and an enduring model of gameplay had arrived. These “dexterity games” or “ball-in-maze” puzzles feature a wide range of gameplay and themes. Pop-culture events of the times, such as the birth of the Dionne Quintuplets, the launching of the Queen Mary, and the international competition to reach the North Pole, were among the many moments celebrated in these games.
Fostering Open Scholarly Communities with Commons In A Box
1The Graduate Center, CUNY, United States of America; 2New York City College of Technology, CUNY; 3Michigan State University; 4SUNY Geneseo
Commons In A Box (CBOX) (https://commonsinabox.org/) was developed by the team behind the CUNY Academic Commons, an academic social network for the 25 campuses of the City University of New York (CUNY). Built using the WordPress and BuddyPress open-source publishing platforms, CBOX simplifies the process of creating commons spaces where members can discuss issues, collaborate, and share their work. The original version of the software, CBOX Classic, was developed with support from the Alfred P. Sloan Foundation and powers sites for hundreds of groups and organizations worldwide, who use it to create social networks where members can collaborate on projects, build communities, publish research, and create repositories of knowledge.
Over the past two years, the Commons In A Box team has partnered with the New York City College of Technology, CUNY (City Tech) to create a new version of CBOX based on City Tech’s OpenLab (https://openlab.citytech.cuny.edu/), an open digital platform for teaching, learning, and collaboration that has been used by over 27,000 members of the City Tech community since its launch in 2011.
The result, CBOX OpenLab, offers a powerful and flexible open alternative to costly proprietary educational platforms, enabling faculty members, departments, and entire institutions to create commons spaces specifically designed for open learning.
Funded by a generous grant from the NEH Office of Digital Humanities, the CBOX OpenLab initiative seeks to enhance humanities education – and public understanding of humanities education – by enabling the work of students, faculty, and staff to be more visible and connected to the outside world. It also seeks to deepen engagement between the digital humanities and pedagogy by providing a process by which digital humanities practitioners can contribute software (plugins) to the project.
During this session, participants will share their experiences of fostering open scholarly communities with CBOX. These include: the Humanities Commons (https://hcommons.org/), a trusted, nonprofit network serving more than 15,000 scholars and practitioners in the humanities; the Futures Initiative (https://futuresinitiative.org/), which pursues greater equity and innovation in higher education through student-centered pedagogies, graduate student preparation, research, and advocacy; and KNIT (https://knit.ucsd.edu/), a digital commons for UC San Diego. We will introduce the CBOX OpenLab platform, illustrating its use cases with examples drawn from City Tech’s OpenLab. Finally, we will engage attendees in discussion of the benefits and challenges of open scholarship, pedagogy, and community-building, exploring how they might adopt CBOX at their own institutions and contribute to its future development.
The panel includes graduate students, alt-ac professionals, and faculty members drawn from institutions across the country, who bring a wide range of perspectives to the discussion, and have deep experience in working hands-on to build open scholarly communities.
The session argues that it is vital for educational institutions to lead the way in putting free, open-source software in the hands of our students, faculty, and staff and empowering them to create and customize vibrant, attractive spaces where they can share their work with one another and the world.
Wikibabel: Procedural Knowledge Generation using Epistemology, Encyclopedias, and Machine Learning
New York University, United States of America
Wikibabel is a digital art project that examines shifts in contemporary epistemology through an alternate version of Wikipedia. The site, a searchable database that is aesthetically and functionally similar to Wikipedia, is created with a process that uses machine learning and natural language processing to analyze the entirety of the existing online encyclopedia for its linguistic and structural style, and then creates new articles based on those patterns. The project employs parody and satirical critique to explore how the conflict between Wikipedia's quest for a "neutral point of view" and the changes in credibility brought about by the social web makes it a struggle to convey complex and controversial topics within the rigid article template’s Harvard Outline format.
Wikibabel reinterprets a common practice in game design called procedural content generation, which refers to the programmatic generation of game content such as game levels, graphics, and textures. Applied to knowledge rather than backdrops, this technique lets Wikibabel create pages that are grammatically correct and, at first glance, would find a home in most encyclopedias. However, as Aristotle noted, nature abhors a vacuum, and when reading a Wikibabel article, the reader takes cues from its appearance as an encyclopedia to fill in the missing meaning. As a digital sculpture that strips the encyclopedia down to its bare essentials - the aesthetic cues, the article template, and the linguistic style - Wikibabel challenges Western society's emphasis on an epistemology based on logic and pragmatism, with results that are amusing, confusing, and, if you are not paying close attention, entirely believable.
The project was my 2018 MA thesis in digital arts at the Interactive Telecommunications Program (ITP) at New York University, and it was deeply informed and guided by my experience as a librarian working in the digital humanities. The site is accessible at http://wikibabel.world and represents a creative approach to questions I regularly address in my professional work.
Seeker: A Visual Analytic Tool for Humanities Research
1Purdue University Fort Wayne; 2Indiana University-Purdue University Indianapolis
Seeker is a new digital humanities tool for document analysis and visualization. It is the product of an interdisciplinary collaboration between Computer Science and Humanities scholars. Seeker is designed for exploratory search of large volumes of data and offers multi-tiered interfaces and data management features that allow users to locate, contextualize, and visualize the terms and concepts most relevant to their research, to see the correlations between those terms and all other terms appearing in the document set, and to identify the most important passages and documents that merit further study. Seeker analyzes a document set (which can include thousands of documents) to identify the most commonly used terms, including their overall frequency and the number of unique documents in which they appear. Users can also search for specific terms; for each, the program identifies its locations and produces a list of all associated terms, ranked by the frequency with which they appear in the same paragraph and the same sentence and by their average proximity to the user-selected term. These results are displayed statistically as well as visually. Seeker thus allows the user to assess the overall content of a large document set as well as the specific context in which user-selected terms appear.
This dual approach, along with filters that allow users to quickly organize a large document set into multiple user-defined categories (e.g., by chronology or author), enables users to perform a variety of analyses, including the identification of relevant themes, the comparison of multiple authors, and the assessment of historiographical, linguistic, and conceptual change over time. Seeker is unlike existing digital tools in that scholars can use the program to determine which documents from a large set merit investigation, whereas most existing systems are geared toward organizing and analyzing a set of sources that has already been manually screened by the user.
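The counts the abstract describes can be illustrated with a short sketch. This is not Seeker's actual implementation; it is a naive Python illustration, with invented sample documents, of the three measures named above: overall term frequency, the number of unique documents in which a term appears, and sentence-level co-occurrence with a user-selected term.

```python
# Illustrative sketch of term frequency, document frequency, and
# sentence-level co-occurrence for a user-selected target term.
from collections import Counter

def analyze(documents, target):
    term_freq, doc_freq, cooccur = Counter(), Counter(), Counter()
    for doc in documents:
        tokens_in_doc = set()
        for sentence in doc.lower().split("."):
            words = [w.strip(",;: ") for w in sentence.split()]
            words = [w for w in words if w]
            term_freq.update(words)          # overall frequency
            tokens_in_doc.update(words)
            if target in words:              # sentence-level co-occurrence
                cooccur.update(w for w in words if w != target)
        doc_freq.update(tokens_in_doc)       # one count per document
    return term_freq, doc_freq, cooccur

docs = ["The archive holds letters. The letters discuss trade.",
        "Trade routes appear in maps."]
tf, df, co = analyze(docs, "letters")
```

A real system would add paragraph-level counts, proximity averaging, and proper tokenization, but the document-frequency/co-occurrence distinction shown here is the core of the contextualization the tool offers.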
Designing an Original Video Game in Academia
University of California-Irvine, United States of America
This paper will highlight the differences between designing a video game for industry and designing one in an academic environment. The authors of this paper - a successful video game professional and an academic - collaborated to complete an educational game on a historical topic. The game won a prize at a recent international competition. The challenges faced by industry and academia are distinct, and we will highlight the challenges of each, providing an introduction to the video game design process.
The MV Tool: Embodying Interdisciplinary Research
University of Illinois at Urbana-Champaign, United States of America
The movement visualization tool (mv tool) is a motion-capture playground that drives the mv lab, an embodied research environment to observe and analyze human movement patterns as rendered in cinematic space. The mv tool has two modes, each of which serves complementary research purposes. The capture mode allows users to record physical mover and virtual camera data, with the aim of progressively building a movement visualization database and relevant metadata. The interactive mode, which we will demonstrate, engenders an embodied research space for both rigorous and playful figure and camera movement experimentation. Real-time interaction with analytical concepts from Laban/Bartenieff Movement Studies (LBMS) and cinematographic frameworks is a central goal of this research, allowing participants a better understanding of their own movement patterns and the theoretical infrastructures shaping this exploratory space.
The mv tool’s interactive mode is operated by two agents: one moving in a Kinect-driven motion-capture space (the moving agent) and one operating the software and, when applicable, manipulating camera settings in the digital environment (the computer/camera agent). A screen relaying live, interactive visualizations of an abstracted body “skeleton” allows the mover to see their movement pathways. Through a set of calibration movement sequences, the computer agent can visualize the mover’s movement interacting with various movement paradigms drawn from Laban Movement Analysis’ taxonomy of human movement and Rudolf Laban’s categorizations of where the body is moving in relation to its environment (Space).
The mv tool’s authors are also interested in studying how the camera moves in cinematic space, and the tool therefore allows both the moving agent and the computer/camera agent to manipulate the camera’s position relative to the mover in conjunction with cinematographic paradigms of camera movement. Camera movement in the mv tool can be restricted according to Laban’s categorization of movement directionality and pathways, allowing the abstracted skeleton and the frame to shift together in myriad ways. How does our observation of human movement change when the camera is still while capturing a movement phrase versus moving in parallel or in counterpoint—in any of the three dimensions—to that movement? The foundational concepts from LBMS serve as guiding principles for the tool’s creation, but we expect the development of this work to challenge and ultimately strengthen the empirical sustainability of the Laban material.
The mv lab participants are particularly interested in this tool’s ability to address historical research questions; beyond experimentation with movement frameworks from Laban Movement Analysis, we seek to re-create examples of choreographed figure and camera movement in cinema in order to isolate those stylistic elements from other characteristics of film form, the complexities of mise-en-scène and sound in particular. Camera movement’s range of motion in the mv tool can therefore be adapted and limited to better study historical cinematographic constraints, for example, by restricting the types of camera movement based on the technological availability of a particular period. The accessibility and multifunctionality of the mv tool allow it to serve many academic purposes, from increasing kinesthetic intelligence to developing computer vision protocols for moving image analysis.
Scholarly Publishing with Manifold
1The Graduate Center, CUNY, United States of America; 2College of William & Mary; 3Dartmouth College; 4University of Oregon
This panel brings together authors, editors, publishers, and educators exploring the Manifold scholarly platform as a space for publishing digital scholarly monographs, journals, archival materials, and open educational resources. The Manifold Scholarship project (http://manifoldapp.org), an open-source platform funded by the Mellon Foundation, invites new networked and iterative forms of publication that retain strong ties to print while supporting rich digital multimedia as well as post-publication annotation and highlighting. The presenters will outline how the choice of the Manifold platform has shaped resource creation, curation, and engagement. Across a range of spaces -- the online publication of a book series; the publication of a collected edition; the creation of an OER repository for a university system; and the formation of a repository of archival materials by authors from underrepresented communities -- this panel will highlight how the Manifold platform has fostered new spaces for scholarly publishing.
Presenter 1 will provide an overview of the Manifold platform, locating its history in the open access movement and the turn towards web-based annotation of scholarly materials. This presentation will cover the origins of the project in the _Debates in the Digital Humanities_ book series and demonstrate some of the features of the platform, while also pointing to future directions that the project will undertake.
Presenters 2 and 3 will discuss how a platform can support the intersectional, feminist scholarship of the Reanimate publishing collective, particularly in terms of media accessibility. The Reanimate project, part of the Manifold Digital Services pilot program, focuses on the publication of archival work by women who worked in media and engaged in activism from the 1930s to the 1950s; it sheds light on untold stories of the influences of race, gender, class, and other axes of identity and oppression on women in media. Much of this writing has never been published, however, and the market forces on academic publishing are structural obstacles to its recovery. This presentation will describe how the Manifold platform is being used to present these archival materials.
Presenters 4 and 5 will discuss the publication of _Bodies of Information: Intersectional Feminism and Digital Humanities_. Taking intersectional feminism as the starting point for doing digital humanities, _Bodies of Information_ is diverse in discipline, identity, location, and method. Presenters 4 and 5 will discuss the translation of the collected edition from print object to online publication, highlighting how the project engages issues of materiality, values, embodiment, affect, labor, and situatedness.
Presenter 6 will discuss Manifold as a space for creating Open Educational Resources. Drawing on Manifold projects built with other graduate teaching fellows at the CUNY Graduate Center (who teach across many CUNY campuses, including Baruch, Queens, Hunter, and John Jay) and on the presenter's own class at Brooklyn College, this presentation will describe the process of creating texts that can be used and annotated by students across multiple departments, colleges, and universities.
XR in the Digital Humanities
1CUNY Graduate Center, United States of America; 2Vanderbilt University, United States of America
Computing capabilities for rendering high-quality three-dimensional graphics have progressed remarkably in recent years, largely in response to competition in the gaming and defense industries. While public awareness of and engagement with XR (Extended Reality), which comprises virtual reality (VR) and augmented reality (AR), has risen sharply, scholars are taking a measured and thoughtful approach. Engaging with the new technology, scholars remain meta-critical about how high-speed computational capabilities like XR can effectively represent the multiple dimensions of their digital humanities research. An increasing number of multidimensional projects by digital humanities scholars focus on the modeling and simulation of real, historical physical spaces and/or the articulation of imaginary or data-derived spaces for pedagogy and research in the humanities. A common thread in the use of three-dimensional representations and techniques is that they are at once extremely complex and stunningly intuitive, both to render and to interpret. The ability of DH to flourish while comprising such internal contradictions suggests the capacity of multidimensional technology to distill and refine the essential points of complexity by articulating them in those dimensions. In this manner, scholarship in XR seeks to reveal the underlying essence of DH projects through rich, deep, and immersive experiences in pedagogy, data visualization, modeling, and simulation.
This proposal is envisioned as a walk-up booth/room with one to a few tables and/or headsets set up, allowing visitors to experience DH content in virtual reality. The content will demonstrate research and pedagogical material, 360° video, and other forms of immersive content for the humanities. One panelist will demo the exploration of three-dimensional interactive spaces for data visualization and storytelling, another will present content on VR and embodiment as a way to understand medieval textual transmission, and there may be additional in-person presenters. Irrespective of the number of ‘panelists’ in person, however, the panel can include content provided by scholars who are unable to attend. Technical limitations may require that this content be comparatively simple (360° video rather than interactive, for example), but it would allow scholars who cannot attend to have ACH members and attendees engage with their work remotely. If accepted, the scope of the effort can and will be modified based on the available time frame and the presenters and/or sources for hardware and content that are identified.
A Digital Game for a Real-World French Course
Youngstown State University, United States of America
In this poster presentation, I describe the classroom implementation of a hybrid digital and face-to-face learning game in two face-to-face undergraduate French courses. It is of interest to instructors considering game-based learning and blended learning.
Previous classroom research indicated that some of my French language students felt isolated, overwhelmed, and confused about how to learn a foreign language. Foreign language anxiety (Ewald, 2007; Levine, 2003; MacIntyre, Burns, & Jessome, 2011) and a lack of language learning strategies (Chamot, 2005; Tran & Moni, 2015) have been shown to impair classroom language learning. Furthermore, upon completion of the two-semester foreign language requirement, some students lack the language-learning strategies to support ongoing, independent maintenance and development of their cultural and linguistic skills. Drawing on Gee’s (2003) game learning principles, I created a game to address these problems in my context.
Created with free digital tools and resources, this game's main objectives are to support and augment the learning of French language and culture, and to promote student acquisition of effective language learning habits and strategies. To win the game, structured as a multi-week race with weekly progress check-ins, diverse teams of student players select French learning activities, carry them out (outside of the classroom), and finally document them on the digital game board shared with the class. The activities teams may select are designed to promote student-led out-of-school learning, peer collaboration, and co-regulation of learning, with an eye to improving and extending learning outcomes. All activities involve learner interaction with physical or virtual francophone cultural and linguistic resources.
The game’s effects are evaluated via learner survey responses and feedback, learner artifacts, and instructor observations. Excerpts from the completed set of game activities, and the basic digital game board, are available online for viewing. Implications of the study are discussed in terms of improving classroom learning and engagement via digital pedagogy application, incentivizing student-led learning, as well as practical observations for effective creation and implementation of similar learning games in other courses.
The Caselaw Access Project: A Complete Data Set of United States Caselaw
Harvard Library Innovation Lab, United States of America
This session presents the Caselaw Access Project (https://case.law), a complete collection of all precedential court decisions issued in United States jurisdictions between 1658 and 2018. The data are available as structured text and metadata, via either individual API calls or bulk download.
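The individual-API-call route can be sketched briefly. The snippet below only constructs a query URL for what I take to be the project's v1 REST endpoint; the parameter names (`search`, `jurisdiction`, `full_case`) are assumptions based on the public API at case.law, and no request is actually sent here.

```python
# Sketch: build a query URL for the Caselaw Access Project REST API.
from urllib.parse import urlencode

BASE = "https://api.case.law/v1/cases/"

def build_query(search=None, jurisdiction=None, full_case=False):
    """Assemble a CAP cases query; fetch the URL with any HTTP client."""
    params = {}
    if search:
        params["search"] = search            # full-text search string
    if jurisdiction:
        params["jurisdiction"] = jurisdiction  # e.g. a state abbreviation
    if full_case:
        params["full_case"] = "true"         # request the full case text
    return BASE + "?" + urlencode(params)

url = build_query(search="habeas corpus", jurisdiction="ill")
```

The returned JSON pages through results with structured metadata per case, which is what makes the set tractable for the kinds of large-scale analysis described below.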
CAP is a significant data set for the digital humanities, representing the entirety of United States common law. Setting aside its legal significance, CAP also represents some 6.5 million instances, across three centuries, of a human describing a moral/ethical dispute and its resolution.
The session will present the practical use of the CAP data set and early applications, as well as the history of its creation, lessons for construction of similar data sets, and considerations and limitations in its use.
Psychasthenia 3: Dupes
1Duke University, United States of America; 2University of North Carolina, Chapel Hill, United States of America
Psychasthenia 3: Dupes is a Unity-based videogame art project that explores the hard problem of how we can retain awareness of the narrow-casted nature of our everyday lives in the face of ubiquitous data collection, analysis, and digital remediation of everyday life. The game draws upon challenging workplace relationships and gamified assessment environments, revealing the ubiquity of data-shadow construction, the erosion of personal privacy, and the amplified power of the external instantiation of an avatar self. Dupes is set in a dystopic, yet banal, workplace environment, where every interaction, whether “in person” or online, is logged and judged against a series of internal evaluation factors. These success factors are in turn revealed at the end of the game in the form of a comprehensive Success Report, which resembles a credit report in its presentation and measures and which forecasts what your ultimate workplace fate will be.
The seeming premise of the game is that you as the user must complete an HR personality test before the end of a gameworld workday. However, you are continually interrupted with other demands. Over the course of the day you visit the company shrink, attend a staff meeting, stop by the communal water cooler, and are summoned for a meeting with the boss, followed by a dispiriting trudge back to your basement cube. During these side trips, your interactions with archetypal co-workers are secretly logged, the interactions themselves playing a critical part in building up your “success” profile. Each character reflects a different workplace archetype: The Psychotic, The Artiste, The Narcissist, The Celebrity, The Sophist, The Bombast, The Charismatic, The Dominator, The Ingenue, The Shopper, The Sycophant, The Melancholic, and The Egoist. Interactions with each character reveal his or her core attributes, with your responses increasingly limited as the day goes on. During the endgame, these archetypal figures recombine into a modern-day tarot, augmenting and illuminating the success index ostensibly compiled from the formal test. The system reveals the characters representing your spiritual twin and your nemesis, with a numerical Success Index derived via the OCEAN Five Factors of Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism - but here re-imagined as Gullibility, Grinding, Gladhanding, Subjugation, and Internalization.
The revealed estrangement of the holistic individual from authentic human experience is predicated on the assumption that nothing within the workplace remains outside the evaluatory system. “Human” indirection within the game reveals the extent to which a gamified, logged quotidian experience becomes subject to exploitation and to summary judgments reappropriated into an evaluatory matrix. Dupes also ultimately complicates our ever-more-entangled relationships with computer-mediated communication by positioning the user in the uneasy position of not being sure whether they are only individuals playing a game, or whether they are being logged and judged through their game interactions with a larger purpose in mind. The subject position of the user as a Dupe puts him or her back into the endless regression of surveillance and recuperation, making us as the game’s creators also inherently complicit with the system.
Co-creating Affect: Towards an Ontology of Joy
1Arizona State University, United States of America; 2Weber State University; 3George Mason University
Inspired by the literature and practices of posthumanism, feminist technoscience, and agential, relational, and sensorial objects, this project offers an initial inquiry into the deliberate and collaborative production of joy. Like knowledge, joy is communal, contextual, evolving, relational. Scholars like Rosi Braidotti and Katherine Hayles problematize the primacy of hierarchical, anthropocentric perspectives on relational networks. Inspired by these feminist/posthumanist approaches, we are moved to ask how joy can be a “valid” product of human-object entanglements. This experimental approach takes frameworks from STS and posthumanism and remixes them to place joy as a first-order product (opening the door for other affective experiences to also be valid productions of human-material entanglements, not to mention future sites of and partners to knowledge production).
Scholarly networks develop and transmit knowledge as their primary product, but sculptural, interactive objects create a platform for the affective, illogical, and uncanny to be developed and translated in ways that support transmission within social, material and, importantly, scholarly networks. This project proposes that the interaction between art and technology might unseat the hegemony of “knowledge” as the primary product of scholarly engagement.
Art exists in the liminal space between rational and affective experiences. Its role as a boundary object shifts visibility, provokes affective experience, translates textual, tactile, symbolic, and visual information simultaneously. Plastic art media both respond to stimuli and create new propositions in their material form. By introducing new responsive means, networked and algorithmic technologies extend art’s capacity for active participation in cultural production. As two artists working with technology and interaction, a software engineer, and a rhetorician, we embrace the non-linear, phenomenological power of physical interaction with objects and harness it in the production of joy. We borrow from the art world’s poetic, performative, and tangible discursive practices to create space for a theory of joy.
This demo offers an interruption - of the idea of scholarship as knowledge production, and of the unidirectional format of many sessions - inviting attendees to join us in creating an entangled, embodied, collective experience of joy. Joy as an institutional concern now resides mainly in the entertainment sector, and research on joy is most often used in service of capitalism and enterprise (Wall Street sentiment analysis, Facebook quizzes/online games as thinly veiled surveillance apparatus, etc.). We ask what knowledges may come when we center joy as necessary to the human inquiry the academy seeks to foster. This demo will take the form of a participatory sculptural assemblage of joy. As boundary objects, works like this might allow us to reflect on the multivalent experience of joy and begin to structure a theory of joy as integral to scholarship. We invite you to contribute to this initial inquiry through an accessible, participatory, feminist, and ultimately human(e) co-production of joy. While participants are invited to bring along their own materials and artifacts, diverse means for embodied joy collection will be available on site.
How to Keep Reading: An Interactive Panel on "Mediate: A Time-Based Media Annotation Tool"
University of Rochester, United States of America
Media literacy is one of the most pressing concerns for research and teaching in the humanities due to the centrality of multimodal content—images, sounds, and text—in our culture. From film and television to video games, music videos, social media, music, and podcasts, multimodal content is ubiquitous in our everyday lives. Mediate is a web-based platform through which researchers and teachers can pursue scholarly inquiry and curricular development that enhances media literacy about time-based media. It allows users to upload video or audio, generate automated markers, annotate the content on the basis of customizable schema, and produce real-time notes. Once the annotation process is finished, users have a wealth of preserved data and observations about the media; these often result in data visualizations in addition to traditional print analyses. The utility of Mediate ranges from students—and, indeed, computers—learning to close-read time-based media narratively, visually, and sonically to scholars working individually or collectively to reevaluate central questions at both microscopic and macroscopic scales in the history of moving images or sound recording. While Mediate was developed for film and media researchers, we have expanded our use cases to include scholars and courses in linguistics, literature, music history and theory, data science, and visual and cultural studies.
During this presentation, we will discuss the development and use cases for Mediate in media studies, linguistics, literary studies, music theory and history, and data science contexts, and provide an interactive demonstration of the current prototype in which audience members can use the tool. We will thus show how Mediate enables scholars and students to read in the following ways: to annotate film and television in a way that yields data (and deep understanding) about the formal tendencies of both media across time; to analyze grammatical structures in advertising as they relate to images, soundtracks, and possibly gestures; to unpack how the vocal inflection of an individual singer relates to the lyric content of a particular song; and to teach computers how to recognize sound effects and visual devices that in some sense require the human ability to read. Through these cases, we intend to make an argument for how Mediate reveals the persistent necessity of humanist forms of reading that have come under increasing attack intellectually and politically.