FOCUSING ON VIRTUAL GROUPS: A METHOD FOR FOCUS GROUP INTERVIEWS IN XR/VR GROUP SETTINGS
Oskar Tadeusz Milik1, Dayeoun Jang1, Maxwell Foxman2, Brian Klebig3, David Beyea4, Alex Leith5, Rabindra Ratan1
1Michigan State University, United States of America; 2University of Oregon; 3Bethany Lutheran College; 4University of Wisconsin - Whitewater; 5Southern Illinois University Edwardsville
The art of Internet Research is always developing, while remaining firmly rooted in the theoretical frameworks of the past. As technologies have evolved, new ways of interacting have become pervasive, and with them a need for different means of analyzing such interactions. Throughout this change, internet researchers have discussed and developed effective methodologies to suit evolving research needs. Significant work has been done on digital ethnographies, digital interviews, and surveys using online systems. One area not as widely discussed and developed is the use of online focus groups in synchronous video forums such as Zoom or in Extended Reality (XR) spaces such as ENGAGE or VRChat. This paper describes our approach to methodological systems for better understanding digital interaction through focus group interviews within such media.
Specifically, we present the theoretical underpinnings and design of an upcoming focus group study we are conducting within a virtual world setting. We focus on questions of how to incorporate different types of virtual avatars for the participants and how to collect observable data in the virtual space, including language and avatar behavior. Finally, we describe how such data can be used to understand questions of communication, power, individual agency, and identity that occur in these group settings.
Framing Mechanism as Method: A Critical Evaluation of Design Thinking’s Purported Universality
Maggie Rose Mustaklem
University of Oxford, United Kingdom
Over the past twenty years, design thinking, like the technology industry, has expanded from the Bay Area to develop a global footprint. Enmeshed with Big Tech’s ascendancy, design thinking is expanding from its role in industry into the public sector and now higher education. Despite its constantly expanding and increasingly varied global reach, there is relatively little critical evaluation of how design thinking’s implementation affects local communities and environments. As design thinking expands from corporate clients into the public sector, it faces understudied intersectional pressures. This paper complicates design thinking’s purported universality, drawing upon theoretical frameworks applied to critically evaluate fairness and diversity in the technology industry (Benjamin, 2019; Noble, 2018). I evaluate two case studies in which design thinking was applied to Black and low-income minority users. Treating design thinking as an applied mechanism, situated and embedded in local environments, makes apparent that its methods failed to universally benefit users. Calling its purported universality into question, this paper argues that the abstract framing of design thinking contributes to a poor understanding of the theoretical approach underlying its mechanisms.
THE POLITICS OF MACHINE-LEARNING EVALUATION: FROM LAB TO INDUSTRY
Anna Schjøtt Hansen, Dieuwertje Maria Rebecca Luitse
University of Amsterdam, The Netherlands
Artificial Intelligence (AI) applications are today implemented across various societal sectors, ranging from health care and security to shaping the media environments we encounter online. In the last decade there has been a significant shift in the field of AI, as the development of AI applications is no longer confined to the laboratory but is widely used and tested in and on societies. With this rapid industrialisation of AI, there is an increased need to understand the implications of both the development and deployment of these systems. While critical scholars have started to scrutinize different components of AI development, the study of evaluative practices in AI has received limited attention. A few studies have highlighted the importance of benchmarking practices and how these methods become integral to establishing the validity and success of a system, which in turn enables widespread application. This paper presents a research agenda that outlines how to study machine-learning evaluation practices as they move beyond the laboratory into industry applications and standardised validation practices. Based on emerging research and illustrative empirical examples from recent fieldwork, we argue for studying machine-learning evaluation as a sociotechnical and political phenomenon that requires multi-level scrutiny. We therefore provide three analytical entry points for future research that address the political dynamics of (1) standardised validation infrastructures, (2) the circulation of evaluation methods, and (3) the situated enactment of evaluation in practice.
Jump to recipe? Context and portability in quali-quantitative approaches to online misinformation
Robert Topinka, Scott Rodgers
Birkbeck, University of London, United Kingdom
Misinformation is widely seen as a fundamental flaw of social media, undermining public culture and democracy. Valuable responses to misinformation, such as fact-checking, content moderation, or Big Data-driven monitoring, often overlook what attracts users to misinformation and how everyday habits, and their structuring via social media, contribute to its circulation. There is a growing consensus that blended quantitative and qualitative, or “quali-quantitative” (Venturini and Latour, 2010), approaches are the answer. Typically beginning with digitally generated datasets, quali-quantitative analysis moves nimbly between computational and close reading, devoting special attention to the social and technical workings of digital media as both methodological tools and objects of study. Quali-quantitative approaches and methods are nevertheless challenging to describe to broader research communities. This paper reflects on a linked pair of intensive methods workshops focused on experimenting with different quali-quantitative methods for researching misinformation on social media. Our argument is that methodological recipes need ways to account for research context as well as the portability of methods. Users of such online advice might understandably feel an urge to figuratively press the “jump to recipe” button; that is, to advance more quickly to a desired solution. However, as in gastronomy, methodological recipes can take many forms and be interpreted in many ways. We argue that methodological recipes for quali-quantitative approaches need to devote as much priority to inspiring and informing researchers (with contextual details around cases and concepts, and enough flexibility to be portably trialled across other settings or platforms) as to describing a highly specific or tightly bound set of steps.