Conference Agenda

Overview and details of the sessions of this conference.

 
Session Overview
Session: PaperSession-24: Automation
Time: Friday, 04/Oct/2019, 11:00am - 12:30pm
Session Chair: Gabriel Pereira
Location: P505 (cap. 42)

Presentations
11:00am - 11:20am

Notions of Evidence in AI for Healthcare: When Predictive Algorithms Meet Practice & Professions

Anne Henriksen, Anja Bechmann

Aarhus University, Denmark

With the continuous growth in Big Data, Artificial Intelligence (AI) is increasingly applied to unveil knowledge “hidden” in data and produce actionable predictions. This potential has been identified by public authorities who work on adopting AI to underpin key sectors in society, healthcare being one of them. In light of this development, we argue for the need to critically analyze and discuss the epistemic validity and reliability of AI’s probabilistic materializations of evidence: What is considered (‘good’) evidence in AI? How is that achieved? And when is a prediction, on the basis of adequate evidence, deemed reasonable to guide action?

Understanding evidence as that by which assertions are justified (Kim, 1988), this paper examines the evidential basis on which AI predictions are developed and deemed reasonable, and how this may affect human assessment and application of predictions in practice as different notions of evidence meet. By means of an ethnographic case study of the development of an AI system in the Danish health sector, we study how AI developers select datasets, label data, adjust weights, set thresholds, and interpret outcomes on the basis of various hypotheses and presumptions and in the context of a wider knowledge-making process. Our preliminary finding is that by rendering visible this social evidence process in AI, and not merely the evidence process inside the machine, the key users of the AI system in the case are provided with a better foundation for assessing and using (and trusting) predictions in dialogue with their own knowledge.



11:20am - 11:40am

Is Artificial Intelligence an Oxymoron? Key Questions in the Age of Data-Essentialism

Jakob Svensson

Malmö University, Sweden

We are on the brink of entering the dataverse, as data becomes the building block of our world. In this context, we have observed the rise of a new data orthodoxy that we suggest labelling ‘data-essentialism’. Data-essentialism assumes that data is the essence of everything and thus provides the ideological underpinnings for the belief in the possibility of creating Artificial Intelligence that would make humans obsolete. This concept allows us to discuss whether everything in the universe can be described in terms of data, whether data provides an objective picture of humans, and whether it can predict the future accurately. Discussing data-essentialism critically, we link it to a return of positivism: a trust in the objectivity of data and in the notion that predictions based on data correlations can be fully accurate. We end the article with a discussion of how these ideas have a history and roots in modernity.



11:40am - 12:00pm

Trust in the Music? Automated Music Discovery, Music Recommendation Systems & Algorithmic Culture

Sophie Olivia Freeman

University of Melbourne, Australia

In this paper I argue that music recommendation algorithms are a complex element of contemporary digital culture. We trust music streaming and recommender systems like Spotify to ‘set the mood’ for us, to soundtrack our private lives and activities, to recommend & discover for us. These systems purport to ‘know’ us (alongside the millions of other users), and as such we let them into our most intimate listening spaces and moments. We fetishise and share the datafication of our listening habits, reflected to us annually in Spotify’s “Your 2018 Wrapped” and every Monday in ‘Discover Weekly’, even daily in the “playlists made for you”. As the accuracy of these recommendations increases, so too does our trust in these systems. ‘Bad’ or inaccurate recommendations feel like a betrayal, giving us the sense that the algorithms don’t really know us at all. Users speak of ‘their’ algorithm, as if it belonged to them and were not part of a complex machine learning recommendation system.

This paper builds on research which critically examined the music recommendation system that powers Spotify and its many discovery features. The research explored the process through which Spotify automates discovery by incorporating established methods of music consumption, and demonstrated that music recommendation systems such as Spotify are emblematic of the politics of algorithmic culture.



12:00pm - 12:20pm

How Does Crystal Know? Folk Theories and Trust in Predictive Algorithms That Assess Individual Personality and Communication Preferences

Tony Liao

University of Cincinnati, United States of America

In recent years, there has been a rise in predictive algorithms that focus on individual preferences and psychometric assessments. The idea is that an individual's social media presence may give off unconscious cues or indicators of that person's personality. While there is a growing body of research into people's reactions, perceptions, and folk theories of how algorithms work, there is also a growing need for research into these hyper-personal algorithms and profiles. This study focuses on a company called CrystalKnows, which purports to have the largest database of personality profiles in the world, many of which are generated without an individual's explicit consent. Through qualitative interviews (n=31) with people after they were presented with their own profile, this study explores how people perceive the profiles, where they believe the information comes from, and in what contexts they would be comfortable with their profile being used. Crystal profiles also contain predictions about how people will communicate and potentially work together in teams with people of other personality dispositions, which raises concerns about inaccurate assessments or discrimination based on these profiles. The findings from this study, and how people rationalize these algorithms, not only build on our understanding of algorithmic perception and folk theories but also have important practical implications for trust in these systems and the continued deployment of hyper-personal predictive algorithms.



 