Conference Agenda

Session Overview
Session: AI, Ethics, and the University
Time: Wednesday, 30/Oct/2024, 1:00pm - 4:30pm
Location: Discovery Room 1
50 attendees

Presentations

AI, Ethics, and the University

Sarah Florini, Alexander Halavais, Jaime Kirtz, Nicholas Proferes, Michael Simeone, Shawn Walker

Arizona State University, United States of America

Introduction

Like other organizations, universities are actively investigating the ways in which generative AI and large language models can be integrated into their work. Initial concern over plagiarism and cheating has been joined by opportunities to personalize learning, to automate administrative and instructional processes, and, perhaps most importantly, to help individuals and organizations make use of these new technologies in ethical ways. Many universities, including that of the facilitators, are seeking to rapidly adopt and proliferate these nascent technologies, often in partnership with existing and emerging commercial providers. Earlier this year, Arizona State University entered into an agreement with OpenAI for a site license for ChatGPT Enterprise, and is actively integrating it into instruction, research, and administration.

However, there are significant potential pitfalls in this new gold rush. For example, universities may fail to prioritize the safety and privacy of users (including those in vulnerable positions), or to consider the potential dangers or deleterious effects of experimenting with these approaches. Universities must contend with the complex and troublesome political economy of these tools, in addition to their environmental consequences. And universities must consider how adoption of these technologies creates ontological problems in terms of regimes of truth and expertise. Given that the ways universities engage with these new technologies are likely to act as a template for wider adoption, getting it right in these contexts is important.

We are at an inflection point. There is a limited window during which technology scholars can shape the deployment of these tools before they become obdurate (Pfaffenberger, 1992, p. 498). During a period when there are more questions than answers about how the use of these technologies affects legal structures, government action, and the structure and function of industries and knowledge work, there is a desperate need for scholars of technology to contextualize these changes against a broader history of technology adoption, to weigh the ethical challenges they present, and to act as a counterweight to calls to, once again, move fast and break things. To help build a network of scholars interested in shaping this inflection point, we propose a half-day preconference on AI, Ethics, and the University at AoIR.

Attendees & Organization

Many AoIR attendees are likely already involved in how AI is being used at their own universities, or wish to play a more active part in that work. Our aim is to gather these voices to share experiences, stances, and aims. By the end of the preconference we hope to have established a set of shared core questions we should be addressing as scholars and public intellectuals, as well as a way forward for establishing frameworks for adoption, appropriate restrictions on data collection and use, guidelines for use within the university, and ways universities may leverage their social position to shape how publics, governments, and industry use AI.

Attendance is open to all AoIR delegates. We will contact registrants ahead of the workshop and ask them to provide answers to a short set of questions, along with a brief position statement. The facilitators will then use this initial information to organize a set of guided conversations.

We anticipate discussion points may include:

* In what ways may stakeholders in AI-mediated contexts be better informed about the ways in which their creative efforts can be used and misused by generative AI systems?

* How do we appropriately indicate our use of new AI tools in our own work as faculty, students, or administrators in ways that are easily discoverable?

* How might those stakeholders be better represented in decisions related to the adoption of AI systems?

* When should students or faculty be able to choose to use AI tools, and what are the conditions under which they may choose not to use these tools?

* If contracting with commercial suppliers of AI, to what degree can and should we insist on elements of transparency, portability, intellectual property, privacy, and control?

* What role should universities play in promoting non-commercial alternatives to various forms of artificial intelligence?

* How best might we explore the possibilities of new AI technologies within constrained spaces before adopting them at scale?

* Should universities play an important role in modeling ethical adoption and non-adoption of AI tools, and how do we better document and communicate these processes and their value to industry and to policymakers?

* How could universities take a leadership role in evaluating AI systems for more than simply perceived performance? What non-performance-related success measures should be taken into account when evaluating AI systems?

AoIR provides an ideally situated space in which to share these efforts and engage in coordination with global partners. Our aim is to close the workshop with a roadmap to move toward a collective or consensus statement that may be shared more widely.

Work Cited

Pfaffenberger, B. (1992). Social anthropology of technology. Annual Review of Anthropology, 21(1), 491–516.


