Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only sessions on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).

Session Overview
Session
Parallel Session 4.6: Algorithmic Transparency and Accountability in the Workplace: Legal and Regulatory Approaches
Time:
Thursday, 03/July/2025:
9:00am - 10:30am


Presentations

Regulation of the Legal Framework for AI Systems in the Field of Employment and Management of Workers Considered to Present a High Degree of Risk

Danuti Top

Association for the Study of Professional Labor Relations, Romania

The use of artificial intelligence (AI) by employers in employment relations to simplify and automate certain processes (e.g., candidate selection, employee performance monitoring) is not new. Given the tendency of certain AI systems to perpetuate discriminatory or prejudiced practices, it has proven necessary to establish a higher level of protection by introducing complex obligations for entities that develop or use AI systems in their activity.

Regulation 2024/1689, establishing harmonised rules on artificial intelligence, adopts a risk-based approach, classifying AI systems into four risk categories: unacceptable, high, limited and minimal.

Given the significant impact they can have on careers, livelihoods and workers' rights, some AI systems in the field of employment and workforce management are considered to present a high level of risk. These include systems designed to recruit or select individuals, in particular to place specifically targeted job advertisements, to analyse and filter job applications and to assess candidates, as well as systems designed to make decisions affecting the terms of employment relationships, to promote or terminate employment-related contractual relationships, to allocate tasks based on individual behaviour or personal traits or characteristics, or to monitor and evaluate the performance and behaviour of individuals in such relationships.

By way of exception, these systems will not be considered to present a high risk if they do not present a significant risk of harm to the health, safety or fundamental rights of natural persons (e.g. a system that performs a limited procedural task, such as detecting duplicates in a large number of applications). Even in the exempted situations, these systems will always be considered to present a high risk if they create profiles of natural persons.

The comparative method will be used to analyse AI systems and determine the risk categories into which they fall, while the sociological method will be used to analyse the appropriate training of employees and the effects of designating persons responsible for human supervision of the systems.

Since the regulations are very recent, there is no adequate specialized literature, and the number of published studies and articles is small (e.g. Jacek Woźniak, Workplace Monitoring and Technology, Routledge, 2023; R. Pettinelli, Diritto del lavoro e intelligenza artificiale, Giuffrè, Milan, 2024; Ana Garcia, "Impactos de la gestión laboral algorítmica en las relaciones colectivas de trabajo", Revista Internacional y Comparada de Relaciones Laborales y Derecho del Empleo, no. 1/2024).



When Algorithms Rule the Workplace: Can Criminal Law Protect Workers' Rights?

Kamila Naumowicz1, Joanna Untershuetz2

1University of Warmia and Mazury in Olsztyn, Poland; 2University of Business and Administration in Gdynia, Poland

The integration of Artificial Intelligence (AI) into workplace management, particularly through algorithmic management systems, presents significant challenges concerning workers' rights. These systems, which oversee task allocation, performance evaluation, and disciplinary actions, often operate with little to no human intervention, leading to concerns over transparency, fairness, accountability, and data privacy. As AI-driven management becomes more pervasive, there is an increasing need to explore the role of criminal law in addressing potential violations of labour rights.

This paper investigates whether criminal sanctions can serve as effective, proportionate, and dissuasive measures against breaches of workers' rights within AI-managed workplaces. The research is interdisciplinary, combining labour law, organization theory, and criminal law to examine the legal implications of algorithmic management. The central research questions are: (1) How do AI-driven management systems impact workers' rights, particularly in relation to privacy, fairness, and accountability? (2) Can criminal law be effectively applied to address violations resulting from AI-driven decision-making? (3) Who should bear criminal liability when AI systems make decisions that infringe on labour rights?

Methodologically, this study employs legal doctrinal analysis to assess relevant European Union (EU) labour directives and case law from the Court of Justice of the European Union (CJEU). Comparative legal analysis is also used to examine variations in criminal liability frameworks across different Member States. Additionally, the study incorporates insights from organization theory to explore the broader implications of AI-driven decision-making in workplaces.

This paper contributes to the literature by highlighting the regulatory gaps in existing labour laws concerning AI-driven management systems and by critically assessing the feasibility of applying criminal sanctions to labour law violations. It examines different models of liability, including holding AI developers, corporate entities, or individual managers accountable for harmful outcomes. Furthermore, the study explores the controversial notion of "electronic personhood" as a potential legal framework for assigning liability to autonomous AI systems.

Findings indicate that while criminal sanctions can play a role in deterring labour law violations, significant legal and ethical challenges remain. The diversity of criminal liability doctrines across EU Member States complicates the establishment of a unified enforcement mechanism. Additionally, the evolving nature of AI poses difficulties in attributing responsibility, particularly when AI operates autonomously. While criminal law may serve as a deterrent, future legal developments must reconcile the demands of technological innovation with the fundamental principles of justice and accountability in labour relations.



Algorithmic Management: Worker Agency and Voice in Sweden

Carin Håkansta, Ruben Lind, Pille Strauss-Raats

Karolinska institutet, Sweden

Introduction and contribution

Algorithmic management (AM) has implications for working conditions, worker rights and co-determination. Despite much research on worker agency and voice related to AM in platform work, and the incremental spread of AM to non-platform work, there is a lack of literature focusing on worker voice in response to AM in non-platform work. The aim of this study is to explore how digital technologies and AM affect working conditions and the prospects for workplace co-determination and democracy in non-platform work.

Drawing on interview and document data, we asked the following questions:

• What is workers' response to AM-related issues in Sweden?

• What are the implications of AM for trade unions, solidarity and policy in Sweden?

Methodology

We interviewed 16 employees in various companies from three sectors where AM is known to be relatively prevalent (transport, warehousing, retail) and five full-time trade union employees from the Swedish Trade Union Confederation (LO), the Swedish Commercial Employees' Union (CEU) and the Swedish Transport Workers' Union (TWU). All but one of the interviewed workers were also active as trade union representatives for CEU or TWU.

Findings

We found notable differences between the three sectors. The weakest response was in warehousing, despite unsustainable workloads, arbitrary structures of discipline, and conflicts on the floor. The transport sector displayed comparatively more agency, but also resignation, as subcontracting blurred responsibilities. Workers in the retail sector were more positive toward AM; when issues arose, union and/or OSH representatives got involved, urging employers to take more active measures. However, workers with insecure employment contracts were more reluctant to act, fearing they would not be granted much-needed extra shifts or other benefits.

Despite strong institutions and a legacy of union technology optimism, AM posed challenges to the unions related to fragmentation, individualization, deskilling and decline in professional identity, all of which undermine a unified worker voice, especially when affiliation rates are in decline. Swedish unions respond by defending workers' right to co-determination and health and safety, but struggle in the complicated context of opaque and complex technology (the "black box"). Their actions include participating in EU discussions, conducting fact-finding exercises to learn about AM practices and their effects among members, and developing guidance materials to support local unions and members in the process.



Privacy Statement · Conference: RDW 2025
Conference Software: ConfTool Pro 2.6.154
© 2001–2025 by Dr. H. Weinreich, Hamburg, Germany