Regulating the Future of Work: Addressing the Challenges and Opportunities of AI and Emerging Technologies on Labour Rights and Access to Justice
Najib Kalungi Kibirige
Nkumba University, Uganda
The rapid advancement of Artificial Intelligence (AI) and other emerging technologies is transforming the world of work, presenting both opportunities and challenges for labour rights and access to justice. This paper examines the necessary legal and regulatory frameworks to address these challenges and opportunities, with a focus on emerging regulatory trends and the application and enforcement of labour rights.
The paper argues that the current regulatory framework is inadequate to address the complexities of AI and emerging technologies in the workplace. It highlights the need for a comprehensive and nuanced approach that balances the benefits of technological innovation with the protection of labour rights and access to justice.
The paper draws on international labour standards, comparative labour law, and literature on AI and emerging technologies to propose a framework for regulating the future of work. It emphasizes the importance of:
1. Human-centred approach: Prioritizing human well-being, dignity, and rights in the design and deployment of AI and emerging technologies.
2. Regulatory agility: Encouraging regulatory frameworks that are adaptable, flexible, and responsive to the rapid evolution of AI and emerging technologies.
3. Social dialogue: Fostering collaboration and dialogue among governments, employers, workers, and civil society to ensure that labour rights and access to justice are protected.
Literature Review:
- Bostrom, N., & Yudkowsky, E. (2014). The ethics of artificial intelligence. In The Cambridge Handbook of Artificial Intelligence (pp. 316-334). Cambridge University Press.
- Deakin, S., & Wilkinson, F. (2005). The law of the labour market: Industrialization, employment, and legal evolution. Oxford University Press.
Recommendations:
1. Governments, employers, and workers should engage in social dialogue to develop and implement policies that protect labour rights and access to justice in the face of AI and emerging technologies.
2. Regulatory frameworks should prioritize a human-centred approach, ensuring that AI and emerging technologies are designed and deployed in ways that promote human well-being, dignity, and rights.
Conclusion:
The future of work is being shaped by AI and emerging technologies, presenting both opportunities and challenges for labour rights and access to justice. A comprehensive and nuanced regulatory framework is necessary to address these challenges and opportunities. By prioritizing a human-centred approach, regulatory agility, social dialogue, and education and training, we can develop effective strategies to mitigate risks while safeguarding labour rights.
The Last Mile of AI Value Chains: Labor Standards for Data Work
Ritvik Gupta, Priyam Vadaliya
Aapti Institute
This paper focuses on bringing the norms of decent work to the growing “data work” profession by looking to labor for much-needed direction.
The latest developments in artificial intelligence (AI) have presented the world with both possibilities and risks. Though governments, civil society, and the technology sector attempt to mitigate AI’s worst effects, one aspect remains painfully neglected: the plight of data workers. A substantial, often invisible workforce, operating under precarious conditions, serves as the unrecognized engine fueling AI dataset production, tirelessly creating high-quality datasets to meet tech companies’ data needs. Such labor helps train and hone AI technologies, making data workers critical to AI value chains.
Much of AI’s data requirements are met by people in the Global South. Here, workers endure painful challenges like low pay and substantial, unpaid “waiting periods.” Despite evidence of the exploitation inherent in data work, safeguards remain scarce, and avenues for workers to confront capital are limited. Existing work has shown that corporations can benefit from the perspectives of their workers, underscoring the necessity of worker-led frameworks to secure their dignity and welfare within the larger AI production apparatus.
Through this study in partnership with a data firm, Karya, we met data workers, businesses, and researchers to understand workers’ conditions and challenges, and the relationships between companies and their workforces. This work helped form a code of conduct for data work, consolidating key considerations for governing firm-level practices and policies essential to workers’ safety and well-being.
We used in-depth interviews, focus group discussions, and quantitative surveys to understand working conditions, problems in company processes, and pathways to improvement from data workers across diverse locations in India. Expert interviews with researchers and businesses helped explore the dominant logics driving firms’ governance, as well as the state of the market for datasets. These discussions were augmented by the review of secondary sources like prior scholarly research, company publications on business practices, and various existing charters and standards on business conduct and working conditions.
This paper presents two tools for pursuing decent work within data labor ecosystems. First, through direct engagement with workers, it foregrounds the precarities and promises inherent to data work, culminating in governance standards that speak to considerations like fair pay, working arrangements, and welfare. Second, and most crucially, it makes a case for a novel methodological framework for co-governing data work, building feedback loops for a sustainable and dignified data production economy.
The Effectiveness of Algorithmic Information Obligations in the Labor Relationship: Lessons Learnt from the Spanish Experience
Raquel Serrano Olivares1, Anna Ginès i Fabrellas2
1Barcelona University; 2Universitat Ramon Llull, Esade
The EU’s regulation regarding AI systems and algorithms for automated decision-making is based on the notion of transparency.
To ensure fair and transparent processing of personal data, the GDPR grants individuals the right to obtain meaningful information about the logic, the significance and the consequences of fully automated decisions.
The AI Act and the Platform Directive have overcome some of the regulatory deficiencies identified by the literature by extending this information right to all forms of automated decisions, including those with human intervention. Furthermore, they specifically recognize the right to obtain an explanation and grant information rights to workers’ representatives. The AI Act ensures workers can obtain meaningful explanations of AI’s role in decision-making and requires employers to inform workers’ representatives before deploying AI in the workplace. It also mandates that employers ensure workers achieve sufficient AI literacy.
Spain introduced similar obligations in 2019, requiring employers to inform workers’ representatives about the “parameters, rules, and instructions” of AI systems used to make decisions affecting employment conditions, access, and maintenance of employment.
In this context, the paper examines the effectiveness of algorithmic information rights in current regulations. Using Spain as a case study, it draws on desk research and in-depth interviews with workers’ representatives and union leaders to assess company compliance, including digital platforms, with their obligation to disclose AI-driven decision-making processes.
The paper intends to contribute to the literature on algorithmic management and transparency, by identifying elements that enhance compliance with AI information obligations in the workplace. The paper highlights that transparency requires not only algorithmic explainability but also access to personal data used for training AI systems and for algorithmic management.
The main finding of the paper is that broader information rights in the AI Act and Platform Directive, which cover all AI-driven workplace decisions, can improve compliance by also including automated decisions with human intervention and clarifying the scope of the obligation. However, the AI Act’s definition of AI may hinder compliance due to uncertainty about which systems are included. Similarly, the lack of clarity on the required information that employers must disclose weakens effectiveness. The Platform Directive addresses this limitation by specifying the categories of data used, the actions that are monitored or evaluated, the key decision parameters, and their relative importance in the decision process. Nevertheless, it is a partial regulation, as it applies solely to platform work and does not cover all forms of algorithmic management.
AI in Human Resource Management: Enthusiasm without Empiricism?
Janine Berg1, Hannah Johnston2
1ILO, Switzerland; 2York University, Toronto, Canada
Human Resource Management emerged in the 1950s as a distinct field of study and practice concerned with the management of people in organizations. At its inception, human resource management posited that the effective management of workers was essential for business success. Conceptually, human resources were re-envisioned as an organizational asset rather than an expense. Administrative functions like recruitment, job evaluation, and compensation had formerly been viewed as second-order business considerations; with this shift, however, these tasks took on new strategic importance.
But along with this shift came a recognition that more rigorous analysis of HR processes was needed to improve decision quality. To meet this need, digitally enabled workplace and work-related tools that capture, collect, and analyze worker behavior and performance – commonly known as “people analytics” – have been seen as a newfound source of evidence for Human Resource practitioners. With more and higher-quality data and increases in computing power, people analytics is being propelled into the world of prediction, and Human Resource professionals, once responsible for executing a wide range of functions, are relinquishing these responsibilities to algorithms and AI.
This paper analyses the growing use of AI in performing HR functions throughout the work lifecycle, beginning with recruitment, but then covering different functions once a worker is on the job, including the setting of compensation, the organization of work and schedules, performance management, health and safety and training.
Drawing on case studies, each of the purpose-built systems for the discrete HR functions is assessed in terms of the three inter-related parameters of AI systems: (1) the system objective, (2) the data it is built on and relies on, and (3) how it is programmed.
This study contributes to the growing literature on algorithmic management but adds value by providing an analytic framework that can systematically assess the potential benefits, as well as the pitfalls, of relying on such systems. As such, it provides a useful framework that can help managers, HR professionals, workers, and workers’ organizations better engage and negotiate on the design (or abandonment) of such systems, with a view to improving both job quality and performance.