Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only the sessions held on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).

Please note that all times are shown in the time zone of the conference.

Session Overview
Session: Digital Governance 03: Navigating the Regulatory Landscape: AI, Digital Policy, and the EU's Co-Regulatory Framework
Time: Tuesday, 03/Sept/2024, 9:30am - 11:00am


Presentations

Technology Sovereignty And AI Regulation In The EU: Regulatory Strategy And The Paradox Of Choice

Marton Varju

HUN-REN Centre for Social Sciences, Hungary

As a specific manifestation of the European Union’s current broader, geopolitically focussed political agenda, technology sovereignty has become a central notion and a supposed objective of EU technology policy and regulation. It has also provided an important frame for the impending regulation of technologies associated with the domain of Artificial Intelligence (AI). In the EU policy context, technology sovereignty is construed as a multifaceted target that involves offering a balanced response to the challenges of producing and supplying crucial technologies in the current geoeconomic and geopolitical environment. It also incorporates a distinct strategy for the EU to realise its ambitions: the adoption of declaredly value- and rights-driven, high-standard technology regulation that determines the conditions of access to the European technology market. However, the EU’s policy target does not appear to have the capacity to accommodate the fact that achieving technology sovereignty involves dilemmas rather than straightforward choices, and that the choices made are likely to lead to paradoxical outcomes. As a result of this limitation, the technology sovereignty tools and strategies introduced may not be able to deliver on their promised aims and may fall short of the expectations of the relevant actors. This is a particular possibility in the EU context where, despite the centralised determination of general policy directions, the divergent interests, economic positions, and industrial and technological visions of individual Member States unavoidably affect, and may undermine, the implementation of the proposed policy response in concrete decision-making situations. When placed in the ‘blender’ of EU decision-making, even proposals for detailed and concrete technology regulation can be reoriented and distanced from their original, ambitious objectives.



AI Reasoning and Attribution of Liability in the Light of the New EU AI Act

Tomasz Braun

Lazarski University, Poland

One of many facets of the technological progress currently observed is machines’ ability to interpret data. This feature of Artificial Intelligence evokes a multitude of questions, and it does so at an unprecedented pace. None of these questions is solely of a technological, legal, ethical or societal character (Navas, 2020; Ebers, 2020). On the contrary, they are interdependent and have hardly been answered so far (Königs, 2022). The new EU AI Act seems, to a large extent, to take this into account.

Interpretation of the data fed into these systems allows machines to draw conclusions and, debatably for some, to reason, understood as recognising the meaning of the consequences of processing the data fed into the system and attributing rationales to those consequences (Abott, 2018; Heinrichs, 2020). Here the problem arises: if AI is truly to be considered an intelligence, then its reasoning creates a basis for decision-making, i.e. making choices. Machines that both possess knowledge of the consequences and are granted the freedom to select are indeed taking decisions (Chagal-Feferkorn, 2018).

Among these decisions, apart from the easy ones, there are also those that humans consider hard, in other words, those carrying ethical dilemmas. Usually this becomes complicated once it must be decided against which catalogue of values a given dilemma is to be judged (Pepito, 2019; Claes & Herbosch, 2023). Therefore, AI decision-making brings a need to explain and rationalise its choices (Borgetti, 2019).

This opens a discussion of the necessary question of the (limits of the) autonomy of AI, of the protocols to be deployed in case of AI mistakes, and of the meaningful consequences of such mistakes (Barfield, 2018; Janal, 2020; Banteka, 2021). Until (quasi-?)personhood is legally attributed to AI, and hence its autonomy consented to, the accountability of humans for AI mistakes will need to be determined (Wagner, 2019; Novelli, 2023; Dijk, 2020). For the moment, the EU legislators have taken the liberty to wait.

This leaves the timely question of whom to blame and where liability lies when AI is wrong or acts wrongfully (Hughes & Williamson, 2019). Various accountability concepts and liability regimes have been discussed (Karner, 2019; Rachum-Twaig, 2020; Panezzi, 2021). So far, they have been shaped more by the legal systems of the Member States and the worldviews of their authors than by a unified EU approach.



The European Union Artificial Intelligence Act and Beyond: Strategies for Further Artificial Intelligence Regulation in Europe

Nicky Gillibrand1, Chris Draper2

1University College Dublin, Ireland; 2Indiana University, United States

The risks associated with widespread AI usage are becoming increasingly visible, with weekly headlines outlining the dangers of its use. As a result, governments in Europe and beyond are beginning to take note. Thus far, the most significant contribution to these efforts is arguably the European Union Artificial Intelligence Act (EU AI Act). Although the EU AI Act has made significant progress in addressing the threats of AI use, such as systemic bias, lapses in transparency and accountability, and intellectual property infringement and theft, certain limitations remain in its approach. For instance, the EU AI Act singles out AI use cases considered high risk without clearly defining the characteristics or rationale that make such use cases high risk. Under this approach, vital regulatory iterations may prove challenging in the absence of a clarified underlying strategy.

This paper posits that any regulation must accept that AI tools are fundamentally dangerous. Whilst acknowledging that appropriate and well-regulated AI holds the potential to be a cornerstone of innovation across sectors, the core assumption is that AI is a fundamentally dangerous tool. AI therefore requires regulation that learns from the strategies pursued with other fundamentally dangerous technologies and products in order to reach its true potential in a safe and accountable manner. Regulatory strategies in these areas have spanned both carrot and stick approaches. For instance, the US commercial space industry offers indemnification from costly accidents if all applicable regulations are followed, incentivising regulatory compliance, while lawnmowers without a safety deck may no longer be sold, an effective deterrent that denies access to revenue. Building upon the foundations provided by the EU AI Act, this paper explores these regulatory approaches, amongst others, and their potential applicability to popular and emerging AI tools, before examining the appropriate sites and agencies for enacting such regulation within the EU in order to best ensure compliance and enforcement.


