Unveiling the Ideological Understandings of the Future in the Geospatial Industry
Helena Atteneder1, Joan Ramon Rodriguez-Amat2
1Universität Tübingen, Germany; 2Sheffield Hallam University, United Kingdom
This research critically examines representations of the future within the geospatial industry, focusing on the Environmental Systems Research Institute (ESRI) blog as a primary source. Geographic Information Systems (GIS) technology plays a pervasive role in modern society, influencing various spheres and demanding scrutiny of its envisioned future. Utilizing a combination of Latent Dirichlet Allocation (LDA) topic modeling and critical discourse analysis (CDA), the study aims to reveal the ideological principles underpinning ESRI's depictions of the geospatial industry's future and assess their societal impact.
The analysis uncovers two key findings. First, the ESRI blog predominantly embraces technological determinism, framing GIS advancements as inevitable solutions to societal challenges without adequately addressing potential risks such as privacy concerns and social inequalities. Second, the blog exhibits a one-sided narrative, lacking critical engagement with the drawbacks of geospatial technology. This uniformly optimistic portrayal risks hindering a comprehensive understanding of the implications of GIS.
This research contributes to geomedia studies and internet research by unveiling ideological content within industry representations. It emphasizes the importance of open, critical discussions about the benefits, limitations, and societal implications of emerging technologies. By promoting responsible and ethical dialogue, the study aims to foster a nuanced understanding of GIS technology's role in shaping our collective future, ensuring a balanced approach to its societal impact.
Trust Issues and Responsibilities: Social Imaginaries, Risk, and User Labour in Digital Banking Apps
Yuening Li, Aphra Kerr
National University of Ireland, Maynooth, Ireland
This paper draws upon conceptual frameworks of platformisation (van Dijck, Poell, and de Waal, 2018), media convergence (Jensen, 2022), trust in digital banking (Mezei and Verteș-Olteanu, 2020; van Esterik-Plasmeijer and van Raaij, 2017), and social imaginaries (James, 2019; Mansell, 2012; Gillespie, 2018). It views digital banking apps as platforms that enable personalised interactions (Poell, Nieborg, and van Dijck, 2019) and aims to investigate the datafication (van Dijck, 2014; Sadowski, 2019) and platformisation of banking. This approach underscores the transformation of service dynamics and the challenges brought by digital banking concerning public accessibility and social inclusion (Swartz, 2020). We ask: a) What are the dominant imaginaries of payment reflected by contemporary financial services? and b) How do the design and affordances of digital payment services impact trust, responsibility, and user labour?
This paper employs a modified walkthrough method (Light, Burgess, and Duguay, 2018) including detailed content analysis of the Terms and Conditions (T&Cs) documents required for initial access to seven digital banking apps in Ireland. The sampled banking apps include Bank of Ireland (BOI), N26, An Post Money, Revolut IE, Chase UK, Starling Bank UK, and Klarna. The modified walkthroughs highlight a significant convergence between the finance and media industries. Our analysis identified three dominant social imaginaries of payment leading to different designs for digital banking apps: a) the Institutional Imaginary, b) the Transactional Imaginary, and c) the Digital Imaginary.
Betting on (Un)certain Futures: Sociotechnical Imaginaries of AI and Varieties of Techno-developmentalism in Asia
Hiu-Fung Chung
University of Toronto, Canada
The proliferation of generative artificial intelligence (AI) has prompted the development of comprehensive AI developmental and governance frameworks globally. Yet existing literature on AI innovation in non-Western societies often overlooks economically advanced but geographically non-dominant societies, focusing instead on large nation-states such as China or on developing countries in the Global South such as South Africa. This paper examines the variegated sociotechnical imaginaries of AI in three Asian developmental societies - Singapore, Hong Kong, and Taiwan - addressing two research questions: What are the desired forms of AI development and governance in small, advanced economies? How do these desired forms vary according to the historical, institutional, and geopolitical contexts of these societies?
Through discourse analysis of policy documents from the early 2010s to 2024, the paper identifies three imaginaries of techno-developmentalism: Singapore's cybernetic pragmaticism legitimizing its neoliberal authoritarian rule, Hong Kong's techno-entrepreneurship refashioning financial capitalism, and Taiwan's defensive survival modality against internal socio-economic instability and external threats posed by superpower rivalry. Decision-makers in these societies must establish AI developmental frameworks capable of allocating resources, coordinating actors, coupling strategically with the global tech economy, and managing the uncertainties of AI-centric socio-economic reform.
By offering comparative case studies of these Asian societies, this paper contributes to understanding the heterogeneous narratives and practices of AI innovation, moving beyond simplistic narratives confined to the Global North-South binary.
Spotlight on Deepfakes: Mapping Research and Regulatory Responses
Alena Birrer, Natascha Just
University of Zurich, Switzerland
The advent of deepfakes has raised widespread concerns among researchers, policymakers, and the public. However, many of these concerns stem from alarmism rather than well-founded evidence. While research has begun to discuss the potential harms of deepfakes and whether existing law is sufficient to counteract them, there is a lack of consolidated knowledge regarding the empirical evidence supporting these concerns as well as the specific regulatory measures developed in response. To bridge these gaps, our methodological approach is two-fold: (1) a systematic literature review that consolidates what is currently empirically known about deepfakes, and (2) a qualitative content analysis of the evolving regulatory landscape. Together, these offer a more comprehensive understanding of the deepfake phenomenon and provide directions for future research and policymaking. The findings highlight gaps in our knowledge of deepfakes, making it difficult to assess the appropriateness of and need for regulatory action. While deepfake technology may not introduce entirely new and unique regulatory problems, it can amplify existing ones such as the spread of non-consensual pornography and disinformation. Effective oversight and enforcement of existing rules, along with careful consideration of required adjustments, will therefore be crucial. The dynamic nature of deepfake technology also calls for adaptive policy approaches that mitigate harm while protecting individual rights and addressing larger societal issues. Altogether, this highlights the importance of further empirical research to navigate and comprehend the regulatory challenges raised by deepfakes and to develop evidence-based countermeasures.