Technostress and Burnout in Daily Academic Life: An Empirical Investigation of Study-Related Stressors within the Study Demands and Resources Model
Annika Puhl, Ivonne Preusser
Technische Hochschule Köln, Germany
Understanding Technostress in Higher Education: Insights from an Online Survey Using the Study Demands and Resources Model
Relevance & Research Question Digitalization in higher education through learning platforms, collaboration tools, and AI applications creates new stressors that manifest as technostress and can increase the risk of burnout. Critical discrepancies arise between students and technologies (P-TEL), their social environment (PP), and study organization (PO). This study examines how multidimensional technostress affects burnout symptoms, whether general perceived stress mediates this relationship, and whether study-related self-efficacy serves as a personal resource.
Methods & Data The study is based on a quantitative online survey of students from German universities (N ≈ 210). Measures included technostress (Technostress-Misfit Scale with P-TEL, PP, and PO), general stress (PSQ), burnout (MBI-SS short version), and study-related self-efficacy (BSW-5-Rev). Analyses comprised reliability and factor analyses, multiple regressions, and moderated mediation analyses (PROCESS models 14 and 7), controlling for sociodemographic factors.
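Purely as an illustration of the analysis logic, the following Python sketch mimics a PROCESS-model-7-style moderated mediation (PO misfit predicting general stress predicting burnout, with study-related self-efficacy moderating the first path) using OLS regressions and a bootstrap of the index of moderated mediation. The data file and variable names are hypothetical, and the reported analyses used the PROCESS macro rather than this code.

```python
# Hypothetical sketch of a PROCESS-model-7-style moderated mediation:
# X = PO misfit (technostress), M = general stress (PSQ), Y = burnout (MBI-SS),
# W = study-related self-efficacy (BSW-5-Rev). All variable names are assumed.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("survey_data.csv")   # hypothetical file with mean-centered scale scores
rng = np.random.default_rng(42)
n_boot = 2000
indirect_at_mean_w, index_mod_med = [], []

for _ in range(n_boot):
    idx = rng.integers(0, len(df), len(df))
    boot = df.iloc[idx]
    # a-path: mediator regressed on X, W, and their interaction (the moderated path in model 7)
    a_fit = smf.ols("psq ~ po_misfit * self_efficacy", data=boot).fit()
    # b-path and direct effect: outcome regressed on mediator and X
    b_fit = smf.ols("burnout ~ psq + po_misfit", data=boot).fit()
    a1 = a_fit.params["po_misfit"]                    # a-path at the moderator's mean (W = 0)
    a3 = a_fit.params["po_misfit:self_efficacy"]      # moderation of the a-path
    b1 = b_fit.params["psq"]
    indirect_at_mean_w.append(a1 * b1)
    index_mod_med.append(a3 * b1)                     # index of moderated mediation

print("Indirect effect at mean self-efficacy:", np.mean(indirect_at_mean_w))
print("95% bootstrap CI for the index of moderated mediation:",
      np.percentile(index_mod_med, [2.5, 97.5]))
```

A model-14-style analysis would instead place the interaction on the second path (psq * self_efficacy in the outcome equation), leaving the rest of the bootstrap unchanged.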
Results Technostress correlates strongly and positively with burnout symptoms. The misfit between students and institutional structures, platforms, and deadlines (PO misfit) is the strongest trigger for burnout. General stress partially mediates this relationship, while study-related self-efficacy represents a strong protective factor but does not significantly moderate the effects. Exploratory findings indicate higher burdens among women, bachelor’s students, and individuals with lower technology skills.
Added Value The study integrates technostress as a specific demand dimension into the Study Demands-Resources model (the study-domain adaptation of the Job Demands-Resources model) and identifies organizational misalignments as central intervention points. It provides practical recommendations for universities: uniform digital platforms, transparent structures, and targeted promotion of self-efficacy.
Lesener, T., Pleiss, L. S., Gusy, B., & Wolter, C. (2020). The Study Demands-Resources Framework: An empirical introduction. International Journal of Environmental Research and Public Health, 17(14), 5183. https://doi.org/10.3390/ijerph17145183
Schaufeli, W. B., Martinez, I. M., Pinto, A. M., Salanova, M., & Bakker, A. B. (2002). Burnout and engagement in university students: A cross-national study. Journal of Cross-Cultural Psychology, 33(5), 464–481. https://doi.org/10.1177/0022022102033005003
Estimating Economic Preferences from Search Queries
Maximilian Althaus, Kevin Bauer, Bernd Skiera
Goethe University, Germany
Relevance & Research Question Economic preferences are central to understanding consumer decision making, yet established measurement approaches remain costly and difficult to scale. At the same time, consumers generate rich digital traces in their day-to-day online behavior. This project asks whether modern language models can infer core economic preferences directly from search query histories. Methods & Data We recruit approximately 800 participants through the Datapods Platform. Each participant links a Google account and shares up to twenty years of pseudonymized search queries. They then complete incentivized tasks and matched survey items that elicit six preference parameters: risk preferences, time discounting, altruism, trust, positive reciprocity, and negative reciprocity. Large Language Models receive each participant’s full query history and generate preference predictions, which are evaluated against experimentally measured benchmarks.
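As an illustration of the prediction step only, a minimal Python sketch is given below: it passes a participant's query history to a language model and asks for ratings on the six preference parameters. The model name, prompt wording, rating scale, and JSON output format are assumptions made for the sketch, not the project's actual pipeline.

```python
# Hypothetical sketch: eliciting preference predictions from a search-query history.
# Model name, prompt, and the 0-10 rating scale are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

PREFERENCES = ["risk_taking", "patience", "altruism", "trust",
               "positive_reciprocity", "negative_reciprocity"]

def predict_preferences(queries: list[str]) -> dict[str, float]:
    """Ask the model to rate six economic preferences on a 0-10 scale as JSON."""
    prompt = (
        "Below is a pseudonymized list of one person's search queries.\n"
        "Rate this person from 0 to 10 on each of the following economic "
        "preferences: " + ", ".join(PREFERENCES) + ".\n"
        "Answer with a JSON object mapping each preference to a number.\n\n"
        + "\n".join(queries)
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)
```

Predictions produced in this way would then be compared, parameter by parameter, with the incentivized, experimentally elicited benchmarks to assess predictive accuracy.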
Results The study is ongoing; initial trials confirm that the approach is feasible. Added Value The project offers the first large-scale test of preference prediction from search histories using incentivized ground truth. It sheds light on the feasibility and limits of low-cost, behavior-based preference measurement, contributes to ongoing debates on digital profiling in markets, and highlights privacy-relevant implications of model-based inference from everyday online behavior.
Trait or State? Understanding Motivational Drivers of Straightlining in a Longitudinal Panel Survey
Çağla E. Yildiz
GESIS - Leibniz Institute for the Social Sciences, Germany
Relevance & Research Question
Straightlining—providing (nearly) identical responses in multi-item batteries—is a common indicator of satisficing in survey research. Compared with task difficulty and respondent ability, respondent motivation has received less systematic attention as a driver of satisficing. In longitudinal surveys, an important open question is whether motivational constructs reflect stable, trait-like characteristics or situational, state-like fluctuations across waves. Survey research also employs diverse operationalizations of motivation (e.g., topic interest, personality traits, survey attitudes), yet these are rarely compared in terms of temporal stability or predictive power. This study therefore examines the extent to which motivational measures display trait- versus state-like variation and how these components relate to straightlining over time.
Methods & Data
We use data from the GESIS Panel.pop, a probability-based mixed-mode panel in Germany, drawing on nine annual waves of the Social and Political Participation Longitudinal Core Study. Repeated measures are available for political interest, Big Five traits (agreeableness, conscientiousness), survey attitudes, and straightlining. To assess stability, we estimated separate random-intercept models for each motivational indicator and computed intraclass correlation coefficients (ICCs). To predict straightlining, we applied a within–between decomposition and estimated a multilevel binomial logistic regression with respondent random intercepts, controlling for demographics, cohort, mode, and survey year.
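As a minimal illustration of the two analysis steps, the Python sketch below computes an ICC from a random-intercept model and constructs the within-between decomposition for one motivational indicator; the file, variable, and column names are hypothetical, and the reported analyses may use different software.

```python
# Hypothetical sketch: ICC from a random-intercept model and a within-between split.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("panel_long.csv")   # long format: one row per respondent and wave

# Step 1: random-intercept model for one motivational indicator (political interest)
ri_fit = smf.mixedlm("pol_interest ~ 1", data=df, groups=df["respondent_id"]).fit()
between_var = ri_fit.cov_re.iloc[0, 0]   # random-intercept (between-person) variance
within_var = ri_fit.scale                # residual (within-person) variance
icc = between_var / (between_var + within_var)
print(f"ICC for political interest: {icc:.2f}")

# Step 2: within-between decomposition used as predictors of straightlining
df["pol_interest_between"] = df.groupby("respondent_id")["pol_interest"].transform("mean")
df["pol_interest_within"] = df["pol_interest"] - df["pol_interest_between"]
# The person mean captures the trait-like component and the wave-specific deviation the
# state-like component; both enter the multilevel logistic model of straightlining.
```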
Results
Motivational indicators show considerable temporal stability. Political interest is the most trait-like (ICC = 0.76), followed by conscientiousness (0.64) and agreeableness (0.58). Survey attitudes display more moderate stability, with ICC values ranging from 0.53 (perceived burden) to 0.59 (survey value) and 0.62 (enjoyment). In the predictive model, political interest is the strongest determinant of straightlining: both higher average levels and within-person increases reduce straightlining. Survey attitudes show smaller, largely trait-level associations, and personality traits have modest effects.
Added Value
Findings indicate that straightlining is driven mainly by stable, between-person motivational differences, with political interest standing out as the strongest factor. By comparing multiple motivation measures and separating their trait and state components, the study provides practical insights for identifying respondents at risk of satisficing and for supporting data quality in longitudinal surveys. Extensions to additional satisficing indicators (e.g., item nonresponse, speeding) are planned.
AFGfluencers in Germany: Platforms, Actors, and Issues
Ramin Kamangar, Kefajat Hamidi, Abumoselm Khurasani
Leipzig University, Germany
Relevance & Research Question
The digital landscape of the Afghanistan diaspora in Germany is rapidly evolving, yet little is known about its online influencers, communication practices, and content narratives. Understanding this emerging microcosm is crucial for bridging knowledge gaps in digital media research and fostering social cohesion. This study investigates three core questions: Which digital platforms are most used by the Afghanistan diaspora in Germany? Who are the key influencers shaping this digital space? What topics and narratives dominate discussions and content creation among AFGfluencers?
Methods & Data
We adopt a mixed-methods approach combining expert interviews and qualitative content analysis. First, we conduct interviews with domain experts to map the online ecosystem and identify key communicators. Next, we perform a qualitative content analysis of social media posts from prominent AFGfluencers, focusing on themes, narratives, and engagement patterns. Platforms under study include major social media networks where diaspora communication occurs. This approach allows for a comprehensive understanding of how digital interaction is structured within this community.
Results
This research is ongoing, but preliminary work has mapped the main platforms and key influencers within the Afghanistan diaspora in Germany. Initial observations indicate active communication around cultural identity, migration experiences, and community issues. Analysis of content themes and engagement patterns is in progress, aiming to reveal how influencers connect members, shape narratives, and contribute to the formation of an online diaspora network.
Added Value
This study provides novel insights into an under-researched yet important digital community — the Afghanistan diaspora in Germany — enhancing understanding of online diaspora communication. By highlighting key actors, platforms, and narratives, it informs both academic research and policy discussions on integration, social cohesion, and misinformation mitigation. Methodologically, it demonstrates practical approaches for analyzing online communities, contributing to the broader field of digital social research. The findings have the potential to guide engagement with transnational publics and foster inclusive digital discourse.
Comparing Probability and Nonprobability Online Surveys: Data Quality and Fieldwork Processes
Emma Fössing1,2, Lukas Olbrich1, Stefan Zins1, Jörg Drechsler1,2,3
1Institute for Employment Research, Germany; 2Ludwig-Maximilians-Universität, Munich, Germany; 3University of Maryland, College Park, USA
Objectives
Online surveys based on nonprobability samples have become increasingly popular due to their cost efficiency and rapid fieldwork. However, nonprobability samples have a poor reputation regarding data quality compared to probability-based samples. This study investigates the extent to which probability and nonprobability samples differ with regard to data quality.
Methods, Approaches & Innovation
We conduct one self-administered online probability survey, recruited via postal invitation letters, and four nonprobability online surveys administered by commercial online-access panel providers. All surveys use an identical questionnaire. We compare the resulting datasets on key statistical indicators of data quality, such as survey duration, passing or failing an attention check, and screen times for longer item texts (a minimal version of this comparison is sketched below). In addition, we compare operational aspects of the fieldwork process between the probability and nonprobability samples.
Results
Preliminary findings indicate not only substantial differences between the probability and nonprobability datasets, but also among the nonprobability panels themselves. These differences highlight the methodological and practical challenges of relying on nonprobability data for inferential research.
Impact
The study provides a systematic, multi-panel comparison of online survey methods covering a broad target population defined via IAB administrative data, and it establishes a strong data base for developing and testing statistical adjustment techniques to correct for biases.
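The sketch below shows, in Python, how such per-sample indicators could be aggregated; the pooled data file, column names, and the speeding threshold are assumptions, not the study's actual processing.

```python
# Hypothetical sketch: comparing data-quality indicators across the five samples.
import pandas as pd

df = pd.read_csv("pooled_responses.csv")   # one row per respondent; 'source' identifies the sample

# Flag speeders relative to the overall median duration (threshold is an assumption)
median_duration = df["duration_sec"].median()
df["speeder"] = df["duration_sec"] < 0.5 * median_duration

quality = df.groupby("source").agg(
    n=("duration_sec", "size"),
    median_duration_sec=("duration_sec", "median"),
    attention_check_pass_rate=("attention_check_passed", "mean"),
    speeder_share=("speeder", "mean"),
    median_screen_time_long_items=("screen_time_long_items_sec", "median"),
)
print(quality.round(2))
```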