P 1.1: Postersession
Follow me: Social media users and factors affecting agendas during elections
Academic College of Emek Yezreel, Israel
Relevance & Research Question
Controlling media agendas and public agendas is essential during election periods, so it is not surprising that agenda-setting research has often studied these periods. Iyengar et al. (2008) highlighted the significant role people's perceptions of current issues play in determining their selective exposure to campaign information. Selective exposure is the idea that people expose themselves to content and platforms according to their needs and inner worlds and avoid messages that might contradict these (Messing & Westwood 2014). Roessler (2008) noted that studies of agenda-setting's individual-level effects are rare compared to the extensive literature on its aggregate-level effects. Essentially, belonging to a specific group or community may change or mediate individuals' media agendas. We therefore ask: do voters' agendas vary as a function of their voting intentions? And do voters' agendas vary with their following patterns on contenders' social media accounts?
Methods & Data
The study is based on data gathered a week before the March 2021 election for the Israeli parliament. The sample (N=543) was obtained from an online panel constructed to match Central Bureau of Statistics data. The mean age was 43.1 (SD=14.7); 48.8% were men and 51.2% were women.
Regarding voting intentions, 22.1% reported intending to vote for ‘Likud’ (a centre-right party) and 21.5% for ‘Yesh-Atid’ (a centre-left party). On social media, 40% of respondents reported following political candidates. Of those, 56% followed the ‘Likud’ leader (Prime Minister Benjamin Netanyahu) and 38% followed the ‘Yesh-Atid’ leader (opposition leader Yair Lapid).
Results
On the one hand, no significant correlation was found between the intention to vote for ‘Likud’ or ‘Yesh-Atid’ and the perceived importance of issues on the public agenda. On the other hand, we found a significant correlation between exclusively following the ‘Likud’ leader or the ‘Yesh-Atid’ leader and the perceived importance of issues on the public agenda.
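The reported association between exclusively following a leader (a binary indicator) and the perceived importance of an issue (a rating) can be quantified as a point-biserial correlation. A minimal sketch with synthetic data; the variable names, sample size, and effect size are illustrative assumptions, not the study's data:

```python
import numpy as np
from scipy.stats import pointbiserialr

# Illustrative synthetic data (NOT the study's data): 200 respondents.
rng = np.random.default_rng(42)
follows_leader = rng.integers(0, 2, size=200)  # 1 = exclusive follower of a leader
# Assumed positive effect of following on a 1-5 issue-importance rating:
importance = 3.0 + 1.0 * follows_leader + rng.normal(0.0, 1.0, size=200)

# Point-biserial correlation = Pearson r between a binary and a continuous variable.
r, p = pointbiserialr(follows_leader, importance)
```

With a genuine effect built into the simulated ratings, the correlation comes out positive and significant, mirroring the kind of association the study reports for exclusive followers.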
Added Value
This research contributes to our understanding of the possible effects of following politicians on online social networks. It suggests that exclusive following contributes more to the perceived agenda than the other measured effects of exposure to traditional or digital media.
Top of Mind: how to ask in online questionnaires
1Demetra Opinioni.net, Italy; 2RAI Pubblicità S.p.A
Relevance & Research Question
Top of mind (TOM) is a question set often used in market research: respondents are first asked for the first brand they remember, followed by any other brands they can mention spontaneously.
Rai Pubblicità, with Demetra’s methodological advice, decided to test the best way to ask the TOM question set in an online context, in order to gather brand KPIs and uplift measures that are as reliable as possible.
Methods & Data
Four different ways to ask the TOM question set were tested:
1. Wide text box: all brands are written spontaneously in one single space. This is the method Rai Pubblicità has used until now.
2. One space on the first page (TOM) and further spaces (other brands) on the following page. The question on the second page was: “If you remember other car brands, write them down in the following spaces.”
3. The same two-page layout, but with the second-page question worded: “Write in the following spaces any other car brands that come to mind.”
4. A single page on which each new space appears once the previous one has been filled in.
The survey was carried out with the same questionnaire on two different online panels (Opinioni.net and another provider used by Rai Pubblicità). The questionnaire was about cars and was computerised on two different platforms. For each method, 100 completes were reached per panel (800 responses in total).
Results
The analysis showed that the best way to ask the question set is the one with the TOM on the first page and the request for other brands on the following page (methods 2 and 3). This design leads to more brands being mentioned (t-test p-value < 0.05). The question wording on the second page also influences data quality: the wording in the third option leads to more brand mentions than the one in the second (t-test p-value < 0.05).
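The comparison of mention counts between question formats can be sketched as a two-sample t-test on the number of brands each respondent lists. The counts below are synthetic placeholders with an assumed difference between formats, not the experiment's data:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(7)
# Synthetic brands-mentioned counts, 100 completes per condition (illustrative only):
wide_box  = rng.poisson(3.0, size=100) + 1  # method 1: single wide text box
two_pages = rng.poisson(6.0, size=100) + 1  # methods 2/3: TOM page + second page

# Welch's t-test (no equal-variance assumption) on mean number of brands mentioned.
t, p = ttest_ind(two_pages, wide_box, equal_var=False)
```

A positive t with a small p-value would indicate that the two-page design elicits more brand mentions, which is the direction of the effect the study reports.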
Added Value
Our experiment starts from a real need and allowed us to establish empirically the best way to ask a question set widely used in market research. Running the same questionnaire on two different panels makes the study more robust and the results more reliable.
Validating the Survey Attitude Scale (SAS): Are Measurements Comparable Among Different Samples of Highly Qualified Respondents from German Higher Education?
1German Centre for Higher Education Research and Science Studies (DZHW) Hannover; 2University of Kassel
Relevance & Research Question
Among other factors, general attitudes towards surveys are part of respondents’ motivation for survey participation. There is empirical evidence that these attitudes predict participants’ willingness to perform supportively during (online) surveys (de Leeuw et al. 2017; Jungermann et al. 2022). The Survey Attitude Scale (SAS) captures these attitudes along three dimensions: (i) survey enjoyment, (ii) survey value and (iii) survey burden (de Leeuw et al. 2010, 2019). Building on de Leeuw and colleagues’ (2019) work, we investigate whether SAS measurements can be compared across different online survey samples of a highly qualified population.
Methods & Data
To validate the SAS, we implemented its nine-item short form, adopted from the GESIS Online Panel (Struminskaya et al. 2015), in four different online surveys of German students, graduates and PhD students: (1) the HISBUS Online Access Panel (winter 2017/2018: n=4,895), (2) the seventh online survey of the National Educational Panel Study (NEPS) Starting Cohort “First-Year Students” (winter 2018: n=5,109), (3) the third survey wave of the DZHW graduate panel 2009 (2019: n=664) and (4) a quantitative pre-test among PhD students within the National Academics Panel Study (Nacaps; spring 2018: n=2,424). The GESIS Online Panel serves as reference data for benchmarking. We first use confirmatory factor analysis (CFA) to validate the SAS. Thereafter, we perform multi-group CFA on an integrated dataset to test measurement invariance, evaluating it hierarchically on four levels (Chen 2007; Ender 2013).
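In lavaan-style measurement syntax (used, e.g., by R's lavaan or the Python package semopy), the three-factor SAS structure could be specified as below. The item labels sas1–sas9 are placeholders, since the actual item assignments are not given here:

```python
# Hypothetical item labels (sas1..sas9) standing in for the nine short-form items;
# three items load on each of the three SAS dimensions.
SAS_MODEL = """
enjoyment =~ sas1 + sas2 + sas3
value     =~ sas4 + sas5 + sas6
burden    =~ sas7 + sas8 + sas9
"""
# With real item-level data in a DataFrame `df`, the CFA could then be fitted,
# e.g. with semopy: Model(SAS_MODEL).fit(df)  (assumed library, not run here).
```

Multi-group CFA then re-estimates this same model per sample while progressively constraining loadings, intercepts, and residuals to be equal across groups.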
Results
First, the CFA results indicate that the latent structure of the SAS is reproducible in all four samples. Factor loadings as well as reliability scores adequately support the theoretical structure. Second, regarding measurement equivalence, our findings support configural and metric invariance among the four samples; however, scalar and strict invariance are not supported.
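The hierarchical invariance evaluation can be sketched with Chen's (2007) commonly used cutoffs, under which a more constrained model is rejected when CFI drops by more than .010 relative to the less constrained model, supplemented by an RMSEA increase of more than .015. The fit-change values below are illustrative only, not the study's estimates; the pattern merely mirrors the reported result (metric holds, scalar and strict do not):

```python
# One common reading of Chen's (2007) criteria: a level of invariance holds
# when the CFI drop is at most .010 AND the RMSEA rise is at most .015,
# each relative to the previous (less constrained) model.
def invariance_holds(cfi_drop: float, rmsea_rise: float) -> bool:
    return cfi_drop <= 0.010 and rmsea_rise <= 0.015

# Illustrative fit-change values (NOT the study's estimates), one pair per level.
steps = {
    "metric": (0.004, 0.002),  # equal loadings: small deterioration -> holds
    "scalar": (0.022, 0.018),  # equal intercepts: deterioration too large -> rejected
    "strict": (0.031, 0.020),  # equal residuals: rejected as well
}
verdict = {level: invariance_holds(*deltas) for level, deltas in steps.items()}
```

In practice the hierarchy stops at the first rejected level: once scalar invariance fails, strict invariance (which nests it) cannot hold either.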
Added Value
Since de Leeuw and colleagues’ (2019) analyses are based on general population surveys, we extend the picture specifically to young, highly educated respondents. This is relevant because higher education research also suffers from declining response rates and lacks empirical knowledge of whether instruments designed for the general population also work for this specific highly qualified group.