Democratic Maturity and Institutionalisation of Public Policy Evaluation - A Qualitative Comparative Analysis (QCA) of 22 Countries in the Americas and Asia-Pacific
Julián D. SALAZAR J.
University of Bern, Switzerland
This paper examines whether and to what extent core democratic characteristics are necessary for the institutionalisation of public policy evaluation. Policy evaluation research is still largely Western-centric (Jacob, 2023) and mainly focused on liberal democracies (Stockmann et al., 2020). Even recent research (Varone et al., 2023) seems to implicitly take liberal democracy for granted when studying policy evaluation, leaving a research gap on whether democratic characteristics are necessary or sufficient for the institutionalisation of policy evaluation. This gap becomes even more significant in the current era of post-truth, rising populism and democratic regression described by Bauer (2023), Sedelmeier (2023) and Hodson (2021). If policy evaluation depends on certain democratic features, such trends could lead to a decline in policy evaluation activities or even render evaluation obsolete in a post-truth world (Picciotto, 2019), where evaluation would be required more than ever (cf. Dorren & Wolf, 2023; Han, 2023; Bundi & Trein, 2022; Schlaufer, 2018; Zwaan et al., 2016; Moynihan, 2008; Harty, 2006 for examples of the positive impact of policy evaluation on democracy). Such a decline could, in turn, reduce liberal democracy's arsenal for facing the prevailing Trumpian era of denial of facts and tolerance of political lies.
According to Jacob (2023), several factors drive the institutionalisation of evaluation. Several of these relate to democracy, such as the capacity of the political system to use scientific evidence in the policy process, where public interventions can be examined in a transparent manner (implying the democratic features of freedom of speech and the rule of law), as well as the political will of decision-makers regarding the need for accountability, a ‘hallmark of democratic governance’ (Han, 2023). Nevertheless, existing studies fall short of analysing in a comparative and systematic way how distinct features of democratic systems determine the institutionalisation of policy evaluation. This article analyses the extent to which the presence of these democratic features is necessary or sufficient to ensure the institutionalisation of policy evaluation.
The two volumes by Stockmann et al. on the institutionalisation of evaluation in the Americas (2022) and in Asia-Pacific (2023) provide case studies that offer a comprehensive overview of the institutionalisation of evaluation in 11 American and 11 Asian and Pacific countries. This article uses these case studies as an empirical source to assess the different levels of institutionalisation of policy evaluation across these continents. The institutionalisation of evaluation is defined here as the process by which evaluation systems are created, modified or even abolished (Jacob, 2023). An evaluation system consists of formal or informal rules or procedures and organisations that create both a demand and a supply side for policy evaluation (Jacob, 2023). Relying on the Democracy Index of the V-Dem Institute, a fuzzy set qualitative comparative analysis (fsQCA) (Ragin, 2014) of these 22 countries reveals whether democratic features are necessary or sufficient for the institutionalisation of policy evaluation. The results shed light on the reciprocal dynamics of democracy and public policy evaluation.
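To illustrate the kind of test fsQCA performs, the sketch below computes the standard consistency scores for necessity and sufficiency from fuzzy-set membership scores (Ragin, 2014). The condition and outcome names and all membership values are invented for illustration; they are not the paper's data, and the actual analysis would use dedicated QCA software and calibrated V-Dem scores.

```python
# Minimal fsQCA-style consistency sketch (hypothetical data).

def consistency_necessity(x, y):
    """Consistency of X as a necessary condition for Y:
    sum(min(x_i, y_i)) / sum(y_i)."""
    return sum(min(xi, yi) for xi, yi in zip(x, y)) / sum(y)

def consistency_sufficiency(x, y):
    """Consistency of X as a sufficient condition for Y:
    sum(min(x_i, y_i)) / sum(x_i)."""
    return sum(min(xi, yi) for xi, yi in zip(x, y)) / sum(x)

# Invented membership scores for five hypothetical countries:
# x = membership in "strong rule of law",
# y = membership in "institutionalised policy evaluation".
x = [0.9, 0.7, 0.4, 0.8, 0.2]
y = [0.8, 0.6, 0.3, 0.7, 0.1]

print(round(consistency_necessity(x, y), 2))    # 1.0
print(round(consistency_sufficiency(x, y), 2))  # 0.83
```

With these invented scores, the outcome's membership never exceeds the condition's, so necessity consistency is perfect (1.0), while sufficiency consistency falls below the conventional 0.9 threshold: the condition would pass as necessary but not as sufficient.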
Exploring complexity-informed evaluation in sustainability transformation: a semi-systematic literature review
Rose Thompson Coon, Jari Autioniemi, Ville-Pekka Niskanen
University of Vaasa, Finland
Due to the complex, diverse and necessary nature of sustainability transformation, a critical and rigorous evaluation framework is urgently needed. Evaluation is a key activity in sustainability transitions, ensuring that programmes and policies striving for a more sustainable society meet their targets to reduce social inequalities and to ensure development within planetary boundaries. Problematically, there is little thorough research on the topic, which is reflected in the lack of clear theoretical framing of complexity thinking for evaluation in the sustainability context. This semi-systematic literature review explores the application of complexity theory as a critical lens for understanding and evaluating sustainability transitions across social, ecological, and technological systems.
Employing a multi-stage search strategy across three academic databases (Web of Science, Scopus, GreenFILE), we analyzed a large number of peer-reviewed publications from 2000 to 2024. The analysis focuses on the role of complexity thinking in reframing traditional approaches to evaluating and assessing sustainability challenges in the peer-reviewed scientific literature. We explore the potential of complexity-based perspectives for evaluation practice, challenging linear, reductionist models of change by emphasizing emergent properties, non-linear dynamics, and adaptive interconnected systems. By mapping theoretical and practical applications, the review synthesizes how the complexity perspective has been applied in evaluating sustainability transformation at the policy, programme and local levels.
The review identifies key theoretical contributions, including novel frameworks for analyzing and evaluating sustainability transformation, as well as methodological tools for carrying out complexity-informed evaluation. The findings contribute to the future design of policy evaluations and adaptive policy frameworks. In addition, key research gaps in the literature are identified to facilitate and support the development of the evaluation of sustainability transitions. Specifically, systematic analysis of complexity-informed evaluation frameworks and approaches enhances the overall soundness of sustainability assessments and contributes to the implementation of environmental sustainability objectives.
“The game is afoot”? Promises and challenges in future-oriented evaluation
Lena LINDGREN
University of Gothenburg, Sweden
Evaluation is generally understood as the assessment of the worth or value of something, e.g. a policy, that has happened or is ongoing to provide guidance for future actions. Underlying this view is an assumption that a policy that has previously been effective (or ineffective) in addressing a policy problem will continue to be so in the future. It is also common to look at evaluation as something that mainly aims to improve what is already being done. This becomes problematic in the volatile and uncertain moment in which we find ourselves where policy problems seem to grow increasingly complex, involve multiple interdependent actors and external influences that make it difficult to predict the outcomes of different solutions.
Complex policy problems require policymakers and public administrators to rethink problem definitions, adopt new approaches to problem solving, and even reassess which problems should fall under the public sector’s domain. For evaluators, there may thus be a need to contribute reasoned assessments of problems that may arise, and designs of policies to address them, before action is decided and implemented. This shift in evaluative thinking is already underway. Schwandt (2019), referencing Shakespeare’s phrase “the game is afoot” (meaning that something new and exciting is about to happen), suggests that traditional evaluation can be complemented by “post-normal evaluation” in decision-making contexts where facts are uncertain, values are contested, stakes are high, and decisions are urgent.
Schwandt is surely not alone in these reflections. Over the past decade, researchers in evaluation, policy analysis, futures studies and planning have emphasized the need for creative and strategic thinking about the future and, not least, for cross-fertilization between these fields. Titles such as “Look to the future, evaluators” (Ruedy & Clark, 2024), “Bridging foresight and evaluation” (Gardner et al., 2024), and “Thinking outside the box?” (Considine, 2012) are a few examples that illustrate this growing trend. The European Commission’s (2023) regulatory impact assessments for future-proof legislation, and the creation of design laboratories for public sector innovation, further reflect this development (Peters, 2022).
My paper builds upon these discussions. First, I briefly describe a handful of approaches that may be relevant for future-oriented evaluation (needs assessment, policy design, the ‘What’s the Problem Represented to be?’ approach, program theory evaluation, evaluation in planning, and foresight). The paper concludes by problematizing these approaches through some key evaluation issues, specifically the temporal dimension of the object of evaluation, epistemology, valuation, and the role of the evaluator.
A Mixed Method Approach for Evaluation of a Policy Instrument – Assessing the Initial Use of the Policy Compass for Policymaking by Dutch National Government
Michael DUIJN, Joelle VAN DER MEER, Catherine VROON, Wouter SPEKKINK
Erasmus University Rotterdam, The Netherlands
In the Netherlands, the national government introduced the so-called Policy Compass (hereafter abbreviated to PC) as an instrument to guarantee and enhance the quality of law and policymaking across all ministries (Ministry of Justice and Security, 2022). The instrument consists of a website presenting key questions that legislation and policy-making professionals can answer when formulating new laws and/or policies. The website offers an overview of the quality-assurance requirements that new laws, regulations and policies must adhere to, as well as a broad array of tools and tests for formulating answers to the key questions. Almost one and a half years after its introduction, the use of and practical experiences with the PC have been evaluated. Our evaluation design was based on a reconstruction of the policy theory (Bongers, 2023; Leeuw, 1991) behind the design and implementation of the PC. What were the intentions behind this instrument? What kind of problem(s) was this instrument intended to solve with regard to new law and policymaking? How were its use and outcomes perceived? The reconstructed policy theory was then used as a basis for data collection and analysis in the first evaluation round (the baseline measurement) of the use of the PC.
The evaluation of the PC was conducted using a mixed methods approach (Pluye, 2023; Mertens, 2018). It comprised desk research (documents and websites regarding the rationales behind the introduction of the PC), a survey among legislation and policy-making professionals and their supervisors (managers) at Dutch ministries regarding their actual use of and practical experiences with the PC (N=664), in-depth interviews with representatives of these professionals and supervisors (N=16), an automated analysis of the content of completed PC applications for new laws and policies (using an AI-based software program), and two focus group meetings with members of the coordinating implementation working group (which monitors the implementation of the PC).
Hence, by collecting data through multiple quantitative and qualitative methods, we applied triangulation (Campbell et al., 2028; Patton, 1987). In this way, we were able to illuminate and substantiate our findings from multiple angles and perspectives.
Based on the data collection and analysis, we concluded that the PC is known and recognized by a small majority of both legislation and policy professionals and their supervisors. However, actual use is still limited to roughly one fifth of them, and the quality of actual use is not without difficulties either. Not all key questions are valued as equally useful and meaningful by the professionals involved with regard to their support for ‘writing good quality laws and policies’. Having said this, in general the content of the PC (its key questions, the supporting tools and tests, and the quality-assurance requirements) is evaluated positively, indicating that professionals acknowledge its practical value.