Conference Agenda
WG 1 - Education & Training Programs (3)
Presentations
Empowering Rural Local Bodies: Rethinking Capacity Building for Inclusive and Effective Grassroots Governance in India

Vel Tech Rangarajan Dr. Sagunthala R&D Institute of Science and Technology, India

The transformation of governance in India hinges significantly on the performance and inclusivity of rural local bodies, especially Panchayati Raj Institutions (PRIs), which are at the forefront of delivering essential public services and promoting democratic participation. While decentralization has constitutionally empowered these institutions, persistent capacity gaps in administration, planning, and service delivery continue to hinder their effectiveness. This paper explores the critical need to rethink and redesign capacity-building frameworks to ensure more inclusive and effective grassroots governance.

Drawing on case studies, government initiatives, and field insights, the paper evaluates existing capacity-building models and their limitations in addressing the socio-political, technological, and institutional challenges faced by rural local bodies. It examines how transformative approaches, including digital training platforms, participatory planning tools, leadership development programs, and gender-sensitive interventions, can shift mindsets, enhance skills, and expand knowledge among local elected representatives and functionaries. The study highlights best practices from various Indian states and situates them within the broader vision of Mission Karmayogi and the Sustainable Development Goals (SDGs). It argues for a multi-stakeholder, context-specific, and continuous-learning approach to capacity building that prioritizes inclusion, innovation, and accountability. By reimagining how rural governance actors are trained and supported, this paper contributes to the discourse on building future-ready governance systems that are responsive to local needs while aligned with national development objectives.
From Measuring What Is Easily Measurable to Quality-Driven Impact Evaluation: How to Overcome the Challenges and Pitfalls of Training Evaluation

Federal Public Service Policy & Support, Belgium

1. Problem Statement and Purpose

All over the world, training schools and organisations responsible for competency development spend a great deal of time and money on evaluating their training and learning activities. On a large scale, questionnaires are issued to gauge participants' satisfaction with training courses. But does this tell us anything about whether participants have acquired the competences they were expected to learn? Or to what extent they apply what they have learnt in their real-life work environments? We tend to measure what is easily measurable. But how can we ensure that we measure the right things? And that we measure things right? Collecting tangible data on training transfer and learning impact, obtained in an evidence-based way, helps us align our training efforts with the strategic and operational goals of our teams and organisations. But how do we obtain these data? This paper explores some major challenges and pitfalls of training evaluation within a professional context. Based on practical cases from inside the Belgian Federal Public Service, it examines how these challenges and pitfalls can be overcome by selecting the most suitable evaluation tools, processes, and procedures, and how training quality can be increased by installing a training evaluation system based on reliable Key Performance Indicators.

2. Methodology

Based on practical examples and case studies set in the context of Learning & Development in the Belgian Federal Public Service, this paper shares some of our answers to the many challenges of training evaluation in a professional context.
Through practical examples and evidence-based experiences, we will share our successes and our failures in the field of training evaluation in a professional context. Our evidence-based approach combines the use of existing evaluation models and tools with our practical experience as the government service in charge of Competency & Talent Development within the Belgian Federal Administration.

3. Findings

Training and competency development can be evaluated at different levels. At the lowest level, we evaluate participants' satisfaction with the training action itself, indicating their appreciation of the training content, trainer, logistics, etc. This level is easily measured, but its added value is relatively small, as it does not guarantee that learners have actually learnt anything, nor that they will apply what they learnt in the work environment. Evaluating at higher levels, trying to measure transfer and the organisational impact of training, is much more difficult to organise, yet it provides far more valuable information about training outcomes.

4. Proposal

If we want to evaluate training in a professional context, we need to be aware that evaluating participants' satisfaction is easy but often gives little indication of what really matters. Measuring transfer in the work environment and organisational impact is harder, yet rewarding, as it yields valuable information to improve training quality and to create training that is effective and aligned with organisational challenges. Moreover, when creating KPIs to evaluate whether training-linked objectives have been met, we need to choose indicators wisely, so as to avoid Key Performance Indicators that are useless or that even have perverse effects.
Safeguarding Merit and Evidence: How Public Administration Education and Training Can Counter Populism

ISCTE-Instituto Universitario de Lisboa, Portugal & Centre for Research and Studies in Sociology (CIES)

Liberal democracies are currently confronting a populist wave that challenges technocratic elites, politicises the bureaucracy, and hinders evidence-based policymaking. In this context, the teaching of Public Administration (PA) becomes simultaneously more relevant and more vulnerable. Recent studies indicate that populist governments seek to capture mid-level layers of the state apparatus, thereby weakening bureaucratic professionalism and technical autonomy. At the same time, decision cycles accelerated by digital governance demand public managers capable of acting almost in real time, equipped with expertise in big data, data analytics, data literacy, and algorithmic ethics. Faced with the dual challenge of institutional erosion and decision-making immediacy, this paper investigates how academic and professional PA programmes do, or do not, respond to the pressure to train resilient, technically proficient, and socially legitimate cadres. The central research question is as follows: to what extent do programmes designed especially for public-sector leaders embed problem- or challenge-based learning, public data laboratories, and "triple-helix" partnerships, and, where such features exist, do they strengthen future administrators' capacity to resist populist capture and uphold a culture of evidence and accountability?
The study adopts an exploratory, qualitatively driven, comparative case-study design that triangulates three techniques:

1) Document analysis of degree plans, evaluation/accreditation reports, and course descriptors;
2) Semi-structured interviews with faculty, public decision makers, and students to capture perceptions of public-service values, academic autonomy, and digital competencies;
3) Process tracing of political debates and policy interventions regarding PA education and training, coded thematically.

Data will be triangulated through inductive and deductive coding in NVivo, with reliability enhanced by external auditing of codes and analytic memos. Case selection follows a logic of contrast, comparing mature democracies with varying degrees of democratic backsliding and differing administrative reform trajectories.

The study expects to assess whether programmes that adopt Problem- and Challenge-Based Learning:

- provide a better fit with contemporary governance challenges;
- preserve the values of neutrality and merit in the face of politicisation;
- accelerate the development of predictive-analytics skills and reputational-risk management.

Findings will offer a reference framework for (re)designing PA curricula that reconcile scientific rigour, practical relevance, and the defence of representative institutions, ultimately contributing to more agile, transparent, and inclusive governance amid digital transformation and populist pressure.

Acknowledgement. This paper is produced under a research project conducted as part of the StudiesDIG project (Models and Instruments for Transforming Higher Education Systems through Transnational Multi-Sector Links), funded by the European Union under the Horizon Europe programme (Marie Skłodowska-Curie RISE) (HORIZON-MSCA-2022-SE-01). We acknowledge the invaluable support from our institutional and international partners, whose insights and expertise have contributed to the development of this paper.