Conference Agenda

Overview and details of the sessions of this conference. Please select a date or location to show only sessions on that day or at that location. Please select a single session for a detailed view (with abstracts and downloads, if available).

Session Overview
20-PM1-06: ST8.3 - Crowdsourcing and Online Communities: Challenges and New Perspectives
Thursday, 20/June/2019:
1:00pm - 2:30pm

Session Chair: Thomas Gillier, Grenoble Ecole de Management
Session Chair: Fiona Schweitzer, Ecole de Management
Session Chair: Jan Van den Ende, Rotterdam School of Management
Location: Room PC 21

Session Abstract

Firms increasingly employ IT-enabled crowdsourcing systems to identify valuable knowledge and collect new ideas for product and service innovations from customers, employees, or a larger group of online users and citizens. Beyond the paradigm shift towards open innovation, a plethora of novel IT-enabled crowdsourcing systems has blossomed, stretching from collaborative concept development to actual co-production and co-consumption (e.g., open source hardware or software, open design of physical goods, citizensourcing). This track focuses on the future challenges, opportunities, and trends of open innovation platforms. It notably encourages sharing insights on successful and failed IT-enabled crowdsourcing initiatives and looks for novel approaches to understand value creation and appropriation in IT-enabled crowdsourcing environments. This track welcomes both quantitative and qualitative studies, and conceptual or empirical work. Its main purpose is to solicit research contributions that shed light on new theoretical models, processes, tools, and factors of successful online collaborations that boost innovation.



Thomas Gillier1, Dirk Deichmann2, Marco Tonellato3

1Grenoble Ecole de Management; 2Erasmus University; 3Ludwig-Maximilians-Universität München


In order to fill their innovation pipeline with creative ideas, organizations increasingly rely on internal crowdsourcing. Internal crowdsourcing is a way for firms to tap into the creative potential of their employees and to steer the further development of the ideas they generate and share.


We mainly draw from the literature on goal commitment and commitment to change to shed more light on when and how idea generators create commitment from others for their idea (Herscovitch & Meyer, 2002; Hill, Seo, Kang, & Taylor, 2012; Kathan, Hutter, Füller, & Hautz, 2015; Klein, Wesson, Hollenbeck, Wright, & DeShon, 2001).

Literature Gap

First, scholars studying the creativity process have developed an increasing understanding of the different factors that affect idea generation versus idea implementation. However, the conditions under which people commit their time and effort to someone else’s idea have received little attention.

Research Questions

How do inventive employees create commitment for their ideas in internal crowdsourcing?


To test our hypotheses, we studied participants of an internal crowdsourcing platform of a multinational Fortune 500 company. On this crowdsourcing platform, participants could commit to someone else's idea, pledging to help develop and implement it if it were selected. Idea commitment thus implied that people dedicated themselves digitally to the further development and implementation of the idea.

Empirical Material

For this study, we collected data from an internal crowdsourcing platform of a Fortune 500 company, which organized an ideation contest from February until March 2016. In the two months that the ideation contest was running, a total of 244 ideas were submitted by 144 people. These ideas attracted 1,330 comments, 2,373 likes, and 474 idea commitments. Most of our data stems from the crowdsourcing platform itself. We complemented this data with expert evaluations of the feasibility, novelty, and clarity of the submitted ideas, as well as information on the idea submitters from the archival records of the company's HR department. Because information was missing for some of the idea generators, we ended up with 209 ideas submitted by 123 people.


Our findings confirm that idea generators who engaged with others' ideas on the crowdsourcing platform prior to submitting their own idea were able to elicit more support from others for their own idea in the form of idea commitment.

In contrast to our predictions, however, the feasibility of an idea was not positively but negatively associated with the number of idea commitments. One interpretation of this finding could be that the motivation of participants of an ideation contest may be different than the motivation of decision-makers such as managers. On the one hand, managers might be more risk averse and have a preference for “low hanging fruit”—thus, conventional ideas that are feasible and easily implementable (Berg, 2016). On the other hand, participants of a crowdsourcing platform such as the one that we studied—many were actually R&D engineers—could feel more challenged and motivated to help develop an “unfeasible” idea. Committing time and energy to these types of ideas would also provide these participants with new learning opportunities.

Contribution to Scholarship

While the literature on commitment has often concentrated on the effects of employee goal commitment or employee commitment to change, we shed light on early employee commitment to a very uncertain and possibly still changing object—a new idea. Studying this type of commitment is important because the innovative potential of new ideas is often not realized due to a lack of understanding of how to take ideas forward after they have been generated. On the one hand, the central focus of creativity research is on studying the conditions that help people to generate new ideas (Amabile, 1988; George, 2007). On the other hand, research on idea implementation concentrates on determining whose ideas are selected (Reitzig & Sorenson, 2013). Both streams of literature pay very little attention to how idea generators can attract buy-in, support, and commitment from others in an organization in order to improve and materialize their idea.

Contribution to Practice

By studying how idea generators attract buy-in, support, and commitment from others in an organization for their idea, our study offers important managerial recommendations. In particular, we provide recommendations regarding employee commitment to new and uncertain ideas.


A good fit with the key themes of the R&D Management conference (innovation, R&D, crowdsourcing) and with Track 8.3 (Crowdsourcing and online communities: Challenges and new perspectives).


Amabile, T. M. 1988. A model of creativity and innovation in organizations. In B. M. Staw & L. L. Cummings (Eds.), Research in Organizational Behavior, vol. 10: 123–167. Greenwich, CT: JAI Press.

Anderson, N., Potocnik, K., & Zhou, J. 2014. Innovation and creativity in organizations: A state-of-the-science review, prospective commentary, and guiding framework. Journal of Management, 40(5): 1297–1333.

George, J. M. 2007. Creativity in organizations. Academy of Management Annals, 1: 439–477.

Herscovitch, L., & Meyer, J. P. 2002. Commitment to organizational change: Extension of a three-component model. Journal of Applied Psychology, 87(3): 474–487.

Hill, N. S., Seo, M.-G., Kang, J. H., & Taylor, M. S. 2012. Building employee commitment to change across organizational levels: The influence of hierarchical distance and direct managers’ transformational leadership. Organization Science, 23(3): 758–777.

Kathan, W., Hutter, K., Füller, J., & Hautz, J. 2015. Reciprocity vs. free-riding in innovation contest communities. Creativity and Innovation Management, 24(3): 537–549.

Klein, H. J., Wesson, M. J., Hollenbeck, J. R., Wright, P. M., & DeShon, R. P. 2001. The assessment of goal commitment: A measurement model meta-analysis. Organizational Behavior and Human Decision Processes, 85(1): 32–55.

Reitzig, M., & Sorenson, O. 2013. Biases in the selection stage of bottom-up strategy formulation. Strategic Management Journal, 34(7): 782–799.


Marian Garcia, Guihan Ko

University of Kent, United Kingdom


Crowdsourcing is largely based on an open competition model in which a firm identifies a task, currently performed in-house, and outsources it to an undefined network of people (the 'crowd') (Howe, 2008). Competitive pressure for the best solution drives solvers to keep trying new ideas (Garcia Martinez and Walton, 2014).


Crowdsourcing research suggests that a co-opetitive setting where solvers simultaneously collaborate and compete enhances innovation and creativity (Shih et al., 2006, Ritala and Hurmelinna-Laukkanen, 2009, Gnyawali and Park, 2011). Platforms' community functionalities foster collaborative and supportive behaviours among solvers while they compete to submit the best solutions to win the contest. Research demonstrates that processes of intensive user collaboration enhance idea quality (Blohm et al., 2011), particularly among highly intrinsically motivated solvers whose contributions are activated by social motives (Leimeister et al., 2009). Platforms' community functionalities allow solvers to socialise crowdsourcing request requirements and to suggest or ask for problem solutions in real time, which in turn advances the development of creativity and innovations (Blohm et al., 2011). Hence, a combination of competition and collaboration – co-opetition – coexists in the elaboration of creative solutions by the crowd, making crowdsourcing competitions an interesting research setting (Bullinger et al., 2010; Kathan et al., 2015).

Literature Gap

Few studies have focused on competition design characteristics that may contribute to enhancing the effectiveness of crowdsourcing, particularly whether cooperation tools influence creative performance. This paper aims at expanding crowdsourcing research by investigating how crowdsourcing platforms’ collaborative features influence solvers’ search process, and consequently, the related problem solving performance.

Research Questions

This paper investigates whether the level of communication, interaction, and sentiment supported by online discussion forums, as indicated by posts and views, affects solvers' creative performance in terms of solution quality and quantity.


This study uses the Linguistic Inquiry and Word Count (LIWC) text-analytics software together with Hierarchical Linear Modelling (HLM) to investigate text data and related information from a well-known crowdsourcing platform.

Empirical Material

We carry out this investigation in the context of prediction competitions, given their potential to address the increasing problems faced by companies in trying to deal with "Big Data" (Manyika et al., 2011). The empirical setting of this paper is Kaggle, the world's leading online platform for predictive modelling competitions.


We investigate our hypotheses with the Kaggle data. First, our preliminary data analysis highlights that solvers who participated in a competition's forum are more likely to achieve both higher quality and higher quantity in their competition performance. In our text analysis of the forum, we explored various aspects of comment writing style extracted by LIWC. Specifically, "we"-related words and words in the LIWC "drives" and "money" categories show a significant relationship with either the quality or the quantity of competition performance. However, contrary to our expectation, comment sentiment is not significantly associated with competition performance in this context.
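As a rough illustration of this kind of LIWC-style analysis, category word rates can be computed by matching comment words against category dictionaries. The mini-dictionary below is hypothetical and for illustration only; the real LIWC dictionaries are proprietary and far larger:

```python
from collections import Counter
import re

# Hypothetical mini-dictionary mimicking LIWC categories such as "we" and "money".
CATEGORIES = {
    "we": {"we", "us", "our", "ours"},
    "money": {"money", "cash", "prize", "reward"},
}

def category_rates(text: str) -> dict:
    """Share of words in `text` that fall into each category (LIWC-style)."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    total = sum(counts.values())
    return {
        cat: sum(counts[w] for w in vocab) / total
        for cat, vocab in CATEGORIES.items()
    }

rates = category_rates("We should pool our code so we all share the prize money.")
```

In an analysis like the one described, such per-comment rates would then enter the HLM as solver-level predictors of performance.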

Contribution to Scholarship

We expect this research to expand the spectrum of the co-opetition literature at the individual and network levels in a crowdsourcing context, answering previous research calls (Dorn et al., 2016; Majchrzak and Malhotra, 2013). We argue that co-opetition among solvers provides a positive climate for creative behaviour, assuming equal levels of intelligence within a team. A setting combining competition with a collaborative community supports a social environment full of encouragement, challenge, and support – all social factors identified as enablers of creativity in theories of organisational creativity and innovation (Amabile et al., 1996; Kanter, 1988).

Contribution to Practice

Our findings provide important implications for Web platform managers for the successful management of crowdsourcing communities.


The study contributes to the main theme by focusing on the link between industry and society. More specifically, it addresses the area of Crowdsourcing and online communities (Track 8.3), by focusing on the tools and factors of successful online collaborations that boost innovation.


AMABILE, T. M., CONTI, R., COON, H., LAZENBY, J. & HERRON, M. 1996. Assessing the Work Environment for Creativity. The Academy of Management Journal, 39, 1154-1184.

BLOHM, I., BRETSCHNEIDER, U., LEIMEISTER, J. M. & KRCMAR, H. 2011. Does collaboration among participants lead to better ideas in IT-based idea competitions? An empirical investigation. International Journal of Networking and Virtual Organisations, 9, 106-122.

GARCIA MARTINEZ, M. & WALTON, B. 2014. The wisdom of crowds: The potential of online communities as a tool for data analysis. Technovation, 34, 203-214.

HOWE, J. 2008. Crowdsourcing: Why the Power of the Crowd in Driving the Future of Business, New York, Crown Business.

KANTER, R. M. 1988. When a thousand flowers bloom: Structural, collective, and social conditions for innovation in organization. Research in Organizational Behavior, 10, 169-211.

Determinants of Leveraging Social Capital on Open Innovation Platforms

Anja Leckel, Kathleen Diener, Frank Piller

RWTH Aachen University, Germany


Recent studies emphasized that open innovation platforms do not function sustainably because they lack key principles of successful platform design and a corresponding crowdsourcing-based business model, leading to the risk of leaving one market side behind: the crowd of external contributors. This imbalance results in participants withdrawing from platform activities.


Open innovation (OI) researchers have devoted considerable attention to facilitating knowledge transfer beyond organizational boundaries via OI platforms (e.g., Chesbrough, 2012; Frey, Lüthje & Haag, 2011; Jeppesen & Lakhani, 2010; King & Lakhani, 2013; Terwiesch & Xu, 2008).

Extending the narrow view on economic exchange, we investigate the role of platform design and platform governance in the context of social capital theory in order to derive implications for sustainable OI platforms. Social capital theory suggests that building and securing social capital is key to activate and cultivate knowledge exchange in a network (Nahapiet & Ghoshal, 1998; Tsai & Ghoshal, 1998; Chiu et al., 2006; Coleman, 1988; Rost, 2011), where social capital is defined as the connections among individuals and the norms of reciprocity and trustworthiness that arise from these social networks (Putnam, 2001a, 2001b).

Literature Gap

Research on platform design parameters has tended to isolate single factors or to focus on the design of single platforms, e.g. in case studies. There is a lack of empirical evidence on the aggregate of OI platforms and their characteristics that would deepen the understanding of how platform design can sustain knowledge transfer.

Research Questions

The problem addressed is how to maintain OI platforms once they are established. More precisely, how do OI platforms need to be designed in order to build and sustain social capital (SC) as a basis for knowledge transfer between stakeholders? We aim to identify determinants of building and leveraging SC on OI platforms.


This empirical study is driven by a mixed methods approach. Quantitative survey data is used for scale analysis and regression analysis of the two main constructs (platform design and its impact on social capital), and the model extension, i.e. the mediation effect of decentralized control. The quantitative part benefits from multiple preparatory studies for scale development.

We extend the quantitative analysis with a qualitative examination in order to make the constructs and their relations more tangible and to better relate to practical implications. Specifically, we deepen the insights with extreme case sampling using data of detailed platform observations and interviews.

Empirical Material

While existing literature usually focused on only one platform or smaller subsamples of 10-15 platforms, the special feature of this data set is that we analyze survey data from 61 open innovation platforms. From a total of about 225 OI platforms that we invited to participate in the survey, 80 (partially) responded of which 61 complete data sets could be used in the analysis. Respondents inserted values by means of slide controls or numeric input.

Furthermore, we conducted 16 interviews with C-level managers of open innovation platforms and collected additional data on platform design parameters of 27 OI platforms that already participated in the survey and whose platform is accessible via self-registration. For this purpose, a total of 108 student study participants, four for each of the 27 platforms, completed detailed observation sheets on the platform design characteristics.


Building on existing empirical studies, we developed a measure for social capital in a digitally connected ecosystem and statistically demonstrated its dependence on equitable platform design patterns regarding value creation, value capture, and the distribution of control among stakeholder groups. In short, more equitable platform design regarding co-creation and co-capture leads to more social capital, an effect partially mediated by a higher distribution of control among stakeholder groups on open innovation platforms. While the direct baseline effect of equitable platform design on social capital (0.51***) explains 29% of the variance, the mediation model explains 38% of the variance, with a total effect of 0.51***, a direct effect of 0.41***, and an indirect effect of 0.10***.
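The reported decomposition of the total effect into a direct and a mediated (indirect) component can be illustrated with a minimal OLS simulation. The data, variable names, and coefficients below are hypothetical stand-ins for the study's constructs, not its actual data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Hypothetical setting: equitable platform design (x) -> distributed control (m)
# -> social capital (y), mirroring the mediation structure described above.
x = rng.normal(size=n)
m = 0.4 * x + rng.normal(size=n)               # a-path: x -> mediator
y = 0.41 * x + 0.25 * m + rng.normal(size=n)   # direct effect c' and b-path

def ols_slopes(X, y):
    """Coefficients of y ~ X (intercept added, intercept dropped from output)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return beta[1:]

total = ols_slopes(x, y)[0]                     # c:  y ~ x
a = ols_slopes(x, m)[0]                         # a:  m ~ x
direct, b = ols_slopes(np.column_stack([x, m]), y)  # c', b:  y ~ x + m
indirect = a * b

# For OLS, the identity c = c' + a*b holds exactly.
print(f"total={total:.2f} direct={direct:.2f} indirect={indirect:.2f}")
```

With OLS estimates, the total effect decomposes exactly into the direct effect plus the indirect (mediated) effect, which is the arithmetic behind figures like 0.51 = 0.41 + 0.10.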

As part of the extreme case sampling, the descriptive analysis sheds light on a varying set of practices on different OI platforms to better understand the abovementioned relations of the constructs and their practical implementation.

Contribution to Scholarship

Examining social capital as an important but previously neglected success factor of open innovation, empirical contributions include (1) testing and confirming a theoretical linkage between two constructs (social capital and equitable platform design) that has not previously been tested, (2) determining the degree to which the distribution of control mediates the relationship between these two constructs, and (3) investigating the psychometric properties of social capital as an important scale that has most often not been measured in accordance with its three-dimensional theoretical foundation and has not been adapted to the digital context yet. We can therefore enhance the construct validity of that key measure through the use of pre-tested and refined multiple-item measures.

Contribution to Practice

Firstly, the study shows that equitable platform design for co-creation and co-capture leads to more social capital and therefore more network stability and knowledge transfer on OI platforms. Secondly, the following descriptive analysis of the specific platform design patterns and their effect on social capital highlights current challenges but also sheds light on applicable solutions for sustainable IT-enabled crowdsourcing systems for knowledge sharing.


Improving and maintaining OI platforms is at the heart of the innovation challenge of bridging research, industry, and society, since it connects solution-seeking organizations with marginal knowledge from unobvious others, while knowledge sharing in a digitally connected ecosystem offers vast advantages but also inevitably involves challenges.


Van Alstyne, M. W., Di Fiore, A., & Schneider, S. (2017). 4 Mistakes That Kill Crowdsourcing Efforts. Harvard Business Review.

Chesbrough, H. (2012). Open innovation: Where we've been and where we're going. Research-Technology Management, 55(4), 20-27.

Chiu, C. M., Hsu, M. H., & Wang, E. T. (2006). Understanding knowledge sharing in virtual communities: An integration of social capital and social cognitive theories. Decision support systems, 42(3), 1872-1888.

Coleman, J. S. (1988). Social capital in the creation of human capital. American journal of sociology, 94, S95-S120.

Frey, K., Lüthje, C., & Haag, S. (2011). Whom should firms attract to open innovation platforms? The role of knowledge diversity and motivation. Long Range Planning, 44(5-6), 397-420.

Jeppesen, L. B., & Lakhani, K. R. (2010). Marginality and problem-solving effectiveness in broadcast search. Organization science, 21(5), 1016-1033.

King, A., & Lakhani, K. R. (2013). Using open innovation to identify the best ideas. MIT Sloan management review, 55(1), 41.

Kohler, T. (2015). Crowdsourcing-based business models: how to create and capture value. California Management Review, 57(4), 63-84.

Kohler, T. (2018). How to Scale Crowdsourcing Platforms. California Management Review, 60(2), 98-121.

Nahapiet, J., & Ghoshal, S. (1998). Social capital, intellectual capital, and the organizational advantage. Academy of management review, 23(2), 242-266.

Putnam, R. D. (2001a). Bowling alone: The collapse and revival of American community. Simon and Schuster.

Putnam, R. D. (2001b). Social capital: Measurement and consequences. Canadian journal of policy research, 2(1), 41-51.

Rost, K. (2011). The strength of strong ties in the creation of innovation. Research policy, 40(4), 588-604.

Terwiesch, C., & Xu, Y. (2008). Innovation contests, open innovation, and multiagent problem solving. Management science, 54(9), 1529-1543.

Tsai, W., & Ghoshal, S. (1998). Social capital and value creation: The role of intrafirm networks. Academy of management Journal, 41(4), 464-476.

What’s the problem? How crowdsourcing contributes to identifying scientific research questions

Susanne Beck1,2, Tiare-Maria Brasseur1,2, Marion Poetz2,1, Henry Sauermann3

1LBG Open Innovation in Science Center, Austria; 2Copenhagen Business School, Department of Strategy and Innovation; 3ESMT Berlin


We investigate the crowd’s ability and underlying mechanisms to engage in scientific problem finding (i.e., research question formulation) in a medical research context (traumatology). Our research thus relates to better understanding how leveraging the crowd’s (i.e., patients’ and medical professionals’) experiential knowledge contributes to increasing the impact of scientific research.


Boudreau, K. J., Guinan, E. C., Lakhani, K. R., & Riedl, C. 2016. Looking across and looking beyond the knowledge frontier: Intellectual distance, novelty, and resource allocation in science. Management Science, 62(10): 2765-2783.

Franzoni, C., & Sauermann, H. 2014. Crowd science: The organization of scientific research in open collaborative projects. Research Policy, 43(1): 1-20.

Lakhani, K. R., Jeppesen, L. B., Lohse, P. A., & Panetta, J. A. 2007. The Value of Openness in Scientific Problem Solving. Harvard Business School Working Paper No. 07–050.

Luo, L., & Toubia, O. 2015. Improving Online Idea Generation Platforms and Customizing the Task Structure on the Basis of Consumers' Domain-Specific Knowledge. Journal of Marketing, 79(5): 100-114.

Pols, J. 2014. Knowing patients: turning patient knowledge into science. Science, Technology, & Human Values, 39(1): 73-97.

Nickerson, J. A., Wuebker, R., & Zenger, T. 2017. Problems, theories, and governing the crowd. Strategic Organization, 15(2): 275-288.

Literature Gap

Most crowd science projects are “contributory”, i.e., crowds contribute to simple tasks in the research process (mainly data collection/analysis). We examine whether and to what extent crowdsourcing experiential knowledge is useful across more complex stages of the research process: scientific problem identification and the formulation of respective research questions.

Research Questions

1. How does experiential vs. science knowledge influence the quality of research questions generated by the crowd? 2. How does the quality of research questions generated by the crowd differ from that of research questions generated by professional scientists?


We used a multi-study mixed-method approach: first, qualitative (i.e., focus group) and quantitative-experimental pilot studies to develop and compare different operationalizations related to the crowd's knowledge about scientific research. Second, data from an online and a field experiment were gathered to test individual- and crowd-level hypotheses focusing on the underlying mechanisms that influence crowd members' performance in generating research questions. Finally, the research questions are evaluated by independent experts (e.g., editors and editorial board members of top-tier traumatology journals) along relevant quality dimensions of research questions.

Empirical Material

We first conducted a large-scale pilot study to develop, test, and select knowledge treatment alternatives for the main experiments. The pilot study included four focus groups (N=17; 53% medical professionals, 47% patients) and an online experiment (N=613; 50% female, 50% male respondents). Second, based on the outcome of our pilot studies, we conducted a 2 (knowledge about the scientific process: yes/no) x 2 (scientific knowledge about traumatology: yes/no) between-subjects online experiment (study 1; N=494; female=51%; average age=35) and a field experiment implemented in a real-world crowdsourcing project (study 2; N=90; female=63%; average age=42). Participants from 39 countries were randomly assigned to one of four experimental groups, all answering the same pre- and post-measures; only the facilitation of the research question submission differed (the knowledge treatments). Third, we collected 100 early-stage research questions from professional scientists by screening conference submissions. Finally, international experts (i.e., editors and editorial board members of top-tier traumatology journals) evaluate all research questions (blind to the source) on novelty, feasibility, scientific impact, practical impact, and clarity, as well as their overall potential for further investigation (ongoing; so far n=43; female=14%).


The evaluation of the research questions, and hence the measurement of our dependent variables, is still ongoing; the results are work in progress and will be available within Q1/2019. We look forward to presenting them at the R&D Management Conference 2019 should this abstract receive a positive evaluation. Thus far, we can report satisfactory construct reliabilities (Cronbach's alphas > .8) and factor loadings for all measures: lead userness (adapted from Franke, Poetz, and Schreier (2013b) and Franke, Keinz, and Klausberger (2013a)); importance of science (Kind, Jones, and Barmby, 2007); motivation (Lakhani, Jeppesen, Lohse, and Panetta, 2006); and creativity (adapted from Im, Bayus, and Mason (2003) and Poetz and Schreier (2012)). The average variance extracted (AVE) indicates no issues with convergent validity (values > .5), and discriminant validity can be assumed following the Fornell-Larcker criterion (Fornell & Larcker, 1981). Moreover, initial randomization checks on participants' characteristics, motivations, and attitudes supported the successful randomization into our four treatment groups.
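The reliability statistic cited above, Cronbach's alpha, follows from the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of the scale total). A minimal sketch, with simulated item responses rather than the study's data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the scale sum
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Simulated 4-item scale driven by one latent factor, so alpha should be high
# (consistent with the > .8 threshold mentioned above).
rng = np.random.default_rng(1)
latent = rng.normal(size=500)
items = np.column_stack([latent + 0.5 * rng.normal(size=500) for _ in range(4)])

alpha = cronbach_alpha(items)
print(f"alpha = {alpha:.2f}")
```

AVE-based convergent validity and the Fornell-Larcker criterion would additionally require the factor loadings from a measurement model, which this sketch omits.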

Contribution to Scholarship

First, we contribute to the literature on crowd and citizen science (Bonney et al., 2014; Franzoni & Sauermann, 2014; Sanz, Holocher-Ertl, Kieslinger, Sanz Garcia, & Silva, 2014; West & Pateman, 2016) by providing evidence on the strengths and weaknesses of the crowd in a key stage of the research process that has thus far seen little crowd involvement. Second, we contribute to the crowdsourcing literature and the more general literature on open and distributed knowledge production by developing and testing a conceptual model that highlights different aspects of research question quality and ties them to distinct types of required knowledge. As such, we complement the prior literature's focus on problem solving (e.g., Afuah & Tucci, 2012; Franke et al., 2013b; Jeppesen & Lakhani, 2010) and add to an emerging stream of literature that investigates the relation between organizing problem finding and solution search (e.g., Nickerson, Wuebker, & Zenger, 2017).

Contribution to Practice

Our results should be of interest to practitioners and policy makers by providing pioneering experimental evidence on the opportunities and challenges associated with crowd involvement in more complex stages of the research process. Particularly, we investigate how crowd involvement in early research stages can influence the impact of scientific research. As such, we contribute to the ongoing policy debate on mission orientation and citizen involvement. This study also points to practical tools that can be used to increase research question quality, thus broadening the potential application of crowd and citizen science beyond the current focus of data collection and analysis.


Crowdsourcing, as a central open innovation method, aims to facilitate the interaction between society and industry. Applying this principle in science, while better understanding the crowd's strengths and weaknesses, makes it possible to optimize the inclusion of the public's valuable experiential knowledge in scientific research to address today's societal and innovation challenges.


Acar, O. A., & van den Ende, J. 2016. Knowledge distance, cognitive-search processes, and creativity: the making of winning solutions in science contests. Psychological Science, 27(5): 692-699.

Afuah, A., & Tucci, C. L. 2012. Crowdsourcing As a Solution to Distant Search. Academy of Management Review, 37(3): 355-375.

Alvesson, M., & Sandberg, J. 2011. Generating research questions through problematization. Academy of management review, 36(2): 247-271.

Biesta, G. 2007. Bridging the gap between educational research and educational practice: The need for critical distance. Educational Research and Evaluation 13(3): 295-301.

Bonney, R., Shirk, J. L., Phillips, T. B., Wiggins, A., Ballard, H. L., Miller-Rushing, A. J., & Parrish, J. K. 2014. Next steps for citizen science. Science, 343(6178): 1436-1437.

Booth, W., Colomb, G., & Williams, J. 1995. The Craft of Research. Chicago: University of Chicago Press.

Boudreau, K. J., Guinan, E. C., Lakhani, K. R., & Riedl, C. 2016. Looking across and looking beyond the knowledge frontier: Intellectual distance, novelty, and resource allocation in science. Management Science, 62(10): 2765-2783.

Bryman, A. 2007. The Research Question in Social Research: What is its Role? International Journal of Social Research Methodology, 10(1): 5-20.

Callon, M., & Rabeharisoa, V. 2008. The growing engagement of emergent concerned groups in political and economic life: Lessons from the French association of neuromuscular disease patients. Science, Technology, & Human Values, 33(2): 230-261.

Caron-Flinterman, J. F., Broerse, J. E., & Bunders, J. F. 2005. The experiential knowledge of patients: a new resource for biomedical research? Social science & medicine, 60(11): 2575-2584.

Chang, C. 2004. The interplay of product class knowledge and trial experience in attitude formation. Journal of Advertising, 33(1): 83-92.

Collins, H. M., & Evans, R. 2002. The third wave of science studies: Studies of expertise and experience. Social studies of science, 32(2): 235-296.

Connolly, T., Routhieaux, R. L., & Schneider, S. K. 1993. On the effectiveness of group brainstorming: Test of one underlying cognitive mechanism. Small Group Research, 24(4): 490-503.

Cummings, S. R., Browner, W. S., & Hulley, S. B. 2007. Conceiving the research question. In S. B. Hulley, S. R. Cummings, W. S. Browner, D. G. Grady, & T. B. Newman (Eds.), Designing Clinical Research, Vol. 3rd edition: Lippincott Williams & Wilkins

Davis, M. S. 1971. That's interesting! Towards a phenomenology of sociology and a sociology of phenomenology. Philosophy of the Social Sciences, 1(2): 309-344.

De Jong, T., & Ferguson-Hessler, M. G. 1996. Types and qualities of knowledge. Educational psychologist, 31(2): 105-113.

Dean, M. N., & Summers, A. P. 2006. Mineralized cartilage in the skeleton of chondrichthyan fishes. Zoology, 109(2): 164-168.

Durand, D. E., & VanHuss, S. H. 1992. Creativity software and DSS: Cautionary findings. Information & Management, 23(1): 1-6.

Estellés-Arolas, E., & González-Ladrón-De-Guevara, F. 2012. Towards an integrated crowdsourcing definition. Journal of Information Science, 38(2): 189-200.

European Commission. 2018. Open Science Policy Platform Recommendations.

European Science Foundation. 2013. Science in Society: caring for our futures in turbulent times: European Science Foundation.

Felin, T., & Zenger, T. R. 2014. Closed or open innovation? Problem solving and the governance choice. Research Policy, 43(5): 914-925.

Fornell, C., & Larcker, D. F. (1981). Structural equation models with unobservable variables and measurement error: Algebra and statistics. Journal of Marketing Research, 18(3), 382–388.

Franke, N., Poetz, M. K., & Schreier, M. 2013. Integrating problem solvers from analogous markets in new product ideation. Management Science, 60(4): 1063-1081.

Franke, N., Von Hippel, E., & Schreier, M. 2006. Finding commercially attractive user innovations: A test of lead‐user theory. Journal of Product Innovation Management, 23(4): 301-315.

Franzoni, C., & Sauermann, H. 2014. Crowd science: The organization of scientific research in open collaborative projects. Research Policy, 43(1): 1-20.

Gerace, W. J., Dufresne, R., Leonard, W., & Mestre, J. 2001. Problem solving and conceptual understanding. Paper presented at the Physics education research conference.

Gibbons, M. 1994. The new production of knowledge: The dynamics of science and research in contemporary societies: Sage.

Gittelman, M. 2016. The revolution re-visited: Clinical and genetics research paradigms and the productivity paradox in drug discovery. Research Policy, 45(8): 1570-1585.

Goldenberg, J., Mazursky, D., & Solomon, S. 1999. Creative sparks. Science, 285(5433): 1495-1496.

Hand, E. 2010. People power. Nature, 466(7307): 685-687.

Hippel, E. v., & Krogh, G. v. 2003. Open source software and the “private-collective” innovation model: Issues for organization science. Organization Science, 14(2): 209-223.

Hoover, S. M., & Feldhusen, J. F. 1994. Scientific problem solving and problem finding: A theoretical model. In M. A. Runco (Ed.), Problem finding, problem solving, and creativity: 201-219. NJ: Ablex Publishing.

Huber, L. R., Sloof, R., & Van Praag, M. 2014. The effect of early entrepreneurship education: Evidence from a field experiment. European Economic Review, 72(11): 76-97.

Im, S., Bayus, B. L., & Mason, C. H. 2003. An empirical study of innate consumer innovativeness, personal characteristics, and new-product adoption behavior. Journal of the academy of marketing science, 31(1): 61-73.

Jahn, T. 2008. Transdisciplinarity in the practice of research, Transdisziplinäre Forschung: Integrative Forschungsprozesse verstehen und bewerten. : 21-37. Frankfurt/Main, Germany: Campus Verlag.

Jeppesen, L. B., & Frederiksen, L. 2006. Why do users contribute to firm-hosted user communities? The case of computer-controlled music instruments. Organization science, 17(1): 45-63.

Jeppesen, L. B., & Lakhani, K. R. 2010. Marginality and Problem-Solving Effectiveness in Broadcast Search. Organization Science, 21(5): 1016-1033.

Jones, B. 2009. The burden of knowledge and the “death of the renaissance man”: Is innovation getting harder? Review of Economic Studies, 76(1): 283-317.

Katila, R., & Ahuja, G. 2002. Something old, something new: A longitudinal study of search behavior and new product introduction. Academy of management journal, 45(6): 1183-1194.

Kind, P., Jones, K., & Barmby, P. 2007. Developing Attitudes towards Science Measures. International Journal of Science Education, 29(7): 871-893.

Krippendorff, K. 2004. Content analysis: An introduction to its methodology (2nd ed.). Thousand Oaks, CA: SAGE.

Kuhn, T. S. 1962. The Structure of Scientific Revolutions: University of Chicago Press.

Lakhani, K., Jeppesen, L., Lohse, P., & Panetta, J. 2006. The Value of Openness in Scientific Problem Solving (No. 07–050). Harvard Business School Working Knowledge.

Laursen, K., & Salter, A. 2006. Open for innovation: the role of openness in explaining innovation performance among UK manufacturing firms. Strategic Management Journal, 27(2): 131-150.

Lee, H., & Cho, Y. 2007. Factors affecting problem finding depending on degree of structure of problem situation. The Journal of Educational Research, 101(2): 113-123.

Luo, L., & Toubia, O. 2015. Improving Online Idea Generation Platforms and Customizing the Task Structure on the Basis of Consumers' Domain-Specific Knowledge. Journal of Marketing, 79(5): 100-114.

Lüthje, C., Herstatt, C., & von Hippel, E. 2005. User-innovators and “local” information: The case of mountain biking. Research Policy, 34(6): 951-965.

MacCrimmon, K. R., & Wagner, C. 1994. Stimulating ideas through creative software. Management Science, 40(11): 1514-1532.

Nickerson, J. A., Wuebker, R., & Zenger, T. 2017. Problems, theories, and governing the crowd. Strategic Organization, 15(2): 275-288.

Pammolli, F., Magazzini, L., & Riccaboni, M. 2011. The productivity crisis in pharmaceutical R&D. Nature Reviews Drug Discovery, 10: 428.

Paolacci, G., & Chandler, J. 2014. Inside the Turk: Understanding Mechanical Turk as a participant pool. Current Directions in Psychological Science, 23(3): 184-188.

Pisano, G. P., & Verganti, R. 2008. Which kind of collaboration is right for you. Harvard Business Review, 86(12): 78-86.

Poetz, M. K., & Schreier, M. 2012. The value of crowdsourcing: can users really compete with professionals in generating new product ideas? Journal of Product Innovation Management, 29(2): 245-256.

Pols, J. 2014. Knowing patients: turning patient knowledge into science. Science, Technology, & Human Values, 39(1): 73-97.

Polymath, D. 2012. A new proof of the density Hales-Jewett theorem. Annals of Mathematics, 175(3): 1283-1327.

Punch, K. F. 2013. Introduction to social research: Quantitative and qualitative approaches. Los Angeles: Sage Publications Ltd.

Raddick, M. J., Bracey, G., Gay, P. L., Lintott, C., Cardamone, C., Murray, P., Schawinski, K., Szalay, A., & Vandenberg, J. 2013. Galaxy Zoo: Motivations of Citizen Scientists. Astronomy Education Review, 12(1).

Runco, M. A. 1994. Problem finding, problem solving, and creativity: Greenwood Publishing Group.

Sanz, F. S., Holocher-Ertl, T., Kieslinger, B., Sanz Garcia, F., & Silva, C. G. 2014. White Paper on Citizen Science in Europe. In S. Consortium (Ed.): European Commission.

Sauermann, H., & Franzoni, C. 2015. Crowd science user contribution patterns and their implications. Proceedings of the National Academy of Sciences, 112(3): 679-684.

Scharrer, L., Rupieper, Y., Stadtler, M., & Bromme, R. 2017. When science becomes too easy: Science popularization inclines laypeople to underrate their dependence on experts. Public Understanding of Science, 26(8): 1003-1018.

Simon, H. A. 1973. The structure of ill structured problems. Artificial Intelligence, 4(3-4): 181-201.

Singh, J., & Fleming, L. 2010. Lone inventors as sources of breakthroughs: Myth or reality? Management Science, 56(1): 41-56.

Stepich, D. A., & Ertmer, P. A. 2009. " Teaching" Instructional Design Expertise: Strategies to Support Students' Problem-Finding Skills. Technology, Instruction, Cognition & Learning, 7(2): 147-170.

Stokes, D. 1997. Pasteur's Quadrant: Basic Science and Technological Innovation. Washington, DC: Brookings Institution Press.

Sullivan, B. L., Wood, C. L., Iliff, M. J., Bonney, R. E., Fink, D., & Kelling, S. 2009. eBird: A citizen-based bird observation network in the biological sciences. Biological Conservation, 142(10): 2282-2292.

SwafS. 2017. Citizen science policies in the European Commission, Science with and for Society Policy Brief.

Thabane, L., Thomas, T., Ye, C., & Paul, J. 2009. Posing the research question: not so simple. Canadian Journal of Anesthesia/Journal canadien d'anesthésie, 56(1): 71.

US Congress. 2016. Crowdsourcing and Citizen Science Act.

Van Brussel, S., & Huyse, H. 2018. Citizen science on speed? Realising the triple objective of scientific rigour, policy influence and deep citizen engagement in a large-scale citizen science project on ambient air quality in Antwerp. Journal of Environmental Planning and Management: 1-18.

Vernon, D., Hocking, I., & Tyler, T. C. 2016. An evidence-based review of creative problem solving tools: A practitioner’s resource. Human Resource Development Review, 15(2): 230-259.

Von Hippel, E. 1986. Lead users: a source of novel product concepts. Management Science, 32(7): 791-805.

West, S. E., & Pateman, R. M. 2016. Recruiting and Retaining Participants in Citizen Science: What Can Be Learned from the Volunteering Literature? Citizen Science: Theory and Practice, 1(2): 1-10.

Wiggins, A., & Crowston, K. 2011. From conservation to crowdsourcing: A typology of citizen science. Paper presented at the Proceedings of the 44th Hawaii International Conference on Systems Sciences (HICSS), Hawaii.