
An Aggregate Taxonomy for Crowdsourcing Platforms, their Characteristics, and Intents

ABSTRACT

This article intends to categorize the different classifications used in the literature to distinguish among crowdsourcing platform types, based on their characteristics and intents. This is performed by means of a systematic literature review. The search for texts combining the terms ‘crowdsourcing’ and ‘taxonomy’ on the Google Scholar platform resulted in 61 potential articles to be included in the corpus of the research, which were reduced to 13 after additional filtering. The study shows that taxonomies and classifications of platforms differ from author to author, with each adopting their own criteria and terminology. The 65 different crowdsourcing classifications found in the reviewed studies were reorganized into 16 groups, based on their characteristics. We believe that the current work contributes to the standardization of the terminology and categorizations adopted in the literature and, therefore, to a better understanding of the crowdsourcing phenomenon.

Keywords:
crowdsourcing; taxonomy; classification; systematic literature review; types of platforms

INTRODUCTION

According to Rieder and Voβ (2010), the collective intelligence of individuals connected to electronic networks has been explored in diverse ways in a wide variety of virtual environments. A new labor model emerges, known as the ‘client-employee,’ characterized by organizations’ attempts to involve customers in active participation in value creation, improving the efficiency of the production process and having them perform as if they were employees, when in fact they are customers.

This new model of customer participation in value generation occurs primarily in virtual environments, mediated by ‘crowdsourcing platforms,’ which act as ‘bridges’ that bring together organizations, with their problems and challenges, and unrelated people, who have the skills or creativity to address them in a faster and less costly fashion than regular employees would (Liu & Dang, 2014). These platforms consist of ubiquitous, distributed, and innovation-enabling digital systems for the provision of services and products (Reuver, Sørensen, & Basole, 2018; Hein et al., 2020; Rolland, Mathiassen, & Rai, 2018) to users, based on the users’ own efforts.

Phenomena such as the digital age, Industry 4.0 (Lin et al., 2018; Pilloni, 2018; Vianna, Graeml, & Peinado, 2020), and smart environments (Valerio, Passarella, & Conti, 2017; Ullo & Sinha, 2020) have turned crowdsourcing platforms into essential tools for organizations, since they are the means for companies to access data and improve problem solving, product development, and process innovation (Hartmann & Henkel, 2020; Vianna et al., 2020). Thus, trying to better understand the characteristics of these platforms and their functioning, while seeking to consolidate the elements that comprise them, represents a relevant research effort.

For Estellés-Arolas and González-Ladrón-de-Guevara (2012), the numerous possibilities of using crowdsourcing platforms, taking advantage of the crowd in the performance of tasks, have increased the complexity of the phenomenon and made it more difficult to interpret and define such platforms’ application. The diversity of denominations can be attributed to varied factors, such as particularities of the crowd, the outsourcing activity, or the social use made of the technological infrastructure. Taxonomies also vary according to the crowdsourcing application, the role of the individuals, and the tasks performed in problem solving or producing something (Saxton, Oh, & Kishore, 2013; Prpic, Taeihagh, & Melton, 2015).

Therefore, the research question that guided this work was: What is common among the various classifications and terminologies used to name crowdsourcing platforms and their characteristics? Thus, the research sought to develop a taxonomy of crowdsourcing platform types. The authors aim to contribute, theoretically, to consolidating the understanding of the classifications and characteristics of crowdsourcing platforms. The study also offers some practical results that may serve as guidelines for managers in organizations on how to rely on the knowledge of connected crowds and their contribution to building the systems they need to generate value in the market.

This article presents a systematic literature review (SLR), addressing published works that discuss crowdsourcing and define terminology related to its characteristics. We sought to identify the connections between the classifications attributed to crowdsourcing platforms and their characteristics, based on the cases and examples used by different authors to explain each type of crowdsourcing that appeared in their taxonomies or classifications. The result allowed for the development of an overall taxonomy of different crowdsourcing platforms.

TAXONOMY, ONTOLOGIES, AND FOLKSONOMIES

According to Glass and Vessey (1995), as activity in a particular area increases, new concepts are developed, resulting in the need for new taxonomies to organize the generated knowledge. Taxonomies and folksonomies are among the most prominent web content classification schemes (Noruzi, 2006).

The term ‘folksonomy,’ a combination of ‘folk’ and ‘taxonomy,’ was introduced by Vander Wal (2004) in a post on his blog. According to him, a folksonomy is a user-generated content classification system that allows people to tag their favorite web resources with words or phrases of their choosing, in natural language. According to Dotsika (2009), by combining and harnessing the distinct powers of ontologies and folksonomies, web and information scientists are trying to integrate them, merging the flexibility, collaboration, and aggregation of folksonomies with the standardization, automated validation, and interoperability of ontologies. Table 1 shows the main differences between taxonomies, ontologies, and controlled vocabularies (first column) and folksonomies and free tags (second column), according to Binzabiah and Wade (2012).

Table 1
Contrasting ontology and taxonomies with folksonomies

Table 1 presents a relevant distinction between a taxonomy process, which relies on systematic procedures and depends on experts for its development, and the folksonomy process, which is treated less rigorously and applied primarily to tag content on websites, blogs, and the like. Table 1 also shows that the taxonomy process demands hierarchical structures and controlled vocabulary, while the folksonomy, or free tagging, process is an organic process of classification (Binzabiah & Wade, 2012). Binzabiah and Wade (2012) remind us that folksonomy tagging makes information increasingly easy to search, discover, and navigate over time. It also has the advantage of being multidimensional, as users can assign many tags to express a concept and can combine them as they wish. However, uncontrolled tagging may result in a mixture of types of things, names, genres, and formats. Thus, the taxonomy process is usually considered more adequate for categorization/classification that requires scientific rigor.
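To make the contrast concrete, the short sketch below is our own illustration (not drawn from the reviewed works); the terms and tags are arbitrary examples. It represents a taxonomy as a hierarchy over a controlled vocabulary and a folksonomy as flat, user-assigned free tags.

```python
# Our illustration: a taxonomy as a controlled, hierarchical vocabulary versus a
# folksonomy as uncontrolled free tags attached to resources. All terms and tags
# below are invented examples.

# Taxonomy: controlled vocabulary organized as parent -> children.
taxonomy = {
    "crowdsourcing": {
        "microtasks": {},
        "competitions": {"innovation contests": {}, "design contests": {}},
        "financing": {"crowdfunding": {}},
    }
}

# Folksonomy: flat, user-generated tags with no enforced hierarchy or vocabulary.
folksonomy = {
    "threadless.com": ["t-shirts", "design", "crowd", "cool-art"],
    "mturk.com": ["microtask", "gig", "small-payments"],
}

def taxonomy_path(tree, term, path=()):
    """Return the hierarchical path to a term in the taxonomy, if present."""
    for node, children in tree.items():
        if node == term:
            return path + (node,)
        found = taxonomy_path(children, term, path + (node,))
        if found:
            return found
    return None

print(taxonomy_path(taxonomy, "crowdfunding"))
# ('crowdsourcing', 'financing', 'crowdfunding')
```

The hierarchical structure supports the kind of controlled, expert-driven classification discussed above, while the tag lists illustrate why free tagging, although flexible, mixes names, genres, and formats without any enforced order.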

The importance of developing taxonomies was initially perceived in the biological sciences (Nickerson, Muntermann, Varshney, & Issac, 2009). According to Bailey (1994), the study of classifications in the social sciences only started receiving more attention after the works of Max Weber and John C. McKinney, with their concepts of the ‘ideal type’ and the ‘constructed type,’ respectively.1
1. The ideal type is an extreme or superior representation of all dimensions of a typology and may be equivalent to the best possible value chain, while the ‘constructed type’ involves a moderate approach, using the most common characteristics found as a central tendency (Bailey, 1994).

According to Simpson (1961) and Sneath and Sokal (1973), a taxonomy involves the classification and identification of the different activities that form the basis of a phenomenon. Dogac, Laleci, Kabak, and Cingil (2002) highlight the multidimensional character of a taxonomy, something that had already been pointed out by Bailey (1994), establishing a hierarchy among entities and classifications. The definition of a taxonomy in a field of research helps consolidate the classifications and terminology that can be used by all those interested in it (Pitropakis, Panaousis, Giannetsos, Anastasiadis, & Loukas, 2019). For Bailey (1994) and Glass and Vessey (1995), taxonomies and classifications can refer to both processes and outcomes, with processes defining standards and outcomes being the standards themselves, when related to entities with similar characteristics.

The use of the knowledge of crowds (Surowiecki, 2005), connected through electronic networks, to improve the value proposition of enterprises has led to a contemporary phenomenon referred to as crowdsourcing (Howe, 2006), discussed in a plethora of academic works, many of which propose taxonomies or other sorts of classifications to improve its understanding. Such phenomenon will be further explored in the next section. It still demands investigation and presents research opportunities (Wazny, 2017), one of which is the organization of previously defined taxonomies to improve the understanding of the phenomenon and the definition of common grounds for future work.

CROWDSOURCING

The term ‘crowdsourcing’ was coined by Howe (2006), when discussing the possibility of engaging crowds in the performance of tasks with Web 2.0 support. For Borromeo and Toyama (2016), crowdsourcing is a form of human computation, in which the effort of many individuals is requested to improve the quality of information or to provide a better service.

The use of crowdsourcing platforms allows difficult problems to be solved in a much shorter time and at a reasonable cost, relying on the support of many people (Hosseini, Phalp, Taylor, & Ali, 2015). Those involved receive a reward, which may be financial or intangible, for performing the necessary activities. For Quinn and Bederson (2009), platforms that reward individuals monetarily belong to a different category from those that reward effort in other ways.

Even organizations with great financial power, such as those in the pharmaceutical industry, have opted to develop virtual platforms that try to engage internet users in performing activities of their interest. Innocentive is an example of a platform used to propose, to specialists in the crowd, difficult problems that challenge companies’ R&D teams (Albors, Ramos, & Hervas, 2008). The search for innovations, the solution of engineering problems, the categorization of images, and tasks that require presence in specific geographic locations are all examples of situations in which companies benefit from outsourcing to crowds of internet users (Naroditskiy, Rahwan, Cebrian, & Jennings, 2012; Ranard et al., 2014).

With the advent of the digital age and phenomena such as Industry 4.0 (Lin et al., 2018; Pilloni, 2018; Vianna et al., 2020) and smart environments (Ullo & Sinha, 2020; Valerio et al., 2017), there was an increase in the use of data as an input for organizational processes (Hartmann & Henkel, 2020; Vianna et al., 2020). These data are processed through ubiquitous, distributed, and innovation-enabling digital systems that provide services and products to their users (Reuver et al., 2018; Hein et al., 2020; Rolland et al., 2018), often referred to as digital platforms.

The importance of data in creative processes (Lin et al., 2018; Yang, Shen, & Wang, 2018) and in the development of solutions (Hassan, Gao, Jalal, & Arif, 2018; Zhang, Yang, Chen, & Li, 2018) led to the exploration of varied factors to achieve the involvement of people in crowdsourcing activities. It also contributed to the generation of several taxonomies for collaborative activities, such as the classification of virtual work (Holts, 2013), the classification of co-creation activities (Zwass, 2010), and specific platform features (Jiang & Wagner, 2014).

As usually happens when a new field of research is developing, most of these efforts take place autonomously, without taking one another into consideration. The SLR proposed in the current study is an attempt to integrate the different taxonomies that have been created over the years and to improve the understanding of the crowdsourcing phenomenon. In the next section, the methodological procedures are explained in detail.

METHODOLOGY

This article uses the systematic literature review (SLR) as a bibliographical survey tool, seeking to gather information about crowdsourcing platforms and to analyze the characteristics of the classifications adopted by the authors who studied them.

The search for papers to be included in the corpus of the SLR was performed in Google Scholar first and, when a potentially relevant paper was mentioned there but was not available for full consultation, other databases were used, to ensure access to the whole document.

Although there is still some prejudice against the use of Google Scholar as the main source for an SLR, this does not seem reasonable when Google Scholar is used just to define the corpus of the research. Most other databases represent a fraction of what is available in the literature, and Google Scholar is one of the broadest databases available. According to Jacsó (2005), the relevance of Google Scholar is due to the large volume of articles and publications it refers to, from various academic sources, and to the search refinement tools it provides. In addition, the Google Scholar platform identifies the most cited works in a researched field (Martin-Martin, Orduna-Malea, Harzing, & López-Cózar, 2017), with tools that increase the efficiency of the search when looking for specific content in the abstract, the title, or the whole text.

When a study is interested in works published in different fields, including business, engineering, computer science, or the social sciences, the Google Scholar database is also attractive because it is not segmented by area. This is useful when researchers are in search of interdisciplinary or multidisciplinary studies that may not be available in field-segmented databases. Repanovici (2011) highlights the quality and volume of citations in Google Scholar as one of its main advantages. In a comparative study including Web of Science, Scopus, and Google Scholar, Harzing and Alakangas (2016) found that Google Scholar returned broader and more comprehensive results than either of the other platforms.

One disadvantage of Google Scholar is that it sometimes does not provide access to the full content of proprietary material: there are papers that are referred to in the database, but to which the researcher will not have access without other means to reach them, which may involve the additional use of Web of Science, Scopus, or other specific databases. However, as mentioned before, this was precisely the procedure adopted by the authors in situations where an entry was relevant but Google Scholar did not provide access to the full document. In those cases, papers referred to by Google Scholar were obtained, in their full version, from other sources.

An SLR is characterized by rigor in the application of a scientific strategy, “critical evaluation and synthesizing relevant studies on a specific topic” (Botelho, Cunha, & Macedo, 2011). For Higgins and Green (2011), the SLR allows for the collection of information through previously defined criteria. The methodological rigor of an SLR enables relevant and consistent works to be collected and analyzed, allowing for deepening the knowledge about the researched topic. Its results help non-experts and other researchers develop a better understanding of a particular issue without having to go through the laborious research performed by those who did the review (Petticrew & Roberts, 2008).

Filtering steps to define the corpus of the SLR

The terms ‘crowdsourcing’ and ‘taxonomy’ were used in the preliminary search in the Google Scholar academic database, which was conducted in June 2021. We were interested in the term ‘taxonomy’ because it relates to the assignment of labels and names, in a practical way, based on previous classifications and categorizations, according to Bailey (1994).

The order of presentation of the articles on the Google Scholar results pages combined a decreasing ‘number of citations,’ as explained by Harzing and Van der Wal (2008), and their five-year h-index and h-median (Google Scholar, 2021), with results referring to patents or citations being filtered out. Only entries written in English were considered, which is justified by the fact that English has become the universal language of science (Bolton & Kuteeva, 2012).

The search combined the terms ‘crowdsourcing’ and ‘taxonomy,’ returning approximately 21,000 entries. The review of entries was conducted on pages presenting 10 results each, and it was halted after a sequence of two consecutive search pages (20 results) did not offer any new relevant entry relating to an article that could potentially be included in the SLR corpus. Padilha and Graeml (2015) had applied the same procedure of analyzing all entries output by a Google Scholar search until two pages of entries did not provide any additional useful entry. Only entries relating to articles published in scientific journals were considered.

The results involved papers from 2006 onward, as this was the year the term ‘crowdsourcing’ was coined. As mentioned before, the first filtering step to determine the corpus for this SLR returned approximately 21,000 entries. In the second step, the first 31 pages of Google Scholar entries for the search were carefully analyzed, comprising a total of 310 entries. As pages 30 and 31 yielded no potentially relevant results, the search was interrupted.

The Google Scholar platform does not have a specific filter that allows for the automatic separation of papers published in scientific journals from other works, such as books, theses, dissertations, conference papers, and patent applications. Each analyzed page presented, on average, three articles published in scientific journals, six conference papers, and one entry relating to a dissertation, thesis, or other type of source. However, not all these articles included explicit crowdsourcing categorizations. Thus, a third filtering step involved selecting the 61 journal articles found among those first 31 pages of Google Scholar entries. In a fourth filtering step, the abstracts of those papers were read, their keywords were analyzed, and, in some cases, the whole paper was read to decide whether it should be included in the corpus of the research. Only papers that explicitly included a classification of models of crowdsourcing platforms were kept in the corpus. In the end, 13 articles were kept for in-depth analysis, as shown in Figure 1.

Figure 1
Filtering steps to select papers to be included in the corpus of the systematic literature review.
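As an illustration only, the page-review stopping rule described above can be expressed as a short procedure. This is our sketch, not a script used in the study; the `review_pages` function and the `relevant` flag are hypothetical, since relevance was judged by the researchers while reading each results page.

```python
# Hypothetical sketch of the stopping rule: review Google Scholar result pages
# (10 entries each) in order and stop once two consecutive pages bring no new
# relevant entry. Entries are assumed to be dicts flagged by the reviewer.

def review_pages(pages, stop_after_empty_pages=2):
    selected = []
    consecutive_empty = 0
    for page in pages:                      # each page holds up to 10 entries
        relevant = [e for e in page if e.get("relevant")]
        if relevant:
            selected.extend(relevant)
            consecutive_empty = 0
        else:
            consecutive_empty += 1
            if consecutive_empty >= stop_after_empty_pages:
                break                       # pages 30 and 31 triggered the stop in this study
    return selected

# In the study, 31 pages (310 entries) were reviewed this way, yielding 61
# journal articles, later reduced to 13 after reading abstracts and full texts.
```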

Procedures for the analysis

In our attempt to generate a taxonomy based on the 13 articles included in the corpus of the SLR, we followed the guidance provided by Binzabiah and Wade (2012) with respect to the characteristics of a taxonomy (see Table 1). In doing so, we also observed the following recommendations: (a) identify approximations between the characteristics of the developed activities (Simpson, 1961; Sneath & Sokal, 1973); (b) develop a hierarchy for the classification, the terms used in the literature, and the characteristics of crowdsourcing activities (Bailey, 1994; Dogac, Laleci, Kabak, & Cingil, 2002), through the identification of patterns in such activities (Glass & Vessey, 1995); and (c) consolidate the classification of types of crowdsourcing, given the importance of the phenomenon (Pitropakis et al., 2019) for studies in business and technology.

To accomplish that, we took several steps, which are described in the following subsections. The first three of those steps involved: (a) identifying the terms used to refer to the types of crowdsourcing in the reviewed articles; (b) identifying classification schemes used to differentiate among several types of platforms and performed activities; and (c) listing the examples given in each article to illustrate the suggested classifications (see section ‘Relationship among different types of classification, platform classifications, and examples’).

The other three steps involved: (d) consolidating the terms and characteristics found in the analyzed articles; (e) summarizing the types of classification and presenting their characteristics; and (f) consolidating the results in a table that lists digital platforms, crowdsourcing activities, and rankings (see section ‘Platform ratings and descriptions’).

The operationalization of the analysis initially consisted of entering into a spreadsheet the types of classification and the platform classifications, as presented in each of the works. Each platform classification then went through two steps: the first involved relating the platform classifications to the examples given in the analyzed articles; the second consisted in opening a new tab in the spreadsheet to list the characteristics mentioned by the authors of the analyzed articles. Then, the characteristics and classifications of the platforms were analyzed in search of approximations among these elements, following the guidelines in the literature, to obtain the consolidated classifications. Finally, the following were hierarchically established: platform classification, terms used by the authors of the works, and characteristics. A simplified sketch of this grouping logic is presented below.
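The sketch that follows is a simplified illustration under our own assumptions, not the spreadsheet actually used; the mapping rows reproduce only a few of the author-specific labels cited in this article, and their placement in aggregate categories anticipates the grouping presented in Table 3.

```python
# Illustrative consolidation step: author-specific platform classifications are
# mapped to an aggregate category whenever their described characteristics or
# examples overlap, reducing many labels (65 in the study) to fewer groups (16).
from collections import defaultdict

# (source, original label, aggregate category) - example rows only
raw_classifications = [
    ("Hossain & Kauranen (2015)", "microtask", "Microtasks"),
    ("Good & Su (2013)", "volunteer microtask", "Microtasks"),
    ("Prpic et al. (2015)", "virtual labor marketplace", "Microtasks"),
    ("Schenk & Guittard (2011)", "crowdsourcing of simple tasks", "Microtasks"),
    ("Schuurman et al. (2012)", "competition", "Competitions"),
    ("Prpic et al. (2015)", "tournament", "Competitions"),
    ("Saxton et al. (2013)", "crowdfunding", "Financing"),
]

aggregate = defaultdict(list)
for source, label, category in raw_classifications:
    aggregate[category].append(f"{label} ({source})")

for category, members in aggregate.items():
    print(category, "<-", "; ".join(members))
```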

PRESENTATION OF RESULTS AND DISCUSSION

Relationship among different types of classification, platform classifications, and examples

Table 2 presents the analyzed articles in the first column, the constructs used in the classification (type of classification) in the second column, the crowdsourcing platform classification in the third column, and the examples of crowdsourcing platforms presented in the various articles in the fourth column.

Table 2
Types of classification, classifications of crowdsourcing platforms, and examples used

When analyzing the type of classification (second column), the term ‘crowdsourcing types’ appeared in three studies (Faber & Matthes, 2016; Poblet, García-Cuesta, & Casanovas, 2018; Schuurman, Baccarne, Marez, & Mechant, 2012). Nevertheless, different crowdsourcing platform classifications appeared in the reviewed papers. Sivula and Kantola (2016) and Faber and Matthes (2016) used different terms to refer to the type of classification (‘implementation models’ and ‘crowdsourcing types’), though using the same term to define one of the crowdsourcing platform classifications (third column): ‘crowdcreation.’ Zogaj, Bretschneider, and Leimeister (2014), on the other hand, used four different terms to refer to the types of classification: ‘fields of application,’ ‘crowdsourcing intermediaries,’ ‘crowdsourcing platforms,’ and ‘crowdsourcing processes.’

This summary of the relationships among types of classification and platform classifications is relevant to show how different terms or categorizations are used for the same purpose. This becomes clearer when one analyzes the fourth column, which presents the crowdsourcing platform examples, and compares its content with that of the previous columns. Amazon Mechanical Turk, for example, appears in Hossain and Kauranen (2015) as a ‘microtask’ platform, under the type of classification ‘crowdsourcing applications.’ However, Amazon Mechanical Turk also appears in Good and Su (2013) and Prpic et al. (2015), where the types of classification are ‘crowdsourcing systems’ and ‘crowdsourcing techniques.’ There, this platform is classified as a ‘virtual labor marketplace’ (Prpic et al., 2015) and a ‘volunteer microtask’ platform (Good & Su, 2013).

Because of that, it is necessary to analyze the possible approximations, based on the terminology and the characteristics described by different authors (Simpson, 1961; Sneath & Sokal, 1973). Considering that the crowdsourcing phenomenon is part of the activities that happen through platforms in the so-called digitized society (Olesen, 2018) and in Industry 4.0 (Vianna et al., 2020), we directed our efforts to identifying combinations involving the observed elements and the hierarchizations, classifications, and characterizations of platforms and crowdsourcing activities.

Platform ratings and descriptions

The analysis of the articles included in the SLR led to the conclusion that different terms are sometimes used to refer to the same situation, while, in other cases, the same term is used to explain different situations.

It was possible to group similar classifications and those using the same terms in their characterizations, reducing the number of distinct categories proposed in the reviewed studies from 65 to 16. In addition to reducing the number of distinct categories, we sought to highlight the patterns of crowdsourcing activities presented in the literature (Bailey, 1994; Glass & Vessey, 1995) and to consolidate, in the first and third columns of Table 3, the classifications and their characteristics (Pitropakis et al., 2019). Thus, Table 3 presents the result of the categorization of crowdsourcing platforms using a taxonomy process. The first column presents the purpose of the platforms. The second column lists the classifications used by the authors, which were used to decide on their inclusion in the category presented in the first column. Finally, the third column brings a compilation of the common/shared characteristics found in the different works. Table 3 is the result of the attempt to group the categories found by different authors into an integrated taxonomy, with a reduced number of categories.

Table 3
Classification of platforms, required activities, and characteristics

Next, we explain in more detail the points of convergence among the ideas of the various authors that allowed the aggregation of all their categories into the sixteen categories proposed here, even when the terminology used could lead one to believe that the number of categories was much higher.

Microtasks: The classification of microtasks, as mentioned by Sivula and Kantola (2016), Good and Su (2013), Hossain and Kauranen (2015), and Bhatti et al. (2020), presents similar and complementary characteristics, making it possible to identify the existence of a monetary reward in some cases, i.e., the performance of a task in exchange for a small payment, or of voluntary activity. Similar characteristics were described in the work of Schenk and Guittard (2011) and referred to as ‘crowdsourcing of simple tasks.’ Such activities were characterized by low involvement and low pay. In the works by Saxton et al. (2013), Zogaj et al. (2014), and Schuurman et al. (2012), although the term ‘microtask’ is mentioned, it is presented as an expression in the description of other classifications rather than as a platform classification in its own right. Finally, Prpic et al. (2015) relate the expression to an example of the classification of a microtasking platform.

Competitions: The classification of platforms characterized by the participation of individuals or groups of individuals in competitions is found in the works of Prpic et al. (2015) and Good and Su (2013). Although the constructs used in the classifications of Majchrzak and Malhotra (2013), Schuurman et al. (2012), and Zogaj et al. (2014) differ in their descriptions or examples, these authors make use of terms such as ‘competition,’ ‘contest,’ and ‘tournament.’ The tasks involved in this type of activity are performed, according to Prpic et al. (2015), Schuurman et al. (2012), Zogaj et al. (2014), and Nakatsu et al. (2014), by a varying number of individuals, depending on the complexity and degree of specialization required for their execution. Competitions focused on the development of algorithms, products, and services, and on problem solving are presented by Good and Su (2013), Majchrzak and Malhotra (2013), and Prpic et al. (2015) as examples of the application of this type of crowdsourcing activity.

Evaluations: According to Sivula and Kantola (2016), Saxton et al. (2013), and Nakatsu et al. (2014), this type of crowdsourcing activity is performed by individuals interested in monetary rewards, as well as by volunteers. The terms used by the authors refer to the evaluation of the service itself (Sivula & Kantola, 2016), the reporting of improvements and suggestions to the organization (Saxton et al., 2013), and a form of screening and innovation directed by the consumers themselves (Nakatsu et al., 2014; Sivula & Kantola, 2016).

Complex tasks: Complex tasks, according to Faber and Matthes (2016), Schenk and Guittard (2011), and Bhatti et al. (2020), involve the solution of complex and challenging problems, justifying Schenk and Guittard’s (2011) and Bhatti et al.’s (2020) assertion that such tasks are usually paid for in money, as they require greater involvement of the participants and the possible decomposition of complex tasks into smaller subtasks. The authors use different terms to refer to this type of task: complex tasks (Bhatti et al., 2020; Schenk & Guittard, 2011; Sivula & Kantola, 2016), difficult tasks (Good & Su, 2013), challenges related to innovation, process improvements, and radical improvements (Majchrzak & Malhotra, 2013), and solutions such as algorithm development (Faber & Matthes, 2016).

Financing: In this type of crowdsourcing activity, all the classifications use the terms crowdfunding or financing, which already characterize the performed activity. According to Sivula and Kantola (2016), Zogaj et al. (2014), Nakatsu et al. (2014), and Saxton et al. (2013), the crowd serves as a collective sponsor of a project for the provision of a service, the development of a product, or the establishment of an organization.

Design: This group aggregates classifications with similar characteristics, referring to crowdsourcing activities aimed at creating music, art for mugs and other items, and the design of products, T-shirts, page prints, or web resources (Schenk & Guittard, 2011; Sivula & Kantola, 2016), as well as design (Saxton et al., 2013; Zogaj et al., 2014) and the generation of ideas (Hossain & Kauranen, 2015). Schuurman et al. (2012) refer to creation and design platforms where people are compensated based on the success of their interventions. The main example is the Threadless platform, which allows users to develop their own T-shirt silkscreen art (Saxton et al., 2013; Zogaj et al., 2014). As noted by Schuurman et al. (2012), Schenk and Guittard (2011), and Zogaj et al. (2014), the work is usually financially rewarded. They all agree that specific knowledge may be required from those participating in this kind of crowdsourcing effort.

Content development: Schuurman et al. (2012) mention content development platforms, such as YouTube and Wikipedia, as examples of integrative unpaid search platforms. The terms ‘knowledge building’ and ‘crowd creation system’ are used by Faber and Matthes (2016) and Saxton et al. (2013), respectively, to characterize these same platforms (YouTube and wikis). Hossain and Kauranen (2015) mention ‘wiki constructs’ and ‘citizen journalism’ as ways in which content development platforms can benefit from amateur content. According to Faber and Matthes (2016), Saxton et al. (2013), and Hossain and Kauranen (2015), such platforms allow several types of content to be created and edited by any individual, who usually participates voluntarily in the crowdsourcing effort.

Software development: The terms used by Saxton et al. (2013) and Hossain and Kauranen (2015) suggest a free software development relationship. This crowdsourcing activity is usually performed voluntarily, involving individuals motivated by an interest in computing. In some cases, when specific knowledge is demanded, the work may be remunerated (Hossain & Kauranen, 2015).

Voting: According to Sivula and Kantola (2016) and Faber and Matthes (2016), voting or ranking activities are usually not financially rewarded; they involve a large number of service users providing feedback on the quality of the service supplied to them.

Knowledge diffusion: A paid or voluntary activity used for the diffusion of pre-existing knowledge or of knowledge related to a new idea or solution (Sivula & Kantola, 2016). It resembles the work of a reporter, in which information is provided firsthand, through a platform, to other users (Poblet et al., 2018).

Open collaboration: According to Prpic et al. (2015), open collaboration happens on platforms that summon solvers interested in a specific issue, usually without financial reward.

Sales: Activities focused on involving individuals, through specific platforms, in promotional activities and sales of products and services, typically financially rewarded (Zogaj et al., 2014).

Collaboration intermediates: A crowdsourcing activity characterized by Saxton et al. (2013) as ‘models with intermediaries.’ These platforms function as a bridge between solvers and organizations seeking solutions. High remuneration is expected in some cases, primarily in activities related to R&D or innovation.

Public projects: The activities classified as public projects involve the participation of individuals in actions related to public interest (Hossain & Kauranen, 2015). Usually, they involve no remuneration and count on individuals who are interested in collaborating with this kind of project.

Citizen science: Characterized by the voluntary involvement of individuals, this type of crowdsourcing activity may involve various levels of expertise. The activities performed may be as simple as collecting data or as complex as contributing solutions to medical research (Hossain & Kauranen, 2015).

Sharing: According to Nakatsu et al. (2014), this type of crowdsourcing activity involves the sharing of spaces and goods, such as houses and bicycles, or the sharing of knowledge through the development of websites, always using a platform as the means of doing business.

It is important to emphasize that the classifications presented here may share elements with one another. For example, there is much in common among the classifications of ‘competition,’ ‘design,’ and ‘complex tasks’ platforms. Some platforms that use competition in collaborative activities may do so to define the best design of a product, in situations involving complex tasks, as explained by Schuurman et al. (2012) and Zogaj et al. (2014) when referring to the Threadless platform, on which the work of individuals initially involves the development of T-shirt designs and other design-related challenges (at once a competition and a complex task). In a later stage, individuals engage in a peer review process, on which the success of the collective undertaking also depends.

We have also observed situations in which the same terms are used to refer to completely different constructs. This happens with Sivula and Kantola's (2016) and Faber and Matthes' (2016) classifications of ‘crowdcreation’ and ‘crowdcreation systems,’ respectively: the former refers to the participation of individuals in the execution of tasks together with the producers, in a comprehensive manner, functioning as if they were employees, while the latter refers to the creation of a result by a large, heterogeneous group, without mentioning any relationship between the producers and the organization. In the latter case, there is no characterization of employment in the performance of the activity; those involved are autonomous in their actions.

Once the classification of the crowdsourcing platforms and activities has been defined, with a proposition of terms for each set of characteristics and their explanations, as presented above, the next step of this work is to consolidate the operationalized classification and its relationship with the platforms presented in the studies that comprised the research corpus (Pitropakis et al., 2019).

Classifications and examples of crowdsourcing platforms

The varied classifications presented in the reviewed works share similar characteristics and use the same examples to illustrate most of these characteristics.

Throughout the research, we observed that the same platforms were repeatedly cited in different works, under distinct classifications. Eighty-eight platforms were mentioned in the 13 articles that comprised the research corpus, several of them more than once. The most cited platforms were Innocentive (eight mentions), Amazon Mechanical Turk (six), Wikipedia (five), and Twitter, ReCaptcha, TopCoder, and YouTube (three each).
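Purely as an illustration of how such a tally could be reproduced, the minimal Python sketch below counts platform mentions across a corpus. The article-to-platform mapping shown is a hypothetical fragment used only for demonstration, not the actual data extracted from the 13 reviewed articles.

from collections import Counter

# Hypothetical fragment of the extraction step: each reviewed article is
# mapped to the crowdsourcing platforms it mentions (the real corpus has
# 13 articles and 88 platforms).
platforms_by_article = {
    "Saxton et al. (2013)": ["Innocentive", "Threadless", "Wikipedia", "YouTube"],
    "Nakatsu et al. (2014)": ["Innocentive", "Amazon Mechanical Turk"],
    "Zogaj et al. (2014)": ["Threadless", "TopCoder"],
}

# Tally how many of the reviewed articles mention each platform.
mention_counts = Counter(
    platform
    for platforms in platforms_by_article.values()
    for platform in platforms
)

# With the full data set, platforms cited by at least three different
# articles would be the seven retained for the analysis reported in Table 4.
most_cited = [p for p, n in mention_counts.most_common() if n >= 3]
print(mention_counts.most_common())
print(most_cited)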

Although a crowdsourcing platform may present characteristics of several types, Table 2 seems to suggest that some specialization occurs, especially in the case of mediation platforms. The reviewed authors perceive Innocentive as a platform that fosters involvement in complex tasks in a competitive environment and as an intermediary platform for collaboration, while Amazon Mechanical Turk is mostly seen as an environment of low involvement and simple tasks.

Table 4 shows the relationships between the classifications given in Table 3 and the platforms mentioned in at least three different analyzed articles, according to the characteristics of each platform. It is possible to observe that even different taxonomies attribute the same well-defined characteristics to some of the platforms, such as Wikipedia, which is consistently characterized as a content development platform. Although the seven platforms cited in at least three articles received 31 different classifications in the analyzed papers, only 10 of the 16 aggregate classifications produced in Table 3 are needed to characterize them, as shown in Table 4. Evaluations, financing, voting, public projects, and sharing are not mentioned by the authors of the analyzed articles as important types of crowdsourcing activity for these more prominent platforms.

Table 4
Relationship between crowdsourcing platforms and types of performed activities
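As a minimal sketch of how the relationship summarized in Table 4 could be operationalized as a data structure, the Python fragment below maps three of the most cited platforms to the aggregate activity types explicitly attributed to them in the text (Innocentive to complex tasks, competition, and collaboration intermediation; Amazon Mechanical Turk to simple tasks; Wikipedia to content development). This is an illustrative, partial encoding under those assumptions, not a full reproduction of Table 4.

# Hypothetical, partial encoding of Table 4: each platform is associated with
# the set of aggregate classification types attributed to it in the reviewed
# articles (only the associations stated in the text are included here).
aggregate_taxonomy = {
    "Innocentive": {"complex tasks", "competition", "collaboration intermediates"},
    "Amazon Mechanical Turk": {"simple tasks"},
    "Wikipedia": {"content development"},
}

def platforms_of_type(taxonomy, activity_type):
    """Return the platforms associated with a given aggregate activity type."""
    return sorted(p for p, types in taxonomy.items() if activity_type in types)

print(platforms_of_type(aggregate_taxonomy, "content development"))  # ['Wikipedia']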

CONCLUSIONS

We developed a taxonomy process to identify characteristics shared among the various classifications and the terminology used in the literature to refer to crowdsourcing platforms, their characteristics, and their types. In this process, 13 articles retrieved from the Google Scholar platform were analyzed, with publication dates ranging from 2011 to 2020, showing that the concern with taxonomies for crowdsourcing has motivated researchers over the years. Despite this development of the theme over time, we noticed that the terminology and classifications still need to be consolidated and standardized, especially because crowdsourcing is a phenomenon that accompanies the technological evolution of the organizational environment.

The study was based on a well-defined process, beginning with the presentation of the typologies and designations adopted by the works that comprised the SLR corpus. After that, we organized such classifications and, finally, through a taxonomy process, aggregated them based on their congruences, reducing the 65 categories mentioned in the literature to 16 major categories. Then, we used the examples of crowdsourcing platforms cited in the analyzed papers to help check the robustness of the aggregate taxonomy results.

Using the platforms mentioned in at least three different articles of the SLR, it was possible to establish the relationship between those platforms and the 16 aggregate crowdsourcing classifications resulting from the study. In this way, the seven platforms, which were initially related to 27 different classifications, fit into 10 of the types in the proposed aggregate taxonomy.

The authors of the analyzed articles did not emphasize any formative constructs used to build their own classifications, focusing their attention on the results of the classification process rather than on the process itself. In the four papers that used the expression ‘types of crowdsourcing,’ for example, no conceptual basis for the adopted definitions was found, which would be a distinguishing characteristic of a typology classification process, according to Bailey (1994).

The lack of methodological rigor in the articles that served as the grounds for the categorization proposed here may impose a limitation on the results of this study. Although we wish to contribute to the definition of a standard categorization for studies concerned with the classification of crowdsourcing platforms, we may not yet have reached ‘stable grounds’ that would make our own results more reliable. Additional effort may be required to establish designation processes and patterns that help researchers develop common grounds on which to build their research on this topic.

In a globalized organizational environment, severely affected by digital transformation, in which digital forms of interaction with customers and other stakeholders become the rule, a better understanding of the crowdsourcing phenomenon and of the possibilities that crowdsourcing platforms can bring to organizations may provide them with an edge in the market. To achieve this better understanding, it is paramount to clarify the distinctive characteristics of the different types of platforms. We hope this paper has brought us a step closer to standardizing the language used to discuss such characteristics and the possibilities these platforms offer. Building a common language to discuss a specific matter is essential for the debate to happen and for richer conclusions to be obtained. In that sense, the attempted standardization of the language used to refer to crowdsourcing and its characteristics may contribute to deepening the discussion about it, helping organizations develop systems that rely on the crowd and connected devices to improve their value propositions.

The main contribution of this study was the proposition of a simpler aggregate categorization of crowdsourcing platforms, based on a thorough process of literature review, which led to a classification grounded in characteristics raised in different but complementary previous works. Neither this taxonomy process nor the obtained result was found in any previous work. The analysis started from the categorizations presented in the literature, involving types of crowdsourcing efforts, platform categorizations, and crowdsourcing platform examples. Based on that, an aggregate categorization was proposed comprising the different dimensions that appeared in earlier crowdsourcing classification attempts.

Future research may involve other types of platforms, developed in response to new demands that appeared during and because of the COVID-19 pandemic, which made organizations around the globe review their processes, their value chains, and the ways they interact with other stakeholders and customers. This will have an impact on the future of crowdsourcing, as on many other aspects of business, in ways this study was not yet able to capture.

REFERENCES

  • Albors, J., Ramos, J. C., & Hervas, J. L. (2008). New learning network paradigms: Communities of objectives, crowdsourcing, wikis and open source. International Journal of Information Management, 28(3), 194-202. https://doi.org/10.1016/j.ijinfomgt.2007.09.006
  • Bailey, K. D. (1994). Typologies and taxonomies: An introduction to classification techniques. Thousand Oaks, CA: SAGE Publications. https://doi.org/10.4135/9781412986397
  • Bhatti, S. S., Gao, X., & Chen, G. (2020). General framework, opportunities and challenges for crowdsourcing techniques: A comprehensive survey. Journal of Systems and Software, 167(3), 110611. https://doi.org/10.1016/j.jss.2020.110611
  • Binzabiah, R., & Wade, S. (2012). Building an ontology based on folksonomy: An attempt to represent knowledge embedded in filmed materials. Journal of Internet Technology and Secured Transactions (JITST), 2(1/2). Retrieved from http://eprints.hud.ac.uk/id/eprint/16140
  • Bolton, K., & Kuteeva, M. (2012). English as an academic language at a Swedish university: Parallel language use and the ‘threat’ of English. Journal of Multilingual and Multicultural Development, 33(5), 429-447. https://doi.org/10.1080/01434632.2012.670241
  • Borromeo, R. M., & Toyama, M. (2016). An investigation of unpaid crowdsourcing. Human-centric Computing and Information Sciences, 6, 11. https://doi.org/10.1186/s13673-016-0068-z
  • Botelho, L. L. R., Cunha, C. C. A., & Macedo, M. (2011). O método da revisão integrativa nos estudos organizacionais. Gestão e Sociedade, 5(11), 121-136. https://doi.org/10.21171/ges.v5i11.1220
  • Dogac, A., Laleci, G., Kabak, Y., & Cingil, I. (2002). Exploiting web service semantics: Taxonomies vs. ontologies. A Quarterly Bulletin of the Computer Society of the IEEE Technical Committee on Data Engineering, 25(4), 10-16.
  • Dotsika, F. (2009). Uniting formal and informal descriptive power: Reconciling ontologies with folksonomies. International Journal of Information Management, 29(5), 407-415. https://doi.org/10.1016/j.ijinfomgt.2009.02.002
  • Estellés-Arolas, E., & González-Ladrón-De-Guevara, F. (2012). Towards an integrated crowdsourcing definition. Journal of Information Science, 38(2), 189-200. https://doi.org/10.1177/0165551512437638
  • Faber, A., & Matthes, F. (2016). Crowdsourcing and crowd innovation. In A. Faber, F. Matthes, & F. Michel (Eds.), Digital mobility platforms and ecosystems: State of the art report (Report, pp. 36-48). Munich: Technische Universität München. https://doi.org/10.14459/2016md1324021
  • Glass, R. L., & Vessey, I. (1995). Contemporary application-domain taxonomies. IEEE Software, 12(4), 63-76. https://doi.org/10.1109/52.391837
  • Good, B. M., & Su, A. I. (2013). Crowdsourcing for bioinformatics. Bioinformatics, 29(16), 1925-1933. https://doi.org/10.1093/bioinformatics/btt333
  • Google Scholar. (2021). Google Scholar Metrics. Retrieved from https://scholar.google.com/intl/en/scholar/metrics.html
  • Hartmann, P., & Henkel, J. (2020). The rise of corporate science in AI: Data as a strategic resource. Academy of Management Discoveries, 6(3), 359-381. https://doi.org/10.5465/amd.2019.0043
  • Harzing, A.-W., & Alakangas, S. (2016). Google Scholar, Scopus and the Web of Science: A longitudinal and cross-disciplinary comparison. Scientometrics, 106(2), 787-804. https://doi.org/10.1007/s11192-015-1798-9
  • Harzing, A.-W. K., & Van der Wal, R. (2008). Google Scholar as a new source for citation analysis. Ethics in Science and Environmental Politics, 8(1), 61-73. https://doi.org/10.3354/esep00076
  • Hassan, T. U., Gao, F., Jalal, B., & Arif, S. (2018). Interference management in femtocells by the adaptive network sensing power control technique. Future Internet, 10(3), 25. https://doi.org/10.3390/fi10030025
  • Hein, A., Schreieck, M., Riasanow, T., Setzke, D. S., Wiesche, M., Böhm, M., & Krcmar, H. (2020). Digital platform ecosystems. Electronic Markets, 30, 87-98. https://doi.org/10.1007/s12525-019-00377-4
  • Higgins, J. P. T., & Green, S. (2011). Cochrane handbook for systematic review of interventions (v. 4). Hoboken, NJ: John Wiley & Sons.
  • Holts, K. (2013). Towards a taxonomy of virtual work. Work Organisation, Labour and Globalisation, 7(1), 31-50. https://doi.org/10.13169/workorgalaboglob.7.1.0031
  • Hossain, M., & Kauranen, I. (2015). Crowdsourcing: A comprehensive literature review. Strategic Outsourcing: An International Journal, 8(1), 2-22. https://doi.org/10.1108/SO-12-2014-0029
  • Hosseini, M., Phalp, K., Taylor, J., & Ali, R. (2015). On the configuration of crowdsourcing projects. International Journal of Information System Modeling and Design, 6(3), 27-45. https://doi.org/10.4018/IJISMD.2015070102
  • Howe, J. (2006, June 01). The rise of crowdsourcing. Wired, 14(6), 1-4. Retrieved from https://www.wired.com/2006/06/crowds/
  • Jacsó, P. (2005). Google Scholar: The pros and the cons. Online Information Review, 29(2), 208-214. https://doi.org/10.1108/14684520510598066
  • Jiang, L., & Wagner, C. (2014, June). Participation in micro-task crowdsourcing markets as work and leisure: The impact of motivation and micro-time structuring. Paper presented at Collective Intelligence 2014, United States. Retrieved from https://scholars.cityu.edu.hk/en/publications/publication(7b441cf2-4261-47b2-89db-da1a8f4bef03).html
  • Lin, D., Lee, C. K., Lau, H., & Yang, Y. (2018). Strategic response to Industry 4.0: An empirical investigation on the Chinese automotive industry. Industrial Management & Data Systems, 118(3), 589-605. https://doi.org/10.1108/IMDS-09-2017-0403
  • Liu, Y., & Dang, D. P. (2014). Research on the construction of crowdsourcing platform. Applied Mechanics and Materials, 602-605, 3198-3201. https://doi.org/10.4028/www.scientific.net/amm.602-605.3198
  • Majchrzak, A., & Malhotra, A. (2013). Towards an information systems perspective and research agenda on crowdsourcing for innovation. The Journal of Strategic Information Systems, 22(4), 257-268. https://doi.org/10.1016/j.jsis.2013.07.004
  • Martin-Martin, A., Orduna-Malea, E., Harzing, A.-W., & López-Cózar, E. D. (2017). Can we use Google Scholar to identify highly-cited documents? Journal of Informetrics, 11(1), 152-163. https://doi.org/10.1016/j.joi.2016.11.008
  • Nakatsu, R. T., Grossman, E. B., & Iacovou, C. L. (2014). A taxonomy of crowdsourcing based on task complexity. Journal of Information Science, 40(6), 823-834. https://doi.org/10.1177/0165551514550140
  • Naroditskiy, V., Rahwan, I., Cebrian, M., & Jennings, N. R. (2012). Verification in referral-based crowdsourcing. PloS One, 7(10), e45924. https://doi.org/10.1371/journal.pone.0045924
  • Nickerson, R., Muntermann, J., Varshney, U., & Issac, H. (2009, June). Taxonomy development in information systems: Developing a taxonomy of mobile applications. Proceedings of the European Conference in Information Systems, Verona, Italy, 17. Retrieved from http://halshs.archives-ouvertes.fr/halshs-00375103/en/
  • Noruzi, A. (2006). Folksonomies: (Un)controlled vocabulary? Knowledge Organization, 33(4), 199-203. Retrieved from https://www.ergon-verlag.de/isko_ko/downloads/ko3320064c.pdf
  • Olesen, M. (2018). Balancing media environments: Design principles for digital learning in Danish upper secondary schools. First Monday, 23(12). https://doi.org/10.5210/fm.v23i12.8266
  • Padilha, M., & Graeml, A. (2015). Inteligência coletiva e gestão do conhecimento: Quem é meio e quem é fim? Proceedings of the Americas Conference on Information Systems, 21, Puerto Rico. Retrieved from https://gvpesquisa.fgv.br/sites/gvpesquisa.fgv.br/files/arquivos/amcis_2015_icxgc.pdf
  • Petticrew, M., & Roberts, H. (2008). Systematic reviews in the social sciences: A practical guide. Hoboken, NJ: John Wiley & Sons.
  • Pilloni, V. (2018). How data will transform industrial processes: Crowdsensing, crowdsourcing and big data as pillars of industry 4.0. Future Internet, 10(3), 24. https://doi.org/10.3390/fi10030024
  • Pitropakis, N., Panaousis, E., Giannetsos, T., Anastasiadis, E., & Loukas, G. (2019). A taxonomy and survey of attacks against machine learning. Computer Science Review, 34, 100199. https://doi.org/10.1016/j.cosrev.2019.100199
  • Poblet, M., García-Cuesta, E., & Casanovas, P. (2018). Crowdsourcing roles, methods and tools for data-intensive disaster management. Information Systems Frontiers, 20(6), 1363-1379. https://doi.org/10.1007/s10796-017-9734-6
  • Prpic, J., Taeihagh, A., & Melton, J. (2015). The fundamentals of policy crowdsourcing. Policy & Internet, 7(3), 340-361. https://doi.org/10.1002/poi3.102
  • Quinn, A. J., & Bederson, B. B. (2009). A taxonomy of distributed human computation. Human-Computer Interaction Lab Tech Report, University of Maryland.
  • Ranard, B. L., Ha, Y. P., Meisel, Z. F., Asch, D. A., Hill, S. S., Becker, L. B., & Merchant, R. M. (2014). Crowdsourcing: Harnessing the masses to advance health and medicine, a systematic review. Journal of General Internal Medicine, 29(1), 187-203. https://doi.org/10.1007/s11606-013-2536-8
  • Repanovici, A. (2011). Measuring the visibility of the university's scientific production through scientometric methods: An exploratory study at the Transilvania University of Brasov, Romania. Performance Measurement and Metrics, 12(2), 106-117. https://doi.org/10.1108/14678041111149345
  • Reuver, M., Sørensen, C., & Basole, R. C. (2018). The digital platform: A research agenda. Journal of Information Technology, 33(2), 124-135. https://doi.org/10.1057/s41265-016-0033-3
  • Rieder, K., & Voß, G. G. (2010). The working customer - an emerging new type of customer. Psychology of Everyday Activity, 3(2), 2-10. Retrieved from http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.467.917
  • Rolland, K. H., Mathiassen, L., & Rai, A. (2018). Managing digital platforms in user organizations: The interactions between digital options and digital debt. Information Systems Research, 29(2), 419-443. https://doi.org/10.1287/isre.2018.0788
  • Saxton, G. D., Oh, O., & Kishore, R. (2013). Rules of crowdsourcing: Models, issues, and systems of control. Information Systems Management, 30(1), 2-20. https://doi.org/10.1080/10580530.2013.739883
  • Schenk, E., & Guittard, C. (2011). Towards a characterization of crowdsourcing practices. Journal of Innovation Economics & Management, 7(1), 93-107. https://doi.org/10.3917/jie.007.0093
  • Schuurman, D., Baccarne, B., Marez, L., & Mechant, P. (2012). Smart ideas for smart cities: Investigating crowdsourcing for generating and selecting ideas for ICT innovation in a city context. Journal of Theoretical and Applied Electronic Commerce Research, 7(3), 49-62. https://doi.org/10.4067/S0718-18762012000300006
  • Simpson, G. G. (1961). Principles of animal taxonomy. New York: Columbia University Press. https://doi.org/10.7312/simp92414
  • Sivula, A., & Kantola, J. (2016). Integrating crowdsourcing with holistic innovation management. International Journal of Advanced Logistics, 5(3-4), 153-164. https://doi.org/10.1080/2287108X.2016.1221590
  • Sneath, P. H. A., & Sokal, R. R. (1973). Numerical taxonomy: The principles and practice of numerical classification. San Francisco: Freeman.
  • Surowiecki, J. (2005). The wisdom of crowds. New York: Anchor Books.
  • Ullo, S. L., & Sinha, G. R. (2020). Advances in smart environment monitoring systems using IoT and sensors. Sensors, 20(11), 3113. https://doi.org/10.3390/s20113113
  • Valerio, L., Passarella, A., & Conti, M. (2017). A communication efficient distributed learning framework for smart environments. Pervasive and Mobile Computing, 41, 46-68. https://doi.org/10.1016/j.pmcj.2017.07.014
  • Vander Wal, T. (2004). You down with folksonomy? Retrieved from http://www.vanderwal.net/random/entrysel.php?blog=1529
  • Vianna, F. R. P. M., Graeml, A. R., & Peinado, J. (2020). The role of crowdsourcing in industry 4.0: A systematic literature review. International Journal of Computer Integrated Manufacturing, 33(4), 411-427. https://doi.org/10.1080/0951192X.2020.1736714
  • Wazny, K. (2017). “Crowdsourcing” ten years in: A review. Journal of Global Health, 7(2), 020602. https://doi.org/10.7189/jogh.07.020601
  • Yang, C., Shen, W., & Wang, X. (2018). The internet of things in manufacturing: Key issues and potential applications. IEEE Systems, Man, and Cybernetics Magazine, 4(1), 6-15. https://doi.org/10.1109/MSMC.2017.2702391
  • Zhang, Q., Yang, L. T., Chen, Z., & Li, P. (2018). A survey on deep learning for big data. Information Fusion, 42, 146-157. https://doi.org/10.1016/j.inffus.2017.10.006
  • Zogaj, S., Bretschneider, U., & Leimeister, J. M. (2014). Managing crowdsourced software testing: A case study based insight on the challenges of a crowdsourcing intermediary. Journal of Business Economics, 84(3), 375-405. https://doi.org/10.1007/s11573-014-0721-9
  • Zwass, V. (2010). Co-creation: Toward a taxonomy and an integrated research perspective. International Journal of Electronic Commerce, 15(1), 11-48. https://doi.org/10.2753/JEC1086-4415150101

NOTE

  • 1
    The ideal type is an extreme or superior representation of all dimensions of a typology and may be equivalent to the best possible value chain while the ‘built-in type’ involves a moderate approach, using the most common characteristics found as a central tendency (Bailey, 1994).
  • JEL Code: O36

Edited by

Editors-in-Chief

Carlo Gabriel Porto Bellini (Universidade Federal da Paraíba, Brazil)
Ivan Lapuente Garrido (Universidade do Vale do Rio dos Sinos, Brazil)

Associate Editor:

José Roberto Frega (Universidade Federal do Paraná, Curitiba, PR, Brazil)

Editorial assistants:

Kler Godoy and Simone Rafael (ANPAD, Maringá, PR, Brazil)

Publication Dates

  • Publication in this collection
    14 Mar 2022
  • Date of issue
    2022

History

  • Received
    17 Dec 2020
  • Accepted
    24 Jan 2022
  • Published
    31 Jan 2022