
Expert-Driven and Citational Approaches to Assessing Journal Publications of Brazilian Political Scientists*

*DOI: http://dx.doi.org/10.1590/1981-3821201800010004. For data replication, see www.bpsr.org.br/files/archives/Dataset_Barberia_Barboza_Godoy

Abstract

In this study, we seek to contribute to discussions on how the quality of academic production in the field of political science should be evaluated, using Brazil as a case study. We contrast the 'expert-driven' approach followed by CAPES, an agency of the Brazilian federal government, with the 'citational' approach, which is based on the ranking of journals by mainstream indices of scientific research impact. With data provided by CAPES from 2010 to 2014, we examine to what extent journals ranked as high quality by CAPES also have high impact indexes in the SCImago Journal Rank index (SJR), the Hirsch index (h-index) calculated by SCImago, the h5-index and h5-median (based on the h-index over a five-year period, calculated by Google Scholar Metrics), and the SNIP indicator (calculated by the CWTS Journal Indicators, included in the Scopus database). Our findings show that there is a positive but weak correlation between citational criteria and the Qualis evaluation of the same journals. In ordered logistic regressions, we show that a journal's past Qualis scores are the most important factor for explaining its grades in the next evaluation. Once a journal's past Qualis score is considered, its citational ranking does not influence its Qualis score, with the exception of the SJR in the 2013-2014 evaluation. Moreover, a journal's Qualis score is not influenced by the country of publication, language, or social science focus, all else equal.

Journal evaluation; political science; Qualis; citational approach; expert-driven approach


Because of the intense theoretical and methodological debates that are underway in Brazil, the discipline of political science (a term we use throughout this paper to refer to the area denoted as Political Science and International Relations by CAPES) is experiencing one of its most dynamic and vital periods. The academic research of Brazilian scholars is increasingly relevant to discussions on a variety of pressing questions in the field. At the same time, however, an important literature has shown that the research produced by Brazilian political scientists has significant methodological shortcomings (SOARES, 2005). Although the field has made significant advances in the last decade (NEIVA, 2015), an aspect that has not been analyzed in depth is the determinants of the expert-driven index that the Brazilian federal government's Coordination for the Improvement of Higher Education Personnel (CAPES) employs to evaluate the quality of the academic research produced by the faculty and students of the nation's political science graduate programs. As we discuss in detail below, we believe the Qualis is most appropriately classified as an expert-driven index: a committee of experts is responsible for producing the ranking and has discretion over journal grade assignment, although objective indicators help to inform the classification.

In previous research (BARBERIA et al., 2014a, 2014b), we examined how undergraduate and graduate programs in political science have sought to reform their curricula to address important shortcomings in the training of students in methods and techniques of scientific research, particularly quantitative methods. Using Brazil as a case study, we argued that the diversification of methodology training in graduate programs is endogenous to the size of a program's faculty and student body: curricula are limited in the early stages of a graduate program, and programs tend to increase the methodological training offered as the number of faculty trained in these methods grows. This pattern is important for the quality of the academic research produced by these programs, as faculty and students with more limited methods training are less likely to publish in high-quality academic journals in the discipline.

There is an extensive variety of methods used to assess the quality of academic research. Reputational criteria, which are based on expert opinion, have historically served an essential role as instruments for assessing the quality of political science journals (CREWE and NORRIS, 1991; GILES and WRIGHT, 1975). These methods contrast with citation-based criteria, which seek to measure a journal's impact and which have come to predominate as instruments to evaluate individual scholars (KLINGEMANN et al., 1989; MASUOKA et al., 2007), departments (HIX, 2004) and journal quality (GILES and GARAND, 2007; PLUMPER, 2007) in the discipline. Based on the sample of Brazilian political science research published in journals from 2010 to 2014, we use bivariate correlations and ordered logistic regressions to examine whether journals ranked as high quality by the Qualis expert-driven index are also the highest-impact journals according to citation metrics. Furthermore, we explore whether additional structural factors (the language of publication, country of publication, and discipline area) might explain divergences between the two metrics.

We show that there is a positive but weak correlation between citational criteria and the Qualis evaluation of the same journals. Indeed, the average correlation between the Qualis journal rank and the commonly used impact indexes is 0.35. In ordered logistic regressions, we show that a journal's past Qualis scores are the most important factor for explaining its grades in the next evaluation. Once a journal's past Qualis score is considered, its citational ranking does not influence its Qualis score, with the exception of the SJR in the 2013-2014 evaluation. Moreover, a journal's Qualis score is not influenced by the country of publication, language, or social science focus, all else equal.

The article is structured in four sections in addition to this introduction and the conclusion. The next section presents a brief description of our research design. We then review the literature and discuss the metrics that have been developed to assess academic research quality. The fourth section presents the results of our empirical analysis of the relationship between the CAPES journal quality ranking and the selected international impact indicators. Finally, the conclusion discusses directions for further research.

How should one measure a journal’s relevance?

Ideally, an impact assessment system would require the complete reading of each published article in order to measure its quality (GARFIELD, 2005). Given the infeasibility of this approach from a practical standpoint, evaluators adopted the practice of assessing the importance and quality of published articles based on judgments of the journals in which they appear (GILES and GARAND, 2007). In the field of political science, the first generation of studies to evaluate journal quality was based on reputational criteria. Giles and Wright (1975) were the first scholars to develop a ranking of journal relevance based on expert opinions of 63 journals. From the outset, several difficulties in measuring the relevance of journals became clear (CHRISTENSON and SIGELMAN, 1985). The categorization of journals into 'political science', 'international relations' and 'public administration', for example, is not well defined. Political scientists also publish in journals that they consider part of political science, but that bibliometric organizations do not classify as such. As a result, subsequent studies following reputational criteria increased the pool of journals (GILES et al., 1989).

In the second generation of studies, scholars employed citation-based rankings of journals to complement expert opinion criteria (LESTER, 1990) and shifted attention to examining divergences in journal rankings across sub-fields of the discipline (GARAND and GILES, 2003) and countries of residence (CREWE and NORRIS, 1991). In the third and current generation of studies, citation-based journal rankings have replaced reputational methods and continue to dominate how journal relevance is evaluated in the discipline (GILES and GARAND, 2007; PLUMPER, 2007). In part, the adoption of citation-based journal evaluations has to do with data availability and lower costs. But citation-based metrics have also gained in popularity because they are argued to be superior, permitting evaluators to base their assessments of journal relevance on objective and more reliable indicators (SPIRLING and CARTER, 2008). As Plumper (2007) summarizes, "'quality perception scores' have frequently, and I believe rightly, been criticized for being arbitrary and leading to somewhat astonishing results" (PLUMPER, 2007, p. 42).

Notwithstanding their predominance, an eminent group of researchers has studied the use and proliferation of rankings aimed at measuring the quality of journals based on citation-based metrics (PALACIOS-HUERTA and VOLIJ, 2004). Researchers have directed efforts at improving citation-based measures of the impact of academic research by creating more specific indices to improve measurement and categorization (GONZÁLEZ-PEREIRA et al., 2010; GUERRERO-BOTE and MOYA-ANEGÓN, 2012). Political scientists have followed this trend. Studies examining the politics of journal publications now predominantly employ citation-based indicators, including the Journal Impact Factor, Journal Influence, the Invariant Method for the Measurement of Intellectual Influence, Journal Status, the Eigenfactor, the SCImago Journal Rank and the h-index (ALTMAN, 2012; BROOKS et al., 2014; KRISTENSEN, 2012; MONTPETIT et al., 2008).

Nevertheless, concerns remain that citation-based metrics only indirectly and imperfectly measure the quality of journals in the discipline (GILES and GARAND, 2007; LESTER, 1990). These concerns resonate with several initiatives that seek to improve the metrics used to measure scientific relevance, such as the San Francisco Declaration on Research Assessment (DORA) (ASCB, 2012) and the Leiden Manifesto (HICKS et al., 2015), which proposes ten principles for the measurement of research performance.

In the case of Brazil, the federal government, through the Coordination for the Improvement of Higher Education Personnel (CAPES), has made efforts to develop a system for assessing journal quality that recognizes that impact index scores may not fully capture scientific relevance (MUGNAINI et al., 2014; TRZESNIAK, 2016). As we explain in further detail below, the Qualis index uses a journal's SCImago Journal Rank (SJR) as one of the inputs for assessing journal quality, but the ranking is best classified as one driven by expert opinion.

In the paragraphs that follow, we describe this expert-driven method and the citation-based measures of journal classification. Specifically, we cover:

  1. the Qualis index, an expert-driven system for evaluating journal quality adopted by CAPES to assess the production of Brazil-based academics affiliated with graduate programs, and the criteria that have been adopted in political science;

  2. the SCImago Journal Rank, an index created in 2007 to analyze the impact of journals in several areas of knowledge in more than 80 countries;

  3. the indexes adopted by Google Scholar Metrics to calculate the impact of journals and articles; and,

  4. the Source Normalized Impact per Paper (SNIP), a metric of contextual citation impact weighted by subject field.

In addition, we discuss the h-index, which was proposed by Hirsch (2005, 2007) to simplify and remove biases in the measurement of journal impact, as this metric is used both by SCImago and by Google Scholar.

The Qualis CAPES evaluation model

The Coordination for the Improvement of Higher Education Personnel (CAPES) is the government agency responsible for designing and implementing public policies for graduate studies in Brazil, including the evaluation and regulation of degree programs (CAPES, 2017b). A fundamental pillar of the agency's responsibilities is the evaluation of the nation's master's and doctoral degree programs. The results of this evaluation have a direct impact on graduate education, including the allocation of financial resources such as scholarships and grants.

The evaluation of Brazilian graduate programs by CAPES now occurs every four years (CAPES, 2010, 2017b); in earlier periods, evaluations were carried out on a three-year basis. There are nine subject areas and forty-nine sub-areas under evaluation. Each area of evaluation has a committee composed of consultants, generally faculty in that area, who must plan, execute and evaluate the activities of the graduate programs within the defined period. Among the components of this evaluation are the qualifications of each program's faculty, its teaching and research infrastructure, and the scientific production of faculty and students.

One of the main components of this evaluation is an indicator of the quality of academic research produced by the Brazilian faculty and graduate students in each department. This evaluation, which aims to promote the 'strengthening of the scientific, technological and innovation' pillars, produces a ranking of graduate programs, with the best-evaluated programs receiving the maximum grade of 7. Forty percent of the final grade of each program is based on an assessment of the quality of its permanent faculty's academic publications, including both journal articles and books. Student publication performance also contributes to the program's final grade. To evaluate publications, CAPES has developed an index, Qualis, which is used to classify the quality of research produced by scholars who are either faculty or students in graduate programs (the journal scores are made available online by CAPES at https://sucupira.capes.gov.br/sucupira/public/consultas/coleta/veiculoPublicacaoQualis/listaConsultaGeralPeriodicos.jsf). The coordination committee of each area is responsible for assigning a classification grade to every journal in which Brazil-based faculty and students in that area published articles. It is important to note that the same journal is graded by different areas, and these assessments do not necessarily yield the same grade for the same journal. The ranking is also restricted to journals in which a faculty member or graduate student has published.

The Qualis classification is composed of three broad strata (A, B and C), segmented into eight subdivisions (A1, A2, B1 to B5, and C) (CAPES, 2013, 2017a). The strata range from the most 'relevant', stratum 'A1', to the least relevant, stratum 'C'. From the outset, the CAPES committee for Political Science and International Relations included the SJR score and peer review as inputs for grading journals of higher quality. Over time, the criteria used to inform journal rankings have become more transparent and detailed. The criteria presented in Table 01 were adopted in 2013 and remain those used by the Political Science and International Relations area to inform final decisions on journal rankings (the available documents do not specify whether these criteria were adopted ex-ante or ex-post).

Table 01
Criteria for journal rankings by Qualis CAPES in the area of Political Science and International Relations for 2013 and onwards

Some of the measurements described in Table 01 are clearly defined. For example, the highest quality publications must have an SJR score of 0.3 or higher; to put this score in context, the mean sample SJR score in 2012 was 0.53. For most criteria, however, experts have significant input. For example, experts must assess a journal's adherence to the field and judge what makes a scholar count as renowned. The relative weight of the different factors is also not specified in the table. Moreover, the committee has authority and discretion over journal scores and may decide to classify a journal in a higher (or lower) category. It is for these reasons that we believe the Qualis is best characterized as a journal ranking driven by expert opinion.

Given the division into strata, CAPES encourages the distribution of journals across these categories to maintain a coherent and constant relationship. For all academic fields classified by the system, as a complement to the rules of allocation to each stratum, CAPES also applies a set of 'ad hoc' rules governing the proportion of journals in each stratum within the specific area. The 'ad hoc' rules are presented in Table 02. Because the committee must allocate journals within these proportions, it also exercises discretion in grading journals to respect these rules, irrespective of whether a journal has met the criteria stipulated in Table 01.

Table 02
'Ad hoc' rules for classification Qualis CAPES

The Hirsch index (h-index)

There are several possible ways (quantity of articles, number of citations per article, number of 'significant' articles, among others) to measure scientific impact, and this multiplicity contributes to the difficulties in assessing the impact of academic journals. The adoption of a specific metric may introduce biases, either by penalizing more recent journals or younger researchers, or by overestimating researchers with a longer trajectory or with irregular production (very few cited articles and many articles with few citations).

Considering this complex framework, Hirsch (2005, 2007), an Argentine physicist, proposed a simple method of analyzing the impact of journals, called the h-index. According to Hirsch (2005, 2007), the h-index overcomes the above problems by generating a production indicator weighted by the quantity of articles, the quantity of citations per article and the journal's time in existence, corrected for the bias from authors with large numbers of publications. This indicator, an integer greater than or equal to zero, can be generated both for an individual and for the group of articles published in a particular journal: a body of work has an h-index of h if h of its articles have each been cited at least h times.
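To make the definition concrete, the following is a minimal sketch of the computation in Python; the function name and citation counts are ours and purely illustrative.

```python
def h_index(citations):
    """Largest h such that at least h articles have h or more citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank          # the top `rank` articles all have >= rank citations
        else:
            break
    return h

# A hypothetical journal with eight articles and these citation counts:
print(h_index([25, 12, 9, 7, 5, 3, 1, 0]))  # 5: five articles have >= 5 citations each
```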

Hirsch (2005) recognizes some gaps in the h-index. First, the metric cannot be the only factor in analyzing the scientific trajectory of a given researcher, since other factors, such as teaching activity, also influence this trajectory. Second, the metric is not weighted by the journal's area of interest, and there are difficulties in comparing different fields of knowledge (for example, medicine and political science), since the number of researchers in each field is not considered. Third, the h-index cannot remove the bias resulting from self-citation, although this problem is relatively minor. The metric has been widely adopted and is used in the composition of the SCImago index and in Google Scholar Metrics.

The SCImago Journal Rank

The SCImago Journal Rank (SJR) index, which was developed by the Consejo Superior de Investigaciones Científicas together with the Universities of Granada, Extremadura, Carlos III and Alcalá de Henares (Spain), maintains an index of journals worldwide, as well as a series of indicators comparing academic research across countries. This metric classifies journals into twenty-seven thematic areas (subject areas) and 313 sub-areas (specific subject categories) and includes information on the country of origin of journals. It is currently one of the most influential indexes due to its periodicity and comprehensive coverage.

The main contribution of the SJR is the analysis of the importance of journals within specific areas of knowledge (GONZÁLEZ-PEREIRA et al., 2010). This contrasts with metrics that consider only the gross number of citations of the articles published in a particular journal, which result in unequal comparisons between different areas of knowledge: areas such as medicine rank considerably higher than other areas, especially those related to the social sciences.

The SJR metric includes two dimensions: in addition to the number of citations of articles per journal, the network of citations among journals of the same subfield is also taken into account. Thus, beyond the gross number of citations, the SJR also measures the prestige of journals. As González-Pereira et al. (2010) summarize, "the SJR indicator is a size-independent metric aimed at measuring the current 'average per paper prestige' of journals for use in research evaluation process" (GONZÁLEZ-PEREIRA et al., 2010, p. 380). The measurement of prestige aims to remove the effect of excessive citation of a particular article in a small journal, which can bias rankings if only the gross number of citations is observed.

The SJR only counts citations to a particular article for the three years following publication. In order to prevent inflation resulting from self-citations, only 33% of self-citations are included. Finally, the SJR indicator also attempts to measure the prestige of each journal by weighting the citations an article receives by the prestige of the citing journal.
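The following toy sketch illustrates the prestige-transfer idea behind such indicators. It is our own simplified power iteration over a hypothetical citation matrix, not SCImago's actual SJR formula, which adds the three-year window, the self-citation cap and the size normalizations described above.

```python
import numpy as np

# Hypothetical citation matrix: C[i, j] = citations from journal i to journal j.
C = np.array([[0, 4, 1],
              [2, 0, 3],
              [1, 1, 0]], dtype=float)

n = C.shape[0]
shares = C / C.sum(axis=1, keepdims=True)   # share of i's references going to j
prestige = np.full(n, 1.0 / n)              # start from uniform prestige
damping = 0.85                              # PageRank-style damping factor

for _ in range(100):                        # iterate to (approximate) convergence
    prestige = (1 - damping) / n + damping * (prestige @ shares)

# A citation is worth more when it comes from a high-prestige journal.
print(prestige / prestige.sum())
```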

Google Scholar Metrics (h5-index and h5-median)

The h5-index and h5-median are calculated by Google's impact assessment system, based on the h-index of the last five years of a scientific journal's production (DELGADO-LÓPEZ-COZAR and CABEZAS-CLAVIJO, 2012). According to the description by Google Scholar Metrics (2016), the mechanism identifies the number of citations of a particular item over the period between 2011 and 2015 (h5-index), including citations in journals that are not indexed by Google, as well as the median number of citations of the articles that make up the h5-index (h5-median). The indexes produced by Google Scholar are very popular because of their comprehensive coverage: they include journals in different languages, countries, and areas of interest. The aim is to identify the most influential journals in each scientific area.
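As a rough illustration of how the two quantities relate (our own sketch, not Google's pipeline), the h5-index is computed exactly like the h-index but restricted to articles published in the last five years, and the h5-median is the median citation count of the articles in that h5 core:

```python
from statistics import median

def h5_metrics(citations_last5):
    """h5-index and h5-median from the citation counts of articles
    published in the last five years (illustrative data only)."""
    counts = sorted(citations_last5, reverse=True)
    h5 = sum(1 for rank, c in enumerate(counts, start=1) if c >= rank)
    core = counts[:h5]                    # the articles that define the h5-index
    return h5, (median(core) if core else 0)

# A hypothetical journal with nine articles published in the five-year window:
print(h5_metrics([40, 22, 15, 11, 8, 6, 5, 2, 0]))  # (6, 13.0)
```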

For an article to be included in Google Scholar Metrics, specific technical requirements must be met. The text must be posted on the website of a journal or a university that has an automatic indexing mechanism for Google Scholar. The text must be in HTML or PDF format. The title of the work should appear at the top of the first page, followed by the names of the authors, and a bibliography section must appear at the end of the article. These requirements allow Google to automate the search for scientific articles so as to measure impact both at the journal level (Google Scholar Metrics) and for the individual author (Google Scholar Citations).

Some criticisms have been made of Google Scholar Metrics. Explanations of how the mechanism identifies the specific areas or disciplines of a journal (sociology, political science, economics, etc.) and of how many areas a journal can belong to are lacking. Data from previous versions of the journal classification are not available. A justification for why a five-year window should be used for impact assessment is also missing. Despite these and other criticisms, Google Scholar Metrics has become very popular over the past few years due to its ease of access and interpretation and its relatively full coverage.

Source Normalized Impact per Paper (SNIP)

A more recent indicator of journal impact is the Source Normalized Impact per Paper (SNIP). The indicator was introduced by Moed (2010), a professor of research assessment methodology at Leiden University, Netherlands, to measure journal impact while correcting for differences between scientific fields. The indicator normalizes citation counts across fields, taking into consideration that in some areas, like medicine, the expected number of citations is high, while in others, like the social sciences, it tends to be smaller. SNIP identifies each of these specific fields and creates an indicator that normalizes the citation impact of an article within its respective area, thereby permitting more accurate comparisons between journals and subfields.

Some aspects of this metric differ from the indicators mentioned previously. One distinctive characteristic of the SNIP is citation potential: instead of looking only at the number of citations of an article in a journal, the SNIP considers how frequently articles are cited in a specific subject field. The subject field of a journal is delimited by the collection of articles that cite the journal, rather than by a pre-specified subject classification, although these citing articles must be covered by the underlying database (Scopus). The indicator takes into consideration only journals with blind peer review and does not correct for self-citation. Like the SJR, the SNIP uses a window of the three preceding years to measure the impact of a journal.
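A stylized sketch of the normalization idea (ours, not the exact CWTS formula): a journal's raw citations per paper are divided by the citation potential of its subject field, so the same raw impact counts for more in a low-citation field.

```python
def snip_like(journal_cites, journal_papers, field_cites, field_papers):
    """Stylized source-normalized impact: citations per paper divided by
    the field's citation potential. Illustrative only; the official SNIP
    methodology (CWTS) differs in how the field is delimited and weighted."""
    raw_impact = journal_cites / journal_papers       # citations per paper
    citation_potential = field_cites / field_papers   # field's expected rate
    return raw_impact / citation_potential

# A social science journal: 90 citations to 60 papers, in a field where
# papers receive 1.2 citations on average over the three-year window.
print(snip_like(90, 60, 1200, 1000))  # 1.25: above the field's expectation
```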

Moed (2010) recognizes that the indicator has some limitations. The subject field delimitation could be improved with a more sophisticated methodology. Bias can occur when a citing journal does not cite the journals in the author's subfield (and is therefore not included in that subfield). The indicator works better for established journals with at least 50 publications and is less reliable for smaller journals. Outliers can also influence the SNIP (CWTS, 2017). Despite these possible flaws, the SNIP is today recognized as an important metric of a journal's scientific impact and is included in the Scopus database (MOED, 2011).

Data

For the present research, we use the data reported by Qualis from 2010 to 2014 in the area of Political Science and International Relations (data for earlier evaluations are unfortunately not available online in the CAPES portal). These journals are the venues where Brazil-based faculty and students in political science and international relations published articles. Our sample consists of each journal's Qualis score assigned in each of three periods: 2010-2011, 2012 and 2013-2014. It is worth mentioning that this sample includes the classification of journals as measured by CAPES in two different evaluations (2010-2012 and 2013-2016). Because the classification criteria adopted for the second year of the triennium are the same as those of the first year, the 2010 and 2011 Qualis data represent a single classification; likewise, the 2013 and 2014 data represent only one observation, given that there was no change in the journal rankings between these years. For the first period (2010-2012), we include the classification of journals at the beginning (2010-2011) and at the end (2012) of the three-year evaluation period. For the second evaluation period (2013-2016), we were only able to gather data for the initial classification (2013-2014).

For each journal, we also collected the SCImago impact measurement indicators, the h-index (calculated by SCImago), the h5-index and h5-median (calculated by Google Scholar Metrics) and the SNIP indicator (presented in the CWTS Journal Indicators). We also collected data on three additional characteristics that should help to explain the variation in journal quality: the country of origin of the journal, the primary language adopted by the journal, and the central area of knowledge to which the journal is dedicated.

In order to analyze the data quantitatively, we transformed the Qualis classification into a numerical score. A score of 0.1 was assigned to journals classified in the lowest stratum, C, and 0.1 was added for each stratum above C, so that the highest possible score is 0.8 for journals classified as A1 (the highest category). The list of variables, sources and ranges of values is summarized in Table 03.
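A minimal sketch of this coding (the dictionary is ours; the stratum labels and 0.1 steps are from the text):

```python
# Ordinal coding of the eight Qualis strata, from C (lowest) to A1 (highest).
QUALIS_SCORE = {
    "C": 0.1, "B5": 0.2, "B4": 0.3, "B3": 0.4,
    "B2": 0.5, "B1": 0.6, "A2": 0.7, "A1": 0.8,
}

print(QUALIS_SCORE["A1"])  # 0.8
```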

Table 03
Description of variables

The Qualis in political science and its evolution

As can be seen in Table 04, the number of journals indexed by Qualis varied during the period studied, decreasing by almost half in the second evaluation period (2013-2014) as compared to the first evaluation period (2010-2012) in our sample. This decrease was due to the adoption of new classification criteria defined by the Scientific Technical Council of Higher Education (CTC-ES), which only considered journals with production during the entire period. Prior to this evaluation, journals with uneven periodicity were included in order to encourage researchers to publish in them.

Table 04
Qualis CAPES Journal Score in Political Science by Quality Category (2010-2014)

As we noted earlier, the number of journals evaluated declined by more than 40% in 2013-2014 as compared to 2010-2011. However, the number of journals in the top categories increased over time. Between 2010 and 2011, 20.4% of journals were in the A1, A2 and B1 categories; by 2013-2014, 23.5% of journals were classified in these higher categories. Even though most of the journals in the higher strata were published in the United States and the United Kingdom, the number of Brazilian journals in these categories has been steadily rising (see Appendix 01 in the online Statistical Appendix). By 2013-2014, thirty-eight journals were considered by CAPES to be A1 journals, of which Brazil-based journals were 18.42%. Of the forty-seven journals considered A2 quality, Brazil-based journals were 34%. Of the fifty-three journals in the B1 category, 47.17% were publications based in Brazil. There was also a sharp decline in the number of A1 journals published in the United States, which fell from twenty-nine in 2012 to seven in 2013-2014. These findings suggest a shift in the ranking of Brazil-based journals, which is where the majority of Brazilian scholars publish.

It is important to note that the overwhelming majority of journals in the A1 stratum in all observed periods were published in English (89.29% in 2010-2011, 84.91% in 2012, 73.53% in 2013-2014). A few Brazilian journals are published in a foreign language, such as the 'Brazilian Political Science Review', which publishes in English. In the period 2010-2011, only 5.4% of journals in the A1 stratum were written in Portuguese (see Appendix 02 in the online Statistical Appendix). In 2012, the share of top journals in Portuguese rises to 9.4%; by 2013-2014, it reaches 20.6%.

As we noted earlier, academics in political science publish in a wide variety of journals, some of which are not generally considered as pertaining to this scientific field. As Qualis is required to evaluate the entire set of academic research produced by students and faculty in the area of political science and international relations in Brazil, it must evaluate journals irrespective of their field. Indeed, the journals evaluated during the 2010-2014 period came from very different areas of knowledge, including journals specialized in the social sciences, arts and humanities, economics, engineering, biochemistry, medicine and biology (see Appendix 03 in the online Statistical Appendix). The vast majority of journals classified by the CAPES evaluation team for this scientific field were social science journals. However, it is interesting to note that the percentage of higher-ranked journals outside the social sciences has increased over time. Indeed, in the most recent 2013-2014 evaluation, nearly a third (31.2%) of journals were outside the social science area.

Results

We now turn to analyzing the determinants of the expert-driven criteria employed by CAPES to rank the quality of the journals where Brazilian political science faculty and graduate students publish. The analysis below compares the Qualis with four citational indicators: the SCImago Journal Rank index (SJR), the Hirsch index (h-index) calculated by SCImago, the h5-index and h5-median (based on the h-index over a five-year period, calculated by Google Scholar Metrics) and the SNIP indicator (calculated by the CWTS Journal Indicators, included in the Scopus database).

Our analysis proceeds in two stages. First, we examine the correlations among the impact indexes commonly used in academia to evaluate scientific impact and between successive Qualis evaluations. We then compare the correlations between the impact indexes and Qualis scores. This analysis shows that there is a strong, positive and statistically robust relationship among the SCImago Journal Rank index (SJR), the Hirsch index (h-index), the h5-index, the h5-median and the SNIP indicator. However, there is only a weak positive relationship between a journal's Qualis score and its ranking in the impact indexes.

In the second stage, we employ ordered logistic regressions to analyze the relationship between a journal's relevance and its Qualis score. The results show that the most reliable predictor of a journal's Qualis score is its score in the previous evaluation: a journal's previous Qualis score is the factor with the greatest power to predict its score in the current evaluation. Once we consider a journal's past Qualis score, its publication in English, its publication by a publisher based in the U.S. or the United Kingdom and its social science concentration do not influence its Qualis score. Furthermore, with the exception of the SJR in the 2013-2014 evaluation, a journal's ranking in mainstream indices of scientific research impact does not influence its Qualis score.

It is important to note that there are critical differences in the number of journals covered by each impact index. As Table 05 confirms, the majority of journals in the A1, A2 and B1 Qualis categories were covered by the SCImago Journal Rank index (SJR), the Hirsch index (h-index), the h5-index, the h5-median and the SNIP indicator. However, the h-index, h5-index and h5-median cover a slightly higher percentage of journals in the B2 and lower categories as compared to the SJR and SNIP.

Table 05
Impact metrics available for Journals with a Qualis score (% in each stratum)

Table 06 presents Kendall's tau correlations and corresponding p-values between the SCImago indicators for the years 2010 to 2014, the h-index, the h5-index, the h5-median and the SNIP. (To verify normality, we performed the Shapiro-Wilk test; the results indicate that none of the measures follows a normal distribution. Given these results, all correlation tests reported in this article use Kendall's tau, a measure recognized as suitable for non-normal distributions that also presents results that are easier to interpret. For details, see Newson (2002). We thank an anonymous referee for this suggestion.)
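A sketch of this testing sequence with placeholder data (the actual journal-level dataset is available at the replication URL given in the title footnote):

```python
import numpy as np
from scipy.stats import kendalltau, shapiro

rng = np.random.default_rng(42)
sjr = rng.lognormal(mean=-1.0, sigma=0.8, size=200)    # skewed, SJR-like scores
h5 = np.rint(20 * sjr + rng.normal(0, 3, size=200))    # a correlated impact index

# Shapiro-Wilk: a small p-value rejects normality, motivating a rank-based test.
print("Shapiro-Wilk p-value:", shapiro(sjr).pvalue)

# Kendall's tau is robust to non-normal marginals and ties.
tau, p = kendalltau(sjr, h5)
print(f"Kendall's tau = {tau:.2f}, p = {p:.3g}")
```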

Table 06
Correlations between SCImago, h-index, h5-index, h5-median and SNIP 2010-2014 (Kendall's tau and p-values)

Note that the h5-index value for a given year in Table 06 is computed over the preceding five years; thus, the 2014 h5-index covers 2009 to 2014, and the 2015 h5-index covers 2010 to 2015. There is a relatively strong positive correlation between the SJR and the other metrics observed. Moreover, as the p-values confirm, these correlations are statistically significant in all of the comparisons.

In Table 07, the same comparison is undertaken, now comparing the Qualis journal rankings with the impact metrics mentioned above. The results show a positive and statistically significant correlation between these indicators and the Qualis evaluation of the same journals. These correlations, however, are much weaker than the correlations observed in Table 06. On average, the correlation between a journal's Qualis rank and its score in the impact indexes is 0.35. This weak positive correlation also occurs in the case of the SJR, which is used by CAPES as an input in calculating a journal's Qualis score.

Table 07
Correlations between Qualis CAPES and impact indexes (Kendall's tau and p-values)

In Table 08, the Qualis journal rankings are compared over the three evaluations in our sample. The results show a positive and statistically significant correlation across each successive evaluation wave. These correlations are stronger than those observed in Table 07, where Qualis rankings were compared with impact index scores. In describing the Qualis classification, we noted that there was a considerable reduction in the number of journals evaluated in the 2013-2014 period, as CAPES adhered more strictly to evaluating only journals with regular publication cycles. This explains why the correlations between the last wave (2013-2014) and the previous evaluations (2010-2011 and 2012) are lower.

Table 08
Correlations between Qualis 2010-2014 (Kendall's tau and p-values)

Figure 01 presents a scatter plot of the relationship between the Qualis classification for the 2010-2011 period and the SCImago measure for 2010. (In this and the other figures, we use the jitter option in Stata to show that many observations fall on the same point: wider and larger circles indicate more observations.) The vertical axis (Y) is the Qualis 2010-2011 classification, converted into a scale from 0.1 (lowest quality) to 0.8 (highest quality); the horizontal axis (X) represents the SJR index in 2010, ranging from lower to higher impact. Excluding the outliers with an impact factor of five or higher in the SJR index, nearly all journals classified by Qualis lie between 0 and 3 in the SCImago SJR index. As we explained earlier, CAPES uses the SJR as an impact metric in calculating a journal's Qualis score, in combination with the additional criteria stipulated in Table 01. As this figure makes clear, a relatively wide range of journals receives the highest quality grade from Qualis while being assessed at varying degrees of impact by the SCImago measure. (Comparing the rankings of journals in the 2012 Qualis classification reveals a similar pattern: the majority of journals classified by Qualis, and where Brazilian scholars have published, had an SJR impact factor of less than 03.) The pattern reported in Figure 01 is similar for the 2012 and 2013-2014 periods; for the sake of brevity, those figures are not presented here and are available from the authors upon request.

Figure 01
Qualis and SJR Compared (without outliers)
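The figures were produced with Stata's jitter option; the sketch below is a rough matplotlib equivalent (all variable names and data are ours, purely illustrative) in which small random noise is added to the ordinal Qualis score so that overlapping journals become visible.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Hypothetical journal-level data: one entry per journal.
sjr_2010 = rng.lognormal(-1.0, 0.8, size=300)                 # SJR impact scores
qualis = rng.choice(np.round(np.arange(0.1, 0.9, 0.1), 1), size=300)

jitter = rng.uniform(-0.02, 0.02, size=qualis.size)           # break up overplotting
plt.scatter(sjr_2010, qualis + jitter, alpha=0.4, s=12)
plt.xlabel("SJR index (2010)")
plt.ylabel("Qualis 2010-2011 (0.1 = C, ..., 0.8 = A1)")
plt.show()
```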

Figure 02 presents a scatterplot comparing the journal classifications in the Qualis 2010-2011 evaluation (on the x-axis) and the Qualis 2012 evaluation (on the y-axis); as in Figure 01, the figure was generated with Stata's jitter option, so larger circles indicate overlapping observations. As the figure makes clear, there is a strong positive correlation between rankings in both periods, confirming that there is, in general, path dependency in a journal's Qualis score. Some journals that were evaluated as of relatively low quality in 2010-2011 were upgraded to the top categories in the 2012 evaluation. These observations, in the upper left quadrant, refer to the International Journal of Urban and Regional Research (print and online versions, with 2011 SJRs of 1.538 and 1.371), the Journal of Legislative Studies (2011 SJR of 0.26) and two journals without SJR impact scores, the Brazilian Review of Social Research and Politique et Sociétés (Montreal). (Figure 03 in the online appendix presents evidence of a similar upgrading trend when the 2013-2014 Qualis classifications are compared with the 2012 evaluation; the journals in its upper left quadrant are International Interactions, with an SJR of 0.945 in 2012, and the Journal of International Relations & Development, with an SJR of 0.324 in 2012.) Some journals that were evaluated as of relatively high quality in 2010-2011 were downgraded in the 2012 Qualis evaluation. These are in the lower right quadrant and, with the exception of Human Rights Review (2011 SJR of 0.20), did not have SJR impact scores.

Figure 02
Qualis 2010-2011 and Qualis 2012

Ordered logistic regressions

In this section, we use a multivariate framework to analyze the degree to which systematic factors explain why some journals are considered to have higher quality than others in the Qualis. The dependent variable is the journal's Qualis score. As explained earlier, the Qualis score was transformed into an ordinal variable ranging from 0.1 (the lowest stratum, C) to 0.8 (the highest stratum, A1). Accordingly, ordered logistic regressions were estimated to examine the extent to which systematic factors affect the likelihood of a specific journal being classified by Qualis into each of the eight possible categories.

The explanatory variables in the model include the lag of a journal's Qualis score in the previous evaluation and the lags of its impact indexes. The models also include an explanatory variable to control for the effect of geography, coded '1' if the journal is published in the United States or the United Kingdom and '0' otherwise, as these are the countries where the largest academic publishers are based. A dummy variable coding journals published in English as '1' and '0' otherwise is included to test whether additional preference is given to journals published in the language most commonly employed in the communication of academic research across the globe and across disciplines. A dummy variable coding social science journals as '1' and journals in other subject areas as '0' is also included. The outliers reported earlier (Nature, The American Political Science Review, International Organization, Lancet and World Politics) were excluded from the results we report below.
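A minimal sketch of this specification using statsmodels' ordered model (the file and column names are hypothetical; the replication dataset is at the URL in the title footnote):

```python
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical journal-level data frame: one row per journal.
df = pd.read_csv("qualis_journals.csv")  # assumed file name

exog = df[["qualis_lag", "sjr_lag", "snip_lag", "h5_lag",
           "h5_median_lag", "us_uk", "english", "social_science"]]

# Qualis 2012 score treated as an ordered outcome (0.1, 0.2, ..., 0.8).
model = OrderedModel(df["qualis_2012"], exog, distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```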

Table 09 presents the results from four ordered logistic regressions. The first and second columns present the results using the Qualis 2012 indicator as the dependent variable, and the third and fourth columns report the coefficient estimates with the Qualis 2013-2014 indicator as the dependent variable. In the first and third models, the SJR impact factor is the only impact metric used; in the second and fourth models, controls are added for the additional impact indexes. The number of journals varies depending on which year of the Qualis is used, with the largest available sample for the 2012 Qualis; as noted earlier, the number of journals evaluated decreased considerably in the 2013-2016 evaluation period. Our ordered logistic models consistently show that many of the structural factors that should matter for journal quality do not influence a journal's Qualis score. The results underscore that the most reliable predictor of a journal's Qualis score is its score in the past CAPES evaluation: a journal's previous Qualis score is the factor with the greatest power to predict its score in the current evaluation.

Table 09
Ordered logistic regression models (dependent variable: Qualis Journal Score)

As we showed in Table 01, a journal's SJR is considered in assessing its Qualis score. The results reported in Models 01 and 02 of Table 09 confirm that a higher SJR impact factor did not influence a journal's Qualis score in 2012, all else equal. There is some suggestive evidence, based on the results reported in Models 03 and 04, that journal impact factors are becoming more relevant to Qualis scores: a journal's SJR score increased the likelihood of its having a higher Qualis score in the 2013-2014 evaluation. However, this evidence should be interpreted with caution, as these results are based on a much smaller sample. Furthermore, the other impact indexes (SNIP, h5-index and h5-median) have no effect on a journal's Qualis rank, all else equal.

The four models consistently confirm that a journal's social science identity or publication in English does not increase its chances of receiving a higher Qualis score, all else equal. In the smaller, more restricted sample of the 2013-2014 evaluation, the results in column 04 show that US- and UK-based journals were less likely to receive a higher Qualis score, all else equal.

Conclusions

In most scientific fields, and political science is no different in this regard, academic research impact metrics have become the dominant criteria employed in academia to determine which journals are most influential. Given the imperfections in these measures and the recognition that journal citations are not equivalent to journal quality, several alternative methods have been proposed as more appropriate to assessing a journal's relevance. In this study, we have contrasted how evaluations differ when impact metrics are used as compared to an expert-driven measure to evaluate Brazilian political science research in published journals.

While our study faced certain limitations, we believe that we have made a solid case for greater research directed at understanding the challenges of ranking journal quality in the case of Brazilian political science. In Brazil, the federal government has developed its own evaluation system to assess the research production of faculty and graduate students as part of a more extensive assessment that is undertaken to evaluate and finance graduate education. One of the critical criteria to recognize higher quality programs is to identify which programs produce higher quality academic research. In turn, these better-ranked programs receive a greater share of federal funds as opposed to lower-ranked programs. As a result, the definition of the criteria that determine which journals are assessed as being of higher quality has become an essential aspect of understanding the public policies that shape graduate education in Brazil.

References

  • ALTMAN, David (2012), Where is knowledge generated? On the productivity and impact of Political Science Departments in Latin America. European Political Science Vol. 11, Nº 01, pp. 71-87.
  • American Society for Cell Biology (ASCB) (2012), The San Francisco declaration on research assessment (DORA). Available at http://www.ascb.org/dora/ Accessed on October 20, 2017.
  • BARBERIA, Lorena Guadalupe; GODOY, Samuel Ralize de, and BARBOZA, Danilo Praxedes (2014a), Novas perspectivas sobre o 'Calcanhar Metodológico': o ensino de métodos de pesquisa em Ciência Política no Brasil. Revista Teoria & Sociedade Vol. 22, Nº 02, pp. 156-184.
  • BARBERIA, Lorena Guadalupe; GODOY, Samuel Ralize de; BARBOZA, Danilo Praxedes; DUARTE, Guilherme Jardim, and ANJOS, José Radamés Marques Miguel dos (2014b), Inovação no ensino de métodos quantitativos em Ciência Política: aplicação de modelo baseado em atividades. Agenda Política Vol. 02, Nº 02, pp. 152-179.
  • BROOKS, Chris; FENTON, Evelyn M., and WALKER, James T. (2014), Gender and the evaluation of research. Research Policy Vol. 43, Nº 06, pp. 990-1001.
  • CAPES (2017a), Critérios de Classificação Qualis: Ciência Política e Relações Internacionais. Available at http://capes.gov.br/images/stories/download/avaliacaotrienal/Docs_de_area/qualis/ciencia_politica_e_relacoes_internacionais.doc Accessed on January 16, 2017.
  • CAPES (2017b), Sobre as Áreas de Avaliação. Available at http://www.capes.gov.br/avaliacao/sobre-as-areas-de-avaliacao Accessed on January 16, 2017.
  • CAPES (2013), Documento de Área 2013, Triênio 2010-2012: Ciência Política e Relações Internacionais. Available at http://capes.gov.br/images/stories/download/avaliacaotrienal/Docs_de_area/CI%C3%AAncia_Pol%C3%ADtica_doc_area_e_comiss%C3%A3o_21out.pdf Accessed on January 16, 2017.
  • CAPES (2010), Documento de Área 2007-2009: Ciência Política e Relações Internacionais. Available at http://capes.gov.br/images/stories/download/avaliacao/POLIT_RELINT_22jun10b.pdf Accessed on January 16, 2017.
  • Centre for Science and Technology Studies (CWTS) (2017), Methodology. Available at http://www.journalindicators.com/methodology Accessed on August 30, 2017.
  • CHRISTENSON, James A. and SIGELMAN, Lee (1985), Accrediting knowledge: journal stature and citation impact in Social Science. Social Science Quarterly Vol. 66, Nº 04, pp. 964-975.
  • CREWE, Ivor and NORRIS, Pippa (1991), British and American journal evaluation: divergence or convergence? Political Science and Politics Vol. 24, Nº 03, pp. 524-531.
  • DELGADO-LÓPEZ-COZAR, Emilio and CABEZAS-CLAVIJO, Alvaro (2012), Google scholar metrics: una herramienta poco fiable para la evaluación de revistas científicas. Profesional de La Información Vol. 21, Nº 04, pp. 419-427.
  • GARAND, James C. and GILES, Micheal W. (2003), Journals in the discipline: a report on a new survey of American political scientists. Political Science & Politics Vol. 36, Nº 02, pp. 293-308.
  • GARFIELD, Eugene (2005), The agony and the ecstasy — the history and meaning of the journal impact factor. Paper presented at the Fifth International Congress on Peer Review in Biomedical Publication. Chicago.
  • GILES, Micheal W. and WRIGHT, Gerald C. (1975), Political scientists' evaluations of sixty-three journals. Political Science & Politics Vol. 08, Nº 03, pp. 254-256.
  • GILES, Micheal W. and GARAND, James C. (2007), Ranking Political Science Journals: reputational and citational approaches. Political Science & Politics Vol. 40, Nº 04, pp. 741-751.
  • GILES, Micheal W.; MIZELL, Francie, and PATTERSON, David (1989), Political scientists' journal evaluations revisited. Political Science & Politics Vol. 22, Nº 03, pp. 613-617.
  • GONZÁLEZ-PEREIRA, Borja; GUERRERO-BOTE, Vicente P., and MOYA-ANEGÓN, Félix (2010), A new approach to the metric of journals' scientific prestige: the SJR indicator. Journal of Informetrics Vol. 04, Nº 03, pp. 379-391.
  • GOOGLE (2016), Google scholar metrics. Available at https://scholar.google.com/intl/en/scholar/metrics.html Accessed on April 11, 2016.
  • GUERRERO-BOTE, Vicente P. and MOYA-ANEGÓN, Félix (2012), A further step forward in measuring journals' scientific prestige: the SJR2 indicator. Journal of Informetrics Vol. 06, Nº 04, pp. 674-688.
  • HICKS, Diana; WOUTERS, Paul; WALTMAN, Ludo; RIJCKE, Sarah de, and RAFOLS, Ismael (2015), Bibliometrics: the Leiden manifesto for research metrics. Nature Vol. 520, Nº 7548, pp. 429-431.
  • HIRSCH, Jorge E. (2007), Does the H index have predictive power? Proceedings of the National Academy of Sciences of the United States of America Vol. 104, Nº 49, pp. 19193-19198.
  • HIRSCH, Jorge E. (2005), An index to quantify an individual's scientific research output. Proceedings of the National Academy of Sciences of the United States of America Vol. 102, Nº 46, pp. 16569-16572.
  • HIX, Simon (2004), A global ranking of Political Science Departments. Political Studies Review Vol. 02, Nº 03, pp. 293-313.
  • KLINGEMANN, Hans-Dieter; GROFMAN, Bernard, and CAMPAGNA, Janet (1989), The Political Science 400. Political Science & Politics Vol. 22, Nº 02, pp. 258-270.
  • KRISTENSEN, Peter M. (2012), Dividing discipline: structures of communication in international relations. International Studies Review Vol. 14, pp. 32-50.
  • LESTER, James P. (1990), Evaluating the evaluators: accrediting knowledge and the ranking of political science journals. Political Science & Politics Vol. 23, Nº 03, pp. 445-447.
  • MASUOKA, Natalie; GROFMAN, Bernard, and FELD, Scott L. (2007), The Political Science 400: a 20-Year update. Political Science & Politics Vol. 40, Nº 01, pp. 133-145.
  • MOED, Henk F. (2010), A new journal citation impact measure that compensates for disparities in citation potential among research areas. Annals of Library and Information Studies Vol. 57, pp. 271-277.
  • MOED, Henk F. (2011), The source normalized impact per paper is a valid and sophisticated indicator of journal citation impact. Journal of the Association for Information Science and Technology Vol. 62, Nº 01, pp. 211-213.
  • MONTPETIT, Éric; BLAIS, André, and FOUCAULT, Martial (2008), What does it take for a Canadian Political Scientist to be cited? Social Science Quarterly Vol. 89, Nº 03, pp. 802-816.
  • MUGNAINI, Rogério; DIGIAMPETRI, Luciano Antonio, and MENA-CHALCO, Jesús Pascual (2014), Comunicação científica no Brasil (1998-2012): indexação, crescimento, fluxo e dispersão. Transinformação Vol. 26, Nº 03, pp. 239-252.
  • NEIVA, Pedro (2015), Revisitando o calcanhar de Aquiles metodológico das Ciências Sociais no Brasil. Sociologia, Problemas e Práticas Vol. 79, pp. 65-83.
  • NEWSON, Roger (2002), Parameters behind 'nonparametric' statistics: Kendall's Tau, Somers' D and median differences. Stata Journal Vol. 02, Nº 01, pp. 45-64.
  • PALACIOS-HUERTA, Ignacio and VOLIJ, Oscar (2004), The measurement of intellectual influence. Econometrica Vol. 72, Nº 03, pp. 963-977.
  • PLÜMPER, Thomas (2007), Academic heavy-weights: the relevance of Political Science Journals. European Political Science Vol. 06, pp. 41-50.
  • SOARES, Gláucio Ary Dillon (2005), O calcanhar metodológico da Ciência Política no Brasil. Sociologia, Problemas e Práticas Vol. 48, pp. 27-52.
  • SPIRLING, Arthur P. and CARTER, David (2008), Under the influence? Intellectual exchange in Political Science. Political Science & Politics Vol. 41, Nº 02, pp. 375-378.
  • TRZESNIAK, Piotr (2016), Um Qualis em quatro tempos: histórico e sugestões para Administração, Ciências Contábeis e Turismo. Revista de Contabilidade e Finanças Vol. 27, Nº 72, pp. 279-290.
  • 1
    Throughout this paper, we use the term political science to refer to the area that is denoted as political science and international relations by Capes.
  • 2
    In previous research (BARBERIA et al., 2014a; BARBERIA et al., 2014b), we have examined how undergraduate and graduate programs in political science have sought to reform their programs to address important shortcomings that have been identified in the training of students in methods and techniques of scientific research, particularly in quantitative methods. Using Brazil as a case study, we argue that the diversification of methodology training in graduate programs is endogenous to the size of the faculty and students in the programs. Curricula are limited in the early stages of a graduate program. Over time, we show that programs tend to increase the methodological training offered to students as the number of faculty with training in these methods increases. This pattern is important to the quality of academic research produced by these programs, as those faculty and students with more limited methods training are less likely to be able to publish in high quality academic journals in the discipline.
  • 3
    As we will discuss in detail in the article, we believe the Qualis is most appropriately classified as an expert-driven index. This is because a committee of experts is responsible for producing the Qualis ranking. The committee of experts has discretion over journal grade assignment, but objective indicators help to inform this classification.
  • 4
    In earlier periods, the evaluations were carried out on a three-year basis.
  • 5
    The journal scores are made available online by Capes. See: https://sucupira.capes.gov.br/sucupira/public/consultas/coleta/veiculoPublicacaoQualis/listaConsultaGeralPeriodicos.jsf.
  • 6
    The available documents do not specify if these criteria were adopted ex-ante or ex-post.
  • 7
    Unfortunately, the data for the earlier evaluations is not available online in the Capes portal.
  • 8
    The classification criteria adopted for the second year of the triennium are the same as those of the first year. For this reason, the 2010 and 2011 Qualis data represent the same classification. Likewise, the data for 2013 and 2014 also represent only one observation, given that there was no change in the journal rankings between these years.
  • 9
    To verify normality, we performed the Shapiro-Wilk test. The results indicate that none of the measures is normally distributed. Given these results, all correlation tests reported in this article use Kendall's Tau, a measure recognized as suitable for non-normal distributions and one whose results are easier to interpret. For details, see Newson (2002). We thank an anonymous referee for this suggestion. A minimal sketch of these commands appears after these notes.
  • 10
    In this and the other figures, we use the jitter option in Stata to underscore that there are many observations in the same circle. Wider and larger circles indicate where there are more observations (see the sketch after these notes).
  • 11
    The results comparing the rankings of journals in the 2012 Qualis classification reveal a similar pattern. The majority of the journals classified by Qualis (and in which Brazilian scholars have published) appear in publications with an SJR impact factor below 3.0.
  • 12
    For the sake of brevity, these figures are not presented. The results are available from the authors upon request.
  • 13
    In Figure 02, the bigger circles indicate where observations overlap and are more numerous. The figure was generated using the jitter option in Stata.
  • 14
    Figure 03 in the online appendix presents evidence indicating there was a similar trend of upgrading in classifications when we compare the Qualis journal classifications in 2013-14 with the 2012 evaluation. The journals in the upper left quadrant refer to International Interactions (with an SJR of 0.945 in 2012) and the Journal of International Relations & Development (with an SJR of 0.324 in 2012).
  • 15
    The following journals are considered outliers: Nature, The American Political Science Review, International Organization, Lancet and World Politics.
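As a minimal illustration of the procedures described in notes 09, 10 and 13, the corresponding Stata commands in a do-file could read as follows. The variable names (qualis2012, sjr, snip, h5index, h5median) are again hypothetical stand-ins, not the replication dataset's actual variables:

swilk sjr snip h5index h5median      // Shapiro-Wilk normality tests (note 09)
ktau qualis2012 sjr                  // Kendall's Tau rank correlation (note 09)
scatter qualis2012 sjr, jitter(5)    // jittered scatterplot of overlapping observations (notes 10 and 13)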

Publication Dates

  • Publication in this collection
    2018

History

  • Received
    20 Sept 2016
  • Accepted
    23 June 2017