
Citations to papers from Brazilian institutions: a more effective indicator to assess productivity and the impact of research in graduate programs

Abstract

A recent assessment of 4400 postgraduate courses in Brazil by CAPES (a federal government agency dedicated to improving the quality of learning and research at the postgraduate level) prompted a large number of responses in the press, in scientific journals and at scientific meetings. The gigantic effort to classify 16,400 scientific journals in order to provide indicators for this assessment proved puzzling and, from a metric point of view, methodologically flawed as a way of gauging institutions. A simple algorithm is proposed here to weigh the scientometric indicators that should be considered in the assessment of a scientific institution. I conclude that the simple gauge of the total number of citations accounts for both the productivity of scientists and the impact of their articles. The effort required by this exercise is relatively small, and the sources of information are fully accessible. As an exercise to estimate the value of the methodology, 12 institutions of physics (10 from Brazil, one from the USA and one from Italy) were evaluated.

Key words: Scientometrics; Institutional assessment; CAPES; Scientific journals; Citations



Braz J Med Biol Res, August 2011, Volume 44(8) 738-747


R. Meneghini

Scientific Coordinator of the SciELO Project, FAP, Fundação de Apoio à Universidade Federal de São Paulo, São Paulo, SP, Brasil




Assessment of scientific performance is becoming increasingly dependent on scientometric indicators as tools to complement peer evaluation. This trend has spread widely in recent years, and the indicators have been applied for many purposes: analysis of merit for advancement in the academic career, scoring criteria in competitions for research funding, and ranking of scientific institutions and universities, among others. Debate on the use and misuse of these indicators is abundant in the literature. Nevertheless, the need for their judicious use in increasingly demanding assessment processes has been recognized more and more widely (see comments in Nature: http://www.nature.com/news/2009/090923/full/news.2009.933.html).

In Brazil, the federal agency CAPES, a branch of the Ministry of Education devoted to improving the quality of learning and research at the postgraduate (PG) level, recently evaluated 16,400 scientific journals. The aim was to classify them according to their impact factor [IF, Journal Citation Reports (JCR), Thomson Reuters] and/or their relative importance to the 43 areas of knowledge into which the 4400 Brazilian PG courses are classified. This criterion was not the only one used in the final classification of PG courses, but it was certainly the most important. To grasp the intricacy of the process, note that a given journal could receive different grades in distinct areas. The problems raised by this procedure caused a great deal of confusion and led to complaints from the Brazilian scientific community (1-3).

The IF of the journals in which the scientific results of a person or an institution were published has come to be considered a highly inappropriate indicator of scientific merit. The main reason lies in the citation distribution of a journal's articles: typically, articles in the most cited half of a journal are cited ten times as often as those in the least cited half (4). For a small community of scientists, as is the case for a PG course, using the IF of the journals as an indicator of the quality of scientific production may therefore yield a badly distorted picture of its merit. If scientometrics is to be used at the scale of an institutional assessment, other indicators are more appropriate; citations per article, for instance, has been adopted in the new phase of the Research Assessment Exercise in England, by far the most knowledgeable evaluation procedure on a national scale in science and education (5). In this article I suggest that the total number of citations is an encompassing indicator for comparing institutions in the same scientific field.

The present exercise was carried out to show the feasibility of a much less demanding and time-consuming procedure for evaluating PG courses through scientific indicators. Ten of the most renowned Brazilian institutions of physics, plus the physics departments of Princeton University and the University of Padua (as examples of distinguished international institutions, for the purpose of comparison), were chosen for this exercise.

The three main scientometric indicators, obtained from the Web of Science (WoS) and the JCR, both maintained by Thomson Reuters, were: 1) the number of articles published in each journal over a 5-year period (2003-2007); 2) the number of citations in 2008 to these publications, expressed per article; 3) the 5-year IF of the same journals in which the articles were published.

The reasons for these choices are explained in the results section below. Basically, the IFs of the journals that published the articles of a given institution are not, per se, an indicator of the institution's quality. Instead, the number of citations is considered a more reliable gauge of quality.
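
As a minimal sketch of how these indicators could be assembled, consider the following Python fragment. The article records and the JCR 5-year IF table are hypothetical stand-ins for a WoS/JCR export; the field names and numbers are illustrative, not the actual WoS schema or data.

# Hypothetical WoS export: one entry per 2003-2007 article of the institution.
records = [
    {"journal": "Physical Review B", "citations_2008": 7},
    {"journal": "Physical Review B", "citations_2008": 2},
    {"journal": "Brazilian Journal of Physics", "citations_2008": 1},
]

# Hypothetical JCR 5-year impact factors (2008) of the journals used.
five_year_if = {
    "Physical Review B": 3.2,
    "Brazilian Journal of Physics": 0.5,
}

na = len(records)                               # indicator 1: number of articles
nc = sum(r["citations_2008"] for r in records)  # indicator 2: citations in 2008
print(f"na = {na}, nc = {nc}, nc/na = {nc / na:.2f}")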

Data sources, processing and results

Scientometric data were obtained from the WoS and JCR databases (Thomson-Reuters).

Ten Brazilian institutions of research in physics were chosen for this study based on their reputation in the Brazilian academic community. All of them maintain PG courses with high CAPES evaluation scores according to the 2004-2007 evaluation (http://qualis.capes.gov.br/webqualis/). They are: Departamento de Física, Universidade Federal de Pernambuco (UFPE), Recife, PE; Instituto de Física, Universidade Estadual de Campinas (UNICAMP), Campinas, SP; Centro Brasileiro de Pesquisas Físicas (CBPF), Rio de Janeiro, RJ; Instituto de Física, Universidade de São Paulo (USP), São Paulo, SP; Instituto de Física, Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, RS; Instituto de Física, Universidade Federal de São Carlos (UFSCar), São Carlos, SP; Instituto de Física Teórica, Universidade Estadual Paulista (UNESP), São Paulo, SP; Departamento de Física, Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG; Instituto de Física, Universidade Federal do Rio de Janeiro (UFRJ), Rio de Janeiro, RJ; Instituto de Física, Universidade de São Paulo (USP), São Carlos, SP.

Two institutions abroad were selected for the purpose of comparison: the Physics Department of Princeton University (USA), one of the top institutions in the world, and the Galileo Galilei Department of Physics of the University of Padua (Italy), respected for its tradition and current research dynamism.

Institutions of research in physics

The application of four scientometric indicators to these institutions is shown in Table 1. Among the Brazilian institutions, the number of citations in 2008 to 2003-2007 articles ranges from 699 to 6200 (column 2). The Universities of Padua and Princeton are both above USP, the best placed of the Brazilian universities, by factors of 1.39 and 2.96, respectively. The average number of citations per article (column 3) ranks the institutions in approximately the same order as the total number of citations (linear regression, R² = 0.923). So does the weighted average impact factor (WAIF) in column 4 (R² = 0.931); less so the ratio of citations per article to WAIF in column 5 (R² = 0.803). This latter indicator measures how the actual citation average compares with the average IF of the journals used by the institutions. Column 5 shows that for some universities the number of citations achieved by the authors is well above that expected from the journals' IFs. This is evident in the case of Princeton University, where the number of citations per article is 2.1 times the WAIF of the journals that published its articles; for the Brazilian universities this ratio falls in the 0.8-1.0 interval.
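
A minimal Python sketch of how the WAIF of column 4 and the ratio of column 5 could be computed, reusing the hypothetical records and five_year_if of the previous sketch; the values are illustrative, not the data behind Table 1.

from collections import Counter

# Articles per journal, to weight each journal's 5-year IF.
articles_per_journal = Counter(r["journal"] for r in records)

# WAIF: average of the journals' 5-year IFs, weighted by articles published.
waif = (sum(n * five_year_if[j] for j, n in articles_per_journal.items())
        / sum(articles_per_journal.values()))

# Column 5: citations per article relative to the WAIF of the journals used;
# values above 1 mean the articles are cited more than their journals' IFs predict.
ratio = (nc / na) / waif
print(f"WAIF = {waif:.2f}, (nc/na)/WAIF = {ratio:.2f}")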

The correlations among the indicators can be better appreciated in the 3-D graphics in which three indicators are plotted: citations per article, IF and number of articles (Figure 1). The light blue and dark blue bars allow the citations per article and the IFs of the corresponding journals, respectively, to be compared across six IF intervals. For instance, the light blue bars being shorter than the dark blue bars in the case of UFPE show a trend towards fewer citations per article than expected from the average IF in each IF range. The opposite occurs for Princeton University. Notice that the red line (number of articles in each IF interval) clearly shows the tendency to publish more, or less, in the most prestigious journals. Thus, a glance at these graphs gives a quick grasp of the profile of each institution.

Classifying the institutions

The data listed in Table 1 and illustrated in Figure 1 provide a very convenient way to establish a scientometric procedure for assessing the performance of institutions. The traditional triad of indicators is employed, namely the number of publications, the number of citations per article and the IF of the journals. However, instead of being examined separately, they are interconnected, both to exploit their attributes and to avoid the pitfalls of using them individually: 1) the number of publications is distributed within intervals limited by the IF of the journals in which the articles were published; 2) the number of citations per article is not examined separately, but rather is related to the IF of the journal that published the articles; for example, a ratio of two means that the articles in this journal received twice as many citations as expected from its IF. This can easily be seen in Figure 1 by comparing the light and dark blue bars. This approach eliminates the drawback of taking the IF by itself as an indicator of the quality of the articles (4), since it is balanced by the actual number of citations that the institution's articles received in the journal.

In this procedure, the journals are distributed in intervals set by their IFs (Figure 1). It is possible to combine the inputs of the indicators in a simple algorithm in order to obtain numerical values for the assessment of the institution. For this purpose one has to: 1) attribute a weight to each interval set by its minimum and maximum IF, the most obvious choice being the WAIF of the journals used by the institution in each interval; 2) consider the number of articles (na) in each interval (Figure 1); 3) weigh, as reasoned above, the ratio between the number of citations per article (nc/na) and the WAIF of the journals in each interval. Given these assumptions, the algorithm can be expressed as:

$$\mathrm{Score} = \sum_{i=1}^{6} \mathrm{WAIF}_i \times na_i \times \frac{nc_i / na_i}{\mathrm{WAIF}_i} \qquad \text{(Eq. 1)}$$

where the index i runs over the six IF intervals defined below.

In order to appreciate the algorithm it helps to point out that the general idea is to weigh the "positive" scientometric values (number of articles, number of citations and IF of the journals) for each institution. In principle, the "quality" of an institution should be proportional to each of these "positive" parameters, and therefore to their product. The advantage of breaking the total data down into IF intervals is mainly didactic and helps keep a correspondence with the figures. For this reason, the final expression is given as a summation, as shown above.

In this summation, each term corresponds to an interval limited by IFs, namely 0-1, 1-2, 2-3, 3-4, 4-6, and >6. Each interval contributes the WAIF of the journals in that interval, the na and the nc. The positive contribution of the latter two, in terms of productivity and quality, is obvious. The meaning of the expression [(nc/na)/WAIF] may be less evident. The numerator (nc/na) is the average number of citations per article in the interval and the denominator is the WAIF of the journals in that interval. If the numerator is higher than the denominator, the institution produces articles in that interval whose citations exceed the numbers predicted by the impact factors (WAIF), and vice versa if the numerator is lower. The expression therefore reflects a positive quality outcome.

It is intriguing that the expression reduces to nc, the total number of citations, even though the other scientometric indicators were included in the algorithm. On second thought, however, it makes sense that nc best expresses the comparative performance of institutions of the same nature. For a similar faculty size, a higher number of citations is likely to reflect both a higher rate of publication and more citations per article. In fact, each of these indicators taken separately is almost meaningless in this context.
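
This reduction is easy to verify numerically. Below is a minimal Python sketch of Eq. 1 with hypothetical per-interval data: each term WAIF × na × [(nc/na)/WAIF] collapses to the interval's nc, so the sum returns the institution's total number of citations.

# Hypothetical (WAIF, na, nc) per IF interval: 0-1, 1-2, 2-3, 3-4, 4-6, >6.
intervals = [
    (0.6, 40, 20), (1.5, 80, 110), (2.4, 120, 300),
    (3.4, 60, 210), (4.8, 30, 150), (7.9, 10, 90),
]

score = sum(waif_i * na_i * ((nc_i / na_i) / waif_i)
            for waif_i, na_i, nc_i in intervals)
total_nc = sum(nc_i for _, _, nc_i in intervals)
assert abs(score - total_nc) < 1e-9  # Eq. 1 reduces to the total nc
print(f"score = {score:.0f}, total citations = {total_nc}")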

The h-indexes of the institutions

The h-index (6) has been widely welcomed as an indicator of the quality of individual scientists. Extending the h-index to institutions has also been proposed (7,8): the h-index of an institution (hi) is equal to h if the institution has published h articles, each of which has received at least h citations. The h-indexes of the institutions, retrieved from the WoS according to this definition, are shown in Table 2.

The h-index varies with the square root of nc and tends to underestimate productivity (6). It has a particular advantage when comparing scientists of different academic ages, because nc favors those with long careers. When comparing institutions over the same time frame, as in the present study, this advantage of the h-index vanishes.
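
For concreteness, a minimal Python sketch of the institutional h-index as defined above; the citation counts are hypothetical.

def h_index(citations: list[int]) -> int:
    # Largest h such that h articles have at least h citations each.
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

print(h_index([25, 8, 5, 3, 3, 1, 0]))  # -> 3: three articles with >= 3 citations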

The effect of articles produced by large networks of authors on citation indicators

Networks with a large number of authors have been a matter of discussion in scientometrics (9). These networks arise from the need to collect massive amounts of data, typically in multicenter studies in medicine, genomic-proteomic analyses in biology, and subatomic particle collision and astrophysical studies in physics. Individual or institutional participation in these enterprises significantly affects scientometric indicators, since such studies are bound to draw considerable interest and, consequently, to yield highly cited articles. Fifteen networks were detected in the present assessment, most of them with hundreds of authors. In seven institutions these network articles were numerous, averaging 11.1% of the total articles (Table 3); the other five institutions published no articles produced by large networks of authors. Eliminating the network articles caused a significant reduction in the number of citations per article of most institutions, with the exception of Princeton University. This illustrates the large impact of the regular articles of the Department of Physics of Princeton University.
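
A minimal Python sketch of this filtering step. The author-count cutoff is an assumption made for illustration; the study identified the 15 networks directly and does not state a fixed threshold.

NETWORK_THRESHOLD = 100  # assumed cutoff for a "large network"; illustrative only

# Hypothetical article records with author counts.
articles = [
    {"authors": 3, "citations_2008": 4},
    {"authors": 250, "citations_2008": 60},  # large-network article
    {"authors": 5, "citations_2008": 2},
]

regular = [a for a in articles if a["authors"] < NETWORK_THRESHOLD]
for label, subset in (("all articles", articles), ("regular only", regular)):
    n = len(subset)
    c = sum(a["citations_2008"] for a in subset)
    print(f"{label}: {c}/{n} = {c / n:.2f} citations per article")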

Topics of research in the institutions

Physics has turned into a highly interdisciplinary field. Its connection with chemistry goes back more than a century, whereas its connection with molecular biology/biochemistry has grown steadily over the last 50 years. The interest of a physics institution in these two areas depends on their appeal to the faculty members and on the general scientific atmosphere surrounding the institution. Table 4 shows a very strong trend towards research in biochemistry and chemistry at some institutions. At the USP-São Carlos Institute of Physics, these areas account for 20.3% of the articles; indeed, a strong long-standing interest in X-ray diffraction and optics has driven the research of this institution towards chemistry, structural/molecular biology and photodynamics in biology and medicine. On the other hand, the Department of Physics of Princeton University is clearly focused almost exclusively on pure physics, whereas the UNESP Institute of Physics, being devoted to theoretical physics, carries out practically no research in these two areas. These distinct research profiles set different patterns for the journals used, which may have influenced the WAIF and the citations per article shown in Table 1. However, this point was not investigated further.

Table 4 also lists the number of articles published in Brazilian journals. The ten Brazilian institutions published an average of 6.2% of their articles in Brazilian journals, the great majority in the Brazilian Journal of Physics, the only Brazilian journal that publishes original research on experimental and theoretical physics. Its 5-year IF (2008) is 0.500, well below the average number of citations per article in 2008 of the 2003-2007 articles of the Brazilian institutions of physics (Table 1, column 3). Nevertheless, in recent years this journal has grown both in IF and in the publication of non-Brazilian articles, which reached 51.4% in 2007-2008!

Final considerations

A significant number of articles have already discussed the nonexistence of a universal scientometric indicator for gauging science output. The choice of a metric is highly dependent on the specific aims and on the characteristics of the subject of study. I have proposed that, when the focus is the performance of a scientific institution, the number of citations is the most appropriate indicator because it comprises both productivity and impact. The number of citations per article, even if crudely, gives an estimate of quality (10), but it does not include productivity. The widespread use of the impact factor of journals as a quality gauge has its own pitfalls (4) and likewise does not capture productivity. As already pointed out, productivity is also underestimated if the h-index is the indicator of choice.

Faculty size was not considered in the proposed methodology. In fact, a simple normalization of the number of citations by faculty size is troublesome, since for very large or very small institutions it may create false impressions of under- or over-productivity, respectively.

A large faculty will tend to generate a larger number of articles. However, this will not necessarily correspond to a larger number of citations, which is the parameter that encompasses both productivity and quality.

In the specific case of CAPES, the assessment of 16,400 journals as the basis for the evaluation of Brazilian PG courses involved a gigantic effort and was absolutely pointless. The challenge became insurmountable when different committees had to rate the journals according to their own interests and uses. The survey was undertaken in the conviction that the procedure rested on a proper methodology, a premise that was undoubtedly wrong.

The effort required to obtain the total number of citations to the articles of an institution is immeasurably smaller. The databases Web of Science, Scopus, SciELO, SCImago, and Google Scholar are available to CAPES and to the institutions, and complementary searches can easily be carried out in them. Moreover, in a set of institutions of the same nature but with specific interests in subsets of scientific areas (for instance, in the present study, the strong interest of some of the institutions of physics in research in biochemistry and chemistry), corrections can be made using available methodologies (11,12).

Figure 1.
Number of articles (2003-2007), citations per article (2008), and 5-year impact factor (2008) for 12 institutions of physics: 10 from Brazil plus the University of Padua and Princeton University. The three indicators are shown for six impact factor (IF) intervals.

Table 1.
Scientometric indicators of Brazilian institutions of physics.

Table 2.
h-index of the institutions studied.

Table 3.
Effect of network articles on the total number of citations.

Table 4.
Articles related to biology and chemistry published by the institutions of physics.

1. Rocha-E-Silva M. [The new Qualis, or the announced tragedy]. Clinics 2009; 64: 1-4.

2. Galembeck F, de Andrade JB. Qualis: quo vadis? Quim Nova 2009; 32: 5.

3. Lucena AF, Tiburcio RV. [Qualis periodical: view of an academic on medical graduation]. Rev Assoc Med Bras 2009; 55: 247-248.

4. Seglen PO. Why the impact factor of journals should not be used for evaluating research. BMJ 1997; 314: 498-502.

5. Harnad S. Open access scientometrics and the UK Research Assessment Exercise. Scientometrics 2009; 79: 147-156.

6. Hirsch JE. An index to quantify an individual’s scientific research output. Proc Natl Acad Sci U S A 2005; 102: 16569-16572.

7. Prathap G. Hirsch-type indices for ranking institutions’ scientific research output. Current Sci 2006; 91: 1439.

8. Da Luz MP, Marques-Portella C, Mendlowicz M, Gleiser S, Freire-Coutinho ES, Figueira I. Institutional h-index: The performance of a new metric in the evaluation of Brazilian Psychiatric Post-graduation Programs. Scientometrics 2008; 77: 361-368.

9. Bollen J, Van de Sompel H, Smith JA, Luce R. Toward alternative metrics of journal impact: A comparison of download and citation data. Information Processing & Management 2005; 41: 1419-1440.

10. Hermes-Lima M, Santos NCF, Alencastro ACR, Ferreira ST. Whither Latin America? Trends and challenges of science. IUBMB Life 2007; 59: 199-210.

11. Batista PD, Campiteli MG, Kinouchi O, Martinez AS. Is it possible to compare researchers with different scientific interests? Scientometrics 2006; 68: 179-189.

12. Radicchi F, Fortunato S, Castellano C. Universality of citation distributions: toward an objective measure of scientific impact. Proc Natl Acad Sci U S A 2008; 105: 17268-17272.

I thank Daniela Oliveira for creating the excellent illustrations. Research partially supported by FAPESP (#2005/57665-8).

Address for correspondence: R. Meneghini, Rua Machado Bittencourt, 430, 04044-001 São Paulo, SP, Brasil. E-mail: rogerio.meneghini@scielo.org

Received December 29, 2010. Accepted June 10, 2011. Available online June 24, 2011. Published August 19, 2011.
