
Impact factor, SCImago indexes and the Brazilian journal rating system: where do we go from here?

EDITORIAL

Mauricio Rocha-e-Silva, Editor

Hospital das Clínicas, Faculdade de Medicina da Universidade de São Paulo - São Paulo/SP, Brazil. mrsilva36@hcnet.usp.br

As we approach a new period of evaluation of Brazilian Graduate Programs by the Ministry of Education, it seems appropriate to revisit the rating systems for scientific journals. According to Ulrich's Periodicals Directory,1 there are about 300,000 scientific periodicals in the world today. However, 270,000 of these are not subject to peer review, meaning that only 30,000 journals should be taken seriously. Two major rating systems exist in the world: the Journal Citation Reports issued by ISI-Thomson2 and the SCImago indexes issued by Elsevier.3

The Journal Citation Reports of ISI-Thomson (JCR-ISI) has been publishing Impact Factors (IFs) for its collection of scientific journals for a few decades. More recently, SCImago began publishing Cites/Document (C/D) for its own collection of journals. The calculations behind IF and C/D are very similar. Briefly, the IF or the C/D for any journal "J" in any year "N" is given by the following equation:

$$\mathrm{IF}(N) = \mathrm{C/D}(N) = \frac{C(N)}{A(N-1) + A(N-2)}$$

where C(N) is the total number of cites appearing in journals from each respective collection to articles published by journal "J" in years "N-1" and "N-2", and A(N-1) and A(N-2) are the numbers of articles published by "J" in those two years.
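For readers who prefer code to notation, the calculation is trivial to express. The following is a minimal sketch in Python; the function is my own illustration and the counts in the example are hypothetical.

```python
# Minimal sketch of the IF / Cites-per-Document calculation defined above.
def impact_factor(cites_to_prev_two_years: int,
                  articles_n_minus_1: int,
                  articles_n_minus_2: int) -> float:
    """IF(N) = C(N) / [A(N-1) + A(N-2)]."""
    return cites_to_prev_two_years / (articles_n_minus_1 + articles_n_minus_2)

# Hypothetical journal: 450 cites in year N to items published in the two
# preceding years, with 150 articles published in each of those years.
print(impact_factor(450, 150, 150))  # -> 1.5
```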

Given that JCR-ISI and SCImago use their own sets of included journals, IF is not necessarily identical to C/D. The two collections cover a very large spectrum of peer-reviewed scientific journals worldwide. They overlap considerably, but there are differences. JCR-ISI divides its collection into two independent categories: the JCR Science Citation Edition and the JCR Social Sciences Citation Edition. Conversely, SCImago posts a single index that encompasses its entire collection. The Science Citation Edition for 2008 includes 6,620 journals, of which 6,567 (99.2% of the total), from 72 countries (where England, Scotland and Wales are counted as three separate countries, much as they would be for Rugby and Football, although I did miss Northern Ireland), exhibited an IF > 0. The Social Sciences Citation Edition comprises 1,985 journals from 45 countries (including England and Scotland, but not Wales), of which 1,974 (99.4% of the total) have an IF > 0. Thus, the entire JCR-ISI collection totaled 8,541 journals in 2008. The SCImago collection is considerably larger, comprising 16,032 journals from 233 countries, of which 14,649 (91.3% of the total) boast a C/D > 0. It is thus a more comprehensive index, covering twice as many journals from thrice as many countries. Two other differences should be mentioned: (a) SCImago is freely accessible, whereas JCR-ISI can only be accessed by fee-paying subscribers, and (b) JCR-ISI has a bias in favor of English-speaking countries, while SCImago has a broader base. It must be noted, however, that JCR-ISI has been broadening its non-English-speaking base over recent years.

The obvious questions are: what do these indexes measure, and how reliable are they? Because citations are generally believed to be an indirect measure of quality, both IF and C/D are widely assumed to reflect quality. However, no gold standard has ever been described that effectively measures journal quality. Consequently, no proof exists that IF or C/D truly reflect quality, even though there is a general gut feeling that they do. Generally speaking, journals with a high IF or C/D are desirable places to publish scientific findings; it is also true that most (but certainly not all) of the truly relevant scientific results appear in high-impact journals.

It has often been argued that tampering with IF and C/D can take place,4 and indications do exist that such tampering effectively occurs. The most usual complaint concerns the excessive and indiscriminate use of autocites, defined as articles in journal "J" citing other articles in journal "J". This is sometimes attributed to pressure from the editorial office upon authors to agree to such possibly improper citing. ISI posts a warning against the excessive use of autocites, stating that 80% of the journals in its collection have an autocite rate below 20%. It should also be noted, however, that journals usually cover a limited field of knowledge, so it is quite natural to expect that articles therein may have completely appropriate autociting. This digression allows me to introduce a third parameter, published by SCImago: the SCImago Journal Rank (SJR). SJR excludes autocites and considers the quality, rather than the absolute number, of citations of a journal by other journals. This is done by attributing more weight to citations appearing in higher-quality journals. The question of how high-quality journals are defined may obviously turn into an ugly circular argument, but such a discussion would be well beyond the scope of this editorial. I will, however, come back to this and hope to show that SJR goes a long way toward removing doubts about the value of IFs and C/Ds.
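The prestige-weighting idea behind SJR belongs to the PageRank family of algorithms. The toy sketch below is emphatically not the actual SJR computation, which is considerably more elaborate; it merely illustrates the two ingredients just discussed, on an invented three-journal citation matrix: autocites are discarded, and a citation carries more weight when it comes from a journal that is itself prestigiously cited.

```python
# Toy PageRank-style illustration of prestige-weighted citation counting.
# NOT the actual SJR algorithm; hypothetical three-journal citation matrix.

# cites[i][j] = number of cites from journal i to journal j
cites = [
    [50, 10, 2],   # journal 0 (note the heavy autociting on the diagonal)
    [8, 40, 1],    # journal 1
    [4, 3, 20],    # journal 2
]
n = len(cites)

# Step 1: discard autocites (the diagonal).
for i in range(n):
    cites[i][i] = 0

# Step 2: iterate prestige scores; a cite from a prestigious journal
# transfers more weight than one from an obscure journal.
damping = 0.85
prestige = [1.0 / n] * n
for _ in range(100):
    new = []
    for j in range(n):
        inflow = sum(prestige[i] * cites[i][j] / sum(cites[i])
                     for i in range(n) if sum(cites[i]) > 0)
        new.append((1 - damping) / n + damping * inflow)
    prestige = new

print(prestige)  # higher value = cited more by well-cited journals
```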

Reliability is also an important issue. To the best of my knowledge, no global comparisons between IF and C/D or SJR have been published since 2008. Partial comparisons exist, but they focus on specific areas of knowledge.5 I have therefore endeavored to compare these indexes in a generalized manner. Figure 1 correlates IF and C/D for 99 randomly selected journals in the 2008 JCR-ISI Science Citation Edition; corresponding values for the same journals were collected from the SCImago index for the same year. It is completely obvious that the two indexes measure almost exactly the same thing (r = 0.993; slope = 1.005; p < 0.001).


Figure 2 correlates IF vs. SJR for the journals shown in Figure 1: a highly significant correlation (r = 0.884; p < 0.01) is observed. Thus, it appears that SJR, C/D and IF measure very similar properties of scientific journals; it also appears that autocites may not be such an important confounding issue and that giving more weight to citations in quality journals does not materially alter the results. I do wish to note, however, that these are very preliminary findings and that more research is required before definitive conclusions may be reached.
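For anyone wishing to reproduce this kind of comparison, the reported statistics reduce to a Pearson correlation coefficient and a least-squares slope. A sketch follows; the two arrays are hypothetical placeholders standing in for the 99 sampled journals.

```python
# Sketch of the analysis behind Figures 1 and 2: correlation and slope
# between two journal metrics. Arrays below are hypothetical placeholders.
import numpy as np

impact_factor = np.array([0.8, 1.5, 2.3, 3.8, 5.1])  # hypothetical IFs
cites_per_doc = np.array([0.9, 1.4, 2.4, 3.7, 5.2])  # hypothetical C/Ds

r = np.corrcoef(impact_factor, cites_per_doc)[0, 1]
slope, intercept = np.polyfit(impact_factor, cites_per_doc, 1)
print(f"r = {r:.3f}, slope = {slope:.3f}")
```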


I undertook this study as part of an analysis of a new rating system for publications by Brazilian graduate students. The new system was instituted by CAPES, an agency of the Brazilian Ministry of Education and Culture that is responsible for the control of Brazilian Graduate Education. Since its inception in 1951, CAPES has been an absolutely indispensable factor in promoting the impressive development of Brazilian Graduate Courses and is, therefore, partly responsible for the stunning growth in Brazilian scientific production over the past decades. Among other activities, CAPES promotes a complete evaluation of all Brazilian Graduate Courses once every three years. This is carried out by 50 committees, each covering a specific area within the broader fields of humanities, life sciences and exact sciences. High on the agenda of each of these committees is an evaluation of the quality of publications coming from graduate students and their mentors. Starting in the late nineties, publications have been indirectly evaluated through a ranking of the periodicals in which they appear. This ranking system, known as Qualis, has recently been upgraded for the next general evaluation. Periodicals are now ranked under eight different categories: A1, A2, B1-B5 and C. General guidelines establish that journals in A1 and A2 may not exceed 26% of all listed periodicals, with A1 journals being necessarily fewer than A2 journals; in addition, journals in A1, A2 and B1 may not exceed 50% of the listed journals. Using one of the three Medicine areas as an example (Medicina I), I would like to show the new Qualis guidelines in action. Medicina I established the following boundaries for each of the eight categories (a code sketch of these rules follows the list):6

A1 - IF ≥ 3.800

A2 - 3.800 > IF ≥ 2.500

B1 - 2.500 > IF ≥ 1.300

B2 - 1.300 > IF ≥ 0.001

B3 - Medline/PubMed indexed

B4 - SciELO indexed

B5 - any other indexing

C - no indexing
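These rules are mechanical enough to be expressed as a short function. The sketch below is my own rendering of the Medicina I boundaries (the input fields are invented); note how a journal lacking an IF can never rise above B3, however high its C/D may be.

```python
# Sketch of the Medicina I ranking rules; input fields are hypothetical.
def qualis_medicina_i(impact_factor=None, in_medline=False,
                      in_scielo=False, other_indexing=False) -> str:
    if impact_factor is not None and impact_factor >= 0.001:
        if impact_factor >= 3.8:
            return "A1"
        if impact_factor >= 2.5:
            return "A2"
        if impact_factor >= 1.3:
            return "B1"
        return "B2"
    if in_medline:
        return "B3"
    if in_scielo:
        return "B4"
    if other_indexing:
        return "B5"
    return "C"

print(qualis_medicina_i(impact_factor=2.9))  # -> A2
print(qualis_medicina_i(in_scielo=True))     # -> B4, regardless of its C/D
```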

When the Medicina I list of periodicals (not shown here) is examined, it can be seen that the guidelines were precisely followed.6 However, the theoretical flaw is pretty obvious: the four highest rating categories rely upon a single criterion, namely the IF. Assuming that Medicina I does not intend to raise a monument to the Impact Factor Deity but instead attempts to evaluate the quality of publications, and given the almost perfect synchrony between IF and C/D, it necessarily follows that approximately half of the journals relevant to Medicina I are being downgraded to category B3 or worse, despite having a C/D > 0 that is likely to be very similar to what the IF would be if one were posted. With respect to Brazilian journals, things are even worse: the exclusive adoption of the IF, which covers only 33 Brazilian journals while SCImago lists 154, appears as a particularly perverse form of negative discrimination. One might inquire how well IFs and C/Ds correlate for Brazilian journals. Figure 3 shows the IF vs. C/D interaction for the entire collection of Brazilian journals. This interaction is as good as the one observed for the world collection, with a slope and a regression coefficient that are practically equal to 1.


Table 1 lists the 20 Brazilian journals with the highest C/Ds. Nine of these (45%) are excluded from ISI and, therefore, can only hope to be ranked as B3 by Medicina I. In this plight, they are joined by over 100 other Brazilian journals with C/Ds > 0.01!

Table 2 shows that a similar omission occurs for the 20 Brazilian journals with the highest SJRs. Even though we are not looking at the same journals, 50% of these are again excluded from ISI.

Thus, it may be safely stated that had this committee set out to deliberately discriminate against some Brazilian scientific journals, they could not have succeeded more completely. The same applies to all other life sciences committees because all of them adopted IFs as the only criterion for the highest categories. The Ciências Biológicas I area goes to the extreme of requiring an IF value for six of the eight categories (A1 to B4, leaving B5 to lump together all indexed journals as if all indexing systems were equivalent and C to include all totally irrelevant periodicals).

Finally, two completely different theoretical flaws must be contemplated. These have less to do with the journals and far more to do with the task of evaluating graduate programs. To understand the first of these flaws, Figure 4 shows the percentile distribution of the entire JCR-ISI Science and Social Sciences collection. Two points clearly stand out: (1) the IFs of the Science index are consistently higher than the corresponding IFs of the Social Sciences index. This does not mean that the Science journals are better than the Social Sciences journals; it merely reflects the fact that social sciences are inherently less citable than life and physical sciences, and it is well known that comparisons between IFs of different branches of knowledge are inappropriate. (2) The differences between IFs in the lower percentiles are negligible; above the 75th percentile, however, they become massive.


In Figure 5, the same comparison is made for the two most highly cited subject categories (oncology and immunology) versus the two least cited categories (clinical neurology and pneumology) in Medicina I. Both databases contain equivalent numbers of JCR-ISI journals, and their percentile distributions mimic the discrepancy shown in Figure 4. There is not much of a difference at the 10th to 15th percentiles, which is where the two curves cross the IF = 1.0 value, but there is an important difference at the 70th to 85th percentiles, where each of the curves crosses the critical IF = 3.8 mark. Once again, this does not mean that oncology/immunology journals are scientifically better than their neurology/pneumology counterparts! But the trouble here is that all four are lumped together as Medicina I in the Qualis system. How did this happen? The answer lies in something that was overlooked by the evaluating committee: in the days of the old Qualis, when the highest cutoff mark was an IF of 1.0, these sub-areas of Medicina I could live together with no negative consequences. With the highly critical cutoff points of the new Qualis, however, it is inevitable that oncology/immunology graduate programs will be given higher ratings than pneumology/neurology programs, not because they are better, but simply as a consequence of this flaw. Before anyone suggests that I am advocating a return to the old Qualis system, allow me to restate my previous comments: the old Qualis is dead and gone, but it is absolutely necessary that the evaluation areas in the new Qualis be further subdivided into sub-areas for fair rating.7,8
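The percentile argument is easy to make concrete. The sketch below draws invented lognormal IF distributions for the two groups of sub-areas (the real distributions are those of the JCR-ISI collection) and shows that the same cutoff lands at very different percentiles in each group.

```python
# Sketch of the Figure 5 argument with invented IF distributions.
import numpy as np

rng = np.random.default_rng(0)
oncology_ifs = rng.lognormal(mean=0.9, sigma=0.8, size=200)   # hypothetical
neurology_ifs = rng.lognormal(mean=0.4, sigma=0.7, size=200)  # hypothetical

for cutoff in (1.0, 3.8):
    for name, ifs in (("oncology/immunology", oncology_ifs),
                      ("neurology/pneumology", neurology_ifs)):
        pct = 100.0 * np.mean(ifs < cutoff)
        print(f"IF = {cutoff}: {pct:.0f}% of {name} journals fall below")
```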


One last point must be made. It is a well-known fact that review journals are high-impact journals: an inspection of the JCR-ISI Science Citation Edition 2008, illustrated in Figure 6, shows that review journals comprise a sizable chunk of the top third of the collection: nearly 40% of the top 200 journals and 8.5% of the top 2,000 belong to this class, and the regression is almost perfectly logarithmic. Unfortunately, this was ignored by the evaluating committees, with serious consequences: even though no one really expects graduate students to publish invited papers in big-time review journals, 8% of all the journals listed in the A1/A2 categories of Medicina I are review journals. Had these been excluded, as they indeed ought to have been, the cutoff points would have decreased to a more realistic level. Unfortunately, they were not, and this attempt to apply Harvard-size hurdles to Brazilian science and Brazilian journals may have come too early!
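To illustrate the logarithmic relationship, one can fit the percentage of review journals among the top N against ln(N). In the sketch below, only the first and last data points come from the text; the middle two are placeholders.

```python
# Sketch of the Figure 6 fit: % review journals among the top N vs. ln(N).
import numpy as np

top_n = np.array([200, 500, 1000, 2000])
review_pct = np.array([40.0, 25.0, 15.0, 8.5])  # middle values hypothetical

slope, intercept = np.polyfit(np.log(top_n), review_pct, 1)
print(f"fit: %review ~ {intercept:.1f} + ({slope:.1f}) * ln(N)")
```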


This kind of flawed performance is by no means limited to Medicina I. In many ways, in fact, Medicina I did much better than Ciências Biológicas I and II, which share the shortcomings described above but were more careless in the ranking of their journals and frequently contradicted the rules they had established for their own guidance.

The new Qualis requires a far more serious, critical and imaginative approach, not simply the bureaucratically minded following of a recipe. It might even be argued that the new Qualis requires a radical revision of the mentalities in charge of the CAPES evaluating committees, if for no other reason than to make sure that CAPES continues in its glorious role as a motor for the development of Brazilian science.

1. Ulrich's Periodicals Directory: global source for periodicals information since 1932. Available from: www.ulrichsweb.com
2. ISI-Thomson. Journal Citation Reports. Available from: www.isiknowledge.com
3. SCImago. SJR - SCImago Journal & Country Rank. 2007. Available from: http://www.scimagojr.com (retrieved April 14, 2010).
4. Falagas ME, Alexiou VG. The top-ten in journal impact factor manipulation. Arch Immunol Ther Exp. 2008;56:223-6.
5. Siebelt M, Siebelt T, Pilot P, Bloem RM, Bhandari M, Poolman RW. Citation analysis of orthopaedic literature; 18 major orthopaedic journals compared for Impact Factor and SCImago. BMC Musculoskelet Disord. 2010;11:4-10.
6. CAPES. O novo Qualis. Available from: http://qualis.capes.gov.br/webqualis/
7. Rocha-e-Silva M. O novo Qualis, ou a tragédia anunciada. Clinics. 2009;64:1-4.
8. Rocha-e-Silva M. O Novo Qualis, que não tem nada a ver com a ciência do Brasil: carta aberta ao presidente da CAPES. Clinics. 2009;64:721-4.