EDITORIAL

In medio stat virtus: some thoughts about journal Impact Factor

J. Landeira-Fernandez; Dora Fix Ventura; A. Pedro de Mello Cruz

Journal editors are constantly seeking to improve the excellence of their journals. One of an editor's main endeavors is to attract the attention of the scientific community and thus increase both the number of manuscript submissions and the quality of published papers. The Impact Factor is currently the most widely employed tool for expressing a journal's quality. Problems in the calculation of this index and its misuse, however, have had serious academic consequences. To address this issue, a group of editors of high-impact journals and publishers of scholarly journals convened a discussion group during the annual meeting of the American Society for Cell Biology in December 2012 in San Francisco. The group's subsequent discussions led to the publication, in May 2013, of a manifesto known as the San Francisco Declaration on Research Assessment, which recognizes noteworthy limitations of the Impact Factor.

The concept of Impact Factor was introduced by Gross & Gross (1927), who argued that the most frequently cited journals are the most relevant to a field and thus the most valuable for a library to purchase. For that reason, they suggested counting references as a measure to rank the use of scientific journals. In 1955, Eugene Garfield first mentioned the idea of an impact factor (Garfield, 1955), although the term "impact factor" itself was introduced later by Garfield & Sher (1963). In 1960, Garfield founded the Institute for Scientific Information (ISI) and initiated the Science Citation Index in 1963. Over time, the ISI absorbed the Social Sciences Citation Index and the Arts & Humanities Citation Index. Building on these bibliographic databases, the Journal Citation Reports (JCR) was launched in 1975, containing the Impact Factor for each journal indexed by the ISI. In 1992, Garfield sold the ISI to Thomson Scientific for a substantial sum (US$210 million). Thomson Scientific currently publishes the JCR annually. In 2005, Thomson Scientific began Web of Science, which integrated all of the Thomson Scientific databases. In 2007, Thomson Scientific merged with Reuters to become one of the world's primary providers of scientific information. In 2009, Thomson Reuters Scientific (TRS) had more than 55,000 employees in more than 100 countries and annual revenue of approximately US$13 billion (Beira, 2010).

The Impact Factor considers only journals indexed by TRS. It is calculated as the ratio between the total number of citations received in a given year by the papers that a journal published during the preceding 2 years and the total number of papers that the journal published during that same 2-year period. For example, the 2012 Impact Factor reflects the number of citations in 2012 of articles published by the journal in 2010 and 2011, divided by the number of articles that the journal published during the 2010-2011 period. Suppose that journal "X" published 100 papers during 2010 and 2011 and that these papers were cited 300 times during 2012 across all of the journals indexed by TRS. The 2012 Impact Factor of journal "X" is therefore 300/100 = 3.0. Notice that the 2012 Impact Factor of journal "X" can only be published by the JCR in 2013, because all of the citations that occurred in 2012 must first be counted.
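
In compact form, the arithmetic of this example can be written as follows (a restatement of the calculation just described, not official JCR notation):

$$\mathrm{IF}_{2012} = \frac{\text{citations received in 2012 by papers published in 2010-2011}}{\text{number of papers published in 2010-2011}} = \frac{300}{100} = 3.0$$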

Several criticisms have been raised about the way in which the Impact Factor is calculated. For example, TRS does not index all scientific journals worldwide; indeed, TRS is highly selective about which journals it indexes. In 2011, TRS covered 16,350 journals. By comparison, Elsevier's Scopus, a TRS competitor with its own citation-tracking capabilities, began publishing the SCImago Journal Rank indicator in 2004 and covered 26,447 journals during the same period (Nagaraja & Vasanthakumar, 2011).

One of the main problems with the Impact Factor is that it does not consider the citation performance of each individual paper published by the journal. Returning to our example of journal "X," which had an Impact Factor of 3.0 in 2012, it is possible that a single paper received all 300 citations in 2012 and that the other 99 papers published during the 2010-2011 period received no citations at all. Indeed, it is well known that the citation distribution of a journal's papers is strongly skewed to the right (i.e., positively skewed), meaning that a relatively small number of papers receives a very high number of citations and thus contributes disproportionately to the Impact Factor (Seglen, 1992). In fact, review papers from well-known and very productive groups of investigators tend to enhance the Impact Factor of a journal (Ketcham & Crawford, 2007). The Impact Factor therefore reveals very little about the citation performance of any individual paper published in the journal. Papers published in low-Impact Factor journals can receive a considerable number of citations, whereas papers published in high-Impact Factor journals might receive very few. Consequently, the Impact Factor is not a comprehensive bibliometric indicator for evaluating the research impact of a particular author. Other parameters, such as the H-index (introduced by Hirsch, 2005), may better reflect an author's scientific productivity because the H-index takes into account both the scholar's total number of published works and how many times they have been cited: it is the largest number h such that the author has h papers with at least h citations each.
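
To make the H-index concrete, the short sketch below computes it for a hypothetical author. This is a minimal illustration: the function name and the sample citation counts are invented for the example.

```python
def h_index(citations):
    """Return the largest h such that the author has h papers
    with at least h citations each (Hirsch, 2005)."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:
            h = rank  # the paper at this rank still has >= rank citations
        else:
            break
    return h

# A hypothetical author with seven papers:
print(h_index([25, 8, 5, 4, 3, 2, 0]))  # -> 4
```

Note that a single highly cited paper, like the 300-citation outlier above, can raise the H-index by at most one, which is why this measure resists the skew that distorts the Impact Factor.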

An often-mentioned criticism of the Impact Factor concerns journal self-citation, in which an article in journal "X" cites another article published in the same journal "X." This practice artificially inflates the Impact Factor, especially if the journal editor encourages authors to cite papers that the journal published in the previous 2 years. In some cases, the editor may even require authors who submit papers to journal "X" to add such citations before the journal will agree to publish their work. To counter this coercive citation practice, the JCR began to publish an adjusted Impact Factor that excludes journal self-citations. When a journal exhibits extreme, outlier self-citation behavior, TRS may exclude it from the database for 2 consecutive years (the journal's inclusion can be reevaluated in the third year).
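
Continuing the example of journal "X": if, say, 60 of its 300 citations in 2012 came from articles in journal "X" itself (a figure invented purely for illustration), the adjusted value described above would be

$$\mathrm{IF}^{\mathrm{adj}}_{2012} = \frac{300 - 60}{100} = 2.4$$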

Another malpractice that can artificially inflate the Impact Factor occurs when a group of editors teams up for mutual benefit, forming a kind of citation cartel (Franck, 1999). In this practice, known as "citation stacking," the editor of journal "X" encourages authors to cite papers published by journal "Y," and, in turn, the editor of journal "Y" encourages authors to cite papers published by journal "X." Although this is a more refined coercive citation practice, it can be detected by observing anomalous citation patterns between journals, in which journal "X" concentrates an excessive share of its citations on journal "Y" and vice versa. In such cases, both the donor and recipient journals can be excluded from the TRS database for 1 year and reevaluated the following year.
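
As a rough illustration of how such anomalous patterns might be spotted, the toy sketch below flags journal pairs that direct an outsized share of their outgoing citations to one another. The data, the 50% threshold, and the function are all invented for illustration; TRS's actual screening procedure is more sophisticated and is not public.

```python
from itertools import permutations

# cites[a][b] = citations from articles in journal a to articles in journal b
# (toy data invented for illustration)
cites = {
    "X": {"X": 20, "Y": 150, "Z": 30},
    "Y": {"X": 140, "Y": 25, "Z": 35},
    "Z": {"X": 10, "Y": 10, "Z": 5},
}

def stacking_suspects(cites, threshold=0.5):
    """Flag ordered pairs (a, b) where journal a sends more than
    `threshold` of its outgoing (non-self) citations to journal b."""
    suspects = []
    for a, b in permutations(cites, 2):
        outgoing = sum(n for target, n in cites[a].items() if target != a)
        share = cites[a][b] / outgoing if outgoing else 0.0
        if share > threshold:
            suspects.append((a, b, round(share, 2)))
    return suspects

print(stacking_suspects(cites))  # -> [('X', 'Y', 0.83), ('Y', 'X', 0.8)]
```

Reciprocal flags, such as ("X", "Y") together with ("Y", "X") in this toy data, are the signature of stacking; a one-directional flag may simply reflect a specialized journal citing a flagship outlet in its field.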

Despite these problems and the potential for misuse, the advent of the Impact Factor represented a landmark in the field of scientific publication. As we discussed in our last editorial (Landeira-Fernandez, Cruz, & Ventura, 2012), journal evaluation is a complex task that involves both quantitative and qualitative assessment methods. One of the main elements of this process is the placement of a journal within its field, because publication and citation norms differ across areas of knowledge (Amin & Mabe, 2000; Cole, 1983). At the level of scientific journals, the Impact Factor can provide information about a journal's influence within a given field that may help authors decide where to publish their manuscripts. Accordingly, a high correlation exists between the Impact Factor and investigators' quality ratings of journals (Saha, Saint, & Christakis, 2003). The Impact Factor can also help journal editors and publishers track, over the long run, the efficacy and efficiency of their editorial policies and the objectives of their journals. Government and public organizations worldwide can likewise employ this citation index as one indicator of journal success across a wide range of scientific fields. The Impact Factor therefore has valuable informative uses, but these do not justify its indiscriminate application. For that reason, the Latin expression "in medio stat virtus" (i.e., virtue stands in the middle) captures our view: the Impact Factor ought to be used in moderation, in conjunction with other indicators that consider where a journal is published and the field of knowledge it covers, so that different journals worldwide can be evaluated and appropriately ranked.

  • Amin, M., & Mabe, M. (2000). Impact factors: Use and abuse. Perspectives in Publishing, 1, 1-6.
  • Beira, E. (2010). Eugene Garfield, from ISI to Thomson Reuters: A timeline. Mercados e Empresas: Dinâmicas e estratégias, 104, 1-17.
  • Cole, S. (1983). The hierarchy of the sciences? American Journal of Sociology, 89, 111-139.
  • Franck, G. (1999). Scientific communication: A vanity fair? Science, 286(5437), 53.
  • Garfield, E. (1955). Citation indexes for science: A new dimension in documentation through association of ideas. Science, 122(3159), 108-111.
  • Garfield, E., & Sher, I. H. (1963). New factors in the evaluation of scientific literature through citation indexing. American Documentation, 14(3), 195-201.
  • Gross, P. L., & Gross, E. M. (1927). College libraries and chemical education. Science, 66(1713), 385-389.
  • Hirsch, J. E. (2005). An index to quantify an individual's scientific research output. Proceedings of the National Academy of Sciences of the United States of America, 102, 16569-16572.
  • Ketcham, C. M., & Crawford, J. M. (2007). The impact of review articles. Laboratory Investigation, 87(12), 1174-1185.
  • Landeira-Fernandez, J., Cruz, A. P. M., & Ventura, D. F. (2012). Psychology & Neuroscience is well-ranked by the Brazilian Qualis Psychology Committee (editorial). Psychology & Neuroscience, 5, 1-2.
  • Nagaraja, A., & Vasanthakumar, M. (2011). Comparison of Web of Science and Scopus impact factors of Indian journals. Library Philosophy and Practice, Paper 596.
  • Saha, S., Saint, S., & Christakis, D. A. (2003). Impact factor: A valid measure of journal quality? Journal of the Medical Library Association, 91(1), 42-46.
  • Seglen, P. O. (1992). The skewness of science. Journal of the American Society for Information Science, 43, 628-638.

Publication Dates

  • Publication in this collection
    02 Oct 2013
  • Date of issue
    June 2013