

Arquivos Brasileiros de Cardiologia

Print version ISSN 0066-782X

Arq. Bras. Cardiol. vol.96 no.2 São Paulo Feb. 2011

http://dx.doi.org/10.1590/S0066-782X2011000200001 

SPECIAL ARTICLE

 

Using the impact factor and H index to assess researchers and publications

 

 

Petronio Generoso Thomaz; Renato Samy Assad; Luiz Felipe P. Moreira

Instituto do Coração HCFMUSP - São Paulo, SP - Brasil


 

 


Keywords: Impact factor; H index; research personnel; statistics; scientific and technical publications.


 

 

Introduction

With the growing demand for scientific research funding, it is necessary to establish mechanisms for assessing academic and scientific quality, so as to reward the individuals and institutions capable of producing cutting-edge research and thus ensure a profitable investment by research funding agencies1. In this scenario, traditional peer evaluation reveals weaknesses inherent in the subjective and corporatist aspects of evaluating academic productivity. Qualitative and quantitative evaluation indexes, seen by many as more reproducible and less subject to personal biases, should therefore be added to it2. The number of publications, the number of citations and the average citations per published paper, taken separately, are traditional bibliometric indices that are flawed because they do not combine the information from papers with that of their citations3. The traditional count of papers published, initially widely accepted and used, is no longer sufficient as a means of assessing a researcher's scientific strength. The quality of publications is now seen as the distinguishing feature. The emphasis has therefore shifted to evaluating the interest that a paper or line of research arouses within the scientific community, an interest reflected in the number of citations a particular paper receives.

Assessing the quality of journals, the main vehicles for disseminating scientific research, has been employed in our field as a form of analysis of post-graduate programs, generating the well-known Qualis list of the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (Department of Higher Education Personnel Improvement - CAPES). Qualis is a set of procedures designed to meet the specific needs of the evaluation system; it provides a ranked list of the media used by post-graduate programs to disseminate their production. Qualis measures the quality of papers and other types of production indirectly, by analyzing the quality of the dissemination media, i.e., scientific journals and conference proceedings. The media that disseminate the intellectual output of post-graduate programs (master's and doctoral) are ranked as A1, A2, B1, B2, B3, B4, B5 and C, based on cut-off values of the impact achieved by the journal4.

Described in 2005 by Jorge E. Hirsch as a tool to determine the relative quality of papers produced by theoretical physicists, the H index has become widely used in scientific circles as a way of measuring researchers' productivity and impact. It has even been incorporated into the Lattes Platform of the Brazilian Council of Scientific and Technological Development (CNPq)5. In this article, we discuss the impact factor as a means of assessing scientific journals and the H index as a way of assessing researchers. Other bibliometric indexes will not be addressed here, but it should be noted that the impact factor and the H index alone may not be sufficient to accomplish the task of evaluating journals, articles and authors.

 

Impact factor

Proposed by Eugene Garfield in 19556, the Impact Factor (IF) started to be used as a tool for assessing the quality of publications in the 1960s and was later used as a criterion for selecting journals to be indexed by the Science Citation Index (SCI). Since then, the IF has established itself as a means of assessing journals at various levels. It is calculated annually by the Institute for Scientific Information (ISI)/Thomson Reuters for the journals indexed in its database, and it is published in the Journal Citation Reports (JCR)7.

Today, authors consider the IF when selecting the journal that will give greater visibility to their paper. Librarians consider the IF a parameter for selecting the titles of greatest scientific interest, which should therefore be part of institutional scientific collections. In parallel, editors pay close attention to the IF of their journals, aware of the importance of this index in fundraising and in attracting quality papers for publication. At funding agencies, those responsible for developing scientific policy also use this index to select the researchers and institutions of greatest merit, which would best meet the agencies' demands.

To calculate the IF of a journal in a given year, the number of citations received in that year by the articles published in that journal over the two previous years is divided by the number of articles published by that journal during the same period (Table 1)7.
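The two-year calculation described above can be sketched as follows (the citation and article counts are hypothetical, chosen only to illustrate the arithmetic):

```python
def impact_factor(citations_received, citable_articles):
    """Two-year impact factor: citations received in year Y by items the
    journal published in years Y-1 and Y-2, divided by the number of
    citable articles the journal published in those same two years."""
    if citable_articles == 0:
        raise ValueError("journal published no citable articles in the window")
    return citations_received / citable_articles

# Hypothetical journal: 480 citations received in the target year to its
# articles from the two previous years, during which it published 200
# citable articles.
print(impact_factor(480, 200))  # 2.4
```

Note that, as discussed below, only "citable items" (original and review articles) enter the denominator, while citations to any item count in the numerator.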

 

 

Therefore, the impact factor is useful for evaluating the quality of a journal; it is not, however, useful for analyzing the scientific quality of a single article, a researcher or an institution8-10.

 

Variables that may alter the IF calculation

For the JCR's IF calculation, only original manuscripts and review articles are considered. Letters to the editor and editorials are not included in the denominator of the calculation; on the other hand, they may be cited and are therefore counted in the numerator. Journals such as Nature or Science, which publish many articles that are not strictly scientific, may have their IF inflated because of this bias11.

Another bias that influences the calculation of the impact factor relates to differences between fields of knowledge, or even sub-fields. The number of references cited per article (citation density) may be quite different: articles in the exact sciences, for example, have a lower citation density than those in the health sciences. This partly explains why the IFs of health science journals are, on average, much higher than those of exact science journals, such as mathematics journals12.

The IF published annually by the JCR, by taking into account only the citations received by a journal over a two-year window, tends to benefit journals in fields where knowledge is updated quickly. In such fields, articles are cited immediately after publication, creating a bias toward a higher IF. Fields such as the biological or exact sciences thus tend to have higher IFs than those in which knowledge production takes place at a slower pace, such as the social sciences or humanities.

 

H Index

This index was initially proposed by Jorge E. Hirsch for the qualitative evaluation of physics researchers13. It quickly gained prominence in other disciplines and is now widely used as a means of assessing the impact of individual researchers. Many authors consider it not only the most reliable way to measure a researcher's scientific quality, but also a good tool for assessing regular production and forecasting future scientific performance, as it combines productivity and impact14,15. The H index of a researcher is defined as the number of articles published by the researcher whose citation counts are greater than or equal to that number. For example, when we say that a researcher has an H index of ten, it means that he has published at least ten articles, each with at least ten citations. The greater the number of articles of great interest published by the researcher, the greater the number of citations achieved and the greater his H index, reflecting the academic and scientific quality of the researcher and his production capacity. The total number of articles alone, by contrast, may hide the lack of relevance of each text in isolation. We can thus say that the H index is the result of the balance between the number of publications and the number of citations. Jorge Hirsch compared the H index with other indexes commonly used to analyze the scientific output of a researcher, as summarized in Table 2.

 

 

Hirsch himself argues that individuals with similar H indexes are comparable in terms of scientific impact, even when their numbers of articles or total citations are very different. Conversely, when we compare two individuals (of the same scientific age) with equal numbers of publications or citations but very different H indexes, the one with the higher H index is probably the more talented researcher. However, like any simplistic attempt to categorize or classify a researcher's production with a single number, the H index is far from perfect and faces several criticisms16. Among these, besides the usual objection that one cannot characterize a researcher by a number, are: self-citation17, the lack of distinction between active and inactive scientists, dependence on scientific age, differences between fields, sex, etc. Some variants have been proposed to overcome these disadvantages, such as the M index, which allows comparing scientific careers of different lengths18. Dodson19 believes that the index underestimates the actual number of citations by about 30% to 50% and proposes the E index, which helps estimate the citations of papers not covered by the H index, i.e., the citations of papers published after the one corresponding to the H index19.

 

Calculation of H Index

Currently, the Web of Science database of ISI/Thomson Reuters automatically calculates the H index of researchers. For this purpose, one must enter the author's "citation name" in the appropriate field of the search platform and wait for the articles and their citations to be retrieved. If there is a homonymous author, any articles that were not written by the researcher concerned should be excluded. Afterwards, clicking the appropriate icon ("Create Citation Report") yields the H index, as well as the total number of citations and the average citations per article20. An interesting alternative for calculating this and many other indexes is the computer program "Publish or Perish", available on the web at http://www.harzing.com/pop.htm. The program uses the "Google Scholar" website to retrieve and analyze academic citations.

We can also calculate the H index manually. To do so, we sort the papers by number of citations, starting with the most cited. The H index of a particular author is the last position in this sequence at which the number of citations is greater than or equal to the position's rank21. Here is an example. Suppose a researcher has the following sequence of published articles: article 1 - 17 citations; article 2 - 16 citations; article 3 - 14 citations; article 4 - 10 citations; article 5 - 5 citations; article 6 - 3 citations; article 7 - 2 citations. This author has an H index of five, because five is the last point in the sequence at which the number of citations equals or exceeds the article's rank. Some authors emphasize that the H index, taken in absolute terms, cannot be used to compare researchers from different fields22. An H index considered good in one field may not be equally good, or may even be considered poor, in another. In general, the highest H index values are found among researchers working in the life sciences.
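The manual procedure described above can be sketched as a short function, using the seven-article citation sequence from the example in the text:

```python
def h_index(citations):
    """H index: the largest h such that the author has h papers
    with at least h citations each."""
    ranked = sorted(citations, reverse=True)  # most-cited paper first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:  # this paper still has at least `rank` citations
            h = rank
        else:
            break  # citations fell below the rank: h is settled
    return h

# The example from the text: 17, 16, 14, 10, 5, 3 and 2 citations.
print(h_index([17, 16, 14, 10, 5, 3, 2]))  # 5
```

The fifth paper has exactly 5 citations (5 >= 5), while the sixth has only 3 (3 < 6), so the index settles at 5, matching the worked example.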

 

Closing remarks

Becoming familiar with some bibliometric indexes has become extremely important for researchers who depend on research funding and are often evaluated with these bibliometric tools23. Each of these bibliometric indexes has limitations; using several of them in combination represents the fairest and most legitimate form of evaluation. Despite their subjectivity, peer reviews remain valuable, whether in the evaluation of researchers applying for academic positions or in the editorial evaluation of scientific papers. None of the qualitative and quantitative indexes, as good as they may be, is sufficiently accurate to be used in isolation. The combination of some of these indexes with peer review is certainly the best way to carry out an objective evaluation. Finally, it is important to keep in mind that the task of judging, whether the scientific reputation of researchers or the eligibility of institutions to receive funds, should always strive for fairness and accuracy, thus avoiding irreparable mistakes.

Potential Conflict of Interest

No potential conflict of interest relevant to this article was reported.

Sources of Funding

There were no external funding sources for this study.

Study Association

This study is not associated with any post-graduation program.

 

References

1. Bornmann L, Daniel HD. The state of H index research: is the H index the ideal way to measure research performance? EMBO Rep. 2009;10(1):2-6.

2. Strehl L. O fator de impacto ISI e a avaliação da produção científica: aspectos conceituais e metodológicos. Ci Inf. 2005;34(1):1-14.

3. Sypsa V, Hatzakis A. Assessing the impact of biomedical research in academic institutions of disparate sizes. BMC Med Res Methodol. 2009;9:33.

4. Nova tabela Qualis. [Accessed 2009 Nov 20]. Available from: http://qualis.capes.gov.br/webqualis/

5. Kellner AW, Ponciano LC. H-index in the Brazilian Academy of Sciences: comments and concerns. An Acad Bras Cienc. 2008;80(4):771-81.

6. Garfield E. Citation indexes: new paths to scientific knowledge. Chem Bull. 1956;43(4):11-2.

7. ISI Web of Knowledge. Journal Citation Reports. [Accessed 2009 Nov 20]. Available from: http://admin-apps.isiknowledge.com/JCR/JCR

8. Garfield E. Citation indexes for science: a new dimension in documentation through association of ideas. Science. 1955;122(3159):108-11.

9. Gisbert JP, Panés J. Índice H de Hirsch: una nueva herramienta para medir la producción científica. Cir Esp. 2009;86(4):193-5.

10. Quindós G. Confundiendo al confuso: reflexiones sobre el factor de impacto, el índice H(irsch), el valor Q y otros cofactores que influyen en la felicidad del investigador. Rev Iberoam Micol. 2009;26(2):97-102.

11. Garfield E. The use of JCR and JPI in measuring short and long term journal impact. Croat Med J. 2000;41(4):368-74.

12. Garfield E. The use of JCR and JPI in measuring short and long term journal impact. In: The Scientist. Council of Scientific Editors Annual Meeting, May 9, 2000.

13. Hirsch JE. An index to quantify an individual's scientific research output. Proc Natl Acad Sci USA. 2005;102(46):16569-72.

14. Hirsch JE. Does the H index have predictive power? Proc Natl Acad Sci USA. 2007;104(49):19193-8.

15. Kulasegarah J, Fenton JE. Comparison of the h index with standard bibliometric indicators to rank influential otolaryngologists in Europe and North America. Eur Arch Otorhinolaryngol. 2010;267(3):455-8.

16. Engqvist L, Frommen JG. The h-index and self-citations. Trends Ecol Evol. 2008;23(5):250-2.

17. Purvis A. The h index: playing the numbers game. Trends Ecol Evol. 2006;21(8):422.

18. Molinari JF, Molinari A. A new methodology for ranking scientific institutions. Scientometrics. 2008;75:163-74.

19. Dodson MV. Citation analysis: maintenance of h-index and use of e-index. Biochem Biophys Res Commun. 2009;387(4):625-6.

20. ISI Web of Knowledge. Web of Science. [Accessed 2009 Nov 20]. Available from: http://apps.isiknowledge.com/WOS_GeneralSearch.do?action=clear&product=WOS&search_mode=GeneralSearch&SID=3F1C5AcanmGi@LIi5NG

21. Salgado JF, Páez D. Scientific productivity and Hirsch's h index of Spanish social psychology: convergence between productivity indexes and comparison with other areas. Psicothema. 2007;19(2):179-89.

22. Iglesias JE, Pecharromán C. Scaling the h-index for different scientific ISI fields. Scientometrics. 2007;73(3):303-20.

23. Gisbert JP, Panés J. Publicación científica, indicadores bibliométricos e índice H de Hirsch. Gastroenterol Hepatol. 2009;32(3):140-9.

 

 

Mailing address:
Renato S. Assad
Instituto do Coração do Hospital das Clínicas da Faculdade de Medicina da USP
Av. Dr. Eneas de Carvalho Aguiar, 44 - Cerqueira César
05403-000 - São Paulo, SP - Brazil
E-mail: rsassad@cardiol.br

Manuscript received September 29, 2009; revised manuscript received January 28, 2010; accepted March 29, 2010.

All the contents of this journal, except where otherwise noted, are licensed under a Creative Commons Attribution License.