
Scientific indicators of productivity: time for action


EDITORIAL


Given the widespread use of research indicators for assessment and resource allocation, it is surprising how little information exists on the association between quantitative citation indices and the research quality they are implicitly assumed to reflect.1 Bibliometric methods such as citation indices are increasingly used to evaluate research performance: to make statements about the qualitative features of research, to guide decisions such as researcher promotion, and to allocate funds to particular fields or research groups. Because of their growing importance, further research should identify which citation measures are best and whether these measures are statistically reliable.

One of the most crucial objectives in bibliometric analysis is to reach a consistent and standardized set of indicators. Citation analysis, for example, is a quick method, which explains its widespread use. Nevertheless, such methods are frequently applied without much attention to their limitations, such as strong variability across research disciplines and publication types.1 It is also well known that some citation-based indices can be manipulated by researchers. For example, "citations per paper" can be increased by publishing fewer papers and refraining from publishing work that is unlikely to be widely cited; total citations can be increased by preferentially publishing reviews, which are usually cited more often than primary data papers.

Several deficiencies are inherent to citation metrics themselves: a metric based solely on the total number of papers does not measure their importance or impact; a metric based on the total number of citations is skewed by a small number of "big hit" articles that receive huge numbers of citations, while the remaining articles may have negligible total impact; a metric based on the average number of citations per paper does not adequately measure productivity; and all of these metrics are difficult to match with administrative parameters.2
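To make these deficiencies concrete, consider a minimal sketch in Python (a language choice of ours; the citation counts are invented purely for illustration) contrasting the count-based metrics for two hypothetical researchers, one of whom has a single "big hit":

```python
# Hypothetical per-paper citation counts for two researchers
# (invented data, for illustration only).
researcher_a = [900, 5, 3, 2, 1, 0]      # one "big hit", little else
researcher_b = [60, 55, 50, 45, 40, 35]  # consistently cited work

for name, citations in [("A", researcher_a), ("B", researcher_b)]:
    total_papers = len(citations)
    total_citations = sum(citations)
    mean_citations = total_citations / total_papers
    print(f"Researcher {name}: {total_papers} papers, "
          f"{total_citations} total citations, "
          f"{mean_citations:.1f} citations/paper")
```

Researcher A outranks researcher B on both total citations (911 vs. 285) and citations per paper (151.8 vs. 47.5) solely because of one highly cited article, even though B's output is consistently influential.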

In 2005, attempting to overcome the disadvantages of the existing metrics used for ranking scientists and journals, Hirsch proposed an innovative metric, the h index. According to him, a scientist has an index h if h of his or her Np papers have received at least h citations each, and the other (Np-h) papers have no more than h citations each.3 Through this new index, Hirsch argues, scientists could be more objectively rewarded with promotions, awards or even funding. To compare individuals of different scientific ages (defined by him as the number of years since the author's first publication), Hirsch divided h by scientific age, generating the value m (m can be thought of as the speed at which a researcher's h index increases). These new indicators account for both productivity and impact, characterizing the cumulative impact of an individual scientist's research work.4
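Hirsch's definition translates directly into a short algorithm. The following sketch (ours, in Python; the function names are illustrative) computes h and the m quotient from a list of per-paper citation counts, reusing the invented data above:

```python
def h_index(citations):
    """Hirsch's h: the largest h such that h papers have at least
    h citations each (and the rest have no more than h)."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def m_quotient(citations, scientific_age):
    """Hirsch's m: h divided by the number of years since the
    author's first publication (the scientific age)."""
    return h_index(citations) / scientific_age

print(h_index([900, 5, 3, 2, 1, 0]))             # 3: the "big hit" counts once
print(h_index([60, 55, 50, 45, 40, 35]))         # 6: consistent impact rewarded
print(m_quotient([60, 55, 50, 45, 40, 35], 12))  # 0.5, assuming a 12-year career
```

Note how the single "big hit" that dominated the count-based metrics above contributes only one paper to h.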

The introduction of the h index was a major breakthrough in citation analysis and, independently of how good and reliable it might be, it has generated much commentary within the scientific community. Some authors have compared the h index with standard bibliometric indicators, such as the total number of citations, and with the results of peer review judgment; both showed a strong correlation with the h index.4 Lehmann also compared two measures of author quality with the h index: the mean number of citations per paper and the number of papers published per year (frequency of publication).5 The results were calibrated against an obviously meaningless measure in which authors are ranked alphabetically by name. The findings were as follows: the alphabetical ranking of authors contains no information regarding their scientific quality; the average number of papers published per author per year (one of the most widely used measures of scientific quality) performed no better than the alphabetical list; and, compared with the h index, the mean number of citations per paper is a superior indicator of scientific quality in terms of both accuracy and precision.5 More recent criticisms add that the original h index assigns the same importance to all citations, no matter how old they might be, and the same importance to all articles, thus giving novice researchers a relatively small h index compared with their senior counterparts and consequently hiding brilliant researchers at the beginning of their careers.2
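Such criticisms have motivated age-weighted variants of the index. As a purely illustrative sketch (ours; the weighting scheme and parameter values are assumptions, not necessarily the exact formulation of reference 2), one can discount each paper's citations by the paper's age before computing h, so that recent work weighs more:

```python
def age_discounted_h(papers, current_year, gamma=4.0, delta=1.0):
    """h index over age-discounted citation scores: each paper's
    citations are scaled by gamma / (age + 1)**delta, so older
    citations count less. gamma and delta are illustrative."""
    scores = sorted(
        (gamma * cites / (current_year - year + 1) ** delta
         for year, cites in papers),
        reverse=True,
    )
    # scores are sorted descending, so the condition holds on a prefix
    return sum(1 for rank, s in enumerate(scores, start=1) if s >= rank)

# (publication year, citations) pairs, invented for illustration;
# both researchers have a plain (unweighted) h index of 3.
old = [(1987, 10), (1988, 10), (1989, 10)]  # senior, aging citations
new = [(2005, 10), (2006, 10), (2006, 10)]  # novice, recent citations
print(age_discounted_h(old, current_year=2007))  # 2: old citations discounted
print(age_discounted_h(new, current_year=2007))  # 3: recent work keeps weight
```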

So, having learned that no single available measure will provide all the information we want and need, should we simply stop trying to rank quality in science, or should we keep searching for a single "holy grail" among scientometric indices? We believe the answer to both questions is no. Decisions about science based on the merit of individual researchers or groups need to be made, and scientific indices of quality are the natural means by which this is accomplished. The problem, we believe, lies in how it is accomplished. Science is a very complex construct, and its complexity will not be mapped by single indicators. As in economics, multiple indices focusing on different aspects and areas of science need to be established so that each can be appropriately measured. Of utmost importance, indices of scientific quality should not be based exclusively on secondary sources of data such as citation patterns; as happened with macroeconomic indices in the mid-20th century, they should also draw on prospectively collected data. Such data will help us evaluate the impact of individual articles, scientists, groups, and institutions on science as a whole, as well as on the well-being of society. These measures will only become feasible once governments realize that the large sums of funding allocated to researchers and groups every year should continue to be advised by renowned scientists in the field, but that these decisions should also be supported by a series of indices of scientific quality validated through adequate evidence.

Sonia Mansoldo Dainesi

Núcleo de Apoio à Pesquisa Clínica

(NAPesq - Clinical Research Support Center), Clinical Hospital, Medical School,

Universidade de São Paulo (USP), São Paulo (SP), Brazil

Ricardo Pietrobon

Biomedical Informatics,

Duke Translational Medicine Institute (DTMI),

Duke University, Durham, NC, USA

References

1. Wallin JA. Bibliometric methods: pitfalls and possibilities. Basic Clin Pharmacol Toxicol. 2005;97(5):261-75.

2. Sidiropoulos A, Katsaros D, Manolopoulos Y. Generalized h-index for disclosing latent facts in citation networks. Cornell University Library; 2006 Jul [cited 2007 Jan 10]. Available from: http://arxiv.org/PS_cache/cs/pdf/0607/0607066.pdf

3. Hirsch JE. An index to quantify an individual's scientific research output. Proc Natl Acad Sci U S A. 2005;102(46):16569-72.

4. Van Raan AJ. Comparison of the Hirsch-index with standard bibliometric indicators and with peer judgment for 147 chemistry research groups. Scientometrics. 2006;67(3):491-502.

5. Lehmann S, Jackson AD, Lautrup BE. Measures for measures. Nature. 2006;444(7122):1003-4.

Financing: None

Conflict of interests: Sonia M. Dainesi has previously worked at Rhodia Farma, Sandoz and, more recently (until March 2005), as the Medical Director of Aventis Pharma. She is also a member and former President (2004-2005) of the Sociedade Brasileira de Medicina Farmacêutica (SBMF, Brazilian Society of Pharmaceutical Medicine).

Publication Dates

  • Publication in this collection
    06 July 2007
  • Date of issue
    June 2007