Jornal Vascular Brasileiro

Print version ISSN 1677-5449
On-line version ISSN 1677-7301

J. vasc. bras. vol.6 no.1 Porto Alegre Mar. 2007

http://dx.doi.org/10.1590/S1677-54492007000100001 

EDITORIAL

 

How to practice evidence-based medicine

 

 

Regina Paolucci El Dib

Scientific Research Assistant, Centro Cochrane do Brasil. Sentinel reader of comments on articles published in the Evidence-Based Journals Group, McMaster Online Rating of Evidence. PhD student in Internal and Therapeutic Medicine, Universidade Federal de São Paulo – Escola Paulista de Medicina (UNIFESP-EPM), São Paulo, SP, Brazil

 

 

Economic, political, social, cultural and scientific development is marked by slow, gradual processes of growing awareness of important aspects that should be transformed and improved for the well-being of a given community. In the scientific field, research was in the past based only on pathophysiological theories. More recently, however, research has undergone deep changes, adopting a process based on evidence from good scientific studies.

Evidence-based medicine (EBM) is defined as the link between good scientific research and clinical practice.1-2 In other words, EBM uses existing and available scientific evidence, with good internal and external validity, to apply its results in clinical practice. When we discuss treatment and talk about evidence, we refer to effectiveness, efficiency, efficacy and safety. Effectiveness concerns a treatment that works under real-world conditions. Efficiency concerns a treatment that is cheap and accessible, so that patients can benefit from it. Efficacy refers to a treatment that works under ideal conditions. Finally, safety means that an intervention has reliable characteristics that reduce the likelihood of any undesirable effect for the patient.3 A study with good internal validity should therefore address the components described above.

The EBM process starts with the formulation of a clinical question of interest. A well-formulated question is the first and most important step of any research project, since it reduces the possibility of systematic error (bias) during the design, planning, statistical analysis and conclusions. A good scientific question consists of four essential items: the clinical situation (which disease or patient group), the intervention (which treatment of interest is to be tested), the control group (placebo, sham, no intervention or another intervention) and the clinical outcome. Suppose one wishes to know whether platelet aggregation inhibitors are more effective and safer than oral anticoagulants in reducing the incidence of cardiovascular mortality in hypertensive patients. In this example, platelet aggregation inhibitors are the intervention of interest, oral anticoagulants are the control group, hypertensive patients are the clinical situation and reduction in the incidence of cardiovascular mortality is the primary outcome of interest. Of course, other outcomes may be assessed in the same study; in this example, non-fatal cardiovascular events (stroke, myocardial infarction and thromboembolic events) could be considered secondary outcomes.
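The four components above can be modeled as a simple structured record. A minimal sketch (the class and field names are illustrative, not part of any EBM tool):

```python
from dataclasses import dataclass

@dataclass
class ClinicalQuestion:
    """PICO-style clinical question: Population (clinical situation),
    Intervention, Comparison (control group) and Outcome."""
    population: str    # clinical situation: which disease / which patients
    intervention: str  # treatment of interest to be tested
    comparison: str    # placebo, sham, no intervention or another intervention
    outcome: str       # primary clinical outcome of interest

# The worked example from the text:
question = ClinicalQuestion(
    population="hypertensive patients",
    intervention="platelet aggregation inhibitors",
    comparison="oral anticoagulants",
    outcome="reduction in the incidence of cardiovascular mortality",
)
```

Writing the question down in this explicit form makes it easy to check that none of the four items has been left implicit before the search for evidence begins.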

Starting from the question, the next step is knowing which study design best answers it. For the previous example, the designs with the most appropriate internal validity are systematic reviews with or without meta-analyses (considered level I evidence), followed by large clinical trials, called mega trials (more than 1,000 patients; level II evidence), clinical trials with fewer than 1,000 patients (level III evidence), cohort studies (which lack a randomization process; level IV evidence), case-control studies (level V evidence), case series (level VI evidence), case reports (level VII evidence), and expert opinions, animal research and in vitro research.4 The last three are at the same evidence level (level VIII), being essential for formulating hypotheses that will then be tested by good scientific research.
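For treatment and prevention questions, the hierarchy just described can be captured as a plain lookup table. A sketch (the variable and function names are illustrative):

```python
# Evidence hierarchy for treatment/prevention questions, level 1 = strongest,
# following the ordering described in the text (after Cook et al.).
EVIDENCE_LEVELS = {
    1: "systematic review, with or without meta-analysis",
    2: "mega trial (randomized, more than 1,000 patients)",
    3: "randomized clinical trial (fewer than 1,000 patients)",
    4: "cohort study (no randomization)",
    5: "case-control study",
    6: "case series",
    7: "case report",
    8: "expert opinion, animal research, in vitro research",
}

def stronger(level_a: int, level_b: int) -> int:
    """Return the stronger of two evidence levels (lower number wins)."""
    return min(level_a, level_b)
```

The table makes the key convention explicit: a *lower* level number means *stronger* evidence, which is why `stronger` takes the minimum.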

It is worth stressing that the hierarchy of evidence levels presented above is valid for studies on treatment and prevention. If the formulated question concerns risk factors, the prevalence of a given disease, or the sensitivity and specificity of a diagnostic test, the order of evidence levels will differ accordingly. In other words, the hierarchy of evidence levels is not static, but dynamic, depending on the formulated question.

Systematic reviews have advantages over traditional reviews. By using strict methods, systematic reviews reduce the occurrence of bias. Systematic reviews with meta-analyses usually strengthen the results, since the quantitative analysis of the included studies provides additional information.5 Narrative reviews, on the other hand, answer broad, poorly formulated questions; in addition, the source and selection of studies are often not specified, increasing the potential for bias. Systematic reviews are currently considered level I evidence for any clinical question, because they systematically summarize information on a given topic from primary studies (clinical trials, cohort studies, case-control or cross-sectional studies) using a reproducible methodology, besides critically integrating information to support decision making and to explain the differences and contradictions found in individual studies.

A meta-analysis is a statistical calculation (a statistical summation) applied to the primary studies included in a systematic review. Meta-analyses increase the statistical power to detect possible differences between the assessed groups and the precision of the estimate, narrowing the confidence interval. Moreover, meta-analyses are easy to interpret, requiring only some practice and training.
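The narrowing of the confidence interval can be illustrated with standard inverse-variance fixed-effect pooling. A minimal sketch (the trial numbers are hypothetical, not taken from any real study):

```python
import math

def fixed_effect_meta(estimates, std_errors):
    """Inverse-variance fixed-effect meta-analysis: each study is weighted
    by 1/SE^2, so the pooled standard error is smaller than that of any
    single study, which narrows the 95% confidence interval."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    ci95 = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
    return pooled, pooled_se, ci95

# Three hypothetical trials reporting log odds ratios:
pooled, pooled_se, ci95 = fixed_effect_meta(
    estimates=[-0.30, -0.10, -0.25],
    std_errors=[0.20, 0.15, 0.25],
)
```

Because the pooled standard error is always below the smallest individual standard error, combining studies yields a tighter interval than any trial alone, which is the statistical gain the text describes.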

To perform a systematic review, the following is required: a second reviewer (an assistant researcher to select studies, evaluate their quality and extract data); equipment, such as computers and software; and particular skills, such as developing search strategies for databases, selecting studies based on inclusion and exclusion criteria, critically evaluating the studies to be included, interpreting results and updating the systematic review.

Randomized clinical trials are primary studies that answer treatment and prevention questions. They are considered level II evidence, since they have a control group, are prospective (parallel or cross-over), include a randomization process (random allocation of participants to one of the study groups, giving all individuals the same chance of being in the treated or control group) and mask the outcomes to be assessed by the investigator (blinding).6 In this design there are at least two groups: one is given the intervention to be tested (for example, imatinib for gastrointestinal tract tumors) and the other is given another intervention, no intervention or placebo. Both groups are followed, minimizing losses, until the outcomes of interest occur.
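The randomization process described above (drawing participants so that each has the same chance of either group) can be sketched as simple 1:1 allocation. The function and arm names are illustrative:

```python
import random

def randomize_1to1(participant_ids, seed=None):
    """Shuffle participants and split them into two equally sized arms,
    giving every individual the same chance of treatment or control."""
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {"treated": ids[:half], "control": ids[half:]}

arms = randomize_1to1(range(100), seed=7)
```

Real trials use more elaborate schemes (blocked or stratified randomization, concealed allocation), but the principle is the same: group membership is decided by chance, not by the investigator.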

However, there are questions in the health area for which randomization would be unethical, such as investigating the possible occurrence of lung cancer by randomizing individuals to smoking and non-smoking groups. In such cases, the best study design is the classical cohort study, in which participants exposed and not exposed to the risk factor (cigarettes) are prospectively followed over a period of time until the events of interest occur, testing a hypothesis of association.

When dealing with risk factor questions, cohort studies are considered level II evidence, second only to systematic reviews of cohort studies (level I evidence).

It is worth remembering that some clinical trials, usually of a surgical nature, are difficult to design as double-blind (where neither the patient nor the investigator is aware of the participant's allocation). However, there is a procedure to deal with this situation, called sham, fake or dummy (simulation). It works like a placebo (a drug without specific pharmacological activity) but is used to mask surgical techniques. Some authors consider this procedure unethical, since patients are submitted to anesthesia and thereby exposed to risks. Others argue that a sham procedure can be ethically justified if there is a relevant clinical question to be answered, if the use of a sham control group is methodologically necessary to test the study hypothesis and if the risk of the sham procedure is minimal.7 This is what we call the principle of equipoise: the distribution of risks to reduce uncertainty in medicine. In the context of answering a relevant clinical question, the use of sham procedures may be the only way to determine whether the hypothesized mechanism of the surgery is responsible for the improvement in patients' condition.7

Like the cohort study, the case-control study is observational, but it is retrospective, proceeding from outcome to exposure, and is usually useful for questions about rare diseases or situations. One example is investigating the possible effect of a salt-rich diet on cardiovascular disease. The study starts with a group of patients with cardiovascular disease (cases) and a group of individuals without it (controls). A questionnaire is administered to investigate the participants' eating habits in order to establish a possible association between a salt-rich diet and the development of cardiovascular disease.

This study design is cheaper and faster to perform. However, for questions on treatment and prevention it is considered only level V evidence, since it is retrospective and therefore subject to recall bias, besides lacking a randomization process and thus being subject to selection bias.

There are many classifications of evidence levels and grades of recommendation. Most reviewers and collaborators at Centro Cochrane do Brasil use the levels and grades presented here to guide their research or decision making concerning patient care, because they are simple and feasible.

The Cochrane Collaboration is an excellent advance toward decision making in health care, compared with the Genome Project in terms of importance to worldwide clinical medicine.8 Its objectives are to provide accurate information on the effects of health care, promptly available all over the world; to produce and disseminate systematic reviews of health care interventions; and to promote the search for evidence in clinical trials and other intervention studies.

At the homepage of Centro Cochrane do Brasil (www.bireme.br/cochrane), there are several systematic reviews and clinical trials available in the virtual library.

How can we practice EBM? We must follow these steps:

1. Transformation of the need for information (about prevention, diagnosis, prognosis, treatment, etc.) into a question that can be answered.

2. Identification of the best evidence with which to answer this question (verifying the best study design for the clinical question).

3. Searching the main health databases, such as the Cochrane Library, MEDLINE, EMBASE, SciELO and LILACS, for well-designed studies.

4. Critical appraisal of the evidence in relation to validity (proximity to the truth), impact (effect size) and applicability (usefulness in clinical practice).

It is crucial to consider in which evidence level and degree we are currently basing our clinical practice.

In addition, it is important to stress that EBM does not deny the value of personal experience, but proposes that it should be grounded in evidence. Similarly, good scientific research aims at reducing uncertainty in the health area to help make better clinical decisions.

Increasing clinicians' awareness of the need to use good evidence in clinical practice is essential for the continuity of scientific knowledge, and especially for increasing the quality of patient care, taking into account the patient's circumstances and wishes, the clinician's professional experience and the best evidence available at the moment.

 

References

1. Atallah AN. A incerteza, a ciência e a evidência. Diagn Tratamento. 2004;9:27–8.

2. El Dib RP, Atallah AN. Fonoaudiologia baseada em evidências e o Centro Cochrane do Brasil. Diagn Tratamento. 2006;11:103–6.

3. El Dib RP, Atallah AN. Evidence–based speech, language and hearing therapy and the Cochrane Library's systematic reviews. Sao Paulo Med J. 2006;124:51–4.

4. Cook DJ, Guyatt GH, Laupacis A, Sackett DL, Goldberg RJ. Clinical recommendations using levels of evidence for antithrombotic agents. Chest. 1995;108(4 Suppl):227S–30S.

5. Manser R, Walters EH. What is evidence–based medicine and the role of the systematic review: the revolution coming your way. Monaldi Arch Chest Dis. 2001;56:33–8.

6. Jadad AR. Randomised controlled trials: a user's guide. London: BMJ Books; 1998. p. 1–3.

7. Horng S, Miller FG. Ethical framework for the use of sham procedures in clinical trials. Crit Care Med. 2003;31(3 Suppl):S126–30.

8. Naylor CD. Grey zones of clinical practice: some limits to evidence–based medicine. Lancet. 1995;345:840–2.

Creative Commons License: all the contents of this journal, except where otherwise noted, are licensed under a Creative Commons Attribution License.