
Case reports and qualitative research: two important approaches to evaluation and communication in medical science

Thomas C. Jones

Editor, BJID

As an Editor of the Brazilian Journal of Infectious Diseases, it is a pleasure to announce that the "Case Report", as a method of communicating medical experience and summarizing key educational points, is alive and well in our journal. During the four and a half years of publication of the BJID, 25 manuscripts identified as case reports have been published, an average of 1 per issue.

This record is particularly relevant because other journals are asking their readers, "What has happened to the case report as a mechanism of medical communication?" [1, 2]. The journal Clinical Infectious Diseases recently asked its readers to ignore rumors that it did not accept such reports; its editors stated that there was no truth to such rumors, and that they needed more information transmitted in that format. In an article in the Annals of Internal Medicine, a full defense of case reports and case series was presented, as if a crisis in medical education were near [1]. Our readers can take comfort, as the BJID has never even considered placing the case report or case series in an inferior position. In fact, this form of communication is vital to the exchange of medical experiences.

After all this concern, one must ask why there has been a perception that personal experience, as a form of medical communication, is no longer necessary. The answer is clear: "evidence-based" medicine has achieved a role in medical education that has begun to affect the big picture of how we think, learn and communicate. This is dangerous. The following arguments are presented in favor of communicating our experiences (case reports and case series), of studying open-ended questions (qualitative research), and of recording opinions (expert reviews and editorials) as critical components of information exchange.

"Evidence-based" medical studies have emerged over the last 50 years as a critical part of the process of data gathering and communicating medical information [3]. These studies include ensuring proper sample sizes of study populations, randomization, blinding of all participants to prevent bias, focused end-points for analysis, and statistical rejection of the null hypothesis in order to properly evaluate any effects studied. These have been laudable goals, and they have contributed a great deal to our ability to do proper science. Unfortunately, what has also happened is that we have forgotten the major limitations of this approach and have allowed an imbalance in medical information gathering to occur. Recently, there have been efforts to redirect, or at least put in perspective, various approaches to medical information gathering and communication. Here, we summarize some of the rationales and directions of this process.

There are two major problems with evidence-based medical studies: 1) errors in the medical design of a study are far more likely to affect its outcome than has previously been thought, and 2) the theoretical basis and limitations of rejecting the null hypothesis have been forgotten. These two overlooked points have led to serious problems in the use of these studies. More important, these problems have not been, and probably cannot be, corrected, requiring us always to use other forms of data gathering as well.

Poor design of evidence-based studies

We recently published arguments urging a change in the requirements for most Phase III and Phase IV drug-registration studies [4]. These arguments were based on the growing costs and ethical issues related to these studies, and on the fact that a large number of them have included design flaws. These flaws have included poor definition of the type of diseased patient enrolled, inaccurate selection of the drug dose or duration to be tested, inappropriate study sample size, and ignoring imbalances in the demographics of the study groups. These flaws have meant that even when a study yielded positive results, the dose, duration, or target disease may have been wrong; when the results were negative, this was more likely due to design flaws than to drug inefficacy.
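
To illustrate just one of these flaws, inappropriate sample size, the following sketch applies the standard two-proportion sample size approximation to invented cure rates. It shows how an optimistic assumption about drug benefit can leave a trial far too small, so that a negative result reflects the design rather than the drug.

```python
# Sketch (invented numbers): required enrollment for comparing two cure rates,
# 80% power at two-sided alpha = 0.05, using the standard approximation.
from math import ceil

def n_per_group(p1: float, p2: float, z_alpha: float = 1.96, z_beta: float = 0.84) -> int:
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Control cure rate 70%; patients per arm under two assumed drug benefits:
for p_new in (0.80, 0.75):
    print(f"assumed cure rate {p_new:.0%}: {n_per_group(0.70, p_new)} per arm")

# ~291 per arm for a 10-point benefit, but ~1247 per arm for a 5-point benefit:
# a trial sized on the optimistic assumption is badly underpowered for the other.
```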

This problem in evidence-based studies has gone unrecognized because there has been no adequate way to score the "quality" of a study in the assessment. Although efforts are being made to teach young scientists how to design and evaluate the quality of studies, and recent publications have tried to include assessments of article quality [5, 6], a comprehensive and useful "quality score" for grading studies is not available. The problem of assessing the quality of a study has been compounded by the use of retrospective meta-analyses. Such analyses have almost always asked only whether a randomized, blinded study was done and published; after a Medline search for these studies, all those found are included in the analysis. Thus, studies of very poor quality in aspects other than blinding or randomization (which are the majority of published studies [6]) are mixed with good-quality studies, yielding a worthless piece of information. The conclusion is that, before we allow evidence-based studies to remain the "gold standard" of medical research, a scale to measure the quality of the studies must be introduced. Until that time, all other types of data gathering stand as important methods for providing information about key advances in medical care.
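
Since no validated quality score exists, the following can only be a hypothetical sketch: with invented effect estimates and invented scores, it shows how a meta-analysis could weight studies by quality as well as size, instead of pooling everything found in a Medline search equally.

```python
# Hypothetical only: the text notes that no validated quality score exists.
# (effect estimate, sample size, quality score 0..1) -- all values invented.
studies = [
    (0.40, 200, 0.9),  # small, well-designed trial
    (0.10, 500, 0.2),  # large but poorly designed trial
    (0.35, 150, 0.8),  # small, well-designed trial
]

size_weighted = (sum(e * n for e, n, _ in studies)
                 / sum(n for _, n, _ in studies))
quality_weighted = (sum(e * n * q for e, n, q in studies)
                    / sum(n * q for _, n, q in studies))

print(f"size-weighted pooled effect:        {size_weighted:.2f}")     # ~0.21
print(f"size- and quality-weighted effect:  {quality_weighted:.2f}")  # ~0.31
```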

The rejection of the null hypothesis

It has been noted for many years that the use of a significance level of 0.05 to reject the null hypothesis is a practical compromise that depends on several theoretical conditions about the data of the study, conditions that almost never hold [7, 8]. For example, a bell-shaped (normal) response curve is required, something that almost never occurs. Moreover, the cutoff itself is arbitrary: a chance event expected in 1 of 21 tests (p ≈ 0.048) is considered so unlikely that the drug being studied must have altered the condition, while a chance event expected in 1 of 19 tests (p ≈ 0.053) would indicate an absence of drug effect. This practical tool, although useful, has been pushed far beyond its capacity to discriminate. Each study is viewed as if it existed in a vacuum, with the only important factor being whether random events could explain the outcome. None of us lives in such a world.
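
The arbitrariness of the cutoff is easy to show in a few lines; the sketch below simply compares the two chance frequencies mentioned above against the conventional 0.05 threshold.

```python
# The two chance frequencies discussed above, judged against the 0.05 cutoff.
ALPHA = 0.05

for description, p_value in [
    ("event expected once in 21 trials", 1 / 21),  # p ~ 0.048
    ("event expected once in 19 trials", 1 / 19),  # p ~ 0.053
]:
    verdict = ("reject null: 'drug effect'" if p_value < ALPHA
               else "fail to reject: 'no effect'")
    print(f"{description}: p = {p_value:.3f} -> {verdict}")
```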

Recently, the old idea embodied in Bayes' theorem was re-examined [9, 10]. This approach was rejected decades ago because it was too complicated, but it may now be exactly what is needed. Under Bayes' theorem, one is asked to factor other questions into the statistical analysis of any study: What is the background setting within which the study was done? What other data are available about the class of drugs? What other studies were done using the drug in question? The answers to these questions are then incorporated into the analysis as a "factor", in addition to the null hypothesis. This factor essentially summarizes the entire environment relevant to the study. For example, if the drug studied was an antibiotic, then to calculate the factor one would need to know how many other studies were done with the same drug and the results of those studies, as well as the results of studies using antibiotics in the same drug class. The factor would also include the effects of other antibiotics on the same disease. What a refreshing addition - indeed, one very similar to what the physician does every day in his or her decision-making process during patient care. Is it a complicated addition? Certainly. Can it and must it be done in our computer-sophisticated society? Yes. Until this addition occurs, one must regard evidence-based studies as seriously compromised.
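
A minimal sketch of this reasoning, with invented numbers, follows Goodman's presentation of the Bayes factor [10]: the same study result is read against two different backgrounds of prior evidence and yields two different conclusions.

```python
# Minimal sketch of Bayes-factor updating (all numbers invented; see [10]).

def posterior_probability(prior_prob: float, bayes_factor: float) -> float:
    """Posterior P(drug works): posterior odds = prior odds * Bayes factor."""
    prior_odds = prior_prob / (1 - prior_prob)
    posterior_odds = prior_odds * bayes_factor
    return posterior_odds / (1 + posterior_odds)

# One study reporting a Bayes factor of 5 in favor of an effect, read against
# weak and strong background evidence for the drug and its class:
print(f"weak prior evidence   (P=0.10): {posterior_probability(0.10, 5.0):.2f}")  # ~0.36
print(f"strong prior evidence (P=0.60): {posterior_probability(0.60, 5.0):.2f}")  # ~0.88
```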

Other approaches for the communication of medical information

Until these corrections of evidence-based data collection are made, we must turn to other methods of data collection. These methods include the presentation of case reports and case series, with an added careful review of the medical evidence supporting the points made based on the cases studied. After a proper medical education, readers provide the control of bias from their own experience, just as they do in every other aspect of reading. Criteria at the basis of controlled trials can also be very useful in evaluating clinical trials that are not "evidence-based" in design. For example, observational and historically controlled trials can easily be substituted for randomized, blinded, comparative trials at considerably lower expense, once we have learned how to assess their quality [11, 12]. It is important to remember that these types of trials have improved a great deal over the past decades, in part because of points learned from evidence-based trial designs.

Also of great interest is another type of trial, included under the heading of "qualitative research" [13, 14]. In this type of trial, we ask an open-ended question that cannot be answered by use of the null hypothesis or an environment-based Bayes factor. The example used by Giacomini [13] is particularly interesting in Brazil: how can we better understand the behavior of people at a traffic light? An evidence-based study design would randomly assign drivers to be confronted by a red or a green light and record the response; the conclusion might be that the percentage of "good" drivers varies among countries. A qualitative research design would ask the general question, "What does a red or a green traffic light mean to the driver?" The answer would yield different results in Brazil than in Switzerland, and it might lead to a better understanding of how to use a yellow caution light. Qualitative research in medicine could assist in answering questions about how guidelines help in antibiotic use [15], or about advising nurses on hand hygiene in an intensive care unit [16]. We have the techniques and understanding to begin using trials with these types of study designs.

The challenges in medical information gathering and communication are immense. The questions in the future regarding any data submitted for publication should be, "Are the issues well addressed, properly focused, sufficiently detailed, clearly presented and discussed?" If the answers are yes, then we can all feel that medical science has been advanced. The Brazilian Journal of Infectious Diseases will continue to do its part to ensure publication of all such articles.

References

1. Vandenbroucke J.P. In defense of case reports and case series. Ann Intern Med 2001;134:330-4.

2. Gorbach S.L., Barza M. Where have all the case reports gone? Clin Infect Dis 2001;32:1.

3. Nies A.S. Principles of therapeutics. In: Gilman A.G., Rall T.W., Nies A.S., Taylor P., eds. The Pharmacological Basis of Therapeutics. New York, Oxford: Pergamon Press, 1990:62-83.

4. Jones T.C. A call for a new approach to the clinical trials process and drug registration. Br Med J 2001;322:920-3.

5. Bach P.B., Brown C., Gelfand S.E., McCrory D.C. Management of acute exacerbations of chronic obstructive pulmonary disease: a summary and appraisal of published evidence. Ann Intern Med 2001;134:600-20.

6. Altman D.G., Schulz K.F., Moher D., et al. The revised CONSORT statement for reporting randomized trials: explanation and elaboration. Ann Intern Med 2001;134:663-94.

7. Rothman K.J. Significance questing (Editorial). Ann Intern Med 1986;105:445-7.

8. Pocock S.J., Hughes M.D., Lee R.J. Statistical problems in the reporting of clinical trials. A survey of three medical journals. N Engl J Med 1987;317:426-32.

9. Goodman S.N. Toward evidence-based medical statistics. 1: The P value fallacy. Ann Intern Med 1999;130:995-1004.

10. Goodman S.N. Toward evidence-based medical statistics. 2: The Bayes factor. Ann Intern Med 1999;130:1005-13.

11. Concato J., Shah N., Horwitz R.I. Randomized, controlled trials, observational studies, and the hierarchy of research designs. N Engl J Med 2000;342:1887-92.

12. Benson K., Hartz A.J. A comparison of observational studies and randomized, controlled trials. N Engl J Med 2000;342:1878-86.

13. Giacomini M.K. The rocky road: qualitative research as evidence. ACP Journal Club 2001;134:A11-A13.

14. Inui T.S., Frankel R.M. Evaluating the quality of qualitative research. J Gen Intern Med 1991;6:485-6.

15. Badaro R., Jones T.C. Conflicting messages regarding emerging microbial resistance, microbial sensitivity testing and control of antibiotic use in hospitals. Braz J Infect Dis 2000;4:43-5.

16. Badaro R., Jones T.C. Controlling the spread of microorganisms in the hospital: back to the basics of hand washing and glove use. Braz J Infect Dis 2001;5:47-9.
