
Implementation of evidence into practice: complex, multi-faceted and multi-layered

During the last two decades, academics, healthcare providers and funders, policy makers and those who access our health services have acknowledged the need to ensure that healthcare organizations, systems and practices are evidence-based. Drivers for evidence-based practice quite rightly highlight that implementing clinically and cost-effective care could reduce variation in healthcare outcomes, make better use of finite healthcare resources, and support healthcare systems in which clinicians and patients reach shared decisions about management informed by the best evidence.

Since Sackett et al.(1) published their 1996 British Medical Journal commentary presenting their definition of evidence-based medicine, reports, papers and guidelines have seemingly reflected clinical decision making and the translation of evidence into practice as a linear process: a patient has a health need, which they discuss with a healthcare professional(1). The healthcare professional is aware of an evidence-based intervention, drug or therapy that could meet the health need; the shared decision is made, the intervention is implemented and the health need is met. However, this simplistic view is far from the reality and complexity of our day-to-day dealings with patients and their families in real-life clinical situations. Furthermore, in many areas of health care we struggle to find evidence to support our decision making, as much research output fails to result in worthwhile achievement for its intended users, leading to what has been described as research waste(2).

Those of us who have worked on intervention studies are only too aware that once the active research intervention stops, practice more often than not reverts to what it was before. This could be for myriad reasons, including poor dissemination of findings, other competing healthcare priorities, or too few resources or too little motivation to implement and sustain change in practice. In addition, we have little information on whether differences in outcomes of interest that did occur were as intended, or whether there were additional, unintended impacts of changing practice. For example, in the UK, evidence to restrict routine episiotomy for vaginal birth was implemented following clinical trials which found that routine episiotomy afforded no protective benefit for women’s health(3). Restricting episiotomy was a good thing, as we should not implement unnecessary or potentially harmful care. However, an unexpected consequence is that, as student midwives in the UK are no longer required to perform episiotomy in their training, they have potentially lost the skills and competencies needed to know when episiotomy is indicated and how to perform a correct surgical incision(4). As a consequence, women may sustain more severe spontaneous perineal trauma which could have been prevented if a correctly performed episiotomy had been used. This is one example from one area of practice, but it highlights the complexity of implementation and the need for primary and secondary research to inform how we evaluate implementation and the optimal time to do so.

As there is currently no single method recommended to support successful implementation of evidence into practice, and uncertainty as to the value of tools developed by guideline authors to support implementation of their guidance(5), how can research teams planning studies reduce research waste and influence implementation in ways likely to make a positive difference? It is increasingly recognized as naïve to assume that implementation automatically follows study publication and that improved patient outcomes follow implementation(6). An important step is to ensure the evidence gap we are aiming to fill is of the highest priority for the intended end users of the evidence we generate. These could include patients, clinicians, health service managers or policy makers. Engagement with these groups needs to commence at the outset, with the formulation of the research questions, as failing to ask the right questions is unlikely to generate findings that make a difference. In the UK, initiatives such as The James Lind Alliance (http://www.lindalliance.org) are working to bring patients, carers and clinicians together to identify and prioritize the uncertainties, or unanswered questions, about the effects of treatments that they consider most important; this information is vital to ensure research funders know what matters to patients and clinicians. This approach could ensure that research developments involve all relevant stakeholders and focus on real clinical decisions most likely to have an impact on outcomes that matter.

As well as asking the right study questions, we also need to ensure we include the most appropriate study outcomes, measured and assessed from the perspectives of the intended users of our research. Are our outcomes of interest most relevant for clinicians, for example to improve their surgical skills or their attitudes in patient consultations? Are we looking at saving healthcare resources? Or are we aiming to enhance patient health, satisfaction or experience in the shorter and longer term? As clinicians, we often assume we know what the primary study outcome should be; however, health service users are really the experts in knowing which outcomes have the highest priority, highlighting that when planning research we need to consider our audience. In our recent perineal repair trial(7), as clinicians we assumed pain would be the most important outcome for women who had a second degree tear or an episiotomy. When we asked the women who had sustained this perineal trauma what their priorities were, they were most concerned about perineal wound infection. Pain was a secondary concern.

More national and international research funding bodies are asking research teams to consider in their applications how study findings will be disseminated and implemented. We would also suggest that funders need to consider how best to evaluate the outcomes of implementing evidence-based practice in a way that considers the full cycle of implementation(6). Implementation of evidence is complex, multi-faceted and multi-layered, and interventions need to reflect and take account of context, culture and facilitation to support and sustain research use. What is apparent is that we need to subject the evaluation of implementation outcomes to the same level of rigour as other interventions and procedures, as called for by the evidence-based medicine movement. We also need more methodological work to know when, where and how to assess the outcomes of research use.

References

1. Sackett DL, Rosenberg WM, Gray JA, Haynes RB, Richardson WS. Evidence based medicine: what it is and what it isn’t. BMJ. 1996;312(7023):71-2.
2. Macleod MR, Michie S, Roberts I, Dirnagl U, Chalmers I, Ioannidis JPA, et al. Biomedical research: increasing value, reducing waste. Lancet. 2014;383(9912):101-4.
3. Carroli G, Mignini L. Episiotomy for vaginal birth. Cochrane Database Syst Rev. 2009;(1):CD000081.
4. Bick D. Evidence based midwifery practice: take care to mind the ‘gap’ [editorial]. Midwifery. 2011;27(5):569-70.
5. Flodgren G, Eccles MP, Grimshaw J, Leng GC, Shepperd S. Tools developed and disseminated by guideline producers to promote the uptake of their guidelines (Protocol). Cochrane Database Syst Rev. 2013 Aug.
6. Bick D, Graham I. The importance of addressing outcomes of evidence based practice. In: Bick D, Graham I, editors. Evaluating the impact of implementing evidence based practice. London: Wiley Blackwell; 2010. Chapter 1.
7. Ismail K, Kettle C, Tohill S, Macdonald S, Thomas P, Bick D. Perineal Assessment and Repair Longitudinal Study (PEARLS): a matched-pair cluster randomized trial. BMC Med. 2013;11:209.
