Appraising psychotherapy case studies in practice-based evidence: introducing Case Study Evaluation-tool (CaSE)

Abstract

Systematic case studies are often placed at the low end of evidence-based practice (EBP) due to a lack of critical appraisal. This paper seeks to attend to this research gap by introducing a novel Case Study Evaluation-tool (CaSE). First, issues around knowledge generation and validity are assessed in both the EBP and practice-based evidence (PBE) paradigms. Although systematic case studies are more aligned with the PBE paradigm, the paper argues for a complementary, third-way approach between the two paradigms and their ‘exemplary’ methodologies: case studies and randomised controlled trials (RCTs). Second, the paper argues that all forms of research can produce ‘valid evidence’, but that validity itself needs to be assessed against each specific research method and purpose. Through a comparative tool assessment, existing appraisal tools for qualitative research (JBI, CASP, ETQS) are shown to have limited relevance for the appraisal of systematic case studies. Third, the paper develops purpose-oriented evaluation criteria for systematic case studies through the CaSE Checklist for Essential Components in Systematic Case Studies and the CaSE Purpose-based Evaluative Framework for Systematic Case Studies. The checklist approach aids reviewers in assessing the presence or absence of essential case study components (internal validity). The framework approach aims to assess the effectiveness of each case against its stated research aims and objectives (external validity), based on the different purposes systematic case studies serve in psychotherapy. Finally, the paper demonstrates the application of the tool with a case example and notes further research trajectories for the development of the CaSE tool.

Keywords:
Systematic case studies; Psychotherapy research; Research appraisal tool; Evidence-based practice; Practice-based evidence; Research validity

Introduction

Due to growing demands of evidence-based practice, standardised research assessment and appraisal tools have become common in healthcare and clinical treatment (Hannes, Lockwood, & Pearson, 2010; Hartling, Chisholm, Thomson, & Dryden, 2012; Katrak, Bialocerkowski, Massy-Westropp, Kumar, & Grimmer, 2004). These tools allow researchers to critically appraise research findings on the basis of their validity, results, and usefulness (Hill & Spittlehouse, 2003). Despite the upsurge of critical appraisal in qualitative research (Williams, Boylan, & Nunan, 2019), there are no assessment or appraisal tools designed for psychotherapy case studies.

Although not without controversies (Michels, 2000), case studies remain central to the investigation of psychotherapy processes (Midgley, 2006; Willemsen, Della Rosa, & Kegerreis, 2017). This is particularly true of systematic case studies, the most common form of case study in contemporary psychotherapy research (Davison & Lazarus, 2007; McLeod & Elliott, 2011).

Unlike the classic clinical case study, systematic case studies usually involve a team of researchers who gather data from multiple sources (e.g., questionnaires, observations by the therapist, interviews, statistical findings, clinical assessment) and apply a rigorous data triangulation process to assess whether the data from different sources converge (McLeod, 2010). Since systematic case studies are methodologically pluralistic, they have a greater interest than clinical case studies in situating patients within the study of a broader population (Iwakabe & Gazzola, 2009). Systematic case studies are considered an accessible method for developing the research evidence base in psychotherapy (Widdowson, 2011), especially since they correct some of the methodological limitations (e.g. lack of ‘third party’ perspectives and bias in data analysis) inherent to classic clinical case studies (Iwakabe & Gazzola, 2009). They have been used for the purposes of clinical training (Tuckett, 2008), outcome assessment (Hilliard, 1993), the development of clinical techniques (Almond, 2004) and the meta-analysis of qualitative findings (Timulak, 2009). All these developments signal a revived interest in the case study method, but they also point to the obvious lack of a research assessment tool suitable for case studies in psychotherapy (Table 1).

Table 1
Key concept: systematic case study
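
To make the triangulation step just described concrete, the sketch below shows one crude way a case team might check whether independent data sources converge on the same clinical judgement. It is a hypothetical illustration only: the source names and judgement labels are invented, and actual triangulation in systematic case studies is an interpretive process rather than a vote count.

```python
# Hypothetical illustration of data triangulation: do independent data
# sources converge on the same judgement about therapeutic outcome?

sources = {
    "outcome questionnaire": "improved",
    "therapist observations": "improved",
    "patient interview": "improved",
    "clinical assessment": "no change",
}

distinct_judgements = set(sources.values())
if len(distinct_judgements) == 1:
    print(f"All sources converge on: {distinct_judgements.pop()}")
else:
    # Divergence: list which sources support each judgement.
    for judgement in distinct_judgements:
        holders = [s for s, j in sources.items() if j == judgement]
        print(f"'{judgement}' supported by: {holders}")
```

In this toy example the clinical assessment diverges from the other three sources, which is precisely the kind of discrepancy a research team would flag and explore in the case narrative.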

The development of a research assessment or appraisal tool is a lengthy, ongoing process (Long & Godfrey, 2004). It is particularly challenging to develop a comprehensive purpose-oriented evaluative framework, suitable for the assessment of diverse methodologies, aims and outcomes. As such, this paper should be treated as an introduction to the broader development of the CaSE tool. It will introduce the rationale behind CaSE and lay out its main approach to evidence and evaluation, with further development in mind. A case example from the Single Case Archive (SCA) (https://singlecasearchive.com) will be used to demonstrate the application of the tool ‘in action’. The paper notes further research trajectories and discusses some of the limitations around the use of the tool.

Separating the wheat from the chaff: what is and is not evidence in psychotherapy (and who gets to decide?)

The common approach: evidence-based practice

In the last two decades, psychotherapy has become increasingly centred around the idea of evidence-based practice (EBP). Initially introduced in medicine, EBP has been defined as the ‘conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients’ (Sackett, Rosenberg, Gray, Haynes, & Richardson, 1996). EBP revolves around efficacy research: it seeks to examine whether a specific intervention has a causal (in this case, measurable) effect on clinical populations (Barkham & Mellor-Clark, 2003). From a conceptual standpoint, Sackett and colleagues defined EBP as a paradigm that is inclusive of many methodologies, so long as they contribute towards the clinical decision-making process and the accumulation of the best currently available evidence in any given set of circumstances (Gabbay & le May, 2011). Similarly, the American Psychological Association (APA, 2010) has issued calls for evidence-based systematic case studies in order to produce standardised measures for evaluating process and outcome data across different therapeutic modalities.

To attend to this research gap, this paper first reviews issues around the conceptualisation of validity within the paradigms of evidence-based practice (EBP) and practice-based evidence (PBE). Although case studies are often positioned at the low end of EBP (Aveline, 2005), the paper suggests that systematic case studies are a valuable form of evidence, capable of complementing large-scale studies such as randomised controlled trials (RCTs). However, there remains a difficulty in assessing the quality and relevance of case study findings to broader psychotherapy research.

As a way forward, the paper introduces a novel Case Study Evaluation-tool (CaSE) in the form of the CaSE Purpose-based Evaluative Framework for Systematic Case Studies and the CaSE Checklist for Essential Components in Systematic Case Studies. The long-term development of CaSE would contribute to psychotherapy research and practice in three ways (see Table 2).

Given the significance of methodological pluralism and diverse research aims in systematic case studies, CaSE will not seek to prescribe explicit case study writing guidelines, which has already been done by numerous authors (McLeod, 2010; Meganck, Inslegers, Krivzov, & Notaerts, 2017; Willemsen et al., 2017). Instead, CaSE will enable the retrospective assessment of systematic case study findings and their relevance (or lack thereof) to broader psychotherapy research and practice. That said, there is no reason to assume that CaSE cannot also be used prospectively (i.e. producing systematic case studies in accordance with the CaSE evaluative framework, as per point 3 in Table 2).

Table 2
How can Case Study Evaluation-tool (CaSE) be used in psychotherapy research and practice?

However, given EBP's focus on establishing cause-and-effect relationships (Rosqvist, Thomas, & Truax, 2011), it is unsurprising that qualitative research is generally not considered ‘gold standard’ or ‘efficacious’ within this paradigm (Aveline, 2005; Cartwright & Hardie, 2012; Edwards, 2013; Edwards, Dattilio, & Bromley, 2004; Longhofer, Floersch, & Hartmann, 2017). Qualitative methods like systematic case studies maintain an appreciation for context, complexity and meaning making. Instead of measuring regularly occurring causal relations (as in quantitative studies), the focus is on studying complex social phenomena (e.g. relationships, events, experiences, feelings) (Erickson, 2012; Maxwell, 2004). Edwards (2013) points out that, although context-based research in systematic case studies is the bedrock of psychotherapy theory and practice, it has also become shrouded in an unfortunate ideological label: ‘anecdotal’ case studies (i.e. unscientific narratives lacking evidence, as opposed to ‘gold standard’ evidence, a term often used to describe the RCT method and the therapeutic modalities supported by it). This has created a further need to advocate for, and defend, the unique epistemic process involved in case study research (Fishman, Messer, Edwards, & Dattilio, 2017).

The EBP paradigm prioritises the quantitative approach to causality, most notably through its focus on high generalisability and its capacity to deal with bias through randomisation. These conditions are associated with randomised controlled trials (RCTs) but are limited (or, as some argue, impossible) in qualitative research methods such as the case study (Margison et al., 2000) (Table 3).

Table 3
Key concept: evidence-based practice (EBP)

‘Evidence’ from an EBP standpoint rests on the epistemological assumption of procedural objectivity: knowledge can be generated in a standardised, non-erroneous way, thus producing objective (i.e. minimally biased) data. This can be achieved by anyone, as long as they are able to perform the methodological procedure (e.g. an RCT) appropriately, in a ‘clearly defined and accepted process that assists with knowledge production’ (Douglas, 2004, p. 131). If there is a well-outlined quantitative form for knowledge production, the same outcome should be achieved regardless of who processes or interprets the information. For example, researchers conducting Cochrane Reviews assess the strength of evidence using meticulously controlled and scrupulous techniques; in turn, this minimises individual judgement and creates unanimity of outcomes across different groups of people (Gabbay & le May, 2011). The typical process of knowledge generation (through employing RCTs and procedural objectivity) in EBP is demonstrated in Fig. 1.

Fig. 1
Typical knowledge generation process in evidence-based practice (EBP)
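
In code terms, the procedural objectivity just described treats appraisal as a pure function of the data: given the same inputs and the same protocol, any evaluator reaches the same verdict. The sketch below is a minimal illustration under invented assumptions; the effect sizes and the 0.5 threshold are arbitrary and not drawn from any actual appraisal protocol.

```python
# Hypothetical sketch: procedural objectivity as a fixed protocol whose
# verdict depends only on the data, not on who runs the procedure.

def protocol_verdict(effect_sizes: list[float], threshold: float = 0.5) -> str:
    """Apply a pre-agreed decision rule to trial effect sizes."""
    mean_effect = sum(effect_sizes) / len(effect_sizes)
    return "supported" if mean_effect >= threshold else "not supported"

trial_effects = [0.42, 0.58, 0.61]  # invented effect sizes from three trials

# Two evaluators applying the same protocol to the same data must agree.
verdict_a = protocol_verdict(trial_effects)
verdict_b = protocol_verdict(trial_effects)
assert verdict_a == verdict_b
print(verdict_a)  # "supported"
```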

In EBP, the concept of validity remains somewhat controversial, with many critics stating that it limits rather than strengthens knowledge generation (Berg, 2019; Berg & Slaattelid, 2017; Lilienfeld, Ritschel, Lynn, Cautin, & Latzman, 2013). This is because efficacy research relies on internal validity. At a general level, this concept refers to the congruence between the research study and the research findings (i.e. the research findings were not influenced by anything external to the study, such as confounding variables, methodological errors or bias); at a more specific level, internal validity determines the extent to which a study establishes a reliable causal relationship between an independent variable (e.g. treatment) and a dependent variable (outcome or effect) (Margison et al., 2000). This approach to validity is demonstrated in Fig. 2.

Social scientists have argued that there is a trade-off between research rigour and generalisability: the more specific the sample and the more rigorously defined the intervention, the less applicable the outcome is likely to be to everyday, routine practice. As such, there remains a tension between employing procedural objectivity, which increases the rigour of research outcomes, and applying such outcomes to routine psychotherapy practice, where scientific standards of evidence are not uniform.

According to McLeod (2002), the inability to address questions that are most relevant for practitioners has contributed to a deepening research-practice divide in psychotherapy. Studies investigating how practitioners make clinical decisions, and the kinds of evidence they refer to, show a strong preference for knowledge that is not generated procedurally, i.e. knowledge that encompasses concrete clinical situations, experiences and techniques. A study by Stewart and Chambless (2007) sought to assess how a larger population of clinicians (a sample of 591 APA-affiliated clinicians from varying clinical schools of thought and independent practices) make treatment decisions in private practice. The study found that large-scale statistical data was not the primary source of information sought by clinicians. The most important influences were past clinical experiences and clinical expertise (M = 5.62). Treatment materials based on clinical case observations and theory (M = 4.72) were used almost as frequently as psychotherapy outcome research findings (M = 4.80) (i.e. evidence-based research). These numbers are likely to fluctuate across different forms of psychotherapy; however, they are indicative of the need for research on routine clinical settings that does not isolate or generalise the effect of an intervention but examines the variations in psychotherapy processes.

Fig. 2
Internal validity

The alternative approach: practice-based evidence

In an attempt to dissolve or lessen the research-practice divide, an alternative paradigm of practice-based evidence (PBE) has been suggested (Barkham & Mellor-Clark, 2003; Fox, 2003; Green & Latchford, 2012; Iwakabe & Gazzola, 2009; Laska, Gurman, & Wampold, 2014; Margison et al., 2000). PBE represents a shift in how we think about evidence and knowledge generation in psychotherapy. PBE treats research as a local and contingent process (at least initially), which means it focuses on variations (e.g. in patient symptoms) and complexities (e.g. of the clinical setting) in the studied phenomena (Fox, 2003). Moreover, research and theory-building are seen as complementary to clinical practice rather than detached from it. That is to say, PBE seeks to examine how and which treatments can be improved in everyday clinical practice by flagging up clinically salient issues and developing clinical techniques (Barkham & Mellor-Clark, 2003). For this reason, PBE is concerned with the effectiveness of research findings: it evaluates how well interventions work in real-world settings (Rosqvist et al., 2011). Therefore, although it is not unlikely for RCTs to be used to generate practice-informed evidence (Horn & Gassaway, 2007), qualitative methods like the systematic case study are seen as ideal for demonstrating the effectiveness of therapeutic interventions with individual patients (van Hennik, 2020) (Table 4).

Table 4
Key concept: practice-based evidence (PBE)

PBE's epistemological approach to ‘evidence’ may be understood through the process of concordant objectivity (Douglas, 2004): ‘Instead of seeking to eliminate individual judgment, … [concordant objectivity] checks to see whether the individual judgments of people in fact do agree’ (p. 462). This does not mean that anyone can contribute to the evaluation process, as in procedural objectivity, where the main criterion is following a set quantitative protocol or knowing how to operate a specific research design. Rather, concordant objectivity requires a set of competent observers who are closely familiar with the studied phenomenon (e.g. researchers and practitioners who are familiar with depression from a variety of therapeutic approaches).
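
A rough computational analogue: where procedural objectivity fixes the protocol, concordant objectivity checks the degree of agreement among independent competent judges. The sketch below is illustrative only; the judges and their ratings are invented.

```python
# Hypothetical illustration: concordant objectivity as agreement among
# competent observers' independent judgements of the same phenomenon.
from collections import Counter

judgements = {
    "clinician A": "moderate depression",
    "clinician B": "moderate depression",
    "researcher C": "moderate depression",
    "researcher D": "mild depression",
}

# The modal judgement and the share of observers who concur with it.
modal_judgement, n_agree = Counter(judgements.values()).most_common(1)[0]
agreement_rate = n_agree / len(judgements)
print(f"Modal judgement: {modal_judgement} (agreement: {agreement_rate:.0%})")
```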

Systematic case studies are a good example of PBE ‘in action’: they allow for the examination of the detailed unfolding of events in psychotherapy practice, making them the most pragmatic and practice-oriented form of psychotherapy research (Fishman, 1999, 2005). Furthermore, systematic case studies approach evidence and results through concordant objectivity (Douglas, 2004) by involving a team of researchers and rigorous data triangulation processes (McLeod, 2010). This means that, although systematic case studies remain focused on particular clinical situations and detailed subjective experiences (similar to classic clinical case studies; see Iwakabe & Gazzola, 2009), they still involve a series of validity checks and considerations of how findings from a single systematic case pertain to broader psychotherapy research (Fishman, 2005). The typical process of knowledge generation (through employing systematic case studies and concordant objectivity) in PBE is demonstrated in Fig. 3. The figure exemplifies a bidirectional approach to research and practice, which includes the development of research-supported psychological treatments (through systematic reviews of existing evidence) as well as the incorporation of clinical practitioners' perspectives in the research process (through the study of local and contingent patient and/or treatment processes) (Teachman et al., 2012; Westen, Novotny, & Thompson-Brenner, 2004).

Fig. 3
Typical knowledge generation process in practice-based evidence (PBE)

From a PBE standpoint, external validity is a desirable research condition: it measures the extent to which the impact of interventions applies to real patients and therapists in everyday clinical settings. As such, external validity is not based on the strength of causal relationships between treatment interventions and outcomes (as in internal validity); instead, the use of specific therapeutic techniques and problem-solving decisions is considered important for generalising findings onto routine clinical practice (even if the findings are explicated from a single case study; see Aveline, 2005). This approach to validity is demonstrated in Fig. 4.

Fig. 4
External validity

Since effectiveness research is less focused on limiting the context of the studied phenomenon (indeed, explicating the context is often one of the research aims), there is more potential for confounding factors (e.g. bias and uncontrolled variables), which in turn can reduce a study's internal validity (Barkham & Mellor-Clark, 2003). This is also an important challenge for research appraisal. Douglas (2004) argues that appraising research in terms of its effectiveness may produce significant disagreements or group illusions, since what might work for some practitioners may not work for others: ‘It cannot guarantee that values are not influencing or supplanting reasoning; the observers may have shared values that cause them to all disregard important aspects of an event’ (Douglas, 2004, p. 462). Douglas further proposes that an interactive approach to objectivity may be employed as a more complex process for debating the evidential quality of a research study: it requires discussion among observers and evaluators in the form of peer review and scientific discourse, as well as research appraisal tools and instruments. While these processes of rigour are also applied in EBP, there appears to be much more space for debate, disagreement and interpretation in PBE's approach to research evaluation, partly because the evaluation criteria themselves are a subject of methodological debate and are often employed in different ways by researchers (Williams et al., 2019). This issue is addressed more explicitly in relation to CaSE development in the ‘Developing purpose-oriented evaluation criteria for systematic case studies’ section.

A third way approach to validity and evidence

The research-practice divide shows us that there may be something significant in establishing complementarity between EBP and PBE rather than treating them as mutually exclusive forms of research (Fishman et al., 2017). For one, EBP is not a sufficient condition for delivering research relevant to practice settings (Bower, 2003). While RCTs can demonstrate that an intervention works on average in a group, clinicians who are facing individual patients need to answer a different question: how can I make therapy work with this particular case? (Cartwright & Hardie, 2012). Systematic case studies are ideal for filling this gap: they contain descriptions of microprocesses (e.g. patient symptoms, therapeutic relationships, therapist attitudes) in psychotherapy practice that are often overlooked in large-scale RCTs (Iwakabe & Gazzola, 2009). In particular, systematic case studies describing the use of specific interventions with less researched psychological conditions (e.g. childhood depression or complex post-traumatic stress disorder) can deepen practitioners' understanding of effective clinical techniques before the results of large-scale outcome studies are disseminated.

Secondly, establishing a working relationship between systematic case studies and RCTs will contribute towards a more pragmatic understanding of validity in psychotherapy research. Indeed, the very tension and so-called trade-off between internal and external validity is based on the assumption that research methods are designed on an either/or basis: either they provide a sufficiently rigorous study design, or they produce findings that can be applied to real-life practice. Jimenez-Buedo and Miller (2010) call this assumption into question: in their view, if a study is not internally valid, then ‘little, or rather nothing, can be said of the outside world’ (p. 302). In this sense, internal validity may be seen as a prerequisite for any form of applied research and its external validity, but it need not be constrained to the quantitative approach to causality. For example, Levitt, Motulsky, Wertz, Morrow, and Ponterotto (2017) argue that what is typically conceptualised as internal validity is, in fact, a much broader construct, involving the assessment of how well the research method (whether qualitative or quantitative) is suited to the research goal, and whether it obtains the relevant conclusions. Similarly, Truijens, Cornelis, Desmet, and De Smet (2019) suggest that we should think about validity in a broader epistemic sense: not just in terms of psychometric measures, but also in terms of the research design, procedure, goals (research questions), approaches to inquiry (paradigms, epistemological assumptions), and so on.

The overarching argument from the research cited above is that all forms of research, qualitative and quantitative, can produce ‘valid evidence’, but validity itself needs to be assessed against each specific research method and purpose. For example, RCTs are accompanied by a variety of clearly outlined appraisal tools and instruments, such as CASP (Critical Appraisal Skills Programme), that are well suited to the assessment of RCT validity and its implications for EBP. Systematic case studies (or case studies more generally) currently have no such appraisal tools in any discipline. The next section evaluates whether existing qualitative research appraisal tools are relevant for systematic case studies in psychotherapy and specifies the missing evaluative criteria.

The relevance of existing appraisal tools for qualitative research to systematic case studies in psychotherapy

What is a research tool?

Currently, there are several research appraisal tools, checklists and frameworks for qualitative studies. It is important to note that tools, checklists and frameworks are not equivalent to one another but refer to different approaches to appraising the validity of a research study. As such, it is erroneous to assume that all forms of qualitative appraisal share the same aims and methods (Hannes et al., 2010; Williams et al., 2019).

Generally, research assessment falls into two categories: checklists and frameworks. Checklist approaches are typically associated with quantitative research, since the focus is on assessing the internal validity of the research (i.e. the researcher's independence from the study). This involves the assessment of bias in sampling, participant recruitment, data collection and analysis. Framework approaches to research appraisal, on the other hand, revolve around traditional qualitative concepts such as transparency, reflexivity, dependability and transferability (Williams et al., 2019). Framework approaches are often challenging to use because they depend on the reviewer's familiarity with, and interpretation of, these qualitative concepts.
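
The contrast between the two appraisal styles can be sketched as data structures. In the sketch below, the item and concept wordings are hypothetical (they are not quoted from CASP, JBI or ETQS): a checklist reduces to discrete present/absent items that can simply be counted, whereas a framework collects interpretive judgements that the reviewer must weigh.

```python
# Illustrative contrast between checklist- and framework-style appraisal.
# Item and concept wordings are hypothetical, not quoted from any real tool.

checklist = {                       # checklist: discrete yes/no items
    "sampling strategy reported": True,
    "recruitment procedure reported": False,
    "data analysis method reported": True,
}

framework = {                       # framework: interpretive concepts
    "transparency": "Analytic steps described, but coding decisions not shown.",
    "reflexivity": "No statement on the researcher's influence on the data.",
}

# A checklist verdict is mechanical...
print(f"{sum(checklist.values())}/{len(checklist)} checklist items present")

# ...whereas framework judgements require the reviewer's interpretation.
for concept, note in framework.items():
    print(f"{concept}: {note}")
```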

Because of these different approaches, there is some ambiguity in terminology, particularly between research appraisal instruments and research appraisal tools. These terms are often used interchangeably in the appraisal literature (Williams et al., 2019). In this paper, a research appraisal tool is defined as a method-specific form of appraisal (i.e. one that identifies a specific research method or component) that draws from both checklist and framework approaches. Furthermore, a research appraisal tool seeks to inform decision-making in the EBP or PBE paradigms and provides explicit definitions of the tool's evaluative framework (thus minimising, but by no means eliminating, the reviewers' interpretation of the tool). This definition will be applied to CaSE (Table 5).

Table 5
Key concept: research appraisal tool

In contrast, research appraisal instruments are generally seen as a broader form of appraisal, in the sense that they may evaluate a variety of methods (i.e. they are not method-specific and do not target a particular research component) and are aimed at checking whether the research findings and/or the study design contain specific elements (e.g. the aims of research, the rationale behind the design methodology, participant recruitment strategies, etc.).

There is often an implicit difference in audience between appraisal tools and instruments. Research appraisal instruments are often aimed at researchers who want to assess the strength of their own study; however, the process of appraisal may not be made explicit in the study itself (beyond mentioning that the instrument was used to appraise the study). Research appraisal tools are aimed at researchers who wish to explicitly demonstrate the evidential quality of their study to readers (which is particularly common in RCTs). All forms of appraisal used in the comparative exercise below are referred to as ‘tools’, even though they differ in their appraisal approaches and aims.

Comparing different qualitative tools

Hannes et al. (2010) identified CASP (Critical Appraisal Skills Programme-tool), JBI (Joanna Briggs Institute-tool) and ETQS (Evaluation Tool for Qualitative Studies) as the critical appraisal tools most frequently used by qualitative researchers. All three tools are available online and free of charge, which means that any researcher or reviewer can readily apply the CASP, JBI or ETQS evaluative frameworks to their research. Furthermore, all three tools were developed within the context of organisational, institutional or consortium support (Tables 6, 7 and 8).

Table 6
CASP (Critical Appraisal Skills Programme-tool)
Table 7
JBI (Joanna Briggs Institute-tool)
Table 8
ETQS (Evaluation Tool for Qualitative Studies)

It is important to note that none of the three tools is specific to systematic case studies or to psychotherapy case studies more broadly (which would include not only systematic but also experimental and clinical cases). This means that using CASP, JBI or ETQS for case study appraisal may come at the cost of overlooking elements and components specific to the systematic case study method.

Based on Hannes et al.'s (2010) comparative study of qualitative appraisal tools, as well as the different evaluation criteria explicated in the CASP, JBI and ETQS evaluative frameworks, I assessed how well each of the three tools is attuned to the methodological, clinical and theoretical aspects of systematic case studies in psychotherapy. The latter components were based on the case study guidelines featured in the journal Pragmatic Case Studies in Psychotherapy, as well as on components commonly used in published systematic case studies across a variety of other psychotherapy journals (e.g. Psychotherapy Research; Research in Psychotherapy: Psychopathology, Process and Outcome) (see Table 9 for detailed descriptions of each component).

Table 9
Comparing the relevance of JBI (Joanna Briggs Institute), CASP (Critical Appraisal Skills Program) and ETQS (Evaluation Tool for Qualitative Studies) for appraising components specific to systematic case studies

The evaluation criteria for each tool in Table 9 follow the Joanna Briggs Institute (JBI) (2017a, 2017b) checklists for qualitative research and for case reports, the Critical Appraisal Skills Programme (CASP) (2018) qualitative checklist, and the ETQS Questionnaire (first published in 2004 but revised continuously since). Table 10 demonstrates how each tool should be used (i.e. the recommended reviewer responses to checklists and questionnaires).

Table 10
Recommended reviewer responses to JBI (Joanna Briggs Institute), CASP (Critical Appraisal Skills Program) and ETQS (Evaluation Tool for Qualitative Studies)

Using CASP, JBI and ETQS for systematic case study appraisal

Although JBI, CASP and ETQS were all developed to appraise qualitative research, it is evident from the above comparison that there are significant differences between the three tools. For example, JBI and ETQS are well suited to assessing the researcher's interpretations (Hannes et al. (2010) defined this as interpretive validity, a subcategory of internal validity): the researcher's ability to portray, understand and reflect on the research participants' experiences, thoughts, viewpoints and intentions. JBI has an explicit requirement for participant voices to be clearly represented, whereas ETQS involves a set of questions about key characteristics of events, persons, times and settings that are relevant to the study. Furthermore, both JBI and ETQS seek to assess the researcher's influence on the research, with ETQS particularly focusing on the evaluation of reflexivity (the researcher's personal influence on the interpretation and collection of data). These elements are absent from, or addressed to a lesser extent in, the CASP tool.

The appraisal of the transferability of findings (what this paper previously referred to as external validity) is addressed only by ETQS and CASP. Both tools include detailed questions about the value of research to practice and policy, as well as about its transferability to other populations and settings. Methodological aspects of research are also extensively addressed by CASP and ETQS, but less so by JBI (which relies predominantly on congruity between research methodology and objectives, without any particular assessment criteria for other data sources and/or data collection methods). Finally, the evaluation of theoretical aspects (referred to by Hannes et al. (2010) as theoretical validity) is addressed only by JBI and ETQS; CASP has no assessment criteria for the theoretical framework.

Given these differences, it is unsurprising that CASP, JBI and ETQS have limited relevance for systematic case studies in psychotherapy. First, it is evident that none of the three tools has specific evaluative criteria for the clinical component of systematic case studies. Although JBI and ETQS feature some relevant questions about participants and their context, the conceptualisation of patients (and/or clients) in psychotherapy involves other kinds of data elements (e.g. diagnostic tools and questionnaires, as well as therapist observations) that go beyond the usual participant data. Furthermore, much of the clinical data is intertwined with the therapist's clinical decision-making and thinking style (Kaluzeviciute & Willemsen, 2020). As such, there is a need to appraise patient data and therapist interpretations not only separately, but also as two forms of knowledge that are deeply intertwined in the case narrative.

Secondly, since systematic case studies involve various forms of data, there is a need to appraise how these data converge (or how different methods complement one another in the case context) and how they can be transferred or applied to broader psychotherapy research and practice. These systematic case study components are attended to, to a degree, by CASP (which is particularly attentive to methodological components) and ETQS (which has specific criteria for research transferability onto policy and practice); they are addressed less explicitly, or not at all, by JBI. Overall, none of the tools is attuned to all the methodological, theoretical and clinical components of the systematic case study. Specifically, there are no clear evaluation criteria for the description of research teams (i.e. different data analysts and/or clinicians), the suitability of the systematic case study method, the description of the patient's clinical assessment, the use of other methods or data sources, or the general data about therapeutic progress.

Finally, there is something to be said about the recommended reviewer responses (Table 10). Systematic case studies can vary significantly in their formulation and purpose. The methodological, theoretical and clinical components outlined in Table 9 follow guidelines issued by case study journals; however, these are recommendations, not ‘set in stone’ case templates. For this reason, the straightforward checklist approaches adopted by JBI and CASP may be difficult to use for case study researchers and those reviewing case study research. The open-ended questionnaire approach of ETQS, suggested by Long and Godfrey (2004), enables a comprehensive, detailed and purpose-oriented assessment, suitable for the evaluation of systematic case studies. That said, there remains the challenge of reducing the space left for the interpretation of evaluative criteria (Williams et al., 2019). A combination of checklist and framework approaches would therefore provide a more stable appraisal process across different reviewers.

Developing purpose-oriented evaluation criteria for systematic case studies

The starting point in developing evaluation criteria for the Case Study Evaluation-tool (CaSE) is addressing the significance of pluralism in systematic case studies. Unlike RCTs, systematic case studies are pluralistic in the sense that they employ divergent practices in their methodological procedures (the research process) and may pursue significantly different research aims and purposes (the end goal) (Kaluzeviciute & Willemsen, 2020). While some systematic case studies will have an explicit intention to conceptualise and situate a single patient's experiences and symptoms within a broader clinical population, others will focus on the exploration of phenomena as they emerge from the data. It is therefore important that CaSE is positioned within a purpose-oriented evaluative framework, suitable for assessing what each systematic case is good for (rather than determining an absolute measure of ‘good’ and ‘bad’ systematic case studies). This approach to evidence and appraisal is in line with the PBE paradigm, which emphasises the study of clinical complexities and variations through local and contingent settings (e.g. single case studies) and promotes methodological pluralism (Barkham & Mellor-Clark, 2003).

CaSE checklist for essential components in systematic case studies

In order to conceptualise purpose-oriented appraisal questions, we must first look at what unites and differentiates systematic case studies in psychotherapy. The commonly used theoretical, clinical and methodological systematic case study components were identified earlier in Table 9. In the CaSE evaluative criteria, these components are treated as essential and common to most systematic case studies. If these essential components are missing from a systematic case study, this implies a lack of information, which in turn diminishes the evidential quality of the case. As such, the checklist serves as a tool for checking whether a case study is, indeed, systematic (as opposed to experimental or clinical; see Iwakabe & Gazzola, 2009, for further differentiation between methodologically distinct case study types) and should be used before the CaSE Purpose-based Evaluative Framework for Systematic Case Studies (which is designed for the appraisal of the different purposes common to systematic case studies).

As noted earlier in the paper, checklist approaches to appraisal are useful when evaluating the presence or absence of specific information in a research study. This approach can be used to appraise essential components in systematic case studies, as shown below. From a pragmatic point of view (Levitt et al., 2017; Truijens et al., 2019), the CaSE Checklist for Essential Components in Systematic Case Studies can be seen as a way to ensure the internal validity of a systematic case study: the reviewer assesses whether sufficient information is provided about the case design, procedure, approaches to inquiry, etc., and whether they are relevant to the researcher’s objectives and conclusions (Table 11).

Table 11
Case Study Evaluation-tool (CaSE) checklist for essential components in systematic case studies. Recommended responses: Yes, No, unclear or not applicable
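
Although CaSE itself is a qualitative, paper-based instrument, the checklist logic (recording one response per essential component and flagging gaps) is mechanical enough to sketch in code. The following minimal Python illustration assumes the recommended responses listed under Table 11; the component names in the example are invented placeholders, not the published checklist wording:

```python
from dataclasses import dataclass, field

# Recommended responses from Table 11.
RESPONSES = {"yes", "no", "unclear", "not applicable"}

@dataclass
class ChecklistItem:
    component: str  # an essential case study component
    response: str   # one of RESPONSES

@dataclass
class CaseChecklist:
    case_id: str
    items: list = field(default_factory=list)

    def record(self, component: str, response: str) -> None:
        response = response.lower()
        if response not in RESPONSES:
            raise ValueError(f"response must be one of {sorted(RESPONSES)}")
        self.items.append(ChecklistItem(component, response))

    def missing_components(self) -> list:
        # Absent or unclear essential components diminish the
        # evidential quality of the case (internal validity).
        return [i.component for i in self.items if i.response in {"no", "unclear"}]

# Example (component names are illustrative, not the Table 11 wording):
review = CaseChecklist("Lunn-2016")
review.record("Rationale for choosing the case study method", "yes")
review.record("Description of the patient's clinical assessment", "unclear")
print(review.missing_components())
```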

CaSE purpose-based evaluative framework for systematic case studies

Identifying differences between systematic case studies means identifying the different purposes systematic case studies serve in psychotherapy. Based on the earlier work of social scientist Yin (1984, 1993), we can differentiate between exploratory (hypothesis-generating, indicating a beginning phase of research), descriptive (particularising case data as it emerges) and representative (a case that is typical of a broader clinical population, referred to as the ‘explanatory case’ by Yin) cases.

Another increasingly significant strand of systematic case studies is transferable (aggregating and transferring case study findings) cases. These cases are based on the process of meta-synthesis (Iwakabe & Gazzola, 2009): by examining processes and outcomes in many different case studies dealing with similar clinical issues, researchers can identify common themes and inferences. In this way, single case studies that have relatively little impact on clinical practice, research or healthcare policy (in the sense that they capture psychotherapy processes rather than produce generalisable claims, as in Yin’s representative case studies) can contribute to the generation of a wider knowledge base in psychotherapy (Iwakabe, 2003, 2005). However, there is an ongoing issue of assessing the evidential quality of such transferable cases. According to Duncan and Sparks (2020), although meta-synthesis and meta-analysis are considered the ‘gold standard’ for assessing interventions across disparate studies in psychotherapy, they often contain case studies with significant research limitations, inappropriate interpretations and insufficient information. It is therefore important to have a research appraisal process in place for selecting transferable case studies.

Two other types of systematic case study research include: critical (testing and/or confirming existing theories) cases, which are described as an excellent method for falsifying existing theoretical concepts and testing whether therapeutic interventions work in practice with concrete patients (Kaluzeviciute, 2021), and unique (going beyond the ‘typical’ case and demonstrating deviations) cases (Merriam, 1998). These two systematic case study types are often seen as less valuable for psychotherapy research, given that unique or falsificatory findings are difficult to generalise. Yet practitioners and researchers in our field clearly seek out context-specific data, as well as detailed information on the effectiveness of therapeutic techniques in single cases (Stiles, 2007) (Table 12).

Table 12
Key concept: purpose-based systematic case studies
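
Because the six purpose labels recur throughout the framework, it may help to see the type–purpose pairings (as summarised in the concluding comments below) laid out explicitly. A minimal Python sketch, for orientation only; it is not part of the published tool:

```python
from enum import Enum

class CasePurpose(Enum):
    # The six purpose-based systematic case study types and their end-goals.
    REPRESENTATIVE = "typicality"            # typical of a broader clinical population
    DESCRIPTIVE = "particularity"            # realistic snapshot of therapeutic processes
    UNIQUE = "deviation"                     # departures from the 'typical' case
    CRITICAL = "falsification/confirmation"  # testing existing theories in practice
    EXPLORATORY = "hypothesis generation"    # beginning phase of research
    TRANSFERABLE = "generalisability"        # aggregation through meta-synthesis
```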

Each purpose-based case study contributes to PBE in different ways. Representative cases provide qualitatively rich, in-depth data about a clinical phenomenon within its particular context. This offers other clinicians and researchers access to a ‘closed world’ (Mackrill & Iwakabe, 2013) containing a wide range of attributes about a conceptual type (e.g. a clinical condition or therapeutic technique). Descriptive cases generally seek to demonstrate a realistic snapshot of therapeutic processes, including complex dynamics in therapeutic relationships and instances of therapeutic failure (Maggio, Molgora, & Oasi, 2019). Data in descriptive cases should be presented transparently (e.g. if there are issues in standardising patient responses to a self-report questionnaire, this should be made explicit). Descriptive cases are commonly used in psychotherapy training and supervision. Unique cases are relevant for both clinicians and researchers: they often contain novel treatment approaches and/or introduce new diagnostic considerations about patients who deviate from the clinical population. Critical cases demonstrate the application of psychological theories ‘in action’ with particular patients; as such, they are relevant to clinicians, researchers and policymakers (Mackrill & Iwakabe, 2013). Exploratory cases bring new insights and observations into clinical practice and research, which is particularly useful when comparing (or introducing) different clinical approaches and techniques (Trad & Raine, 1994); their findings often include suggestions for future research. Finally, transferable cases offer one solution to the generalisation issue in psychotherapy research through the previously mentioned process of meta-synthesis. Grouped together, transferable cases can contribute to theory building and development, as well as to higher levels of abstraction about a chosen area of psychotherapy research (Iwakabe & Gazzola, 2009).

With this plurality in mind, it is evident that CaSE has the challenging task of appraising research components that are distinct across six different types of purpose-based systematic case studies. The purpose-specific evaluative criteria in Table 13 were developed in close consultation with the epistemological literature associated with each type of case study, including: Yin’s (1984, 1993) work on establishing the typicality of representative cases; Duncan and Sparks’ (2020) and Iwakabe and Gazzola’s (2009) case selection criteria for meta-synthesis and meta-analysis; Stake’s (1995, 2010) research on particularising case narratives; Merriam’s (1998) guidelines on the distinctive attributes of unique case studies; Kennedy’s (1979) epistemological rules for generalising from case studies; Mahrer’s (1988) discovery-oriented case study approach; and Edelson’s (1986) guidelines for rigorous hypothesis generation in case studies.

Table 13
Case Study Evaluation-tool (CaSE) purpose-based evaluative framework for systematic case studies. Recommended responses: open-ended questionnaire

Research on epistemic issues in case writing (Kaluzeviciute, 2021) and on the different forms of scientific thinking in psychoanalytic case studies (Kaluzeviciute & Willemsen, 2020) was also used to identify case study components that would help improve therapists’ clinical decision-making and reflexivity.

For the analysis of more complex research components (e.g. the degree of therapist reflexivity), the purpose-based evaluation utilises a framework approach, in line with the comprehensive, open-ended reviewer responses in ETQS (Evaluation Tool for Qualitative Studies) (Long & Godfrey, 2004) (Table 13). That is to say, the evaluation here is not so much about the presence or absence of information (as in the checklist approach) but about the degree to which the information helps the case fulfil its unique purpose, whether that is generalisability or typicality. Therefore, although the purpose-oriented evaluation criteria below encompass comprehensive questions at a considerable level of generality (in the sense that not all components may be required or relevant for each case study), they nevertheless engage with each type of purpose-based systematic case study on an individual basis (attending to research or clinical components that are unique to each type of case study).
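
To make the contrast with the checklist concrete, an ETQS-style framework evaluation can be pictured as free-text judgements keyed to purpose-specific questions, rather than fixed yes/no responses. A minimal sketch, in which the question wording merely paraphrases the framework’s intent (the authoritative questions are those in Table 13):

```python
from dataclasses import dataclass

@dataclass
class FrameworkResponse:
    question: str   # purpose-specific core question (see Table 13)
    judgement: str  # open-ended reviewer judgement, ETQS-style

# Hypothetical excerpt from a review of a critical (theory-testing) case;
# question wording is illustrative, not the published Table 13 text.
critical_case_review = [
    FrameworkResponse(
        question="Which theoretical claim is the case testing, and is it stated explicitly?",
        judgement="The tested theoretical suggestion is stated in the research objectives ...",
    ),
    FrameworkResponse(
        question="Is the clinical material detailed enough to confirm or falsify the claim?",
        judgement="Session-level data are reported, allowing the reader to trace the inference ...",
    ),
]

for response in critical_case_review:
    print(f"Q: {response.question}\nA: {response.judgement}\n")
```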

It is important to note that, as this is an introductory paper to CaSE, the evaluative framework is still preliminary: it involves some of the core questions that pertain to the nature of all six purpose-based systematic case studies. However, there is a need to develop a more comprehensive and detailed CaSE appraisal framework for each purpose-based systematic case study in the future.

Using CaSE on published systematic case studies in psychotherapy: an example

To illustrate the use of the CaSE Purpose-based Evaluative Framework for Systematic Case Studies, a case study by Lunn, Daniel, and Poulsen (2016) titled ‘Psychoanalytic Psychotherapy With a Client With Bulimia Nervosa’ was selected from the Single Case Archive (SCA) and analysed in Table 14. Based on the core questions associated with the six purpose-based systematic case study types in Table 13 (1 to 6), the purpose of Lunn et al.’s (2016) case was identified as critical (testing an existing theoretical suggestion).

Sometimes, case study authors explicitly define the purpose of their case in the form of research objectives (as was the case in Lunn et al.’s study); this helps identify which purpose-based questions are most relevant for the evaluation of the case. However, some case studies require comprehensive analysis in order to identify their purpose (or multiple purposes). As such, it is recommended that CaSE reviewers first assess the degree and manner in which information about the studied phenomenon, patient data, clinical discourse and research is presented before deciding on the case purpose.

Although each purpose-based systematic case study will contribute to different strands of psychotherapy (theory, practice, training, etc.) and focus on different forms of data (e.g. theory testing vs extensive clinical description), the overarching aim across all systematic case studies in psychotherapy is to study local and contingent processes, such as variations in patient symptoms and the complexities of the clinical setting. The comprehensive framework approach therefore allows reviewers to assess the degree of external validity in systematic case studies (Barkham & Mellor-Clark, 2003). Furthermore, assessing the case against its purpose lets reviewers determine whether the case achieves its set goals (research objectives and aims). The example below shows that Lunn et al.’s (2016) case functions successfully as a critical case, as the authors provide relevant, high-quality information about the therapeutic conditions they tested.

Finally, it is also possible to use CaSE to gather specific types of systematic case studies for one’s research, practice, training, etc. For example, a CaSE reviewer might want to identify as many descriptive case studies focusing on negative therapeutic relationships as possible for their clinical supervision. The reviewer therefore only needs to refer to the CaSE questions in Table 13(2) on descriptive cases (a search of this kind is sketched below). If the reviewed cases do not align with the questions in Table 13(2), then they are not suitable for the CaSE reviewer who is looking for “know-how” knowledge and detailed clinical narratives.
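
As a rough illustration of such a search, reviewed cases could be tagged with the purpose identified through CaSE and then filtered. A minimal sketch with invented case titles; the purpose labels follow Table 12:

```python
# Illustrative catalogue: case titles are hypothetical; each case has
# already had its purpose identified through the CaSE framework.
candidates = [
    ("Case A (hypothetical)", "descriptive"),
    ("Case B (hypothetical)", "critical"),
    ("Case C (hypothetical)", "descriptive"),
]

# Keep only descriptive cases; each retained case should then be
# appraised against the Table 13(2) questions before use in supervision.
for title, purpose in candidates:
    if purpose == "descriptive":
        print(f"Appraise against Table 13(2): {title}")
```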

Concluding comments

This paper introduces a novel Case Study Evaluation-tool (CaSE) for systematic case studies in psychotherapy. Unlike most appraisal tools in EBP, CaSE is positioned within purpose-oriented evaluation criteria, in line with the PBE paradigm. CaSE enables reviewers to assess what each systematic case is good for (rather than determining an absolute measure of ‘good’ and ‘bad’ systematic case studies). In order to explicate a purpose-based evaluative framework, six different systematic case study purposes in psychotherapy have been identified: representative cases (purpose: typicality), descriptive cases (purpose: particularity), unique cases (purpose: deviation), critical cases (purpose: falsification/confirmation), exploratory cases (purpose: hypothesis generation) and transferable cases (purpose: generalisability). Each case type was linked to an existing epistemological network, such as Iwakabe and Gazzola’s (2009) work on case selection criteria for meta-synthesis. The framework approach includes core questions specific to each purpose-based case study (Table 13 (1–6)). The aim is to assess the external validity and effectiveness of each case study against its stated research objectives and aims. Reviewers are required to perform a comprehensive and open-ended data analysis, as shown in the example in Table 14.

Table 14
Using Case Study Evaluation-tool (CaSE): Lunn et al.’s (2016) case ‘Psychoanalytic psychotherapy with a client with bulimia nervosa’

Along with the CaSE Purpose-based Evaluative Framework (Table 13), the paper also developed the CaSE Checklist for Essential Components in Systematic Case Studies (Table 11). The checklist approach is meant to aid reviewers in assessing the presence or absence of essential case study components, such as the rationale behind choosing the case study method and the description of the patient’s history. If essential components are missing from a systematic case study, this implies a lack of information, which in turn diminishes the evidential quality of the case. Following the broader definitions of validity set out by Levitt et al. (2017) and Truijens et al. (2019), it could be argued that the checklist approach allows for the assessment of (non-quantitative) internal validity in systematic case studies: does the researcher provide sufficient information about the case study design, rationale, research objectives, epistemological/philosophical paradigms, assessment procedures, data analysis, etc., to account for their research conclusions?

It is important to note that this paper serves as an introduction to CaSE and, by extension, as an introduction to research evaluation and appraisal processes for case study researchers in psychotherapy. As such, it was important to provide a step-by-step epistemological rationale for, and account of, the development of the CaSE evaluative framework and checklist. However, this also means that further research needs to be conducted in order to develop the tool. While the CaSE Purpose-based Evaluative Framework involves some of the core questions that pertain to all six purpose-based systematic case studies, individual, comprehensive CaSE evaluative frameworks need to be developed for each of the purpose-based systematic case study types in the future. This line of research is likely to broaden CaSE’s target audience: clinicians interested in reviewing highly particular clinical narratives will attend to descriptive case study appraisal frameworks; researchers working with qualitative meta-synthesis will find transferable case study appraisal frameworks most relevant to their work; while teachers on psychotherapy and counselling modules may seek out unique case study appraisal frameworks.

Furthermore, although the CaSE Checklist for Essential Components in Systematic Case Studies and the CaSE Purpose-based Evaluative Framework for Systematic Case Studies are presented in a comprehensive, detailed manner, with definitions and examples that should give reviewers a good grasp of the appraisal process, different reviewers are still likely to have different interpretations of what counts as ‘substantial’ case study data. This is due, in part, to the methodologically pluralistic nature of the case study genre itself: what is relevant for one case study may not be relevant for another, and vice versa. To aid the review process, future research on CaSE should include a comprehensive paper on using the tool, involving evaluation examples for all six purpose-based systematic case studies as well as a ‘search’ exercise (using CaSE to assess the relevance of case studies for one’s research, practice, training, etc.).

Finally, further research is needed on how (and, indeed, whether) systematic case studies should be reviewed with specific ‘grades’ or ‘assessments’ that go beyond the qualitative examination in Table 14. This would be particularly significant for the processes of qualitative meta-synthesis and meta-analysis. These research developments will further enhance the CaSE tool and, in turn, enable psychotherapy researchers to appraise their findings within clear, purpose-based evaluative criteria appropriate for systematic case studies.

  • Funding
    Arts and Humanities Research Council (AHRC) and Consortium for Humanities and the Arts South-East England (CHASE) Doctoral Training Partnership, Award Number [AH/L503861/1].
  • Availability of data and materials
    Not applicable.
  • Declarations
  • Publisher's Note
    Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Acknowledgments

I would like to thank Prof Jochem Willemsen (Faculty of Psychology and Educational Sciences, Université catholique de Louvain-la-Neuve), Prof Wayne Martin (School of Philosophy and Art History, University of Essex), Dr Femke Truijens (Institute of Psychology, Erasmus University Rotterdam) and the reviewers of Psicologia: Reflexão e Crítica/Psychology: Research and Review for their feedback, insight and contributions to the manuscript.

References

  • Almond, R. (2004). “I Can Do It (All) Myself”: Clinical technique with defensive narcissistic self-sufficiency. Psychoanalytic Psychology, 21(3), 371–384. https://doi.org/10.1037/0736-9735.21.3.371
  • American Psychological Association (2010). Evidence-based case study. Retrieved from https://www.apa.org/pubs/journals/pst/evidence-based-case-study
  • Aveline, M. (2005). Clinical case studies: Their place in evidence-based practice. Psychodynamic Practice: Individuals, Groups and Organisations, 11(2), 133–152. https://doi.org/10.1080/14753630500108174
  • Barkham, M., & Mellor-Clark, J. (2003). Bridging evidence-based practice and practice-based evidence: Developing a rigorous and relevant knowledge for the psychological therapies. Clinical Psychology & Psychotherapy, 10(6), 319–327. https://doi.org/10.1002/cpp.379
  • Berg, H. (2019). How does evidence-based practice in psychology work? – As an ethical demarcation. Philosophical Psychology, 32(6), 853–873. https://doi.org/10.1080/09515089.2019.1632424
  • Berg, H., & Slaattelid, R. (2017). Facts and values in psychotherapy—A critique of the empirical reduction of psychotherapy within evidence-based practice. Journal of Evaluation in Clinical Practice, 23(5), 1075–1080. https://doi.org/10.1111/jep.12739
  • Bower, P. (2003). Efficacy in evidence-based practice. Clinical Psychology and Psychotherapy, 10(6), 328–336. https://doi.org/10.1002/cpp.380
  • Cartwright, N., & Hardie, J. (2012). What are RCTs good for? In N. Cartwright & J. Hardie (Eds.), Evidence-based policy: A practical guide to doing it better. Oxford University Press. https://doi.org/10.1093/acprof:osobl/9780199841608.003.0008
  • Critical Appraisal Skills Programme (CASP). (2018). Qualitative checklist. Retrieved from https://casp-uk.net/wp-content/uploads/2018/01/CASP-Qualitative-Checklist-2018.pdf
  • Davison, G. C., & Lazarus, A. A. (2007). Clinical case studies are important in the science and practice of psychotherapy. In S. O. Lilienfeld & W. T. O’Donohue (Eds.), The great ideas of clinical science: 17 principles that every mental health professional should understand (pp. 149–162). Routledge/Taylor & Francis Group.
  • Douglas, H. (2004). The irreducible complexity of objectivity. Synthese, 138(3), 453–473. https://doi.org/10.1023/B:SYNT.0000016451.18182.91
  • Duncan, B. L., & Sparks, J. A. (2020). When meta-analysis misleads: A critical case study of a meta-analysis of client feedback. Psychological Services, 17(4), 487–496. https://doi.org/10.1037/ser0000398
  • Edelson, M. (1986). Causal explanation in science and in psychoanalysis—Implications for writing a case study. Psychoanalytic Study of the Child, 41(1), 89–127. https://doi.org/10.1080/00797308.1986.11823452
  • Edwards, D. J. A. (2013). Collaborative versus adversarial stances in scientific discourse: Implications for the role of systematic case studies in the development of evidence-based practice in psychotherapy. Pragmatic Case Studies in Psychotherapy, 3(1), 6–34.
  • Edwards, D. J. A., Dattilio, F. M., & Bromley, D. B. (2004). Developing evidence-based practice: The role of case-based research. Professional Psychology: Research and Practice, 35(6), 589–597. https://doi.org/10.1037/0735-7028.35.6.589
  • Erickson, F. (2012). Comments on causality in qualitative inquiry. Qualitative Inquiry, 18(8), 686–688. https://doi.org/10.1177/1077800412454834
  • Fishman, D. B. (1999). The case for pragmatic psychology. New York University Press.
  • Fishman, D. B. (2005). Editor’s introduction to PCSP—From single case to database: A new method for enhancing psychotherapy practice. Pragmatic Case Studies in Psychotherapy, 1(1), 1–50.
  • Fishman, D. B., Messer, S. B., Edwards, D. J. A., & Dattilio, F. M. (Eds.) (2017). Case studies within psychotherapy trials: Integrating qualitative and quantitative methods. Oxford University Press.
  • Fox, N. J. (2003). Practice-based evidence: Towards collaborative and transgressive research. Sociology, 37(1), 81–102. https://doi.org/10.1177/0038038503037001388
  • Gabbay, J., & le May, A. (2011). Practice-based evidence for healthcare: Clinical mindlines. Routledge.
  • Green, L. W., & Latchford, G. (2012). Maximising the benefits of psychotherapy: A practice-based evidence approach. Wiley-Blackwell. https://doi.org/10.1002/9781119967590
  • Hannes, K., Lockwood, C., & Pearson, A. (2010). A comparative analysis of three online appraisal instruments’ ability to assess validity in qualitative research. Qualitative Health Research, 20(12), 1736–1743. https://doi.org/10.1177/1049732310378656
  • Hartling, L., Chisholm, A., Thomson, D., & Dryden, D. M. (2012). A descriptive analysis of overviews of reviews published between 2000 and 2011. PLoS One, 7(11), e49667. https://doi.org/10.1371/journal.pone.0049667
  • Hill, A., & Spittlehouse, C. (2003). What is critical appraisal? Evidence-Based Medicine, 3(2), 1–8.
  • Hilliard, R. B. (1993). Single-case methodology in psychotherapy process and outcome research. Journal of Consulting and Clinical Psychology, 61(3), 373–380. https://doi.org/10.1037/0022-006X.61.3.373
  • Horn, S. D., & Gassaway, J. (2007). Practice-based evidence study design for comparative effectiveness research. Medical Care, 45(10), S50–S57. https://doi.org/10.1097/MLR.0b013e318070c07b
  • Iwakabe, S. (2003, May). Common change events in stages of psychotherapy: A qualitative analysis of case reports. Paper presented at the 19th Annual Conference of the Society for the Exploration of Psychotherapy Integration, New York.
  • Iwakabe, S. (2005). Pragmatic meta-analysis of case studies. Annual Progress of Family Psychology, 23, 154–169.
  • Iwakabe, S., & Gazzola, N. (2009). From single-case studies to practice-based knowledge: Aggregating and synthesizing case studies. Psychotherapy Research, 19(4–5), 601–611. https://doi.org/10.1080/10503300802688494
  • Jimenez-Buedo, M., & Miller, L. (2010). Why a trade-off? The relationship between the external and internal validity of experiments. Theoria: An International Journal for Theory, History and Foundations of Science, 25(3), 301–321.
  • Joanna Briggs Institute (JBI). (2017a). Critical appraisal checklist for qualitative research. Retrieved from https://joannabriggs.org/sites/default/files/2019-05/JBI_Critical_Appraisal-Checklist_for_Qualitative_Research2017_0.pdf
  • Joanna Briggs Institute (JBI). (2017b). Checklist for case reports. Retrieved from https://joannabriggs.org/sites/default/files/2019-05/JBI_Critical_Appraisal-Checklist_for_Case_Reports2017_0.pdf
  • Kaluzeviciute, G. (2021). Validity, evidence and appraisal in systematic psychotherapy case studies. Paper presented at the Research Forum of the Department of Psychosocial and Psychoanalytic Studies, University of Essex, Colchester, UK. https://doi.org/10.13140/RG.2.2.33502.15683
  • Kaluzeviciute, G., & Willemsen, J. (2020). Scientific thinking styles: The different ways of thinking in psychoanalytic case studies. The International Journal of Psychoanalysis, 101(5), 900–922. https://doi.org/10.1080/00207578.2020.1796491
  • Katrak, P., Bialocerkowski, A. E., Massy-Westropp, N., Kumar, S. V. S., & Grimmer, K. (2004). A systematic review of the content of critical appraisal tools. BMC Medical Research Methodology, 4(1), 22. https://doi.org/10.1186/1471-2288-4-22
  • Kennedy, M. M. (1979). Generalising from single case studies. Evaluation Quarterly, 3(4), 661–678. https://doi.org/10.1177/0193841X7900300409
  • Laska, K. M., Gurman, A. S., & Wampold, B. E. (2014). Expanding the lens of evidence-based practice in psychotherapy: A common factors perspective. Psychotherapy, 51(4), 467–481. https://doi.org/10.1037/a0034332
  • Levitt, H. M., Motulsky, S. L., Wertz, F. J., Morrow, S. L., & Ponterotto, J. G. (2017). Recommendations for designing and reviewing qualitative research in psychology: Promoting methodological integrity. Qualitative Psychology, 4(1), 2–22. https://doi.org/10.1037/qup0000082
  • Lilienfeld, S. O., Ritschel, L. A., Lynn, S. J., Cautin, R. L., & Latzman, R. D. (2013). Why many clinical psychologists are resistant to evidence-based practice: Root causes and constructive remedies. Clinical Psychology Review, 33(7), 883–900. https://doi.org/10.1016/j.cpr.2012.09.008
  • Long, A. F., & Godfrey, M. (2004). An evaluation tool to assess the quality of qualitative research studies. International Journal of Social Research Methodology, 7(2), 181–196. https://doi.org/10.1080/1364557032000045302
  • Longhofer, J., Floersch, J., & Hartmann, E. A. (2017). Case for the case study: How and why they matter. Clinical Social Work Journal, 45(3), 189–200. https://doi.org/10.1007/s10615-017-0631-8
  • Lunn, S., Daniel, S. I. F., & Poulsen, S. (2016). Psychoanalytic psychotherapy with a client with bulimia nervosa. Psychotherapy, 53(2), 206–215. https://doi.org/10.1037/pst0000052
  • Mackrill, T., & Iwakabe, S. (2013). Making a case for case studies in psychotherapy training: A small step towards establishing an empirical basis for psychotherapy training. Counselling Psychology Quarterly, 26(3–4), 250–266. https://doi.org/10.1080/09515070.2013.832148
  • Maggio, S., Molgora, S., & Oasi, O. (2019). Analyzing psychotherapeutic failures: A research on the variables involved in the treatment with an individual setting of 29 cases. Frontiers in Psychology, 10, 1250. https://doi.org/10.3389/fpsyg.2019.01250
  • Mahrer, A. R. (1988). Discovery-oriented psychotherapy research: Rationale, aims, and methods. American Psychologist, 43(9), 694–702. https://doi.org/10.1037/0003-066X.43.9.694
  • Margison, F. B., et al. (2000). Measurement and psychotherapy: Evidence-based practice and practice-based evidence. British Journal of Psychiatry, 177(2), 123–130. https://doi.org/10.1192/bjp.177.2.123
  • Maxwell, J. A. (2004). Causal explanation, qualitative research, and scientific inquiry in education. Educational Researcher, 33(2), 3–11. https://doi.org/10.3102/0013189X033002003
  • McLeod, J. (2002). Case studies and practitioner research: Building knowledge through systematic inquiry into individual cases. Counselling and Psychotherapy Research: Linking research with practice, 2(4), 264–268. https://doi.org/10.1080/14733140212331384755
  • McLeod, J. (2010). Case study research in counselling and psychotherapy. SAGE Publications. https://doi.org/10.4135/9781446287897
  • McLeod, J., & Elliott, R. (2011). Systematic case study research: A practice-oriented introduction to building an evidence base for counselling and psychotherapy. Counselling and Psychotherapy Research, 11(1), 1–10. https://doi.org/10.1080/14733145.2011.548954
  • Meganck, R., Inslegers, R., Krivzov, J., & Notaerts, L. (2017). Beyond clinical case studies in psychoanalysis: A review of psychoanalytic empirical single case studies published in ISI-ranked journals. Frontiers in Psychology, 8, 1749. https://doi.org/10.3389/fpsyg.2017.01749
  • Merriam, S. B. (1998). Qualitative research and case study applications in education. Jossey-Bass Publishers.
  • Michels, R. (2000). The case history. Journal of the American Psychoanalytic Association, 48(2), 355–375. https://doi.org/10.1177/00030651000480021201
  • Midgley, N. (2006). Re-reading “Little Hans”: Freud’s case study and the question of competing paradigms in psychoanalysis. Journal of the American Psychoanalytic Association, 54(2), 537–559. https://doi.org/10.1177/00030651060540021601
  • Rosqvist, J., Thomas, J. C., & Truax, P. (2011). Effectiveness versus efficacy studies. In J. C. Thomas & M. Hersen (Eds.), Understanding research in clinical and counseling psychology (pp. 319–354). Routledge/Taylor & Francis Group.
  • Sackett, D. L., Rosenberg, W. M., Gray, J. A. M., Haynes, R. B., & Richardson, W. S. (1996). Evidence based medicine: What it is and what it isn’t. BMJ, 312(7023), 71–72. https://doi.org/10.1136/bmj.312.7023.71
  • Stake, R. E. (1995). The art of case study research. SAGE Publications.
  • Stake, R. E. (2010). Qualitative research: Studying how things work. The Guilford Press.
  • Stewart, R. E., & Chambless, D. L. (2007). Does psychotherapy research inform treatment decisions in private practice? Journal of Clinical Psychology, 63(3), 267–281. https://doi.org/10.1002/jclp.20347
  • Stiles, W. B. (2007). Theory-building case studies of counselling and psychotherapy. Counselling and Psychotherapy Research, 7(2), 122–127. https://doi.org/10.1080/14733140701356742
  • Teachman, B. A., Drabick, D. A., Hershenberg, R., Vivian, D., Wolfe, B. E., & Goldfried, M. R. (2012). Bridging the gap between clinical research and clinical practice: Introduction to the special section. Psychotherapy, 49(2), 97–100. https://doi.org/10.1037/a0027346
  • Thorne, S., Jensen, L., Kearney, M. H., Noblit, G., & Sandelowski, M. (2004). Qualitative metasynthesis: Reflections on methodological orientation and ideological agenda. Qualitative Health Research, 14(10), 1342–1365. https://doi.org/10.1177/1049732304269888
  • Timulak, L. (2009). Meta-analysis of qualitative studies: A tool for reviewing qualitative research findings in psychotherapy. Psychotherapy Research, 19(4–5), 591–600. https://doi.org/10.1080/10503300802477989
  • Trad, P. V., & Raine, M. J. (1994). A prospective interpretation of unconscious processes during psychoanalytic psychotherapy. Psychoanalytic Psychology, 11(1), 77–100. https://doi.org/10.1037/h0079522
  • Truijens, F., Cornelis, S., Desmet, M., & De Smet, M. (2019). Validity beyond measurement: Why psychometric validity is insufficient for valid psychotherapy research. Frontiers in Psychology, 10. https://doi.org/10.3389/fpsyg.2019.00532
  • Tuckett, D. (Ed.) (2008). The new library of psychoanalysis. Psychoanalysis comparable and incomparable: The evolution of a method to describe and compare psychoanalytic approaches. Routledge/Taylor & Francis Group. https://doi.org/10.4324/9780203932551
  • van Hennik, R. (2020). Practice based evidence based practice, part II: Navigating complexity and validity from within. Journal of Family Therapy, 43(1), 27–45. https://doi.org/10.1111/1467-6427.12291
  • Westen, D., Novotny, C. M., & Thompson-Brenner, H. (2004). The empirical status of empirically supported psychotherapies: Assumptions, findings, and reporting in controlled clinical trials. Psychological Bulletin, 130(4), 631–663. https://doi.org/10.1037/0033-2909.130.4.631
  • Widdowson, M. (2011). Case study research methodology. International Journal of Transactional Analysis Research & Practice, 2(1). https://doi.org/10.29044/v2i1p25
  • Willemsen, J., Della Rosa, E., & Kegerreis, S. (2017). Clinical case studies in psychoanalytic and psychodynamic treatment. Frontiers in Psychology, 8, 108. https://doi.org/10.3389/fpsyg.2017.00108
  • Williams, V., Boylan, A., & Nunan, D. (2019). Critical appraisal of qualitative research: Necessity, partialities and the issue of bias. BMJ Evidence-Based Medicine. https://doi.org/10.1136/bmjebm-2018-111132
  • Yin, R. K. (1984). Case study research: Design and methods. SAGE Publications.
  • Yin, R. K. (1993). Applications of case study research. SAGE Publications.

Publication Dates

  • Publication in this collection
    16 Apr 2021
  • Date of issue
    2021

History

  • Received
    12 Jan 2021
  • Accepted
    09 Mar 2021
  • Published
    19 Mar 2021