
Stigmatized individuals: a case for precision ethics

Emerging technologies have enabled us to create increasingly accurate predictions about the propensity of psychiatric patients to commit criminal offenses.1 Machine learning models open a variety of avenues to develop educational tools and preventive measures and to shape public policy.2 However, despite the promise of predictive algorithms in forensic psychiatry, their use raises an important ethical challenge: how can we avoid further stigmatizing vulnerable individuals and instead ensure that our algorithms respect their rights, enhance their safety, and promote their wellbeing? The noted philosopher Joel Feinberg envisioned a form of noncomparative justice, in which each person is treated precisely as they deserve, without regard to the way anyone else is treated.3

To better elucidate this concept, take the example of “voluntary” versus “involuntary” criminal acts, a distinction that turns on an individual’s intention to commit a crime, otherwise known as mens rea (guilty mind). When voluntary criminals are compared against other voluntary criminals, such a system is thought to be fair and just in a legal sense. However, when involuntary criminals are placed in the same category as voluntary criminals and punished with similar severity, we can discern a state of injustice arising from the difference in criminal culpability. As such, the voluntary nature of the criminal act, regardless of the severity of the crime, is a salient consideration.4

In many countries, individuals with severe mental illness who commit criminal acts are evaluated according to noncomparative justice.5 Rather than simply punishing the offender in proportion to the severity and context of the crime, those with severe mental illness who lack mens rea may be treated within a restorative framework that recognizes the need to provide aid and treatment and seeks to prevent future reoffending.5 In forensic psychiatry, this implies the need for targeted and individualized treatment.

However, several pertinent questions arise when evaluating the utility and implementation of such algorithms. For instance, an important consideration that is often overlooked is model interpretability. So-called “black box” methods may perform well in testing and validation datasets; however, without a rudimentary understanding of the directionality and interaction effects of important features, we lack the transparency required to justify implementing these models in high-stakes clinical settings.6 Toward this end, new methods leveraging the internal structure of tree-based algorithms can be used to directly measure local feature interaction effects and provide insight into the magnitude, prevalence, and direction of a feature’s effect.7
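To make this concrete, below is a minimal sketch of how such tree-based explanations might be computed with the open-source shap library, which implements the TreeExplainer approach of Lundberg et al.7 The dataset, outcome, and feature names (symptom_severity, treatment_adherence, prior_incidents) are synthetic placeholders invented for illustration, not real forensic variables.

```python
# Minimal sketch: exact, tree-structure-based explanations for a tree
# ensemble, using synthetic stand-in data (not real forensic data).
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 500
feature_names = ["symptom_severity", "treatment_adherence", "prior_incidents"]
X = rng.normal(size=(n, len(feature_names)))
# Synthetic outcome loosely driven by the first two features.
y = (X[:, 0] - X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer traverses the fitted trees directly, yielding exact
# per-individual (local) attributions for each feature.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global summary: mean absolute attribution per feature
# (magnitude and prevalence of each feature's effect).
for name, mean_abs in zip(feature_names, np.abs(shap_values).mean(axis=0)):
    print(f"{name}: mean |SHAP| = {mean_abs:.3f}")

# Pairwise interaction effects, also derived from the tree structure.
interaction_values = explainer.shap_interaction_values(X)
```

The sign of each individual attribution indicates the direction of a feature’s effect for that particular person, which is precisely the kind of case-level transparency that individualized assessment calls for.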

Similarly, even among classification models that demonstrate high accuracy, there will be instances where individuals are misclassified. In cases where the risks of misclassification are low, this may be largely unimportant. However, when dealing with the complex intersection of healthcare, personal freedom, and societal risk, it becomes a challenging consideration. For instance, how can we introduce ethical constraints into our models without significantly impacting their overall accuracy and utility? While this remains open to debate, it may be useful to consider such ethical goals from two distinct frameworks.
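One way such a constraint could be operationalized, offered here purely as an illustrative sketch, is cost-sensitive thresholding: treating a false positive (wrongly labeling a patient as high risk, with its attendant stigma) as more costly than a false negative, and choosing the decision threshold that minimizes the resulting asymmetric expected cost. The 5:1 cost ratio and the synthetic data below are assumptions for the example, not recommendations.

```python
# Minimal sketch: encode an asymmetric ethical cost into the decision
# threshold of an otherwise unchanged classifier (synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=1000) > 0).astype(int)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

probs = LogisticRegression().fit(X_tr, y_tr).predict_proba(X_te)[:, 1]

FP_COST, FN_COST = 5.0, 1.0  # assumed: a stigmatizing error weighs 5x

def expected_cost(threshold: float) -> float:
    """Average misclassification cost at a given decision threshold."""
    pred = probs >= threshold
    false_pos = np.sum(pred & (y_te == 0))
    false_neg = np.sum(~pred & (y_te == 1))
    return (FP_COST * false_pos + FN_COST * false_neg) / len(y_te)

thresholds = np.linspace(0.05, 0.95, 19)
best = min(thresholds, key=expected_cost)
print(f"cost-minimizing threshold: {best:.2f}")
```

Because only the threshold changes, the model’s underlying ranking of individuals, and hence much of its utility, is preserved while the error profile shifts away from the ethically costlier mistake.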

Robert Nozick, the renowned American philosopher, once discussed the concept of moral pushes and pulls.8 Moral pushes involve ideals or values that propel us “from within.” From this framework, ethics are a set of principles that help guide us toward being more virtuous individuals. Ethical algorithms can favor these individual moral values if the goal is to make us “better people,” allowing us to live healthier lives or, intrinsically, boosting moral dispositions so that we can better operate within society, benefiting others by proxy. Moral pulls, on the other hand, are constraints on the design of the algorithms themselves: for instance, ensuring that our models are not predicated on immutable characteristics and ensuring free, informed, and ongoing consent.8 The concept of moral pulls also highlights the importance of patient-centered perspectives. We argue that a prerequisite for the successful implementation of predictive models into routine care is for data scientists to meaningfully engage with stakeholders (healthcare providers, patients, and their families) to ensure that the scope of the problem and important ethical considerations are adequately elucidated.
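As one minimal sketch of a moral pull expressed as a design constraint, immutable characteristics could be screened out of the feature set before any model is fit. The column names and the enforce_feature_constraints helper below are hypothetical; in practice, proxy variables that indirectly encode protected attributes would also need auditing.

```python
# Minimal sketch: a pre-training constraint that removes immutable
# characteristics from the feature table (hypothetical column names).
import pandas as pd

IMMUTABLE = {"ethnicity", "sex_at_birth", "country_of_origin"}

def enforce_feature_constraints(features: pd.DataFrame) -> pd.DataFrame:
    """Return a copy of the feature table without immutable characteristics."""
    return features.drop(columns=[c for c in features.columns if c in IMMUTABLE])

patients = pd.DataFrame({
    "ethnicity": ["a", "b"],
    "symptom_severity": [2.1, 3.4],
    "treatment_adherence": [0.8, 0.4],
})
print(enforce_feature_constraints(patients).columns.tolist())
# -> ['symptom_severity', 'treatment_adherence']
```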

Altogether, we advocate for a marked transformation in the field, in which group-level statistical approaches to risk assessment, therapeutic interventions, and rehabilitation are abandoned in favor of more precise, individualized models developed according to a new, precision ethics approach.

References

1. Watts D, Moulden H, Mamak M, Upfold C, Chaimowitz G. Predicting offenses among individuals with psychiatric disorders - a machine learning approach. J Psychiatr Res. 2021;138:146-54.
2. Passos IC, Mwangi B, Kapczinski F. Big data analytics and machine learning: 2015 and beyond. Lancet Psychiatry. 2016;3:13-5.
3. Feinberg J. Noncomparative justice. Philos Rev. 1974;83:297-338.
4. Gerber RJ. Insanity and mens rea. In: Insanity defense. Port Washington: Associated Faculty Press; 1984. p. 98-117.
5. Naude B. An international perspective of restorative justice practices and research outcomes. J Juridical Sci. 2006;31:101-20.
6. Rudin C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat Mach Intell. 2019;1:206-15.
7. Lundberg SM, Erion G, Chen H, DeGrave A, Prutkin JM, Nair B, et al. From local explanations to global understanding with explainable AI for trees. Nat Mach Intell. 2020;2:56-67.
8. Nozick R. Philosophical explanations. Cambridge: Harvard University Press; 1981.
