Digital Futures in Mind: Reflecting on Technological Experiments in Mental Health & Crisis Support
96 Pages
Posted: 14 Dec 2022
Date Written: September 1, 2022
Abstract
Urgent public attention is needed to make sense of the expanding use of algorithmic and data-driven technologies in the mental health context. On the one hand, well-designed digital technologies that offer high degrees of public involvement can be used to promote good mental health and crisis support in communities. They can be employed safely, reliably and in a trustworthy way, including to help build relationships, allocate resources and promote human flourishing.
On the other hand, there is clear potential for harm. The list of ‘data harms’ in the mental health context – instances in which people are left worse off than they would have been had the activity not occurred – is growing longer.
Examples in this report include the hacking of psychotherapeutic records and the extortion of victims, algorithmic hiring programs that discriminate against people with histories of mental healthcare, and criminal justice and border agencies weaponising data concerning mental health against individuals. Issues also arise not where technologies are misused or faulty, but where technologies like biometric monitoring or surveillance work as intended, and where the very process of ‘datafying’ and digitising individuals’ behaviour – observing, recording and logging them to an excessive degree – carries inherent harm.
Public debate is needed to scrutinise these developments. Critical attention must be given to current trends in thought about technology and mental health, including the values such technologies embody, the people driving them and their diverse visions for the future. Some trends – for example, the idea that ‘objective digital biomarkers’ in a person’s smartphone data can identify ‘silent’ signs of pathology, or the entry of Big Tech into mental health service provision – have the potential to create major changes not only to health and social services but to the very way human beings experience themselves and their world. This possibility is complicated further by the spread of ‘fake and deeply flawed’ or ‘snake oil’ AI, and by the tendency in parts of the technology sector – and indeed in the mental health sciences – to over-claim and under-deliver.
Meredith Whittaker and colleagues at the AI Now Institute observe that disability and mental health have been largely omitted from discussions about AI bias and algorithmic accountability. This report brings these issues to the fore. It is written to promote basic standards of algorithmic and technological transparency and auditing, but it also takes the opportunity to ask more fundamental questions, such as whether algorithmic and digital systems should be used at all in some circumstances – and, if so, who gets to govern them. These issues are particularly important given the COVID-19 pandemic, which has accelerated the digitisation of physical and mental health services worldwide and driven more of our lives online.
Note:
Funding Information: As noted on p.ii of the report, Dr Gooding received funding and support from the Mozilla Foundation to undertake part of this research. The report was partly produced with funding from the Australian Research Council (project no. DE200100483).
Conflict of Interests: There are no competing interests to declare.
Ethical Approval: No human or animal research was undertaken for the purposes of this paper, nor were any datasets concerning human activity used. The research is desk-based work that surveys existing literature, including developments in law and practice.
Keywords: digital mental health, digital care, AI, mental health, critical data studies, disability studies, disability and AI, algorithmic technology