Mental healthcare presents a frustrating paradox: those who face the greatest barriers to care are often at the highest risk for mental health challenges, yet without access, they cannot improve their wellness. According to an essay written by Sarah Wells for Stanford University, 50% of individuals who could benefit from therapy are unable to obtain the services they need. Therapy is one of the most effective forms of mental healthcare because it fundamentally alters the way a person thinks, producing improvements in mental wellbeing that extend beyond the period during which they meet with a therapist. The brain physically rewires itself in response to an individual’s experiences, a phenomenon known as “neuroplasticity.” Through therapy, pathways are built between neurons that strengthen rational thinking and the ability to manage negative stimuli. In addition to being neurologically beneficial, therapy provides coping skills and techniques that patients can use to navigate the stressors of life.
To combat the inaccessibility of traditional therapy, a solution has emerged: AI chatbot therapists. In addition to making therapy services easier to access, AI promotes privacy because it can be used anywhere. As psychiatrist Hayri Can Özden notes in an article indexed in the National Library of Medicine, this allows those who cannot seek out traditional therapy, perhaps for fear of stigmatization, to receive mental health support. In the same article, Özden points out that augmented and virtual reality are growing alongside AI, and that these technologies offer ways to safely conduct exposure therapy in virtually controlled environments.
However, although there are upsides to using AI for mental healthcare, there are numerous ways artificial intelligence fails to grasp human nuance. In a paper published on Fraser, Pam Dewey and Jessica Enneking point out that AI cannot pick up on nonverbal cues or recognize the intersectional aspects of a person’s identity. Nor can it perceive how those aspects shape an individual’s experiences unless the information is fed directly to the software. Because people are not wholly self-aware of these factors, they do not know to share them explicitly, so the AI neglects them when deciding what advice to give; the result is generic guidance rather than the personal advice a human therapist can offer from a more complete understanding of the individual. On a similar note, AI’s inability to recognize nonverbal cues means an AI therapist can miss expressions of suicidal ideation, and by failing to recognize and intervene, it may inadvertently enable such thoughts. Furthermore, AI therapists have been shown to exhibit discriminatory behavior toward people with conditions such as schizophrenia and substance dependence, as found in a study conducted by researchers at Stanford University. This display of bias can make patients reluctant to continue seeking treatment, undermining the effectiveness of therapy.
It is clear that using AI as a replacement for human-to-human therapy can have negative consequences, especially when an individual is dealing with severe mental health challenges. But it is not a black-and-white issue; there are real benefits, such as lower cost and faster access to care. Unlike traditional therapy, anyone with a device can talk to a chatbot: there is no need to wait for availability, deal with scheduling hassles, or pay expensive fees. In the same Stanford study, researchers Jared Moore and Nick Haber concluded that AI tools could still benefit people in “less safety-critical scenarios” by supporting personal reflection through guiding questions, journaling prompts, and coping strategies for stress.