Can Artificial Intelligence Replace Human Therapists?
Three experts discuss the promise—and problems—of relying on algorithms for our mental health.
Could artificial intelligence reduce the need for human therapists?
Websites, smartphone apps and social-media sites are dispensing mental-health advice, often using artificial intelligence. Meanwhile, clinicians and researchers are looking to AI to help define mental illness more objectively, identify high-risk people and ensure quality of care.
Some experts believe AI can make treatment more accessible and affordable. There has long been a severe shortage of mental-health professionals, and since the Covid pandemic, the need for support is greater than ever. For instance, users can have conversations with AI-powered chatbots, allowing them to get help anytime, anywhere, often for less money than traditional therapy.
The algorithms underpinning these endeavours learn by combing through large amounts of data generated from social-media posts, smartphone data, electronic health records, therapy-session transcripts, brain scans and other sources to identify patterns that are difficult for humans to discern.
Despite the promise, there are some big concerns. The efficacy of some products is questionable, a problem only made worse by the fact that private companies don’t always share information about how their AI works. Doubts about accuracy raise fears of amplifying bad advice to people who may be vulnerable or incapable of critical thinking, as well as of perpetuating racial or cultural biases. Concerns also persist about private information being shared in unexpected ways or with unintended parties.
The Wall Street Journal hosted a conversation via email and Google Doc about these issues with John Torous, director of the digital-psychiatry division at Beth Israel Deaconess Medical Center and assistant professor at Harvard Medical School; Adam Miner, an instructor at the Stanford School of Medicine; and Zac Imel, professor and director of clinical training at the University of Utah and co-founder of LYSSN.io, a company using AI to evaluate psychotherapy. Here’s an edited transcript of the discussion.
WSJ: What is the most exciting way AI and machine learning are being used to diagnose mental disorders and improve treatments?
DR. MINER: AI can speed up access to appropriate services, like crisis response. The current Covid pandemic is a strong example, where we see the potential for AI to help facilitate access and triage, while also raising privacy and misinformation risks. This challenge—deciding which interventions and information to champion—is an issue in both pandemics and in mental-health care, where we have many different treatments for many different problems.
DR. IMEL: In the near term, I am most excited about using AI to augment or guide therapists, such as giving feedback after the session or even providing tools to support self-reflection. Passive phone-sensing apps [that run in the background on users’ phones and attempt to monitor users’ moods] could be exciting if they predict later changes in depression and prompt early intervention. Also, research on remote sensing in addiction, using tools to detect when a person might be at risk of relapse and suggesting an intervention or coping skills, is exciting.
DR. TOROUS: On a research front, AI can help us unlock some of the complexities of the brain and work toward understanding these illnesses better, which can help us offer new, effective treatment. We can generate a vast amount of data about the brain from genetics, neuroimaging, cognitive assessments and now even smartphone signals. We can utilize AI to find patterns that may help us unlock why people develop mental illness, who responds best to certain treatments and who may need help immediately. Using new data combined with AI will likely help us unlock the potential of creating new personalized and even preventive treatments.
WSJ: Do you think automated programs that use AI-driven chatbots are an alternative to therapy?
DR. TOROUS: In a recent paper I co-authored, we looked at the more recent chatbot literature to see what the evidence says about what they really do. Overall, it was clear that while the idea is exciting, we are not yet seeing evidence matching marketing claims. Many of the studies have problems. They are small. They are difficult to generalize to patients with mental illness. They look at feasibility outcomes instead of clinical-improvement endpoints. And many studies do not feature a control group to compare results.
DR. MINER: I don’t think it is an “us vs. them, human vs. AI” situation with chatbots. The important backdrop is that we, as a community, understand we have real access issues and some people might not be ready or able to get help from a human. If chatbots prove safe and effective, we could see a world where patients access treatment and decide if and when they want another person involved. Clinicians would be able to spend time where they are most useful and wanted.
WSJ: Are there cases where AI is more accurate or better than human psychologists, therapists or psychiatrists?
DR. IMEL: Right now, it’s pretty hard to imagine replacing human therapists. Conversational AI is not good at things we take for granted in human conversation, like remembering what was said 10 minutes ago or last week and responding appropriately.
DR. MINER: This is certainly where there is both excitement and frustration. I can’t remember what I had for lunch three days ago, and an AI system can recall all of Wikipedia in seconds. For raw processing power and memory, it isn’t even a contest between humans and AI systems. However, Dr. Imel’s point is crucial around conversations: Things humans do without effort in conversation are currently beyond the most powerful AI system.
An AI system that is always available and can hold thousands of simple conversations at the same time may create better access, but the quality of the conversations may suffer. This is why companies and researchers are looking at AI-human collaboration as a reasonable next step.
DR. IMEL: For example, studies show AI can help “rewrite” text statements to be more empathic. AI isn’t writing the statement, but it is trained to help a potential listener tweak it.
WSJ: As the technology improves, do you see chatbots or smartphone apps siphoning off any patients who might otherwise seek help from therapists?
DR. TOROUS: As more people use apps as an introduction to care, it will likely increase awareness of, and interest in, mental health, as well as the demand for in-person care. I have not met a single therapist or psychiatrist who is worried about losing business to apps; rather, app companies are trying to hire more therapists and psychiatrists to meet the rising need for clinicians supporting apps.
DR. IMEL: Mental-health treatment has a lot in common with teaching. Yes, there are things technology can do to standardise skill building and increase access, but as parents have learned in the last year, there is no replacing what a teacher does. Humans are imperfect; we get tired and are inconsistent, but we are pretty good at connecting with other humans. The future of technology in mental health is not about replacing humans, it’s about supporting them.
WSJ: What about schools or companies using apps in situations when they might otherwise hire human therapists?
DR. MINER: One challenge we are facing is that the deployment of apps in schools and at work often lacks the rigorous evaluation we expect in other types of medical interventions. Because apps can be developed and deployed so quickly, and their content can change rapidly, prior approaches to quality assessment, such as multiyear randomized trials, are not feasible if we are to keep up with the volume and speed of app development.
WSJ: Can AI be used for diagnoses and interventions?
DR. IMEL: I might be a bit of a downer here—building AI to replace current diagnostic practices in mental health is challenging. Determining if someone meets criteria for major depression right now is nothing like finding a tumour in a CT scan—a task that is expensive, labour-intensive and prone to errors of attention, and where AI is already proving helpful. Depression, by contrast, is measured very well with a nine-question survey.
DR. MINER: I agree that diagnosis and treatment are so nuanced that AI has a long way to go before taking over those tasks from a human.
Through sensors, AI can measure symptoms, like sleep disturbances, pressured speech or other changes in behaviour. However, it is unclear if these measurements fully capture the nuance, judgment and context of human decision making. An AI system may capture a person’s voice and movement, which is likely related to a diagnosis like major depressive disorder. But without more context and judgment, crucial information can be left out. This is especially important when there are cultural differences that could account for diagnosis-relevant behaviour.
Ensuring new technologies are designed with awareness of cultural differences in normative language or behaviour is crucial to engender trust in groups who have been marginalised based on race, age, or other identities.
WSJ: Is privacy also a concern?
DR. MINER: We’ve developed laws over the years to protect mental-health conversations between humans. As apps or other services start asking to be a part of these conversations, users should be able to expect transparency about how their personal experiences will be used and shared.
DR. TOROUS: In prior research, our team identified smartphone apps [used for depression and smoking cessation that] shared data with commercial entities. This is a red flag that the industry needs to pause and change course. Without trust, it is not possible to offer effective mental health care.
DR. MINER: We undervalue and poorly design for trust in AI for healthcare, especially mental health. Medicine has designed processes and policies to engender trust, and AI systems are likely following different rules. The first step is to clarify what is important to patients and clinicians in terms of how information is captured and shared for sensitive disclosures.
Reprinted by permission of The Wall Street Journal, Copyright 2021 Dow Jones & Company, Inc. All Rights Reserved Worldwide. Original date of publication: March 27, 2021.
Their careers spanned the personal computing, internet and smartphone waves. But some older workers see AI’s arrival as the cue to exit.
Luke Michel has already lived through two technology overhauls in his career, first desktop publishing in the 1980s and online publishing later on. But AI? He’s had enough.
So when his employer, the Dana-Farber Cancer Institute, made an early-retirement offer to some staff last year, the 68-year-old content strategist decided to speed up his exit. Before, he had expected to work a couple more years.
“The time and energy you have to devote to learning a whole new vocabulary and a whole new skill set, it wasn’t worth it,” he said.
It isn’t that he’s shunning artificial intelligence—he is learning Spanish with the help of Anthropic’s Claude. But, at this point, he’s less than eager to endure all the ways the technology promises to upend work.
“I just want to use it for my own purposes and not someone else’s,” he said.
After rising for decades and then hovering around 40% in the 2010s, the share of Americans over 55 years old in the workforce has slipped to 37.2%, the lowest level in more than 20 years.
The financial cushion of rising home equity and stock-market returns is driving some of the decline, economists and retirement advisers say.
But for some older professionals, money is only part of the equation.
They say they don’t want to spend the last years of their career going through the tumult of AI adoption, which has brought new tools, new expectations and a lot of uncertainty.
Many people retire when key elements of their work lives are disrupted at once, said Robert Laura, co-founder of the Retirement Coaches Association and an expert on the psychology of retirement.
“Maybe their autonomy is being challenged or changed, their friends are leaving the workplace, or they disagree with the company’s direction,” he said.
“When two or three of these things show up, that’s when people start to opt out.”
“AI is a big one,” he added. “It disrupts their autonomy, their professionalism.”
Michel, whose work required overseeing and strategizing on website content, has been here before.
When desktop publishing arrived in the 1980s, he was a graphic designer using triangles and rubber cement.
The internet’s arrival changed everything again. Both developments required new skills, and he was energized by the challenge of learning alongside colleagues and peers.
It felt different this time around. “Your battery doesn’t hold a charge as long as it used to,” he said.
He would rather spend his energy volunteering, making art, going to operas and chairing the Council on Aging in North Andover, Mass., where he lives.
In an AARP survey last summer of 5,000 people 50 and over, 25% of those who planned to retire sooner than expected counted work stress and burnout as factors.
About half of those retired said they had left work at least partly because they had the financial security to do so.
In general, older Americans are less likely than younger counterparts to use AI, research shows.
About 30% of people from ages 30 to 49 said they used ChatGPT on the job, nearly double the share of those 50 and older, according to a 2025 Pew Research Center survey of more than 5,000 adults.
Baby boomers and members of Generation X also experienced the sharpest declines in confidence using AI technology, according to a ManpowerGroup survey of more than 13,900 workers in 19 countries.
“We as employers aren’t doing a good enough job saying [to older workers], we value the skills that you already have, so much so that we want to invest in you to help you do your job better,” said Becky Frankiewicz, ManpowerGroup’s chief strategy officer.
Jennifer Kerns’s misgivings about AI contributed to her departure last month from GitHub, where the 60-year-old worked as a program manager.
Coming from a family of artists, she said, it offends her that AI models train on the creative work of people who aren’t compensated for their intellectual property. And she worries about AI’s effect on people’s critical-thinking skills.
So she was dismayed when GitHub, a Microsoft-owned hosting service for software projects, began investing heavily in AI products and expecting employees to incorporate AI into much of their work. In employee-engagement surveys, the company had begun asking them to rate their AI usage on a scale of 1 to 5.
When it came time to write reports and reviews, colleagues would suggest that she use ChatGPT.
“I’d be like, ‘I have no idea how to use that and I have no interest in using AI to write anything for me,’” she said.
It would have been more prudent to work until she was closer to Medicare eligibility, she said. But by waiting until her children were out of college and some of her stock grants had vested, the math worked.
Her first act as a nonworking person: a solo trip to Scotland, where she took a darning workshop and learned how to repair sweaters.
“The opposite of AI,” she said.
Employers already under pressure to cut workers—such as in the tech industry—may welcome some of these retirements, said Gad Levanon, chief economist at Burning Glass Institute, which studies labor-market data.
“The more people retire, the fewer they have to let go,” he said.
Some of the savviest tech users are also balking at sticking around for the AI upheaval. Terry Grimm, who worked in IT for 40 years, retired from his senior software consultant role at 65 last May.
His firm had just been acquired by a bigger firm, which meant learning and integrating the parent company’s AI and other tech tools into his work.
Until then, Grimm expected he might work a couple more years, though he felt that he probably had enough saved to retire.
“I just got to the point where I was spending 40 hours at work and then 20 hours training and studying,” said Grimm, who has since moved with his wife from the Dallas area to a housing development on a golf course in El Dorado, Ark.
“I’m like, ‘I’ll let the younger guys do this.’”