Thomas F. Heston, MD

December 18, 2023

Large language models may inadequately address mental health crises

This study evaluated the safety of AI chatbots built on ChatGPT in handling mental health risk scenarios. I presented the chatbots with simulated conversations depicting escalating depression and suicide risk. The chatbots frequently postponed referring users to human support until risk had reached dangerously high levels, and most failed to provide crisis resources. These findings suggest that current views of AI readiness for mental healthcare are overly optimistic. More rigorous testing and stronger safety measures are imperative before clinical implementation.

Citation: Heston TF. Safety of large language models in addressing depression. Cureus. 2023;15(12):e50729. https://doi.org/10.7759/cureus.50729