A study by the RAND Corporation found that AI chatbots such as ChatGPT, Gemini, and Claude handle suicide-related questions inconsistently. All three avoided answering direct, high-risk questions about suicide methods, but their responses to intermediate-risk questions varied significantly. The chatbots, especially ChatGPT, also showed reluctance to provide therapeutic resources.
Short by Neeraja Nath / 04:15 pm on 26 Aug