That’s not haywire. We already know AI makes stuff up and gets stuff wrong all the time. Putting it in an important position doesn’t make it any less likely to make mistakes - this was inevitable.
LLMs should never be used for therapy.
You can watch the video Cealan made about it.
That’s what you get for trusting an AI more than a professional.
It’s prohibitively expensive to get proper therapy, and that’s if your therapist has an opening in the next six months.
So it is better to use an AI therapist that suggests suicide?
If phrased like that, obviously not, but that’s not how those things are marketed. The average person might just stumble upon AI “therapy” while googling normal therapy options, and with the way the media just repeats AI bro talking points, that wouldn’t necessarily raise red flags for most “normal” people. Blame the scammer, not the scammed.
Dr. Sbaitso never asked me to commit atrocities.