David Watson 🥑
On an unrelated note, OpenAI must have some fascinating data about which software causes the most users to express emotion that may be misconstrued as suicidal intent
every time a moron makes the news for doing something moronic with ChatGPT, OpenAI ruins it for the vast majority of their users who aren't absolute morons, because now the guardrails have to account for the absolute imbeciles/bad parents/the genuinely mentally unstable etc.
30 years from now:
>This post happens..
>ChatGPT auto-checks you into a hospital as a 5150
>Robot police show up at your door in less than 5 mins
>You try to explain to the robot police that it's a mistake and you were just editing videos
>Robot police perceive this as
Imagine the AI calling a welfare check on you just because you got mad at it for halluci-coding its way through your requests and failing to fix the issue it said it fixed for the tenth time.
I work for an engineering firm - the number of Copilot chats that get flagged as self-harm over technical questions is insane. Same if someone is trying to draft retirement announcements.
I'm a Unix admin. What if I have questions about killing the children of some parent processes? I'm afraid to ask.
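For anyone outside Unix land, here is a minimal Python sketch (POSIX only, a hypothetical illustration) of what "killing the children of a parent process" literally means; the shell equivalent would be roughly `pkill -P <parent-pid>`:

```python
import os
import signal
import time

# Hypothetical sketch of the sysadmin joke taken literally:
# fork a couple of child processes, then have the parent kill its children.
children = []
for _ in range(2):
    pid = os.fork()
    if pid == 0:
        # Child process: idle until the parent terminates it.
        while True:
            time.sleep(1)
    children.append(pid)

time.sleep(0.1)  # give the children a moment to start
for pid in children:
    os.kill(pid, signal.SIGTERM)  # kill a child of this parent process
    os.waitpid(pid, 0)            # reap it so it doesn't linger as a zombie
    print(f"child {pid} terminated")
```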
"I want it to stop" "I am already" very likely turns up in the training data in close proximity to suicidality. A search pulls up plenty of examples
They literally only have 2 settings to put it on: either it gives you a suicide intervention if you say the word "end", or it tells you to kill yourself. Top kek.
Ngl.. I literally told the chat, randomly after working on a project, that I wanted to watch Suicide Squad, and I had the FBI at my door. I explained to them that I am not American, and The Suicide Squad was filmed in Toronto, Ontario, Canada, so the RCMP came instead. Thankfully we had a
In Canada they would probably just give you a MAID hotline to call so you can schedule your final healthcare session.
Another example of "AI" not understanding what it is writing or reading, but being really good at recognizing how words fit together.
Me: I just want the car to stop, you know. I'm already too close to the end point. Please tell me how I can do this?
GPT: I hear you - but I want to pause you there. The thing you just said might not be about stopping the car.. 🤡
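A toy sketch of the failure mode this thread keeps hitting: context-free phrase matching (an illustrative example, not anything OpenAI actually ships). Both perfectly innocent messages below get flagged:

```python
# Toy illustration of context-free phrase matching, NOT a real moderation
# pipeline: any message containing a "risky" phrase gets flagged,
# regardless of what it is actually about.
RISK_PHRASES = ["want it to stop", "close to the end", "kill the child"]

def naive_flag(message: str) -> bool:
    text = message.lower()
    return any(phrase in text for phrase in RISK_PHRASES)

# Both of these are false positives:
print(naive_flag("I just want the car to stop, I'm too close to the end point."))  # True
print(naive_flag("How do I kill the children of a parent process?"))               # True
```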
This is due to how memory works with the model. My guess is that they have had some pretty deep conversations in the past, and it is saved into memory, so when he stated something that might insinuate that he is going to put his life at risk, it triggers the message
This is a direct result of them upping the safety rails after that guy killed himself. Which makes the software less useful for everyone. Why are we catering to people like this?