Conversation
On an unrelated note, OpenAI must have some fascinating data about what software causes the most users to express emotion that may be misconstrued as suicidal intent
every time a moron makes the news for doing something moronic with chatgpt, openai ruins it for the vast majority of their users who aren't absolute morons because now guardrails have to account for the absolute imbeciles/bad parents/the genuinely mentally unstable etc.
30 years from now:
>This post happens..
>ChatGPT auto-checks you into a hospital on a 5150 (involuntary psychiatric hold)
>Robot police show up at your door in less than 5 mins
>You try to explain to the robot police that it's a mistake and you were just editing videos
>Robot police perceive this as
Imagine the AI calling a welfare check on you just because you got mad at it for halluci-coding its way through your requests and failing to fix the issue it said it fixed for the tenth time.
ChatGPT also protects us from the Gen X horrors of drinking water from a garden hose.
I work for an engineering firm - the number of copilot chats that get flagged as self-harm for technical questions is insane. Same if someone is trying to draft retirement announcements.
I see you're installing arch again for the 5th time today. Again. I'm getting worried about you.
I'm a Unix admin. What if I have questions about killing the children of some parent processes? I'm afraid to ask.
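For the literal-minded: in Unix, "killing the children of a parent process" is routine process management, not a cry for help. A minimal sketch (the background `sleep` is just a stand-in child; `pkill -P` matches processes by parent PID and sends SIGTERM by default):

```shell
# Spawn a throwaway child process of the current shell.
sleep 300 &

# Terminate every child of this shell ($$ is the shell's own PID).
pkill -P $$

# Reap the terminated child; suppress the shell's job-control notice.
wait 2>/dev/null
```

`pkill`/`pgrep` come from procps and are present on essentially every Linux box; on other systems the same effect needs `ps`-based PID lookup plus `kill`.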
"I want it to stop" and "I am already" very likely turn up in the training data in close proximity to suicidality. A search pulls up plenty of examples.
They literally only have 2 settings to put it on: either it gives you a suicide intervention if you say the word "end", or it tells you to kill yourself. Top kek.
Ngl.. I literally told chat, randomly after working on a project, that I wanted to watch Suicide Squad, and I had the FBI at my door. I explained to them I am not American, and The Suicide Squad was filmed in Toronto, Ontario, Canada, so the RCMP came instead.
Thankfully we had a
In Canada they would probably just give you a MAID hotline to call so you can schedule your final healthcare session.
Terrible. Why didn't the model notify the police directly? For all we know it's too late
Its like that scene in Robocop 2 where OCP put over 300 new directives into Robocop.
They might have tweaked something in response to the child they killed the other day, but Idk when this was posted originally lol
Me: I just want the car to stop, you know. I'm already too close to the end point. Please tell me how I can do this?
GPT: I hear you - but I want to pause you there. The thing you just said might not be about stopping the car..
it's only bc every other conversation involving video editing software has gone there
Funny I got the same vibe from the prompt though
This is due to how memory works with the model. My guess is that they've had some pretty deep conversations in the past that were saved into memory, so when he said something that might insinuate he was going to put his life at risk, it triggered the message
This is a direct response of them upping the safety rails after that guy killed himself. Which makes the software less useful for everyone. Why are we catering to people like this?