This article is full of bombshells. Excellent reporting by . The biggest one: OpenAI rushed testing of GPT-4o (already reported), released the model and then subsequently determined the model was too risky to release! I had a scenario like this in a forthcoming...

though it seems like a clear violation of the spirit of the voluntary commitments at least. Other new stuff: SamA and other execs begged Ilya to come back, he seemed like he would, then execs rescinded the offer. These details aren't super surprising, but it's by far the...
most we've heard about the situation. I guess it's a bit surprising they initially tried to bring him back given that he publicly lost confidence in the CEO. We also got some new details about John Schulman's dissatisfaction. Schulman seems very serious about AI safety...
based on his appearance on Dwarkesh Podcast. Next is some straight business gossip: Greg Brockman reportedly annoyed people so much that Sam asked him to leave. Brockman has been the exec most steadfastly loyal to Sam...
Last bit: the new funding deals have not closed. This could explain the timing of the departures of Mira Murati and the other two execs. Abruptly losing your CTO and two other senior people could spook investors. Idk what is motivating these choices, but if Murati wanted to...
This is obviously speculative, but it explains the situation pretty well IMO. She and the other execs could have stuck it out for 2 more weeks so the deals closed, or told Sam privately and announced after the deals closed. The fact that they didn't is noteworthy.
This is all coming out in the final days before has to decide on SB 1047, the AI safety bill that has riven Silicon Valley. His comments indicated he was leaning toward a veto, but a lot has happened since then (big Hollywood stars coming out in force for...
If they think 4o was too dangerous then it only makes me disbelieve their internal benchmarks all the more
I think the category matters a lot (like the biorisk upgrade is significant imo) but the larger point is that they had a safety process and didn't stick to it.
...wow. That's a lot for the company to be dealing with - I evidently hadn't been cynical enough in my read of the company's situation.
Quote: David Manheim (@davidmanheim), replying to @TheZvi
I wonder if those jumping ship are concerned that OpenAI is about to be buried in lawsuits as Sam tries to cash in...?
the “biggest” bombshell is a huge joke considering 4o’s real world performance relative to competition. the interpersonal drama is much spicier by comparison.
If gpt-4o is so risky surely something should have happened by now since it's been out for a bit. What's happened? Nothing, as usual. Can we cite even a single meaningful incident with the model that signifies real risk? Nope.
this safety thing sounds like drivel... people are making a big deal out of nothing so far. Musk and Grok aren't worrying as much, and they're getting good things done quickly.
I've said it many times, and I'll repeat it: it's all about power. I'm not sure SamA would sacrifice OpenAI entirely just to get absolute power, but he'll come pretty close to doing it.
It seems unethical to open source an AI so people can use it, then, the moment someone uses it in a way that makes it significantly better than it was before, close the doors on it within 24 hours. It was just a surge of greed. Probably stimulant or nootropic induced, and...
But 4o did release, and it's not much different from other frontier models? It seems like those who pushed for release were right.
Tired of this safety argument; I have yet to hear one concrete example. Voice came out and I've only touched it twice. These safety teams surely live in a bubble trying to decide how the whole world moves forward, smh. Either way, open source will catch up sooner or later.
My guess is this is misleading. Per the PF, leadership decides the model's risk level; any "internal standards" created by the preparedness team are nonbinding. (OpenAI's commitments' weakness is of course unfortunate.)
Pressures of rapid deployment may compromise the rigor of such assessments. Concerns about the model's risks & drive to monetize AI technologies could lead to conflicts in priorities.
This is great. Feels like the first time some of the behind the scenes is actually coming to light. Thanks for sharing.
I'm surprised OpenAI would rush into releasing a model they later deemed too risky. That's quite a scary precedent, especially considering the growing concern over AI regulation and accountability.
4o has been out for a while and yet planet Earth remains intact. Good riddance to the doomers. OpenAI can move faster without them.
The genie is out of the bottle and all of us are on borrowed time. Watch a couple of SV bros end humanity. The single greatest value of AI is to help us figure out new physics, math, chemistry and biology. TaRS is how we get to Mars or the Moon.
It subsequently determined that the most neutered and crippled version of a device that had zero negative consequences was too risky to release, followed immediately by a more powerful model that also had zero negative consequences?
This article basically proves that Sam was right and they were wrong, because 4o hasn't caused any problems whatsoever.
"released the model and then subsequently determined the model was too risky to release" — meanwhile was on here crowing about what a great job they did with the actual release of it while people were (deservedly) trashing the model's performance 🤡
What I see is a bunch of people in disarray. Again, I see another Steve Jobs type in Sam Altman: non-technical but good at manipulating people, who claimed the throne at the next big thing. And if their claim is that 4o is that good, then I'm disappointed. It's much worse and less persuasive than Sonnet.