This article is full of bombshells. Excellent reporting by .
The biggest one: OpenAI rushed testing of GPT-4o (already reported), released the model, and then determined the model was too risky to release! I had a scenario like this in a forthcoming Conversation piece, as a hypothetical relayed to me by someone who used to work at OpenAI, but it turns out it actually already happened, according to this reporting. Because all of this is governed by voluntary commitments, OpenAI didn't violate any law...
wsj.com/tech/ai/open-a
though it seems like a clear violation of the spirit of the voluntary commitments at least.
Other new stuff: SamA and other execs begged Ilya to come back, he seemed like he would, then execs rescinded the offer. These details aren't super surprising, but it's by far the...
most we've heard about the situation. I guess it's a bit surprising they initially tried to bring him back given that he publicly lost confidence in the CEO.
We also got some new details about John Schulman's dissatisfaction. Schulman seems very serious about AI safety...
based on his appearance on Dwarkesh Podcast.
Next is some straight business gossip: Greg Brockman reportedly annoyed people so much that Sam asked him to leave. Brockman has been the exec most steadfastly loyal to Sam...
Last bit: the new funding deals have not closed. This could explain the timing of the departures of Mira Murati and the other two execs. Abruptly losing your CTO and two other senior people could spook investors. Idk what is motivating these choices, but if Murati wanted to...
hurt Sam, leaving without warning right now is a pretty effective way of doing it. Murati did push for Sam's reinstatement after the coup, but she also raised concerns about Sam's behavior to the board: nytimes.com/2024/03/07/tec ...
This is obviously speculative, but it explains the situation pretty well IMO. She and the other execs could have stuck it out for 2 more weeks so the deals closed, or told Sam privately and announced after the deals closed. The fact that they didn't is noteworthy.
This is all coming out in the final days before California's governor has to decide on SB 1047, the AI safety bill that has riven Silicon Valley. His comments indicated he was leaning toward a veto, but a lot has happened since then (big Hollywood stars coming out in force for the bill, which I just covered for The Verge, and now these revelations about OpenAI).
This is blowing up!
Consider subscribing to my substack: garrisonlovely.substack.com
If they think 4o was too dangerous then it only makes me disbelieve their internal benchmarks all the more
I think the category matters a lot (like the biorisk upgrade is significant imo) but the larger point is that they had a safety process and didn't stick to it.
...wow. That's a lot for the company to be dealing with - I evidently hadn't been cynical enough in my read of the company's situation.
Quote: David Manheim (@davidmanheim), replying to @TheZvi:
I wonder if those jumping ship are concerned that OpenAI is about to be buried in lawsuits as Sam tries to cash in...?
the “biggest” bombshell is a huge joke considering 4o’s real world performance relative to competition.
the interpersonal drama is much spicier by comparison.
The underlying risks matter way less than the process and commitment to safety in this case.
Thanks for digging up genuinely new and newsworthy stuff about OpenAI. I know it's not easy!
If gpt-4o is so risky surely something should have happened by now since it's been out for a bit.
What's happened?
Nothing, as usual.
Can we cite even a single meaningful incident with the model that signifies real risk?
Nope.
This safety thing sounds like drivel... people are making a big deal out of nothing as of today. Musk and Grok aren't worrying as much, and they're getting good things done quickly.
Wow, this is awesome! It's refreshing to finally get a glimpse behind the scenes. I appreciate you sharing this
They rushed it, and it's still inferior to Claude, with more alignment issues than Claude. Yikes.
It seems unethical to open source an AI so people can use it, and then, the moment someone uses it in a way that makes it significantly better than it was before, close the doors on it within 24 hours.
It was just a surge of greed. Probably stimulant- or nootropic-induced, and...
But 4o did release, and it's not much different from other frontier models. It seems like those who pushed for release were right.
Tired of this safety argument; I'm still to hear one concrete example. Voice came out and I've only touched it twice. These safety teams surely live in a bubble, trying to decide how the whole world moves forward, smh. Either way, open source will catch up sooner or later.
My guess is this is misleading. Per the PF, leadership decides the model's risk level; any "internal standards" created by the preparedness team are nonbinding. (OpenAI's commitments' weakness is of course unfortunate.)
The silliest part is that o1 was too risky. Like lol really? It’s not even that strong.
A lot of speculation with one-sided bias.
One bad thing Sam seemingly did was succumbing to the government.
I'm surprised OpenAI would rush into releasing a model they later deemed too risky. That's quite a scary precedent, especially considering the growing concern over AI regulation and accountability.
4o has been out for a while and yet planet Earth remains intact. Good riddance to the doomers. OpenAI can move faster without them.
The genie is out of the bottle and all of us are on borrowed time. Watch a couple of SV bros end humanity.
The single greatest value of AI is to help us figure out new physics, math, chemistry and biology.
TaRS is how we get to Mars or the Moon.
Why haven't they recalled GPT-4o? Isn't there some way for them to cease GPT-4o services until it can be made safe?
lmao, you could release a model 10x as capable safely
LLMs are not the threat that a superintelligence is
It subsequently determined that the most neutered and crippled version of a model that had zero negative consequences was too risky to release, followed immediately by a more powerful model that also had zero negative consequences?
You can't tell from the headline whether it's just the usual slop.
This article basically proves that Sam was right and they were wrong, because 4o hasn't caused any problems whatsoever.
"released the model and then subsequently determined the model was too risky to release" meanwhile was on here crowing about what a great job they did with the actual release of it while people were (deservedly) trashing the models performance 
What I see is a bunch of people in disarray. Again, I see another Steve Jobs in Sam Altman: non-technical but good at manipulating people, who claimed the throne at the next big thing.
And if their claim is that 4o is that good, then I'm disappointed. It's much worse and less persuasive than Sonnet.
Why am I not surprised by Brockman's character? You can literally feel it through his tweets.
What did Ilya see? He saw a real threat to the open source project - Sam Altman.
This just proves how wrong and misguided the safetyists are. GPT-4o too dangerous? lol.