Really been confused about the level of surprise on here re GPT-5. Capabilities seem perfectly consistent with widely reported trends on decreasing returns to scaling, internal challenges at OpenAI, etc.
Well
Quote
rohit
@krishnanrohit
This might be heresy, but the delta from GPT2 to GPT3 seems much wider than GPT3 to GPT4. It's getting more usable, but I'm not sure if the exponential improvement in capability hypothesis holds true any longer.
People have been saying this for a long time too. In fact, a lot of researchers have been saying it: LLMs don't just scale into AGI.
Well... he turned out to be wrong. GPT-5 is miles ahead of GPT-4. It's just not that much better than o3, but o3 only launched in April 2025.
Either it's something along the lines of limits on input (feeding it the "sum of human knowledge" includes a crap ton of garbage), or specifically that LLMs cannot reach AGI because they're just digital library engines, or both.
It was stupid obvious.
When they kept running different iterations of ChatGPT-4, and the same with all the other then-current models, you knew that meant they truly didn't know how to make the next big breakthrough.
Negative second derivative.
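The quip above is the thread's claim in one line: the jumps between generations are themselves shrinking. A minimal sketch with purely hypothetical capability scores (illustrative numbers, not real benchmarks) shows what a negative discrete second difference looks like:

```python
# Hypothetical capability scores per generation (illustrative only,
# not actual benchmark results): GPT-2, GPT-3, GPT-4.
scores = [20, 55, 75]

# First differences: the size of each generational jump.
first = [b - a for a, b in zip(scores, scores[1:])]   # [35, 20]

# Second difference: how the jump size itself changes.
second = [b - a for a, b in zip(first, first[1:])]    # [-15]

print(second)  # negative: each jump is smaller than the last
```

A negative second difference is consistent with capability still rising, just at a decelerating rate, which is the distinction the thread keeps arguing over.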
o1 and reasoners don't release for another year after he says this, and gpt3->gpt4 was objectively kinda whatever. Nice, but definitely not a "holy shit, we're looking at something new here" the way everything up to that point was.
I don't think people realize that GPT-5 being free, cheaper in API than 4, and better quality *is* a breakthrough
OpenAI’s GPT-5 is available today in Azure AI Foundry.
This new suite of first-class reasoning models from OpenAI delivers the most intelligent and safe responses of any model we have released to date: perfect for building AI applications and agents.
GPT is at the top of the S-curve because we ran out of data.
We need another technology breakthrough to improve the LLM.
The very obvious plateau from GPT-2 -> 3 -> 4, along with quadratic time complexity as parameter count scales, that everyone not blinded and deafened by marketing hype saw?
Same thing he saw saying the internet will be a fad.
Only difference is that this man is able to change his opinion fast, as soon as he realizes that he is wrong.