Seven out of 13 experts said OpenAI's response was at or near the level of an experienced professional. Ten compared it to an intern or entry-level worker. People were not as impressed with Google's responses.
Overall, 16 out of 19 of my expert judges thought OpenAI's response was better, compared to only three who thought Google's was better.
Read some of the experts' thoughts and my analysis here.
Those were not up yet when I started collecting data. Will put it on my to-do list.
Not making excuses for Google here, but I'm not sure this is a relevant comparison. Gemini Deep Research is free for anyone with a Workspace or Google One subscription, or $20 a month, while OpenAI's is $200 a month. Plus, Google's Deep Research product runs on a model from May 2024 (Gemini 1.5).
What base model does Google's Deep Research use? OpenAI uses o3, right?
OpenAI uses a fine-tuned version of o3, yes. I would expect Google's is based on a fine-tuned Gemini 1.5 Pro (it came out too early to be based on Gemini 2), but I don't know whether they've officially said so.
Google's is based on Gemini 1.5 Pro, so I think this will change once they update the model and the method they use to extract information from it.
10+ hours? Sounds like a long coffee break.
Makes sense they weren't included given recency but would be interesting to see comparisons alongside Perplexity and Grok's offerings
I don't necessarily dispute your overall point, but I'd push back a bit on the usefulness of school advisory doc. First, designing an advisory is really outside of the regular work of a classroom teacher. Second, the quality of edu research is so variable it's hard to assess.
GPT 4.5 + interactive comparison :)
Today marks the release of GPT-4.5 by OpenAI. I've been looking forward to this for ~2 years, ever since GPT-4 was released, because this release offers a qualitative measurement of the slope of improvement you get out of scaling pretraining.
welcome, gpt-4.5
i've spent a lot of time playing with this model recently, and it's left me feeling the agi
some thoughts