Still a couple of studies left, as Rem points out (Kim et al is no longer online, probably as it is under review).
I wrote about the effects of AI on skill inequality back in 2023. I think the possibilities still hold up.
It depends on the task and how performance is measured.
I still think LLMs make smart people smarter and dumb people dumber; at least when it comes to real-world problems.
It's a U curve. Left and right tails get a big boost. Middle gets the least out of it.
My hunch is that lowest performers and highest performers are currently getting the biggest boost.
Low performers get a big boost from naive adoption: delegating their work to the LLM. Top performers employ smart adoption: augmenting their work and knowing when to use it and when not to.
What about high performers using LLMs to go cross-domain? Like a backend Python developer suddenly becoming world-class at UX/UI much faster.
Typically top performers have glaring weaknesses outside of their top strengths. Especially in sales and marketing.
AI can easily offset those weaknesses to create a new baseline for top performers.
This is what I have seen in 20+ company transformations
It makes sense because AI expands the available information space by an order of magnitude, just as written language and the printing press did before it. This liminal space between the order of information (Bayesian theorem) and the disorder of chaos theory (entropy) is where creativity/consciousness emerges.
is "multiplier" the right lens for high performers & a.i.?
it's like asking how a starship makes a horse faster
the vehicle changes the destination itself
they're not just accelerating to old finish lines
they're charting courses to entirely new ones
I suppose it’s human tradition to rank, categorize, and differentiate ourselves from each other intellectually, but I think these kinds of studies are useless in the face of rapidly advancing LLMs; not to mention bearish on both ASI and AGI as well
is there some sort of barrier to a proper study? Fewer "high" performers?
I think it’s much trickier to measure (and will be harder and harder going forward as everyone gets used to offloading some tasks to the LLMs).
I am in the "it makes smart people smarter" camp. But to go "Elon smart" you need a big IT/AI team, lots of specialized data, and millions to invest. Finally, first-mover advantage and luck will play a big role too. I think 1% of the top 1% go "Taylor Swift".
There is a trade-off with context switching that should be considered.
Still early days
As a chemist, and someone who is pretty keen on AI, I’ve struggled to find chemical-technical things the LLMs excel at. Great for writing, but useless at ideation and technical implementation. That’s why I was pretty sure that paper was faked.
Sounds like we've built glorified spelling checkers. Until we see ROI for top performers, it's just a pricey auto-correct for the masses.
I imagine this is going to be very task dependent. Take coding for instance.
Building a small web app that would have taken 3 days: low performers see a much larger level up.
Building a production web app that would have taken 6 months: high performers see a larger level up.
Everyone’s a low performer at something. Usually high performers are very undercompensated in other areas, because they’ve made hard choices and prioritized very few things.
Imagine going to an office in the 80s and putting a PC w/ spreadsheet software on every employee desk w/ no training. You'd prob see a company wide decrease in productivity, including amongst low & high performers. It might take time for actionable data from broad surveys.
What this tells us is that LLMs have not yet exceeded the capability of high performers, but do exceed the capability of low performers.
That is hardly a new observation.
Doesn't your jagged frontier working paper say above-average performers benefit? What's your/the definition of a high performer? Even for experts, a non-expert assistant helps.
Google's AlphaEvolve seems related.