Conversation
Most people who claim AI just "guesses the next word" are mostly suffering from cognitive dissonance. They can't handle the reality that actual human intelligence boils down to mere inference.
we doin this meme but instead of applying new inventions to the world we're applying them to ourselves
But they are clearly literally just statistics! Stochastic parrots and all!
Truly a masterpiece. We really do have terrible wetware when it comes to compute, ram and even hard disk space. Our power supply is decent though; many possible sources.
Without using an external calculation tool, almost all LLMs find it very difficult to multiply or even add two large numbers. This is because an LLM cannot process a whole large number at once: when we provide a large number, it gets broken down into multiple tokens.
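A minimal sketch of the effect described above. This toy chunker is not a real BPE tokenizer (the function name and chunk size are made up for illustration); it only mimics the way production tokenizers tend to split a long digit string into several multi-digit pieces rather than one token, so the model never sees the number as a single quantity.

```python
def toy_tokenize_number(s: str, chunk: int = 3) -> list[str]:
    """Split a digit string into chunks of up to `chunk` digits,
    mimicking how a subword tokenizer fragments large numbers."""
    return [s[i:i + chunk] for i in range(0, len(s), chunk)]

# The model receives four separate pieces, not the quantity 123456789012:
print(toy_tokenize_number("123456789012"))  # ['123', '456', '789', '012']
```

Since addition with carries has to propagate across those token boundaries, arithmetic on the fragments is a harder learning problem than it looks from the outside.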
Perhaps I'm mistaken, but this looks like a strawman fallacy to me. The example given conflates the ability to generalize with computational horsepower.
I don't think anyone reasonable would doubt that increasing the context window can enable the application of learned patterns at larger scales.
The article is satire! It's turning around common complaints about why LLMs aren't ever going to work, and showing that they apply to humans :)
Hiring. Engineers in Palo Alto.
+ hardcore team
+ ships lots of code
+ comp ≥ 2x last role
Never seen this so succinctly expressed. This is an argument I keep trying to make. The "intelligence" is knowing the limit, knowing the problem domain, identifying a solution and executing.