Superforecasters and subject matter experts tend to disagree about existential AI risk (subject matter experts are more worried), and this is a great exploration of the cruxes of that disagreement. forecastingresearch.org/s/AI…

Mar 11, 2024 · 6:22 PM UTC

Replying to @mattyglesias
This is not universally true of superforecasters. The forecasting group @SamotsvetyF has given significantly higher estimates of AI risk than the forecasters in this study, more in line with the AI experts. forum.effectivealtruism.org/…
Replying to @mattyglesias
The big headline news here for me (a super who participated in the original supers' x-risk forecasting tournament) is that if you extend the timeline out to 1,000 years, supers actually see a big (30%) chance of a disastrous outcome from AI after all.
Replying to @mattyglesias
When was the last time a computer expert was right about something?