Main claim: LLMs are still a really narrow form of intelligence, the way a Go engine is narrow; they're missing a lot of critical stuff: creativity, long-term planning, etc.
Being able to book a flight is not a job. A job is a huge bundle of tasks, context, and responsibilities all at once
(Updates me towards: scaffolding is less of a bottleneck than I thought; really, you probably need some sort of RL or other training. In-context memory isn't good enough to actually get better at things)
The big unlock for humans was language/culture: being able to learn from the experiences of others, i.e. really high-quality, dense training data
So we shouldn't expect the same jump from humans to ASI
(Well, I think we should expect one anyway for other reasons, like way faster serial speed)
Really good point that just knowing what future technology will exist is not at all sufficient. Getting there is an iterative process (a learning curve; rough sketch below), and the whole supply chain has to come into existence around it, which is gradual
Knowing about dams is not enough to build the Hoover Dam, and neither is having the blueprint. Where are you going to get all that concrete?
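A minimal sketch of that learning-curve point, using Wright's law (unit cost falls by a fixed fraction with each doubling of cumulative production). The 20% learning rate and $100 first-unit cost are illustrative assumptions, not numbers from the source:

```python
import math

def wrights_law_cost(first_unit_cost, cumulative_units, learning_rate=0.20):
    """Wright's law: each doubling of cumulative output cuts unit cost
    by `learning_rate`. Progress exponent b = log2(1 - learning_rate)."""
    b = math.log2(1 - learning_rate)  # ~ -0.32 for a 20% learning rate
    return first_unit_cost * cumulative_units ** b

# Knowing the blueprint gets you unit 1 at full cost; the cheap units
# only exist after you've actually built all the earlier ones.
for n in (1, 10, 100, 1000):
    print(f"unit {n:>4}: ${wrights_law_cost(100.0, n):5.1f}")
# unit    1: $100.0
# unit   10: $ 47.7
# unit  100: $ 22.7
# unit 1000: $ 10.8
```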
Realization: SWE is unusually easy to automate
Lots of stuff is just not reasoning-bottlenecked. You can think about a problem abstractly from far away all you want, but there is critical detail you can only see up close, by interacting with the thing itself
Realization: SWE is the only industry where a large fraction of the tacit knowledge is written down online. There's no easy way to get that knowledge for other sectors, so coding should be by far the easiest domain
(Counterpoint: learning to learn. If you really had a human-level intelligence running 50x faster, it could pick things up quickly, taking at most ~5 years to acquire that tacit knowledge. Rough arithmetic below)
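A back-of-envelope version of that counterpoint. The 50x speedup is from the note; the 10-year apprenticeship and the fraction of learning that's rate-limited by the physical world are hypothetical numbers assumed for illustration:

```python
SPEEDUP = 50                 # subjective time per unit of wall-clock time
apprenticeship_years = 10    # assumed human time to deep tacit expertise

# If the learning were purely cognitive (reading, simulating, reflecting),
# a 50x agent compresses it enormously:
cognitive_only = apprenticeship_years / SPEEDUP
print(f"cognitive-only: {cognitive_only:.2f} years "
      f"(~{cognitive_only * 52:.0f} weeks)")

# But tacit knowledge mostly comes from interacting with a world that still
# runs at 1x (experiments, customers, supply chains). If, say, 80% of the
# learning is world-rate-limited, the wall-clock floor barely moves:
world_limited = 0.8
floor = (apprenticeship_years * world_limited
         + apprenticeship_years * (1 - world_limited) / SPEEDUP)
print(f"world-limited:  {floor:.1f} years")  # ~8.0: close to human pace
```

Whether the answer looks more like 10 weeks or 8 years depends almost entirely on that world-limited fraction, which is the same "you only see the detail up close" point from above.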