

On the code completion side, I think it can do like 2 or 3 lines in particular scenarios. You have to have an instinct for "are the next three lines so blatantly obvious it is actually worth reading the suggestion, or should I just ignore it because I know it's going to screw up without even looking".
Very, very rarely do I find prompt-driven coding useful. The exception is code that's pure boilerplate but also very tedious. Like "allow the user to specify these three parameters in this CLI utility", and poof, you get reasonable argv handling pretty reliably.
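To make that concrete, here's a sketch of the kind of argv boilerplate I mean, using Python's stdlib argparse (the parameter names here are made up for illustration, not from any real utility):

```python
import argparse

def build_parser():
    # Hypothetical CLI taking three parameters; exactly the sort of
    # tedious-but-obvious boilerplate an LLM tends to get right.
    parser = argparse.ArgumentParser(description="Example CLI utility")
    parser.add_argument("--input", required=True, help="path to the input file")
    parser.add_argument("--threshold", type=float, default=0.5, help="cutoff value")
    parser.add_argument("--verbose", action="store_true", help="enable verbose output")
    return parser

# Parse a sample command line instead of sys.argv, so this runs standalone.
args = build_parser().parse_args(["--input", "data.txt", "--threshold", "0.9"])
print(args.input, args.threshold, args.verbose)
```

Nothing clever in there, just a dozen lines you'd rather not type by hand.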
Rule of thumb: if a viable answer could be expected during an interview from a random junior applicant, it's worth giving the LLM a shot. If it's something a junior developer could only get right after learning on the job a bit, then forget it, the LLM will be useless.
Problem with the "benchmarks" is Goodhart's Law: once a measure becomes a target, it ceases to be a good measure.
The AI companies' obsession with these tests causes them to maniacally train on them, making the models better at those tests, but that doesn't necessarily map to real-world usefulness. Occasionally you'll see a guy who interviews well but is pretty useless on the job. LLMs are basically that guy all the time, but at least they're cheap and fast enough to be worth it for the super easy bits.