The ever-brilliant Simon Willison on the challenges of using LLMs as coding assistants (so much for “vibe coding” – a truly idiotic concept, by the way…):
Hallucinations in code are the least harmful hallucinations you can encounter from a model.
The real risk from using LLMs for code is that they’ll make mistakes that aren’t instantly caught by the language compiler or interpreter. And these happen all the time!
[…] Compare this to hallucinations in regular prose, where you need a critical eye, strong intuitions and well developed fact checking skills to avoid sharing information that’s incorrect and directly harmful to your reputation.
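To make his point concrete, here’s a quick sketch of my own (Python; the function names and the bogus datetime.parse_date call are invented purely for illustration). A hallucinated API blows up the instant you run it, while a plausible-looking logic mistake runs without complaint and quietly returns the wrong answer:

```python
import datetime

def days_until_hallucinated(date_string: str) -> int:
    # Hallucinated API: the datetime module has no parse_date function,
    # so the interpreter raises AttributeError the first time this runs.
    target = datetime.parse_date(date_string)
    return (target - datetime.date.today()).days

def days_until_subtle_bug(date_string: str) -> int:
    # Real API, subtle mistake: the off-by-one below runs silently and
    # simply returns a count that is wrong by one day.
    target = datetime.date.fromisoformat(date_string)
    return (target - datetime.date.today()).days + 1
```

Both functions look equally plausible in a code review; only the first one announces itself when you actually run it.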
Read the whole thing; it has some good insights for non-coders as well.