AI Hallucinations Might Just Be Fine

As much as hallucinations in large language models might not be something we will ever get rid of, they might well become statistically negligible. A new paper shows the way:

“Specifically, we prove that hallucinations can be made statistically negligible, provided that the quality and quantity of the training data are sufficient.”

Hallucinations are inevitable but can be made statistically negligible. The “innate” inevitability of hallucinations cannot explain practical LLM issues.

Pascal Finette @radical