Even if hallucinations in large language models are not something we will ever fully get rid of, they may well become statistically negligible. A new paper shows the way:
“Specifically, we prove that hallucinations can be made statistically negligible, provided that the quality and quantity of the training data are sufficient.”
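For intuition on what “statistically negligible” typically means in results of this kind (a generic PAC-style sketch, not the paper’s actual theorem), the idea is that the probability of a hallucination can be pushed below any fixed threshold given enough sufficiently clean training data:

$$\Pr[\text{hallucination}] \le \varepsilon(n), \qquad \varepsilon(n) \to 0 \text{ as } n \to \infty,$$

where $n$ is the number of high-quality training examples and $\varepsilon(n)$ is a bound that shrinks as data quantity (and quality) grows.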