A damning study from Columbia University, analyzing AI search engines' accuracy and their ability to cite sources:
Collectively, they provided incorrect answers to more than 60 percent of queries.
“More than 60 percent of queries” is pretty abysmal. It gets worse:
Most of the tools we tested presented inaccurate answers with alarming confidence, rarely using qualifying phrases such as 'it appears,' 'it's possible,' 'might,' etc., or acknowledging knowledge gaps.
On top of this, AI search engines have also clearly indexed material they were not supposed to (or, more precisely, not allowed to) access.
Perplexity Pro was the worst offender in this regard: it correctly identified nearly a third of the ninety excerpts from articles it should not have had access to, which it could only have done by crawling content publishers had blocked.
It’s bad. Here is the study.