Why does AI hallucinate?
When a model lacks the knowledge to answer, it sometimes guesses incorrectly, resulting in a hallucination.
Conflicting knowledge can also cause variance in the answers a model gives, increasing the chance of AI hallucinations.
Curtis says LLMs are not good at generalizing to unseen information or at self-supervising.
Why is it important to eliminate hallucinations?
As the two New York lawyers found out, AI hallucinations aren't just an annoyance.
As long as AI hallucinations exist, we can’t fully trust LLM-generated information.
She suggests legal contracts, litigation case conclusions, or medical advice should go through a human validation step.
Air Canada is one of the companies that's already been bitten by hallucinations.
And it all begins with the datasets the models are trained on.
Vasyte asserts high-quality, factual datasets will result in fewer hallucinations.
Instead, he suggests companies should use a representative dataset that's been carefully annotated and labeled.
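To illustrate the kind of curation Vasyte describes, here is a minimal Python sketch; the record fields and the verification flag are hypothetical stand-ins, not drawn from any specific pipeline.

```python
# Minimal sketch of dataset curation before training: keep only examples
# that have been labeled and verified by an annotator. The record format
# below is an illustrative assumption.
from dataclasses import dataclass

@dataclass
class TrainingExample:
    text: str
    label: str                # human-assigned label ("" if unlabeled)
    annotator_verified: bool  # True if a reviewer has checked the example

def curate(examples: list[TrainingExample]) -> list[TrainingExample]:
    """Drop unlabeled or unverified examples from the training set."""
    return [ex for ex in examples if ex.label and ex.annotator_verified]

raw = [
    TrainingExample("The Eiffel Tower is in Paris.", "fact", True),
    TrainingExample("Cats can photosynthesize.", "", False),
]
print(len(curate(raw)))  # -> 1: only the verified, labeled example survives
```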
Experts also point to retrieval-augmented generation (RAG) for addressing the hallucination problem.
Outputs from RAG-based generative AI tools are believed to be far more accurate and trustworthy.
Here again, though, companies must ensure the underlying data is properly sourced and vetted.
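As a rough illustration of how RAG grounds an answer in retrieved sources, here is a minimal Python sketch; the document store, keyword-overlap scoring, and prompt format are simplified assumptions, whereas production systems typically use vector embeddings and a real LLM API.

```python
# Minimal RAG sketch: retrieve the most relevant vetted document, then
# build a prompt that tells the model to answer only from that context
# instead of guessing from its parametric memory.
DOCUMENTS = [
    "Air Canada's refund policy allows claims within 90 days of travel.",
    "Retrieval-augmented generation grounds answers in retrieved sources.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    ranked = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Prepend retrieved context so the answer stays grounded in sources."""
    context = "\n".join(retrieve(query, DOCUMENTS))
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

print(build_prompt("What is Air Canada's refund policy?"))
```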
Beregovaya says the human-in-the-loop fact-checking approach is the safest way to ensure that hallucinations are caught and corrected.
This, however, she says, happens after the model has already responded.
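A minimal sketch of what that post-response checkpoint could look like in code follows; the domain list and review queue are hypothetical, standing in for whatever validation workflow a company actually runs.

```python
# Minimal human-in-the-loop sketch: the model has already produced an
# answer, but for high-risk domains (e.g. legal or medical) it is held in
# a review queue for a person to check before anyone sees it.
from typing import Optional

HIGH_RISK_DOMAINS = {"legal", "medical"}
review_queue: list[dict] = []

def handle_response(domain: str, question: str, model_answer: str) -> Optional[str]:
    """Return the answer immediately for low-risk queries; otherwise queue
    it for human validation and return nothing yet."""
    if domain in HIGH_RISK_DOMAINS:
        review_queue.append({"question": question, "answer": model_answer})
        return None  # withheld until a reviewer approves it
    return model_answer

print(handle_response("medical", "Dosage for drug X?", "Take 500 mg twice daily."))  # -> None
print(len(review_queue))  # -> 1: the answer is waiting for human review
```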