Large language models (LLMs) like ChatGPT have wowed the world with their capabilities.
But they've also made headlines for confidently spewing absolute nonsense.
Sure, you could argue that everyone should rigorously fact-check anything AI suggests (and I'd agree).
LLMs don't retrieve facts like a search engine or a human looking something up in a database.
Instead, they generate text by making predictions.
"LLMs are next-word predictors and daydreamers at their core," says software engineer Maitreyi Chatterjee.
They generate text by predicting the statistically most likely word that occurs next.
We often assume these models are thinking or reasoning, but they're not.
They're sophisticated pattern predictors, and that process inevitably leads to errors.
They're not sitting there working it out like we would, not really.
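To make the idea concrete, here is a deliberately simplified sketch of next-word prediction. Real LLMs use neural networks trained on vast amounts of text, not a word-pair counter like this, and the corpus and the `predict_next` function below are purely illustrative, but the basic move is the same: continue with whatever is statistically most likely, not with a checked fact.

```python
# Toy illustration of next-word prediction: a simple bigram counter.
# Not how a real LLM works internally, but the same basic idea applies:
# pick the statistically most likely continuation, not a verified fact.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . "
    "the dog sat on the rug . "
    "the cat chased the dog ."
).split()

# Count how often each word follows each other word.
next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the tiny corpus."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))   # 'cat' -- the most common continuation here
print(predict_next("sat"))   # 'on'
```

Notice that the prediction for "the" is just whichever continuation showed up most often in the training text; the model has no notion of whether that continuation is true.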
Another key reason is that they don't check what they're pumping out.
And when they dont know the answer?
They often make something up.
Rather than admitting uncertainty, many AI tools default to producing an answer, whether it's right or not.
Other times, they have the correct information but fail to retrieve or apply it properly.
This can happen when a question is complex, or the model misinterprets context.
This is why prompts matter.
The hallucination-smashing power of prompts
Certain types of prompts can make hallucinations more likely.
We've already covered our top tips for leveling up your AI prompts.
For example, ambiguous prompts can cause confusion, leading the model to mix up knowledge sources.
Being clearer and more specific helps, but more detail isn't always better.
Providing a reference is another effective way to keep AI on track.
Few-shot prompting, where the model is given a series of examples before answering, can also improve accuracy.
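As a rough sketch of what few-shot prompting looks like in practice, the snippet below builds a prompt that shows a couple of worked examples before the real question. The task, wording, and Q/A format are illustrative assumptions, not tied to any particular AI tool.

```python
# Minimal sketch of building a few-shot prompt: show the model a few worked
# examples first, so it has a clear pattern to follow instead of guessing.
examples = [
    ("Convert '3 km' to miles.", "3 km is about 1.86 miles."),
    ("Convert '10 kg' to pounds.", "10 kg is about 22.05 pounds."),
]

question = "Convert '5 litres' to gallons."

prompt_parts = [f"Q: {q}\nA: {a}" for q, a in examples]
prompt_parts.append(f"Q: {question}\nA:")  # leave the answer for the model

few_shot_prompt = "\n\n".join(prompt_parts)
print(few_shot_prompt)
```

The resulting text can then be pasted into, or sent to, whichever chatbot or API you happen to be using.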
Even with these techniques, hallucinations remain an inherent challenge for LLMs.
As AI evolves, researchers are working on ways to make models more reliable.