AI Hallucinations

Written by Seán McCarthy
Updated today

Have you reviewed a transcript only to find that your AI assistant made something up?

This is usually caused by something called an "AI hallucination", which happens when a generative AI model (like the ones used for conversational AI) produces confident, fluent, yet factually incorrect or fabricated information that isn't grounded in its training data or in the knowledge it has been given.

For example, if you state that your opening hours are 9-5, Monday to Friday, but don't explicitly state that you are closed on the weekend, the AI model might sometimes guess that you are also open 9-5 on the weekend.

Because of the way these generative AI models work, the rate of hallucinations should decrease with newer models, but they will likely never disappear completely.

Our best advice is to add as much knowledge as you can, so that your AI assistant has a list of the "right answers" to call upon and the AI models are less likely to try to fill gaps in their knowledge. In the opening hours example above, explicitly adding "We are closed on Saturday and Sunday" leaves no gap for the model to guess at. You will always encounter new scenarios you hadn't considered before (as you would with a human assistant), so you should continually test and refine the instructions and knowledge your assistant has from your dashboard.