AI chatbots can ‘hallucinate’ and make things up

When you hear the word "hallucination," you may think of hearing sounds no one else seems to hear, or imagining your coworker has suddenly grown a second head while you're talking to them.

But when it comes to artificial intelligence, hallucination means something a bit different.

When an AI model "hallucinates," it generates fabricated information in response to a user's prompt, but presents it as if it's factual and correct.

Say you asked an AI chatbot to write an essay on the Statue of Liberty. The chatbot would be hallucinating if it stated that the monument was located in California instead of New York.

But the errors aren't always this obvious. In response to the Statue of Liberty prompt, the AI chatbot may also make up the names of designers who worked on the project or state that the monument was built in the wrong year.