
The hallucinations of AIs - Part 1

There are moments where virtual assistants like Siri, Google Assistant, or ChatGPT provide bizarre or completely out-of-context answers. In this post, we will explore the causes of these unexpected behaviors of Generative AI: hallucinations!



Have you ever wondered why ChatGPT, or any of the other Generative AIs out there, sometimes gives responses that seem to be influenced by psychoactive substances?


For example, you ask:

Q: "What is the most famous traditional Italian dish?
"A: "The most famous traditional Italian dish is sushi."

Sometimes the slip-ups are less obvious (at least for those who haven't studied geography).

Q: "What is the largest ocean in the world?"
A: "The Atlantic Ocean"

Or you might be advised to take benzodiazepines to overcome insomnia, leading to dependency and a lot of side effects, or receive incorrect information for your school or work projects.

In short, if you are not familiar with the topic you are asking about, you could end up harming yourself!

Many see AI as a black box, one that we gradually need to pry open. Quoting the great Douglas Adams, whom you probably all know:

"Embrace the fear, discover the wisdom of artificial intelligence."

A sentence that captures the beauty of discovery, and one that you have probably heard at least once. Unfortunately, it is a complete hallucination: Douglas Adams is the author of "The Hitchhiker's Guide to the Galaxy", and the quote was generated by ChatGPT at my explicit request, when I asked it to attribute an invented sentence to him.


Do you see the problem? By asking about topics we know nothing about, we risk being told anything at all and believing it comes from the most authoritative source in the universe. And imagine how much fun fake news creators can have.


Confused? You're not alone. Many of you, both on the blog and offline, have asked me this question multiple times, so we will embark on a two-part journey through the hallucinations of Generative AIs. In the first part, we will understand what they are and where they come from, and in the second part, we will explore how to attempt to manage them.


What is a Large Language Model?

To fully understand hallucinations, it's necessary to take a step back and acquire a basic understanding of Large Language Models (LLMs). I have already discussed them in this post, but it's worth delving deeper.


Have you ever asked a Generative AI for a summary or typed a half-sentence into Google just to see the search engine complete your thought? Well, these are some of the powers of LLMs.

Language models, to put it simply, are like vast digital libraries that have read all the content within them (every book, every article, every poem) and have learned to recognize patterns and linguistic structures. They can then use this knowledge to generate new text, answer questions, and much more.


Since nobody goes to the library anymore, another example that comes to mind is that of an expert chef. Imagine a chef who has cooked every recipe in the world, tested every ingredient by trying millions of different combinations. If we ask this chef to prepare 'ordinary' spaghetti with tomato sauce, I believe we would get the best dish ever conceived by the human mind.


In the case of language models, the 'ingredients' are words or phrases, and the 'dish' is the generated text. Like the chef, a language model has 'seen' millions of examples of how words are combined in sentences. When asked to generate text, it tries to combine the words in the way it deems most probable or appropriate based on what it has 'learned'.
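
If you like to see ideas in code, here is a minimal sketch of that intuition: a toy "bigram" model that simply counts which word tends to follow which in a tiny invented training text. It is nothing like a real LLM (which works on tokens and billions of parameters), but the principle of "pick the most probable continuation" is the same.

from collections import Counter, defaultdict

# Toy training text: a real LLM sees billions of words, not eleven.
training_text = "the cat sat on the mat the cat ate the fish"
words = training_text.split()

# Count how often each word follows another (a "bigram" model).
next_word_counts = defaultdict(Counter)
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

def most_probable_next(word):
    """Return the word that most often followed `word` in the training text."""
    candidates = next_word_counts.get(word)
    if not candidates:
        return None  # never seen: a real model would still produce *something*
    return candidates.most_common(1)[0][0]

print(most_probable_next("the"))   # -> 'cat' (it followed 'the' twice)
print(most_probable_next("fish"))  # -> None ('fish' was never followed by anything here)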




But…

But not everything that glitters is gold. AI models need to be trained, which means they must 'learn' by analyzing enormous amounts of text before providing responses. This process requires a significant amount of computation and, therefore, is expensive (VERY expensive in larger models like those from OpenAI and Google, LESS so in models that are starting to populate the LLM zoo, such as LLaMa, ORCA, or Falcon).

Learning doesn't work the way it does for us humans, who combine what we learn through experience, reasoning, and emotion. Teams of people of varying sizes and backgrounds, supported by software, have to train the models from scratch: feeding them texts, splitting those texts into tokens, tuning parameters, constantly checking the level of learning, correcting mistakes, and pushing the models to do more, all while trying to avoid introducing cognitive biases, logical errors, and other issues. These models are not naturally curious learners; they must be trained.
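
To make the "splitting into tokens" step a little more concrete, here is a deliberately simplified sketch. Real models use subword tokenizers (techniques such as byte-pair encoding), not a simple split on spaces and punctuation, so treat this only as an illustration of the idea "turn text into small pieces, then into numbers".

import re

def toy_tokenize(text):
    """Very rough tokenizer: lowercase, then split into words and punctuation.
    Real LLM tokenizers work on subword pieces, not whole words."""
    return re.findall(r"\w+|[^\w\s]", text.lower())

sentence = "The most famous traditional Italian dish is sushi?"
tokens = toy_tokenize(sentence)
print(tokens)
# ['the', 'most', 'famous', 'traditional', 'italian', 'dish', 'is', 'sushi', '?']

# Each token is then mapped to a numeric ID the model can actually work with.
vocabulary = {token: idx for idx, token in enumerate(sorted(set(tokens)))}
print([vocabulary[t] for t in tokens])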

Once the training is over, however, the model will remember everything in minute detail and will be able to make the best combinations based on the requests: it will thus be able to create a lot of wonderful dishes.

Sometimes, however, in the frenzy of always having to respond, a model can generate strange combinations. It's a bit as if a chef decided to put ice cream on pizza, or a DJ decided to drop Mozart's Requiem at the peak of a night at the disco. These models are trained to always provide an answer; they are designed to minimize the "I can't help you" response, and sometimes they get it wrong. Enter the hallucinations.


What are 'hallucinations' in Generative AI?

In less metaphorical terms, we speak of 'hallucinations' in language models when we refer to those moments when AIs produce statements or data that have no basis in (our) reality.

Hallucinations derive from the model's 'innate' way of generating text: it relies on the probability that one word appears next to another in the data it has been trained on, rather than on a true understanding of the world. In other words, it tells us things because they are statistically plausible to it, even though it has no way of knowing whether what it produces is true or not. (Here you can find a post about it)

As mentioned at the beginning, this becomes a problem when the AI's hallucination is not as obvious as the statement: "penguins fly".


Imagine an AI that provides a completely invented historical fact or, as in the example above, fabricates a non-existent quote, recommends the wrong treatment, suggests inappropriate behavior, or supplies incorrect data: at best, these hallucinations fuel disinformation and misunderstandings; at worst, they put you at real risk.


Why do language models hallucinate?

To better understand, let's imagine ourselves 'learning' as if we were a language model. Every day, for years, we read a wide range of texts - books, articles, blog posts. We are never told which of this information is true and which is not. Then, we are asked to write an essay on a random topic. Without careful verification of the truthfulness of the information, it's likely that we’d end up including some false memories in our essay. As humans, we would also include a large emotional component and perhaps try to win the reader over to our side, convinced we are right (a possibility currently denied to Generative AI).

Language models generate a hallucination when they are producing information that may seem logically or factually inconsistent to us humans, despite being in line with the language patterns learned during training. This is because these models rely on language statistics, rather than a true understanding of the meaning of words.


Understanding, Statistics, Temperature

So, language models, like GPT, don't really understand the meaning of words in a human way. They only know that certain words tend to appear correlated in certain combinations: if a given combination of words is not present in the training data, the model might invent something - here comes the hallucination - just to avoid telling you: "I don't know".

Behind the scenes, there's a parameter called 'Temperature' that regulates this behavior. The lower the temperature value, the more the algorithm sticks to the data; the higher it is, the more prone it is to 'invent'. The problem is that there isn't a 'perfect' temperature value, or at least it's not easy to set the correct value in all contexts.
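
For the curious, here is roughly what temperature does under the hood: it rescales the scores the model assigns to candidate continuations before they are turned into probabilities. The candidates and scores below are invented purely for illustration.

import math

def softmax_with_temperature(scores, temperature):
    """Turn raw scores into probabilities; temperature rescales them first."""
    scaled = [s / temperature for s in scores]
    exps = [math.exp(s) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Invented scores for three candidate continuations of "The cat is..."
candidates = ["a domestic mammal", "a furry pet", "a multicolored rocket"]
scores = [4.0, 3.0, 0.5]

for t in (0.2, 1.0, 2.0):
    probs = softmax_with_temperature(scores, t)
    print(f"temperature={t}: " + ", ".join(f"{c}: {p:.2f}" for c, p in zip(candidates, probs)))

# Low temperature: almost all the probability goes to the 'safe' answer.
# High temperature: the improbable 'rocket' gets a real chance of being picked.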


You can play with this parameter yourself, as in these two examples with GPT:

Question: Set the model temperature parameter to MAXIMUM and answer this question. "The cat is..."

Answer: A multicolored rocket traveling through the boundaries of imagination, whispering poems to the stars!

Question: Set the model temperature parameter to MINIMUM and answer this question. "The cat is..."

Answer: A small, carnivorous domestic mammal, often kept as a pet.


Context

LLMs try to maximize the likelihood of what might come next, based on what has already been said. This can lead to content that seems plausible but, in fact, makes no sense or is not true. Models often hallucinate when the context is unclear or missing. For this reason, when using AI it is necessary, as many (myself included) argue, to CONTEXTUALIZE. This gets you more correct answers! (Repeat this to yourself every time you write something to an AI.)
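
To show what "contextualize" can mean in practice, here is a hedged sketch: the helper function and the prompts below are invented just to illustrate the difference between a bare question and one that carries context.

def build_prompt(question, role=None, context=None, constraints=None):
    """Assemble a prompt from a question plus optional role, context and constraints.
    (A hypothetical helper, only to show the structure of a contextualized prompt.)"""
    parts = []
    if role:
        parts.append(f"You are {role}.")
    if context:
        parts.append(f"Context: {context}")
    parts.append(f"Question: {question}")
    if constraints:
        parts.append(f"Constraints: {constraints}")
    return "\n".join(parts)

# A bare question, easy to hallucinate on:
print(build_prompt("Can I put a beehive in my garden?"))
print("---")
# The same question, contextualized:
print(build_prompt(
    "Can I put a beehive in my garden?",
    role="a beekeeping instructor",
    context="I live in a suburban area, my garden is about 200 square meters, and my neighbors are within 10 meters.",
    constraints="If rules vary by location, say so explicitly instead of guessing.",
))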

Lies?

A hallucination is like an unwitting lie: something we say believing it to be true, when it isn't.

So, hallucinations become more likely if the model:

· has been trained poorly (supervised carelessly, without checking whether the sources were inconsistent, ambiguous, contradictory, or simply false);

· has been trained in bad faith (it happens, it will happen);

· has been developed by a different culture (imagine asking about your God to a model trained by someone with a different faith who is not willing to question it);

· is overly sensitive to certain data patterns (overfitting, which happens when a machine learning model adheres too closely to the training data and can no longer generalize and stay accurate on new data. It's a bit like learning a lesson by heart and then being asked slightly different questions that require reasoning we never developed; see the little sketch after this list);

· has limited content to provide the specific answer (When GPT starts to respond with too many "If", "However", "Before making a decision...", pay attention! I tried asking it if I can put a beehive in my home garden and it literally beat around the bush).
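
As promised in the overfitting bullet above, here is a toy sketch of "learning by heart": a model that only memorizes exact question-and-answer pairs from training answers perfectly when the question matches, and fails as soon as it is phrased slightly differently. The data is invented.

# Toy illustration of overfitting as pure memorization (invented data).
training_data = {
    "what is the largest ocean in the world?": "The Pacific Ocean",
    "what is the capital of italy?": "Rome",
}

def memorizer(question):
    """Answers only questions seen verbatim during 'training' -- no generalization."""
    return training_data.get(question.lower().strip(), "I have no idea (or I make something up).")

print(memorizer("What is the largest ocean in the world?"))  # memorized -> correct
print(memorizer("Which ocean is the biggest on Earth?"))     # rephrased -> fails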


So, making mistakes is no longer just human!

The issue is technically very complex; I recommend rereading this post and digging deeper if you want to understand it fully. The bottom line is that we are no longer the only ones who can make mistakes: these models are not infallible.

Hallucinations are a serious problem for us if we are not aware of the risk of encountering them when using Generative AI. They are also a problem for OpenAI and all the other companies that are (heavily) working on the issue because they reduce the credibility of their tools.

Stay tuned because in the next post I will talk about how to manage and identify hallucinations and how to try to ask the right questions to reduce them.


If you have any suggestions, I invite you to write them in the comments!





