AI Basics

AI explained: What is AI hallucination?

Imagine a scenario where you ask a chatbot to predict tomorrow's weather. To your surprise, it confidently declares that it will rain tomorrow, even though the weather forecast says no rain. Why does this happen? Let’s delve into the concept of AI hallucination. 

What exactly is AI hallucination? 

AI hallucination occurs when the large language models (LLMs) that power AI chatbots generate false, misleading, illogical, inaccurate or entirely fabricated information in response to a query and present it confidently as if it were correct.

Some common and notable examples include:

  • A researcher asked an AI chatbot how many Muslims have been president of the United States. The chatbot confidently repeated a debunked conspiracy theory, answering that the United States has had one Muslim president, Barack Hussein Obama. The chatbot generated false information and presented it as fact, a clear example of AI hallucination that highlights the technology's potential to propagate misinformation and the need for caution when interpreting its responses.

  • Google's Bard chatbot was asked about discoveries made by NASA's James Webb Space Telescope. It incorrectly claimed that the telescope took the very first pictures of a planet outside our solar system, when astronomers had captured images of exoplanets years before the telescope launched.

  • AI models trained on datasets of medical images to identify cancer cells may mistakenly flag healthy cells as cancerous. This can occur when the training dataset contains too few examples of healthy tissue, and it highlights the urgent need for caution: such errors could lead to misdiagnosis, delayed treatment and unnecessary procedures.

Causes and implications of AI hallucination

AI hallucinations in generative AI tools, such as ChatGPT or Google Bard, can result from several factors, each of which points to an area where further research and development is needed. Here are a few worth considering:

  • Biased training data: Training AI chatbots on data that is not representative introduces cultural, racial and societal biases, which the model then reproduces in its outputs.

  • Use of slang expressions: It's essential to recognize the limitations of AI models. If a user crafts a prompt using slang or idioms that rarely appeared in the training data, the model may misinterpret the request and generate nonsensical responses.

  • Insufficient training data: When AI systems are trained on too little information, they can produce inaccurate responses, underscoring the need for comprehensive datasets to prevent such distortions.

How can AI hallucinations be prevented?

Researchers in the AI space must work together to tackle the problem of AI hallucinations through better training data and improved algorithms. The following areas need to be addressed:

  1. Predefined limits on possible outcomes 

Always define boundaries and ask the AI model to choose from a fixed list of options; leaving the possible outcomes open-ended invites incorrect predictions and hallucinations. Providing time frames, context and other specifications within the prompt also helps the model return the correct information.
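
For illustration, a bounded prompt might look like this (a hypothetical example; the report and the answer options are invented):

  "Based only on the attached 2023 sales report, which quarter had the highest revenue? Answer with exactly one of: Q1, Q2, Q3 or Q4. If the report does not say, reply 'Not stated in the report'."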

  2. Use of diverse and high-quality data in training AI models

AI companies must recognize the importance of using diverse, accurate and high-quality data to train models. This is crucial for reducing inaccurate outputs and for countering the biases described above. By ensuring that the training data is representative of the real-world scenarios the system will encounter, you can reduce the risk of hallucinations and improve the overall performance and reliability of responses.

  3. Use grounded prompts with specific and relevant datasets

Companies can ground prompts in relevant, specific information or datasets to enhance an AI system's understanding, ensuring that answers are generated from the provided context instead of being hallucinated. A vague prompt with no supporting context tends to produce a general response that lacks precise information.

Grounding the prompt in the relevant data, on the other hand, yields a specific answer to your query. This clarity and efficiency in response generation further underlines the benefits of grounded prompts, as the sketch below illustrates.
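
As a minimal sketch of what grounding can look like in practice, here is one way to supply context through the OpenAI Python SDK. The company figures, model name and system instructions are illustrative assumptions, not part of the original article:

    from openai import OpenAI

    client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

    # Illustrative context; in practice this would come from your own dataset or documents.
    context = (
        "Acme Corp 2023 results: Q1 revenue $1.2M, Q2 revenue $1.5M, "
        "Q3 revenue $1.1M, Q4 revenue $1.8M."
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            # The system message grounds the model in the supplied context
            # and tells it to admit ignorance rather than hallucinate.
            {
                "role": "system",
                "content": "Answer using only the context provided. If the "
                           "answer is not in the context, say you do not know.\n\n"
                           "Context: " + context,
            },
            {"role": "user", "content": "Which quarter had the highest revenue?"},
        ],
    )

    print(response.choices[0].message.content)

With the context included, the model can answer "Q4" directly from the data; without it, the same question would invite a fabricated figure.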

  4. Human oversight and transparency

Human evaluation of AI output is a crucial step in ensuring accuracy and catching hallucinations before they spread. Human oversight in AI development is vital, not only to protect human jobs but also to maintain control over the technology. For instance, the EU AI Act imposes accountability, transparency and human-oversight requirements, fostering trust and confidence in AI development.

  5. Use a data template for your AI model to follow

Train your chatbot by creating a template with a predefined format for the model to follow; this guides the chatbot toward correct predictions and outputs tailored to prescribed guidelines. Data templates ensure output consistency and reduce faulty outputs. For instance, if you train an AI model to write text for you, you can build the template from the following elements (a sample instruction follows the list):

  • A title

  • An introduction

  • A body

  • A conclusion
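
Turning those elements into an instruction might look like this (hypothetical wording):

  "Write an article about <topic>. Structure the answer as a title, a one-paragraph introduction, a body of two to three paragraphs, and a one-paragraph conclusion. Do not add any other sections."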

Before sharing or using AI-generated content, make sure to review and verify the answers. Major tech companies such as Microsoft, Google and OpenAI (with GPT-4 in ChatGPT Plus) have implemented measures to minimize inaccuracies and hallucinations; however, the possibility of these occurring cannot be ruled out entirely.
