
Preventing AI Hallucinations with Effective User Prompts

Publication Date: 19 Dec 2024
WHAT?

AI hallucinations occur when an LLM generates information that is not grounded in real-world facts or evidence. This can include fictional events, incorrect data, or irrelevant outputs.

WHY?

Learn to create effective prompts that can help AI generate accurate and reliable content.

EFFORT

Less than 15 minutes of reading.

1 What causes AI hallucinations?

The most common causes of hallucinations are:

  • Ambiguous prompts. Vague queries can lead to random or inaccurate answers.

  • Lack of clear context. When the language model lacks context, it can fabricate answers.

  • Long generation length. The longer the generated response, the higher the chance that hallucinations occur.

  • No retrieval-augmented process. LLMs without access to external sources, such as databases or search engines, can produce errors when they need to generate specific information.

2 How can I prevent AI from generating hallucinations?

You can help the AI model generate more reliable and precise content by creating effective prompts, a practice called prompt engineering. This section outlines several techniques for writing good prompts, illustrated with real-life examples.

2.1 Set clear expectations

The clearer the prompt, the less the LLM relies on assumptions or creativity. A well-defined prompt guides the model toward specific information, reducing the likelihood of hallucinations.

Techniques
  • Use specific language that guides the model.

  • Focus on known data sources or real events.

  • Request summaries or paraphrasing from established sources.

Example
  • Ambiguous prompt: Tell me about space.

  • Clearer prompt: Give me a summary of NASA's recent Mars missions, including factual details from their official reports.

Example
  • Ambiguous prompt: What is quantum computing?

  • Clearer prompt: Explain the basic principles of quantum computing, specifically how qubits work compared to classical bits.
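
The following sketch shows how such a prompt could be sent programmatically. It assumes a locally hosted, OpenAI-compatible chat endpoint (for example, the one Ollama exposes at /v1/chat/completions); the URL, model name, and the ask() helper are illustrative assumptions, not something this article prescribes.

    import requests

    # Hypothetical OpenAI-compatible endpoint; adjust the URL and
    # model name to match your own deployment.
    API_URL = "http://localhost:11434/v1/chat/completions"
    MODEL = "llama3"

    def ask(prompt: str, **params) -> str:
        """Send a single-turn chat request and return the model's reply."""
        body = {
            "model": MODEL,
            "messages": [{"role": "user", "content": prompt}],
            **params,
        }
        response = requests.post(API_URL, json=body, timeout=60)
        response.raise_for_status()
        return response.json()["choices"][0]["message"]["content"]

    # A vague prompt invites the model to improvise.
    print(ask("Tell me about space."))

    # A specific prompt narrows the answer space and reduces guesswork.
    print(ask(
        "Give me a summary of NASA's recent Mars missions, "
        "including factual details from their official reports."
    ))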

2.2 Break down complex prompts

Break down complex or broad prompts into manageable pieces. This keeps the language model focused on a narrower scope and reduces the chance of hallucination.

Example
  • Complex query: Explain AI and how it can change the world.

  • Broken down prompt: What are the most recent advancements in AI? How are these advancements being applied in the healthcare industry?
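
One way to apply this in code is to issue the narrower questions in sequence and feed each answer into the next prompt, so the model never has to cover everything in one sweep. The sketch below repeats the hypothetical ask() helper and endpoint from the Section 2.1 sketch.

    import requests

    API_URL = "http://localhost:11434/v1/chat/completions"  # hypothetical endpoint

    def ask(prompt: str) -> str:
        # Same minimal helper as in the Section 2.1 sketch.
        body = {"model": "llama3", "messages": [{"role": "user", "content": prompt}]}
        return requests.post(API_URL, json=body, timeout=60).json()["choices"][0]["message"]["content"]

    # First sub-question: establish the facts.
    advancements = ask("What are the most recent advancements in AI?")

    # Second sub-question: build on the first answer instead of
    # asking one broad question about AI changing the world.
    applications = ask(
        "Based on the following list of AI advancements, explain how they "
        f"are being applied in the healthcare industry:\n\n{advancements}"
    )
    print(applications)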

2.3 Use retrieval-augmented generation (RAG)

When crafting prompts, encourage the model to retrieve relevant information instead of generating from scratch. Integrating a RAG system allows the LLM to query a specific database or resource.

Techniques
  • Include context cues, for example, "Based on the following document" or "From the official website", to point the model toward facts.

  • If using a tool like Milvus or ChromaDB, structure your prompt to refer to specific collections or documents. This reduces hallucination by grounding the LLM in real data.

Example
  • Prompt without RAG: Tell me about the company's AI products.

  • Prompt with RAG: Based on the technical-info collection in Milvus, provide details about the company's AI product line.
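
A minimal sketch of this grounding pattern, using ChromaDB as the vector store; the collection name, documents, and question are made up for illustration, and the same idea applies to Milvus.

    import chromadb

    # In-memory ChromaDB instance with a hypothetical collection.
    client = chromadb.Client()
    collection = client.get_or_create_collection("technical_info")
    collection.add(
        ids=["doc1", "doc2"],
        documents=[
            "Product A is an on-premises AI platform with GPU scheduling.",
            "Product B adds retrieval-augmented chat over internal documents.",
        ],
    )

    # Retrieve the passages most relevant to the user's question.
    results = collection.query(query_texts=["company AI product line"], n_results=2)
    context = "\n".join(results["documents"][0])

    # Ground the prompt in the retrieved text instead of the model's memory.
    prompt = (
        "Based on the following documents from the technical_info collection, "
        "provide details about the company's AI product line. "
        "Use only the information given.\n\n"
        f"{context}"
    )
    print(prompt)  # send this to the LLM as in the earlier sketches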

2.4 Constrain the output

Limit the length or scope of the language model's response. Shorter, more direct answers reduce the chances of the model drifting off-topic or hallucinating extra details.

Technique
  • Use token or word limits where possible to constrain the output length.

Example
  • Unconstrained prompt: Give me a detailed report on quantum mechanics.

  • Prompt with limited output: In 100 words or fewer, explain the main concept of quantum entanglement.
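
Most chat APIs also let you enforce a hard cap alongside the instruction in the prompt. Assuming the same hypothetical OpenAI-compatible endpoint as above, a max_tokens parameter bounds the response length; note that tokens are not words, and roughly 130 to 150 tokens cover 100 English words.

    import requests

    API_URL = "http://localhost:11434/v1/chat/completions"  # hypothetical endpoint

    body = {
        "model": "llama3",
        "messages": [{
            "role": "user",
            "content": "In 100 words or fewer, explain the main concept "
                       "of quantum entanglement.",
        }],
        # Hard server-side cap on response length; the prompt's own
        # "100 words or fewer" keeps the model concise within that cap.
        "max_tokens": 150,
    }
    reply = requests.post(API_URL, json=body, timeout=60).json()
    print(reply["choices"][0]["message"]["content"])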

2.5 Prompt for verification

You can structure prompts to ask the LLM for clarification or to cite the source of its statements. This leads the model to produce more grounded and reliable responses.

Examples
  • Where did you find this information?

  • Verify this answer against known historical facts about the event.
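
A simple way to automate this is a two-pass exchange: first request the answer, then ask the model to check it and name its sources. The sketch below repeats the hypothetical ask() helper from the Section 2.1 sketch; the question is illustrative.

    import requests

    API_URL = "http://localhost:11434/v1/chat/completions"  # hypothetical endpoint

    def ask(prompt: str) -> str:
        body = {"model": "llama3", "messages": [{"role": "user", "content": prompt}]}
        return requests.post(API_URL, json=body, timeout=60).json()["choices"][0]["message"]["content"]

    # Pass 1: get the answer.
    answer = ask("When did the Apollo 11 mission land on the Moon?")

    # Pass 2: ask the model to check the answer and name its sources.
    review = ask(
        "Verify the following answer against known historical facts about "
        "the Apollo 11 mission, state where the information comes from, and "
        f"flag anything you cannot confirm:\n\n{answer}"
    )
    print(review)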

2.6 Use chain-of-thought (CoT) prompting

By guiding the model through logical steps, you can control the reasoning path and help the model arrive at accurate conclusions. This method is especially helpful when asking the model to explain complex processes.

Example
  • Step-by-step prompt: Explain the following concepts step by step: 1. How do neural networks learn from data? 2. How is backpropagation used in this process?
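
In code, a chain-of-thought prompt is simply a message that spells out the reasoning steps. A small sketch, with illustrative questions:

    # Spell out the reasoning path as numbered steps so the model
    # answers each one in order instead of jumping to a conclusion.
    steps = [
        "How do neural networks learn from data?",
        "How is backpropagation used in this process?",
    ]
    prompt = "Explain the following concepts step by step:\n" + "\n".join(
        f"{i}. {step}" for i, step in enumerate(steps, start=1)
    )
    print(prompt)  # send to the LLM as in the earlier sketches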

2.7 Use templates for complex tasks

For complex tasks, such as answering requests for proposals or technical questions, templates provide a structure that minimizes hallucinations by making the desired format and content explicit.

Example
  • Based on the document provided, summarize the key technical features of the product. Format the response as: 1. Feature, 2. Benefit, 3. Use case. Use only factual information.
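
Templates are easy to keep in code, where the fixed structure is written once and only the variable parts change. A sketch with a hypothetical document variable:

    # Reusable template: the structure is fixed, only the source text varies.
    TEMPLATE = (
        "Based on the document provided, summarize the key technical "
        "features of the product. Format the response as: "
        "1. Feature, 2. Benefit, 3. Use case. Use only factual information.\n\n"
        "Document:\n{document}"
    )

    document = "..."  # the RFP or technical document to summarize
    prompt = TEMPLATE.format(document=document)
    print(prompt)  # send to the LLM as in the earlier sketches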

2.8 For more information