
This is a well-known problem with AI: you ask it a technical question, and as soon as you glance at the answer, you realise it is wrong. The information sounds plausible, but it does not hold up on closer inspection. That seems more stupid than intelligent, doesn't it?
In short, this happens because the model is based on data and statistical probabilities. It does not 'know' what is true; it searches vast amounts of data for relevant facts and fills the inevitable gaps with information that fits the context. This is similar to how we humans confabulate: when we can only vaguely remember the details of a past situation, we invent something to make the story complete.
Hallucinations can range from the entertaining to the questionable. One study by ETH Zurich and the University of Seoul investigated this using the example of North Korea (see comments for link). Reports ranged from the country claiming to have won the 2014 World Cup (which is not true) to the persistent rumour that its leader, Kim Jong Un, had died. While this may enrich private discussions, in business such hallucinations are counterproductive and dangerous.
If false assumptions are not corrected but continue to be passed on within the company, this creates serious risks.
The good news is that it is relatively quick to wean AI off hallucinations.
Our top three tips for avoiding hallucinations in AI are:
The old principle of "garbage in, garbage out" also applies to AI applications: imprecise questions lead to imprecise answers. The more precise and specific your query, the lower the risk that the AI will generate incorrect information.
Ask yourself: when do you need facts, and when do you need ideas? Then:

- Don't overcomplicate your questions. Overly creative or open-ended questions provoke more hallucinations.
- Provide contextual information: who is the sender and who is the recipient? What is the background to the question?
- Remind your tool of choice of the "journalistic principle": every statement must be substantiated by two independent sources. This alone significantly improves the quality of the answers.
- Give explicit instructions not to make anything up, and specify what the tool should do if a piece of information is unavailable.

This may seem unfamiliar and tedious at first, but you will be rewarded with better answers, and it becomes easier over time. The more you use 'your AI', the smarter it becomes.
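The tips above can be baked into a reusable prompt template. Below is a minimal sketch of such a template in Python; the function name, wording, and example values are illustrative, not a fixed API.

```python
# Minimal sketch: a prompt template that applies the anti-hallucination tips
# (context, the "journalistic principle", and a fallback when facts are missing).
# All names and wording here are illustrative assumptions.

def build_prompt(question: str, sender: str, recipient: str, background: str) -> str:
    """Assemble a query that gives the model context and explicit ground rules."""
    return "\n".join([
        f"Role: You are answering on behalf of {sender} for {recipient}.",
        f"Background: {background}",
        "Rules:",
        "- Substantiate every statement with two independent sources.",
        "- Do not invent facts. If information is unavailable,",
        "  reply 'No reliable source found' instead of guessing.",
        f"Question: {question}",
    ])

prompt = build_prompt(
    question="What were our sector's growth figures last year?",
    sender="the market research team",
    recipient="the executive board",
    background="Quarterly strategy review",
)
print(prompt)
```

Keeping the rules in a template means every query carries them automatically, instead of relying on each user to remember them.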
Retrieval-augmented generation (RAG) involves connecting the AI to reliable, verified sources, databases and documents, such as studies from renowned research institutes, your own knowledge databases and real data collected by you or your partners. This means the AI can look up facts instead of inventing them.
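The core idea can be sketched in a few lines: retrieve the most relevant passages from a trusted store, then hand them to the model as the only permitted context. The toy example below uses naive keyword overlap for retrieval; the document store, scoring, and prompt wording are all simplified assumptions, not a production setup.

```python
# Toy sketch of the RAG pattern: retrieve verified passages first,
# then ground the model's answer in them. The documents and the
# keyword-overlap scoring are illustrative assumptions.

DOCUMENTS = {
    "q3-report": "Revenue grew 12% in Q3 according to the audited report.",
    "hr-policy": "Remote work is allowed up to three days per week.",
}

def retrieve(query: str, docs: dict, top_k: int = 1) -> list:
    """Rank documents by naive keyword overlap with the query."""
    words = set(query.lower().split())
    scored = sorted(
        docs.values(),
        key=lambda text: len(words & set(text.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def grounded_prompt(query: str) -> str:
    """Build a prompt that restricts the model to retrieved context."""
    context = "\n".join(retrieve(query, DOCUMENTS))
    return (
        f"Context:\n{context}\n\n"
        f"Answer the question using only this context:\n{query}"
    )

print(grounded_prompt("How much did revenue grow in Q3?"))
```

In a real deployment, the keyword matcher would be replaced by an embedding-based search over your own knowledge base, but the shape of the pipeline stays the same: retrieve, then generate.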
Hallucination rates drop significantly: while standard LLMs still hover around 50%, regularly trained and further developed hybrid RAG models land between 20% and 5%, and applications trained specifically for your tasks tend towards zero. So if you need reliable, optimal results for your business operations, don't settle for 'quick and dirty' tools; opt for a RAG model that works for your company and your specific everyday use cases, ideally integrated into your own processes and specifications.
Perhaps most importantly, human intelligence must verify AI responses by comparing them with independent sources. This fact-checking of AI results is still faster than researching and combing through all the documents yourself.
This is essential at critical milestones, where the reliability of all statements and assumptions matters most, so a human correction loop must be built in. It is particularly important when other people will continue working with the results or when customer approval is pending. We have had very good experiences with "human in the loop" in our projects.
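A human correction loop can be enforced in the workflow itself: AI drafts are held back until a reviewer signs them off and the source requirement is met. The sketch below is a minimal illustration; the class names and the two-source rule threshold are assumptions for this example.

```python
# Minimal sketch of a human-in-the-loop gate: an AI draft is only
# publishable after explicit human approval AND sufficient sourcing.
# Names and thresholds here are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class Draft:
    text: str
    sources: list = field(default_factory=list)
    approved: bool = False

def review(draft: Draft, approve: bool) -> Draft:
    """A human reviewer decides; unapproved drafts never leave the company."""
    draft.approved = approve
    return draft

def publishable(draft: Draft) -> bool:
    # Require both human sign-off and at least two independent sources.
    return draft.approved and len(draft.sources) >= 2

draft = Draft("Market grew 8% last year.", sources=["report-a", "report-b"])
print(publishable(draft))   # prints False: no human approval yet
review(draft, approve=True)
print(publishable(draft))   # prints True: approved and twice-sourced
```

The point of the gate is that neither condition alone is enough: a well-sourced but unreviewed draft stays internal, and so does an approved draft with thin sourcing.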
Overall, doing your homework within the company as thoroughly as possible helps immensely; it saves time and correction loops.
What cases of hallucination have you experienced? How do you deal with hallucinations in your favourite tool?
To prevent the AI from hallucinating, our prompt engineers invest a great deal of time in prompting and testing, so that you can simply enter your keywords and receive the reliable information you need. Want to see how it works? Make an appointment with us and prepare to be amazed:


Sound familiar? You are supposed to keep a daily overview of the market, new legislation, the competition, new technologies and the like. But the flood of information is overwhelming and confusing, sources are becoming more opaque and questionable, and there is no time for continuous monitoring anyway. Our AI agent "Monitoring Feeds" is made for exactly this: the continuous monitoring of trends, technologies and the competition.
Complex issues demand concentration, patience, time and exceptional care from people, whether the subject is technology, chemistry, medicine or economics. You often don't have that time in day-to-day business; for example, when you are asked to present the latest state of research and knowledge relating to your project at a colloquium arranged at short notice. But complex research tasks are tailor-made for AI agents. Our Research Agent replicates scientific working methods, orchestrated within a multi-stage thinking and analysis process.
In every sales team, there are target customers: those who need exactly the product you want to sell, those you would like to have on your reference list, and especially those with whom you can generate good sales, whether you want to meet short-term sales targets or achieve strategic growth goals.