“Liar, liar. Pants on fire.”
One of the most troubling aspects of working with large language model chat AIs is their tendency to make stuff up, fabricate answers, and otherwise present as fact information that is completely wrong.
For example, in an article about using ChatGPT to write code, I showed how ChatGPT incorporated the following URL into the code:
It looks legitimate, doesn’t it? After all, Reuters is a very credible news source, and the link appears to point to an article about Tesla selling a factory, written in March of 2022. But, of course, ChatGPT’s training data doesn’t extend to March of 2022, and the factory wasn’t being sold. The link is a complete fabrication conjured out of the ether by ChatGPT. It doesn’t go anywhere. 404 to the max, baby.
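One practical defense, if a chatbot hands you a citation like this, is to check the link before trusting it. Here’s a minimal Python sketch (the function names and the sample text are my own, not from any tool mentioned here) that pulls URLs out of a model’s response and can report the HTTP status of each; a fabricated link will typically come back 404.

```python
import re
import urllib.request
import urllib.error

# Simple pattern for http/https URLs; illustrative, not bulletproof.
URL_PATTERN = re.compile(r"https?://[^\s\"'<>)\]]+")

def extract_urls(text):
    """Pull every http/https URL out of a block of text."""
    return URL_PATTERN.findall(text)

def link_status(url, timeout=5):
    """Return the HTTP status code for a URL, or None if it can't be reached."""
    request = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return response.status
    except urllib.error.HTTPError as err:
        return err.code  # e.g., 404 for a fabricated article URL
    except OSError:
        return None      # DNS failure, timeout, unreachable host

answer = "Per https://www.reuters.com/article/some-made-up-story the factory sold."
print(extract_urls(answer))
# Before citing the model's link yourself, check it (a live network call,
# so it's left commented out here):
#   for url in extract_urls(answer):
#       print(url, "->", link_status(url))
```

This won’t catch a real URL attached to a false claim, of course, but it cheaply filters out the links that point nowhere at all.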
That ChatGPT “hallucinates” is a known and common problem. OpenAI (the makers of ChatGPT) co-founder John Schulman says, “Our biggest concern was around factuality, because the model likes to fabricate things.”
But what if you want to use ChatGPT and get good-quality answers? It is possible. In this article, I’ll show you eight ways to reduce hallucinations. It’s all about how you ask your questions.
For each of these best practices, I’m including five examples that show how not to use the AI. If you paste them as-is into a chatbot, you’ll probably get a caution that they contain impossible requests; they’re deliberately exaggerated to make the failure modes obvious. The key is to avoid accidentally embedding these hallucination-prompting request styles in more realistic questions.
Let’s get started.
1. Avoid ambiguity and vagueness
When prompting an AI, it’s best to be clear and precise. Prompts that are vague, ambiguous, or lacking in sufficient detail give the AI room to confabulate as it attempts to fill in the details you left out.
Here are some examples of prompts that are too ambiguous and might result in an inaccurate or fabricated result:
Discuss the event that took place last year.
Describe the impact of that policy on people.
Outline the development of technology in the region.
Describe the effects of the incident on the community.
Explain the implications of the experiment conducted recently.
Keep in mind that most prompts will likely violate more than one of the eight factors described in this article. While the examples shown here are intended for illustration, an actual prompt you write may have ambiguity buried among other details. Evaluate your prompts with care, making sure to pay special attention to errors like those shown here.
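As a rough illustration of what ambiguity looks like to a machine, here’s a toy Python checker (the phrase list and function name are my own invention, not a real tool) that flags the dangling references in the example prompts above: the “last year,” “that policy,” “recently” phrasing that gives a model nothing concrete to anchor on.

```python
# Deictic phrases: words that point at something without naming it.
# This list is illustrative, not exhaustive.
VAGUE_PHRASES = [
    "last year", "recently", "the event", "that policy",
    "the incident", "the region", "the experiment",
]

def find_vague_phrases(prompt):
    """Return the vague phrases present in a prompt, in order of appearance."""
    lowered = prompt.lower()
    hits = [(lowered.index(p), p) for p in VAGUE_PHRASES if p in lowered]
    return [phrase for _, phrase in sorted(hits)]

print(find_vague_phrases("Discuss the event that took place last year."))
# ['the event', 'last year']
print(find_vague_phrases("Summarize the 2021 UN COP26 climate summit in Glasgow."))
# []
```

A real prompt linter would need far more than string matching, but the exercise makes the fix obvious: replace each flagged phrase with a named event, date, or place.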
2. Avoid merging unrelated concepts
Prompts that merge unrelated concepts in a single request, combining ideas that have no direct relationship or correlation, may well induce the AI to fabricate a response that implies the unrelated concepts are, in fact, related.
Here are some examples:
Discuss the impact of ocean currents on internet data transfer speeds across continents.
Describe the relationship between agricultural crop yields and advancements in computer graphics technology.
Detail how variations in bird migration patterns affect global e-commerce trends.
Explain the correlation between the fermentation process in winemaking and the development of electric vehicle batteries.
Describe how different cloud formations in the sky impact the performance of stock trading algorithms.
Remember that the AI doesn’t actually know anything about our world. It will attempt to fit whatever it’s asked into its model, and if it can’t do so using actual facts, it will interpolate, providing fabrications or hallucinations where it needs to fill in the blanks.
3. Avoid describing impossible scenarios
Within your prompts, be sure to describe scenarios that are practical and real. Scenarios that are physically or logically impossible tend to induce hallucinations.
Here are some examples:
Explain the physics of environmental conditions where water flows upward and fire burns downwards.
Explain the process by which plants utilize gamma radiation for photosynthesis during nighttime.
Describe the mechanism that enables humans to harness gravitational pull for unlimited energy generation.
Discuss the development of technology that allows data to be transmitted faster than the speed of light.
Detail the scientific principles that allow certain materials to decrease in temperature when heated.
If the AI doesn’t detect the impossibility of such a scenario, it will build upon it. And a response built on an impossible foundation can only be a fabrication.
4. Avoid using fictional or fantastical entities
Within your prompts, it’s important to give the AI a foundation that’s as grounded in fact as possible. Unless you’re purposely playing with fictional concepts (as I did with asking ChatGPT to write a Star Trek story), stay firmly grounded in reality.
While fictional entities, objects, and concepts might help you explain something, they could lead the chatbot astray. Here are a number of examples of what not to do:
Discuss the economic impact of the discovery of vibranium, a metal that absorbs kinetic energy, on the global manufacturing industry.
Explain the role of flux capacitors, devices that enable time travel, in shaping historical events and preventing conflicts.
Describe the environmental implications of utilizing the Philosopher’s Stone, which can transmute substances, in waste management and recycling processes.
Detail the impact of the existence of Middle Earth on geopolitical relations and global trade routes.
Explain how the use of Star Trek’s transporter technology has revolutionized global travel and impacted international tourism.
As you can see, the fantastical concepts might be fun to play with. But using them in serious prompts could well cause the AI to return wildly fabricated answers.
5. Avoid contradicting known facts
Don’t use prompts that contain statements that contradict well-established facts or truths, because those contradictions can open the door to confabulation and hallucinations.
Here are some examples of that practice:
Discuss the impact of the Earth being the center of the universe on modern astrophysics and space exploration.
Detail the effects of a flat Earth on global climate patterns and weather phenomena.
Explain how the rejection of germ theory, the concept that diseases are caused by microorganisms, has shaped modern medicine and hygiene practices.
Describe the process by which heavier-than-air objects naturally float upwards, defying gravitational pull.
Explain how the concept of vitalism, the belief in a life force distinct from biochemical actions, is utilized in contemporary medical treatments.
These ideas are also fun to play with, but if you’re looking for reliable results from the large language model, stick to commonly accepted facts and avoid ideas that might be misinterpreted.
6. Avoid misusing scientific terms
When prompting, be careful about using scientific terms, especially if you’re not precisely sure what they mean. If you use prompts that misapply scientific terms or concepts in ways that sound plausible but are scientifically inaccurate, the language model is likely to try to find a way to make them work. The result: fabricated answers.
Here are five examples of what I mean:
Explain how utilizing Heisenberg’s uncertainty principle in traffic engineering can minimize road accidents by predicting vehicle positions.
Describe the role of the placebo effect in enhancing the nutritional value of food without altering its physical composition.
Outline the process of using quantum entanglement to enable instantaneous data transfer between conventional computers.
Detail the implications of applying the observer effect, the theory that simply observing a situation alters its outcome, in improving sports coaching strategies.
Explain how the concept of dark matter is applied in lighting technologies to reduce energy consumption in urban areas.
See how some of these things sound plausible? In most cases, the AI will probably tell you that the ideas are speculative, and the answer being provided is merely an exercise. But if you aren’t really careful about wording, the AI might be fooled into treating these garbage-in terms as real, and the result will be very confidently presented garbage-out.
7. Avoid blending different realities
As someone who enjoys science fiction, I enjoy speculative scenarios and alternative reality stories. But when trying to get clear answers from an AI, be careful about mixing elements from different realities, timelines, or universes in ways that sound plausible but are simply not possible.
Here are some examples:
Discuss the impact of the invention of the internet during the Renaissance period on art and scientific discovery.
Explain how the collaboration between Nikola Tesla and modern-day artificial intelligence researchers shaped the development of autonomous technologies.
Describe the implications of utilizing World War II-era cryptography techniques to secure contemporary digital communications.
Outline the development of space travel technologies during Ancient Egyptian civilization and its impact on pyramid construction.
Discuss how the introduction of modern electric vehicles in the 1920s would have influenced urban development and global oil markets.
One reason to be careful about these sorts of prompts is you might not have the knowledge to validate the responses. Take a look at the last example, electric cars in the 1920s. Most folks might laugh off the idea, knowing electric cars are a modern innovation. But that would actually be wrong.
Some of the first electric vehicles were actually invented back in the 1830s. Yep, quite a bit of time before the internal combustion engine. That’s right, folks. Keep coming back to ZDNET. Not only do we provide hands-on tips for using AI, but we’ll blow your mind with an impromptu tech history lesson!
8. Avoid assigning uncharacteristic properties
We’ll wrap up our list of avoidance practices with this one: avoid crafting prompts that assign properties or characteristics to entities that don’t possess them, in ways that sound plausible but are scientifically inaccurate.
Here are some examples:
Explain how the magnetic fields generated by butterfly wings influence global weather patterns.
Describe the process by which whales utilize echolocation to detect pollutants in ocean water.
Outline the role of bioluminescent trees in reducing the need for street lighting in urban areas.
Discuss the role of the reflective surfaces of oceans in redirecting sunlight to enhance agricultural productivity in specific regions.
Explain how the electrical conductivity of wood is utilized in creating eco-friendly electronic devices.
The idea here is that you’re taking a property of one object, like a color or a texture, and attributing it to some other object that doesn’t have that property.
Some of these precautions can stack. Take, for example, this prompt:
How do I keep the hair on my mouse clean?
This is where context can be king. Hair is certainly a property of living creatures, but it is not normally a property of a computer mouse. It is, however, a property of a pet mouse. This one prompt violates the “avoid ambiguity” rule, because we didn’t specify what kind of mouse, and possibly the “uncharacteristic properties” caution, if we’re talking about hair on a computer mouse.
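The mouse example suggests a simple discipline: never send a prompt without stating its context explicitly. A small template helper, sketched below in Python (entirely my own construction, not anything from a chatbot vendor), refuses to build a prompt until the task, subject, and domain are spelled out:

```python
def build_prompt(task, subject, domain, timeframe="not time-sensitive"):
    """Assemble a prompt that states its context up front.

    Forcing the caller to name the domain ('computer hardware' vs. 'pet
    care') and a timeframe heads off the ambiguity that invites
    hallucination.
    """
    for name, value in [("task", task), ("subject", subject), ("domain", domain)]:
        if not value or not value.strip():
            raise ValueError(f"Missing context: {name!r} must be filled in")
    return (
        f"Context: {domain}. Timeframe: {timeframe}.\n"
        f"Subject: {subject}.\n"
        f"Task: {task}"
    )

# A disambiguated version of "How do I keep the hair on my mouse clean?"
print(build_prompt(
    task="How do I keep my pet's fur clean?",
    subject="a pet mouse",
    domain="pet care",
))
```

The template itself is nothing special; the value is in the discipline it enforces, since a prompt that states its domain can’t accidentally leave the chatbot guessing which mouse you mean.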
Another thing to be concerned about is how prompting and “facts” fit into an overall worldview. All the AI companies (and many tech companies) are dealing with this issue.
That’s because, in modern society, we have a bit of a problem with facts. Depending on cultural background, political affiliation, religious beliefs, or merely upbringing, what is considered absolute fact by one person may be considered fantasy by another. Keep in mind that those perspectives may also color the results of the AI, and try to avoid contested topics if you’re trying to get reliable answers from the machine.
Overall, though, if you follow these guidelines and avoid constructing prompts that could confuse the AI, you stand a better chance of reducing hallucinations.
Let us know if you’ve tried out any of these tactics (or have others). Have any worked for you? Did ChatGPT ever hallucinate for you in any spectacular or interesting ways? Let us know in the comments below.
You can follow my day-to-day project updates on social media. Be sure to subscribe to my weekly update newsletter on Substack, and follow me on Twitter at @DavidGewirtz, on Facebook at Facebook.com/DavidGewirtz, on Instagram at Instagram.com/DavidGewirtz, and on YouTube at YouTube.com/DavidGewirtzTV.