Trusting Sources and Adding Context

Search engines have become a secondary resource, with generative AI LLMs now being the first stop for many searches and queries. It's the nature of technology adoption to reach for the new shiny thing, and the format in which these tools present information is easy to digest; a generative AI can often expand upon a subject when asked.

But should that information be trusted? The generative AI scans the sources it finds and draws on its own internal repository of information to answer a prompt, but that doesn't mean it fully understands how to relay that information truthfully. It does its best, yet it is not always correct and sometimes produces misinformation, even though the technology has advanced far beyond what it was capable of only months ago.

It is up to the prompter to check whether the output is factual and not a hallucination, the generative AI throwing together what it has seen or found in the wrong order or context. This shouldn't be too difficult, since the AI has typically given the prompter a result's name, supposed purpose, and location. A quick lookup from another source, whether a search engine, a visit to the cited page, or a glance at Wikipedia, should provide enough information to confirm that the result and the context the AI gave are true, in the correct order, and explained properly.

This doesn't have to be done with every query, but the prompter has to stay aware of what is factual and what doesn't look correct. To simply trust every generative AI response as correct would be foolish. The prompter should have at least a base knowledge of the subject. When exploring something entirely new, the generative AI will provide plenty of information about it, but the prompter has to do their own research to see how the result compares with what practitioners in that subject know to be fact.

Here is my example: I was demonstrating ChatGPT to a class and told them not to entirely trust the processes it explains, because they might not be factual. One person was a pottery maker, so I asked them to name a technique for a certain decoration on clay. I then asked ChatGPT to explain the process for that technique, and the result seemed correct up to a certain point. The pottery maker then pointed out that some of the steps were wrong, because doing it the way ChatGPT described would damage the clay to some degree.

I've told this story before in an earlier blog post about photo generation, but I figured I may as well cover all the bases since the subject came up again.

Be aware of what is being outputted, and if it is unfamiliar, then it is your responsibility to ensure the information is correct and factual. Otherwise context is lost, and the wrong instruction gets lodged deeper into the foundation of the information.

Stay Knowledgeable!