Before you click “send”: The Information Literacy Thread

Edit: I’ve removed the quote link, since @klw shared something they found on Facebook, and my post addresses that content rather than anything klw has said.


It’s funny, in a way: AI creators are not marketing their programmes’ output as reliable. In fact, if you ask ChatGPT, it will tell you:

My responses are generated based on patterns in the text data I’ve been trained on, up until my last knowledge update in September 2021. While I strive for accuracy, I’m not infallible and can’t guarantee 100% factual accuracy. It’s always wise to cross-check information from multiple sources, especially for critical decisions or research. ~ ChatGPT 3.5

I think the shock (and outrage) that AI might not be reliable is another expression of what Makoto Fujimura called trust in the form rather than the content. People trust the information because it comes from an AI on the internet, not because they’ve checked the content and found it accurate.

The truth is, what you get from a generative AI depends on a number of factors, including but not limited to:

  • The AI in question, including whether you’re using GPT-3.5 or GPT-4
  • The scope and framing of the initial prompt, and the temperature (room for interpretation) the prompt implies
  • The topic being discussed. The more technical or niche the topic, the more likely the AI is to get things wrong.
  • Most importantly, what you are asking it to do.
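To make the “temperature” idea above concrete: it controls how much randomness the model allows when choosing its next word. Here is a rough, purely illustrative sketch of the mechanism in plain Python (this is a toy softmax over made-up scores, not ChatGPT’s actual code):

```python
import math

def softmax_with_temperature(scores, temperature):
    """Turn raw next-word scores into probabilities.

    Lower temperature sharpens the distribution (the model almost
    always picks the top candidate); higher temperature flattens it,
    giving more "room for interpretation".
    """
    scaled = [s / temperature for s in scores]
    m = max(scaled)                          # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

scores = [2.0, 1.0, 0.5]  # made-up scores for three candidate words

low = softmax_with_temperature(scores, 0.2)   # near-deterministic
high = softmax_with_temperature(scores, 2.0)  # much more varied

print(low[0] > high[0])  # the top candidate dominates far more at low temperature
```

The same prompt can therefore produce a tightly focused answer or a loose, creative one depending on this one setting, which is part of why two people asking the “same” question can report very different experiences.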

For example, yesterday I asked ChatGPT 4 to help write some key performance indicators. There’s loads of information about this online, so ChatGPT’s training on the topic is thorough. I gave it some parameters, then asked it for ten possibilities. I then used an iterative process to whittle the ten down to three brilliant KPIs in about 15 minutes. It would easily have taken me four times that long to do the same work myself, and the results still wouldn’t have been as good.

Later that day I asked for some piano compositions that demonstrated a particular aesthetic. About 50% of its suggestions didn’t exist. The ones that did, though, were spot on.

Why the difference in experience? ChatGPT and many AIs on the market at the moment are generative. They are about using existing content to produce new content that matches your criteria, as in my first example of creating KPIs.

What they are not is research tools. Sure, you can do some low-level research with ChatGPT, but it is akin to what you’d do on Wikipedia or YouTube: it offers a starting point from which you can probe further. But once you push deeper, it starts creating novel information and sources, because that is what it is supposed to do: generate stuff.

Ultimately, using ChatGPT for research is like using a hammer to tighten a pipe. Sure, it will get you part of the way, but really it is the wrong tool for the job.

In my humble opinion, the question is not “Are generative AIs reliable sources of information?” but rather “What jobs are generative AIs best suited to?”
