Before you click “send”: The Information Literacy Thread

As a librarian, I work with information, not only as a source of answers to questions, but as something to be evaluated in itself. While having access to information is essential to an open society, so is developing the skills needed to evaluate the likelihood that information is reliable, accurate, timely, appropriate, useful, etc.

I hope that this thread can be a useful place where people can share their information literacy techniques, tools and stories. Eventually, it could be a valuable resource for other lay-people who are interested in improving their ability to deal with the overwhelming amount of information they face constantly.

Share the good, bad and ugly of experiences you’ve had receiving references to scientific or “scientific” information, AND how you dealt with the “gift.”

And please, share resources and tools you find useful for evaluating research articles, or any other source that comes across your screen.


Here are a few good websites, some from college libraries, aimed at helping people figure out how to deal with an article that crosses their screen:

Quite Thorough:


Quick and Dirty:

And something excellent from the UK:


Joel Duff often includes outstanding information literacy components in his videos. This one knocked it out of the park.

Time stamps for Joel Duff’s Video “Let’s Talk About Peer Review” aka “What’s Wrong and Right with Peer Review?”

00:00 Discussion of a retracted article in an open-access, peer-reviewed journal

16:55 Problems associated with open-access, peer-reviewed journals

24:25 Discussion of predatory journals as a business model and a degradation of the peer review process.

25:53 Red flags that indicate an article may not be trustworthy

29:13 How predatory journal editors can abuse their positions as editors; how this practice reinforces the bad reputations of these journals among serious scientists.

37:34 The purpose of peer review and a critique of the traditional system

51:40 Tweaks and alternatives to peer review


Thanks for these articles, @kendel. As you know, @Christy wrote this great one for Biologos, too.

I had not seen this until now, but the Institute for Creation Research made an interesting rebuttal…Is Evolution ‘Fake Science’? | The Institute for Creation Research

It is good to try to filter through their response.


Here are two from Christianity Today, from 2021 and 2015 (so a bit older, but it is good to read the perspectives). One points out how fake news is often geared to make us hate who and what we want to hate.


This is motivating me to get my act together and just get (or renew) my subscription to CT. Those look like a couple of good articles - but they’re behind the registration wall.


I’m not sure how CT handles gift article links. There are probably limits on how many times these links can be used, but here they are:


Oh so sorry! Thank you Kendell!


Not against, but to complement, this brilliant thread, here’s an intriguing perspective on this issue:

Our culture has long been driven by information; many people are inclined to believe only what can be verified and rationally ascertained.

More recently, we have been surprised by our vulnerability to “fake news” and false information delivered through the “trusted” medium of the internet; we seem to “verify” by relying on the “rationality” of technology, but indeed, we are easily manipulated because we trust in the form, rather than the content.

We have not done a good job of articulating the difficulties and gaps present in communication, or how to incarnate ideas into reality. We have assumed that informational “recipes” are the sole basis for knowing truth.

It is time for us all to taste the actual fruit of the act of making. The act of making [being co-creators with God, instead of consumers] is the antidote to our current malaise, to the collapse of communication that has resulted, in the words of David Brooks, in “a rapid, dirty river of information coursing through us all day,” resulting in the need for “an internet cleanse.”

Source: Makoto Fujimura, Art and Faith: a Theology of Making, 2020, p24. Emphasis and paragraphing mine.

Makoto’s interesting observation suggests that rationality cannot counter fake news very effectively, because what people trust is the “medium of the internet.” The fact that it is ‘up there’ gives it credibility, something that is compounded by our consumer culture, which encourages us to leap before we look.

He suggests throughout the chapter that we need to view ourselves as Holy Spirit-infused co-creators with God in the New Creation project. Such a view opens us to assess sources of ‘truth’ based on more than (but not less than) rationality.


I love Mako’s point that people feel that things “being on the Internet” inherently have some sort of credibility.

At the reference desk I am often faced with patrons who “found it on the Internet,” which makes tracing their steps back to the source nearly impossible.

The conversation usually follows pretty closely with this script:
Me: Hmmm. Where on the Internet? It would help if I could see the source and determine where they got this information.
Patron: Well,…you know…on the Internet.
Me: Hmmmm. The Internet’s a big place. Do you remember what website you saw that on?
Patron: Um…no…
Me: Hmmm. You know that every cat and dog with a password can post stuff on the Internet, right?
Patron: Well…um…yeah.

I think part of the problem is that people can’t “see” the Internet. They don’t recognize that it is a vast, vast city of cities with billions of different users, “authors,” and purposes. People don’t seem to recognize that it is the same as finding “information” in public.

Out in the real public, we get so many clues about context and source. There is a great deal of difference between what you see on a bathroom or subway-station wall and what you find through a long, determined search in a library.


As classist or non-politically correct as this comparison may be … perhaps it would also help if one could at least recognize if they are in a ghetto or in a ‘respectable’ neighborhood. Not that crime never happens in respectable neighborhoods. It does. And beautiful acts of love can and do happen in ghettos too. But it still helps to be aware of the reality of one’s surroundings.


Thank you so much, Kendel! And CT’s wise marketing strategy is already working because I’m still motivated to get myself subscribed so that I can share these with others too.


I wonder if a ‘drinking’ analogy would serve better. Are they filling their cup from the tap [faucet] or the toilet?


Whelp, they got me. I guess evolution is fake science.

Truly dizzying intellects over there.


From Jake Hebert’s “rebuttal” to @Christy:

This tendency to “explain away” contradictory data was illustrated by a recent article purporting to explain why crocodiles have remained the same for 200 million years.2,3 The very first sentence in the news article claims that “a ‘stop-start’ pattern of evolution, governed by environmental change, could explain why crocodiles have changed so little since the age of the dinosaurs.”2 Of course, if evolution were true, one would expect creatures to not remain the same for hundreds of millions of years. Creationists would argue that crocodiles have not evolved simply because evolution isn’t true.

In other words, if crocodiles evolved from crocodiles, why are there still crocodiles? Brilliant.


Also this bit:

Another characteristic of pseudoscience is that “[i]deas from outside the realm of science are presented as scientifically established.”1 Evolutionists do this with a vengeance, especially in the field of cosmology. Speculative ideas such as inflation theory are invoked to solve problems with Big Bang cosmology even when there is zero evidence for those ideas. Some evolutionists are willing to invoke other (unobservable) universes, in part because it’s demanded by inflation theory,9 and in part. After all, they (mistakenly) think that it removes the need for a Creator.10

Depressingly, this is just an evolution and origin-of-the-universe bait and switch, followed by the “evolutionists (whatever that is) are at war with God” fallacy.


Regarding the use of AI (or ChatGPT) as a source, I ran across this post on my Facebook feed. I didn’t know how to share it directly here, so here it is, copied and pasted, FYI:

Jessica Cail


I teach science writing. After a summer of faculty hubbub about the impact generative AI will have on our ability to ensure students are actually learning how to write, I decided to work it into my classes’ writing pipeline. I’m not burying my head in the sand. I’m not coming off as a Luddite. But I also want to teach students to be critical of anything they read, especially when it comes from AI.

For context, students have spent the last couple weeks reading through the literature and selecting articles that they will be using in their literature review. This means they now know something about the field, its main concepts, and key players. I then had them ask one of the generative AI programs to write a three-page, APA-style literature review on their topic, and highlight the content and the sources provided in the following way:

GREEN: This information is accurate, the source exists, and its findings match what the AI says. I will incorporate this info into my draft.

YELLOW: This information is accurate, the source exists, and its findings match what the AI says, but it is not relevant enough to my paper to include in my draft.

RED: This information is inaccurate or this source doesn’t exist.

Aggregate results of 18 student papers are below.

While not all students highlighted every line, it was plenty to arrive at the general consensus that while AI might provide one or two accurate pieces of information, the majority of stuff it spit out was OUTRIGHT WRONG. They were shocked and horrified: “You can’t trust AI to get anything right. You have to check everything it spits out.”

Good. Mission accomplished. I told them to tell all their friends.


Edit: I’ve removed the quote link since @klw shared something they found on Facebook, and my post addresses that content rather than anything klw has said.

It’s funny, in a way: AI creators are not marketing their programmes’ content as reliable. In fact, if you ask ChatGPT, it will tell you:

My responses are generated based on patterns in the text data I’ve been trained on, up until my last knowledge update in September 2021. While I strive for accuracy, I’m not infallible and can’t guarantee 100% factual accuracy. It’s always wise to cross-check information from multiple sources, especially for critical decisions or research. ~ Chat GPT 3.5

I think the shock (and outrage) that AI might not be reliable is another expression of what Makoto Fujimura called trust in the form rather than the content. People trust the information because it comes from an AI on the internet, not because they’ve checked the content and found it accurate.

The truth is, what you get from a generative AI depends on a number of factors, including but not limited to:

  • The AI in question, including whether you’re using GPT 3.5 or 4.0
  • The scope and framing of the initial prompt, and the temperature (room for interpretation) the prompt implies
  • The topic being discussed. The more technical or niche the topic, the more likely it is to get things wrong.
  • Most importantly, what you are asking it to do.
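Since “temperature (room for interpretation)” comes up in the list above, here is a toy sketch of what that knob does in a generative model. This is not ChatGPT’s actual code; the function name and the numbers are made up purely for illustration. A language model assigns a raw score to each candidate next word, and temperature controls how sharply those scores are converted into sampling probabilities:

```python
import math

def softmax_with_temperature(logits, temperature):
    """Turn raw model scores (logits) into sampling probabilities.

    Low temperature sharpens the distribution toward the top-scoring
    candidate (focused, repeatable output); high temperature flattens
    it, giving weaker candidates real odds (varied, "creative" output).
    """
    scaled = [score / temperature for score in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Three hypothetical candidate next words with raw scores 2.0, 1.0, 0.5
logits = [2.0, 1.0, 0.5]
cool = softmax_with_temperature(logits, 0.2)  # near-deterministic
warm = softmax_with_temperature(logits, 2.0)  # much more even spread
```

At temperature 0.2 the top-scoring word is chosen almost every time; at 2.0 the alternatives get substantial probability, which is where both the variety and some of the confabulation come from.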

For example, yesterday I asked ChatGPT 4 to help write some key performance indicators. There’s loads of info about this online, so ChatGPT’s training is thorough. I gave it some parameters, then asked it for ten possibilities. I then used an iterative process to get the ten down to three brilliant KPIs in about 15 minutes. It would easily have taken four times that to do the same work myself, and the results still wouldn’t have been as good.

Later that day, I asked for some piano compositions that demonstrated a particular aesthetic. About 50% of its suggestions didn’t exist. The ones that did, though, were spot on.

Why the difference in experience? ChatGPT and many AIs on the market at the moment are generative. They are about using existing content to produce new content that matches your criteria, as in my first example of creating some KPIs.

What they are not is research tools. Sure, you can do some low-level research with ChatGPT, but it is akin to what you’d do on Wikipedia or YouTube. It offers a starting point from which you can probe further. But once you push deeper, it starts creating novel information and sources, because that is what it is supposed to do - generate stuff.

Ultimately, using ChatGPT for research is like using a hammer to tighten a pipe. Sure, it will get you part of the way, but really it is the wrong tool for the job.

In my humble opinion, the question is not, “Are generative AIs reliable sources for information?” but rather “What jobs are generative AI best suited to?”


Oh my goodness! All the time!
People come to the library wanting information that expands on the garbage they found “On The Internet,” as if that IS authority.

They get impatient when I search the catalog, shifting from foot to foot, huffing under their breath, because they don’t have a clue what I am doing or what choices I make in the searching - even when I explain.

“Why don’t you add in ……?” “Because that will give me zero/way too many results.”

The Internet doesn’t evaluate things as it searches. But the patron doesn’t understand that that is just what I am doing. With millions of books (plus other formats), it’s kind of important to eliminate a lot before you just start roaming the stacks.

“AI told me.”


This topic was automatically closed 6 days after the last reply. New replies are no longer allowed.