Stop quoting ChatGPT

Some people have found ChatGPT's answers to questions fun and informative. But the quoting has gotten out of hand.

The BioLogos Forum is a discussion board for humans, and we do not need to be informed of what ChatGPT says about the questions raised here.

Please keep to yourself any queries you use to inform yourself about issues under discussion, and refrain from quoting ChatGPT's answers as part of a discussion. Moderators reserve the right to delete posts that ignore this directive.



BioLogos will be top of the list :joy:


Yeah, I got bored with it really quickly. It was fun for a few days having it write weird poems about vegan dessert recipes from the perspective of a cryptid. It was neat seeing its suggestions for native plants based on random desires like ornamental bark and sandy soil, but it's not my thing. I am much more of an "ask a person" type of person. A traditionalist, lol. Even in chat rooms or forums, I'll often ask a question if I don't know the answer instead of googling it. It's crazy the number of people online who get mad about that.


Yes, I see why that was objectionable.

Your queries were definitely the most creative!

And remember, when you need a human, you can always ask a librarian!




The scary thing about ChatGPT is how good it is. I asked it for a discussion on the Hebrew grammar of Genesis 1:1-3 and it was frighteningly thorough and accurate. I almost asked it to discuss its sources for what it had said but wasn’t sure I wanted to know since I’d probably end up spending all day chasing them down!

The big issue with it is that it has already been shown to be biased on controversial topics, not by its own nature but due to the volume of passionate views on the internet and its apparent tendency to be prejudiced toward "enthusiastic" positions. I've tested it on other things I know about, such as painting houses and dune ecology, and it is very, very good on those.

OTOH it seems remarkably ignorant of some things. For example, I asked it if Germany should restart its nuclear plants, and it only discussed environmental, safety, and public opinion aspects and had not a word about the situation regarding Ukraine and Russian oil.

Given the evident bias, its apparent blindness about some things, and how volatile some issues raised here are, I wouldn't consider it something to be quoted unless it had an insight the user hadn't previously encountered or an elegant and/or pointed turn of phrase – and even then, anything it says should be in quote marks (I notice a bit of a deficiency in utilizing that feature) and clearly identified as to source. If quoted, it should be treated at best as a journalistic source, even if – as with my question about Hebrew grammar – it provides a scholarly response.

Overall I'd say that ChatGPT material should be treated like images – meaningless unless they serve to spark useful thoughts in the people posting them – or like articles by popular science journalists when writing a paper for a university course, i.e. not to be cited themselves but used as bridges to finding actual sources.

I can’t resist: I asked ChatGPT how to use its answers in online discussions:

  • Verify the information: While ChatGPT is designed to provide accurate information, it’s always a good idea to fact-check the information provided before using it in your own posts or replies. Look for additional sources to confirm the accuracy of the information.
  • Use ChatGPT’s answers as a starting point: ChatGPT’s responses can provide a helpful starting point for further discussion or research. Use the information provided as a launching pad to explore the topic in more detail.

Which can be read as I said: ChatGPT isn't a source that should be quoted; it's a tool to draw on like any other, and what you post should be your "own work," not quotes!

And in terms of science and theology, its material should be treated the same as that of a very good science or theology journalist trying to explain things on a popular level… and who, not being a scientist or theologian, can get things very wrong.


More like this:

“Danger! Danger!”


Good points. I think there's some exaggeration about how bad the problem was when ChatGPT was used to engage the discussion here, or to give better expression to a challenging concept.

I think it was in the thread about BioLogos interviewing ChatGPT for the podcast, that I quoted a statement ChatGPT made about the impossibility of forming an infinite series through successive addition. Intuitively I’ve known it’s an impossibility for nearly 20 years and have debated atheists and skeptics about this, often struggling to find the right words, and ChatGPT said it in a way that was pure perfection. So it’s a mixed bag of results. Sometimes it’s great and sometimes it needs help.

They did what?! :grin:


Um, what? I asked ChatGPT “Is it possible to form an infinite series through successive addition?” and its answer was “Yes, it is possible to form an infinite series through successive addition.”

Which seems to be a good example of the problem with using ChatGPT!


Knowing ChatGPT, it said more than that. Early on I had conflicting results, and I had to explain why the series never goes from being finite to being infinite.

Later, when ChatGPT was being interviewed by BioLogos, I asked the question again and got a flawless result. Let me see if I can find it.

Found it:


It definitely needs to be used with circumspection! It can provide some references quickly and nicely list and summarize several at a time, so there's that.


Yes, ChatGPT's ability to pull a quote from a library of information based on a vague concept is like using Google for the first time and realizing its capability.


I also cannot imagine a librarian who would not appreciate the ability to conceptually search the text of an entire library. ChatGPT's faults and obvious lapses in judgement will continue to be corrected for the foreseeable future.


One thing to remember about it and current events is that it does not have access to recent stuff. I think it stopped back in 2020 or maybe 2019 or something. I don’t remember.


Yes, it went into an explanation of how it works.

Okay, that’s not actually the same thing that I asked. There’s a difference between constructing a mathematical infinite series and making an infinite number of additions in the real world.
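That distinction can be put in standard notation (a sketch of the usual textbook definition, not something from either post): every partial sum produced by successive addition is finite, and the "infinite series" is defined as a limit rather than as a completed act of infinitely many additions.

```latex
% Each partial sum is finite for every n:
S_n = \sum_{k=1}^{n} a_k , \qquad n = 1, 2, 3, \dots

% The infinite series is defined as the limit of the partial sums,
% not as the result of a completed infinite process of addition:
\sum_{k=1}^{\infty} a_k \;=\; \lim_{n \to \infty} S_n

% Example: for the geometric series with a_k = 1/2^k,
% S_n = 1 - 1/2^n < 1 for every n, yet the limit is exactly 1.
% No finite number of additions ever reaches the limit.
```

So both answers can be right about different questions: a convergent series "exists" as a limit, while no sequence of successive additions ever passes from a finite total to an infinite one.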