The different answers are a clue to the real problem. ChatGPT isn’t designed to provide factual answers to factual questions. In the words of James McGrath, it’s not the computer from Star Trek:
OpenAI has emphasized from the outset what Chat-GPT is. It emulates human speech patterns. It does this remarkably well. They have been very clear about what it is not. It is not a search engine. It is not designed, as it shuffles language into new patterns, to preserve and convey things that humans would identify as factual answers to our questions. It does not understand what you are saying or what it is saying, and thus has no capacity to identify facts in the enormous sample of human text that serves as the basis for its own generated text.
You know how someone would address the computer on Star Trek and ask it for information? You can do that, and Chat-GPT will answer in something more like human speech than the computer on Star Trek. But because it does not provide information, and has no capacity to do so other than accidentally, on occasion, as a byproduct of shuffling textual patterns that have information woven into them, it makes things up.
McGrath wondered about John the Baptist’s statement that he wasn’t worthy to carry Jesus’ sandals, so the religion prof asked ChatGPT a few questions:
Please list all references to someone carrying sandals in ancient Jewish literature.
Thank you, these are very helpful. Are there others outside the Bible?
Who might carry someone else’s sandals in the ancient Mediterranean world?
As you can see, even though I never asked about John the Baptist, Chat-GPT had in its textual corpus more mentions of John the Baptist together with “strap” and “sandal” than probably all other references combined. It told me what I already knew, that this is traditionally interpreted as a sign of humility. It invented a quote from the Dead Sea Scrolls and mentioned Geza Vermes, not because it is trying to pretend to know but because these kinds of words are in the relevant texts that it shuffled and reorganized. It doesn’t know it is lying, even though it seems as though it is intentionally trying to make made-up information seem authentic. Can something that isn’t a self-aware person lie? But by virtue of how it is designed, when it does what it was designed to do, it ends up imitating human speech as though it were the speech of an untrustworthy person prone to fabrication. It weaves in details that make what it says sound plausible, not because it has a desire to deceive (it has no desire whatsoever), but because correct details happen to show up as it creatively shuffles what others have put into texts with related key words.
So please, for the love of all that’s holy, stop quoting ChatGPT at us. I’d like to see the mods ban it, frankly.