Only a human could mess that up
In many ways, ChatGPT seems to be designed with the Turing test in mind. But we already have plenty of humans; we do not particularly need human-like computers. Doing well at tasks humans can't do would be useful. Imitating a human well is what makes ChatGPT function so much as CheatGPT. Although it can do some things well, error checking is not one of them.
I wonder if we become fascinated by this program, and more inclined to view its output as profound, to the degree we harbor a scientistic point of view: the view that the world is mechanistic and that truth is best arrived at by computation.
I'm of the mind that ChatGPT is a tool. And like any new tool, the skill is in knowing when to use it and when not to. I probably use ChatGPT a dozen times a day: language practice, TTRPG content creation, learning, quick "Google"-style lookups, work tasks, copy editing. I resonate with its language style, and as a person who learns through discussion I find it very helpful. Yet, as in the old saying about everything looking like a nail when all you have is a hammer, problems arise when it is viewed as the tool to end all tools.
There is nothing wrong with using ChatGPT to research topics and allowing that content to inform posts. I am sure we all do that with books and the internet, which are just as fallible as current generative AIs. However, one wouldn't respond to a post by copying and pasting a paragraph from a book or Wikipedia with little to no context or caveat, and that is the problem we are addressing here.
That said, despite some obvious concerns, I am overall incredibly thankful for the work of OpenAI in providing this amazing tool and am excited about what the future holds.
Exactly. You also don't just throw them out because you don't know how to use them.
That's not what I see being addressed in the new directive.
And I don't think it's as big of a problem as it's made out to be. Context is king. I don't recall that many posts which contained a ChatGPT quote minus commentary. There may have been a couple thrown out there by myself without commentary, but those should not have been addressed to a forum user in particular.
Additionally, even when quoting from Wikipedia, one has references for the source of information. People can follow up and examine the quoted pages (or articles, or databases, or whatever) and the sources used to create that page. ChatGPT does not (automatically) provide a list of references, and it does not always give the same answer, which suggests it probably does not always use the same sources.
It is cool that it will provide them quickly upon request, though. Since it's a conversational model, it mimics us: we don't use small superscript numbers in little speech balloons to refer to footnotes while we are conversing.
It certainly does not learn from or refer to prior conversations anyway.
And it will readily admit when it is wrong if you can demonstrate it. That is not common in live conversations!
Good point. It would be helpful if there were a way to provide a reference for a ChatGPT-generated response when there is a real insight or something particularly interesting. Being able to save and source the quote would be a nice addition to the platform.
I think, based on my naive understanding of machine learning, it has access to the same library and sources at any given instant. I wonder what accounts for the variation in responses. It's not as if it is using a pseudo-random algorithm to vary its pull on the data.
ChatGPT confirmed that it does treat its sources as a single source. It explains, "the model is trained on the entire corpus of text as a single entity, without differentiating between the individual sources of text that make up the corpus." It goes on to clarify how some variations and patterns can affect responses.
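For what it's worth, pseudo-randomness is actually a plausible explanation for the variation: language models of this kind typically assign probabilities to candidate next tokens and then sample from that distribution, with a "temperature" setting controlling how spread out the probabilities are. A minimal sketch of that idea, using made-up token scores (the tokens and logit values below are purely hypothetical, not real model output):

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Convert raw scores to probabilities; higher temperature flattens them."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token candidates and their raw scores.
tokens = ["sources", "library", "corpus"]
logits = [2.0, 1.0, 0.5]

probs = softmax(logits, temperature=0.8)

# Drawing from the distribution is pseudo-random, so repeated runs
# can produce different continuations from the same input.
samples = [random.choices(tokens, weights=probs)[0] for _ in range(5)]
print(samples)
```

Under this picture, the model need not be consulting different sources each time; the same fixed probabilities can still yield different answers on different draws.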
Simply amazing that an entire library can now be accessed with such rudimentary ease.
It is indeed what is being addressed in the new directive. We don't want any more posts of the nature "I asked ChatGPT this question and this is what it said: 'XYZ'."
Even when weâre talking about its usefulness as a tool, illustrating its features and quirks, in a thread about ChatGPT.
Take down the BioLogos interview with it then.
No, Dale, you are super confused about how things work. ChatGPT is interesting. BioLogos did a podcast about it. Everyone is allowed to be interested. Everyone was allowed to play with it for a few days. What we do not want is for people to continue their "watch me play with it" kind of stuff. Play with it on your own. We do not all need to watch.
For people who haven't used it yet, it's informative. No one is compelling anyone to read it, and there was some lighthearted humor in what you took down, and maybe some education about blank verse and kinds of rhyming for some. Why does a gloomy cloud come to mind?
It's not the purpose of the forum to inform people how to use ChatGPT; it's the purpose of the forum to discuss the intersection of faith and science. I took down your cutting and pasting of ChatGPT questions and answers on this thread because this thread is not "to discuss ChatGPT"; it's to tell people not to cut and paste questions and answers from ChatGPT, a message you clearly did not get. And I deleted your post, as I said we would be doing. If you don't want your posts deleted because they contain such essential humor, then don't do the very thing the OP said will lead to your posts getting deleted.
Okay then. (I can't tell if that emoji is saluting or wiping its brow.)
Or this:
…but that could be Christy.
You seem to have glossed over the comment @LM77 made. Quotes from ChatGPT with additional commentary were permissible in his view, and that is not what was expressed in the new rule.
So does this mean we should imply authorship of a ChatGPT result is our own? This seems counterintuitive.