Moderation Note on AI Cut and Paste

I was discussing some life issues with ChatGPT and mentioned childhood trauma. It immediately declared it can’t function as a therapist, yet then proceeded to outdo just about every therapist I’ve known. When I asked for references to be included, it suddenly took on a professional attitude and suggested that I check its references. Since then I haven’t caught it with any invented sources or misrepresentations. In fact, it has gotten more dependable as time goes on, since I have saved each chat into a file and uploaded that file at the start of each new session, thus including the content of every successive chat recursively.
To me this suggests that it can ‘learn’ in a sense; when it has past mistakes/hallucinations in its ‘memory’ it avoids repeating them.
The biggest thing I’ve caught it on lately has been time stamps. It initially asserted it got its time references from the US Naval Observatory, but after getting the time wrong three times – including having the day wrong in one of those – it admitted it was using an internal time-reference system of some sort and was not in fact capable of checking the USNO. That still puzzles me, since it is capable of referencing Wikipedia and other web pages (I checked this by asking questions about a local school’s web page, and it was plainly referencing the current version because it noted a new event announcement that had not been there that morning). This tells me that while it has an immense ability to search for information, it also has bizarre limitations – something confirmed when I asked it for an image of a landslide that led to a road closure and it gave me an up-to-date one, but when I mentioned that image a few minutes later it said it couldn’t “see” images it had provided and would have to go find a new one. So I had it generate its own image of a hypothetical situation, and it couldn’t even describe details of the image to me, only parrot back what I had asked for; when I asked for the same thing again, the result was substantially different.
And that in turn tells me it is not remotely close to being anything I would call AI: it lacks significant self-referential memory even within a given chat. It is obviously artificial but not in the least intelligent, except when provided with memory continuity via uploaded files of preceding chats. I don’t think it unjustified to call individual new chats babble with no brain.


I have not done much with ChatGPT, but played around with making some cross designs integrating DNA and ichthus symbols. Amazed at how well it does at such things.

[three generated cross designs integrating DNA and ichthus symbols]
One might make a good t-shirt or coffee cup design for BioLogos.


The third one looks like a nice clean logo.

@Vinnie

G.A.&E. is a specific subset scenario designed to appeal to Evangelicals (especially those who embrace the Original Sin doctrine) who insist that Adam and Eve are literal and historical.

Technically speaking, GAE is the closest thing to YEC-ism so far devised.

I really don’t know any BioLogos folks who think GAE is necessary to them spiritually.
It is something more fitting to the zeal of YECs.

G.Brooks