Stop quoting Chat GPT

Conceivably, as I noted, A can cite B and then A can check to see if B exists. It has nothing to do with truthtelling or the concept of truth.

Yes, if you’re writing traditional code, it’s no problem at all to check the references. Unfortunately, that’s not how these things work! Filtering the output to avoid the ribald, profane, and idiotic is a common approach which could be used to check references. But if you find and delete imaginary references, should you really trust the text that cites them!

I was not suggesting that is how they work now. It obviously would have to be specified algorithmically just as they are currently specified to suppress hate speech.

Why do you think I said that?

They aren’t specified algorithmically, they’re just filtered. They’re a black box of billions of numbers optimized with an incredible amount of processing on graphics cards. But what comes out you can easily filter, and they do. Bing, for instance.

1 Like

A could be programmed to verify if B exists before A cites B.

These are excellent ideas about building in checks that cited sources actually exist. And I think that should be done.

But that is only step 1. One could write a paper, building some kind of fanciful argument, citing valid references along the way, but that doesn’t make it a credible paper. To be a credible source, the references have to be used in an appropriate context, and there has to be a logical argument. That is what the peer review process is for. In a sense, Wikipedia has that, as it can be updated by users (hopefully with bad actors filtered out by editors).

As noted by @Jay313 quoting ChatGPT, is like quoting someone whose research methods haven’t been made known.

@Dale, I know you aren’t trying to say that ChatGPT is a trusted reference source, but I just wanted to suggest that to be such a thing is a lot more complex than verifying its references.

2 Likes

It would be a good start.

2 Likes

This is a very important statement as it highlights the exact point I made earlier…AI only spits out what is PROGRAMMED into it!

Even machine learning requires coding from an intelligent designer…the future question would be whether or not a computer can “become like God knowing good and evil”?

You don’t appear to understand about AI and machine learning, let alone AI ‘hallucinations’ and emergent behaviors.

1 Like

Perhaps we are misunderstanding each other because of imprecise language. I’ll explain what I mean when I say that this statement about machine learning is too simplistic.

Let me see if I can illustrate with a simple example. I stress that this example is simple, and that real machine learning is far more sophisticated than this.

Let’s think of creating the following simple chat program. The program periodically generates random strings and prints them out. It has a list of strings that it determines to be “useful”, which is initially empty. It also has built-in rules with which it recognizes patterns in its input, and uses to group strings it prints out. We then put a laptop running the program in a public place.

People walk by and look at the screen. Most of them see gibberish and walk away shaking their heads. Some type in something and get back gibberish, and walk away. Then, one day the program prints out the string “Hello” as one of its random strings. Now, more people stop and type in “Hello” or “Hey” or “What’s up” or something that the program determines are patterned responses. The program then adds “Hello” to its list of “useful” strings. The program continues on printing useful strings and maybe strings that people type in as responses, mixed with more random strings. After a while, the program’s list of useful strings grows, and the program determines that there are useful patterns in which to print the strings, based on the responses typed in. Eventually, the program can have conversations with people walking by. And so on.

The question is: is this program merely “spitting” out what was “programmed in”? Yes, the program has built-in capabilities to learn, but in some sense it has learned to interact with people. It was actually trained by its environment, and would be somehow unique based upon the sequence of events that trained it. For example, the chat program would behave differently if it was trained in New York, than if it was trained in Tokyo.

I reiterate that no one would actually use a simple method like this as a machine learning algorithm because it would take a zillion years to learn enough to be thought of as AI.

But are you saying that such a program is “spitting out what was programmed in”? It seems to me that, while it has built-in programming, it actually learns how to interact with people. The program would change over time, depending on the unique environment that it was placed in.

Now that, I think is a deep question (although some other people don’t) :slightly_smiling_face:. I think it is beyond the scope of this thread.

An interesting article on DuoLingo (linguists1 might be interested) which down the page talks about AI and chatbots. (I’m certainly not a subscriber, but there is no paywall if your IP address has not used up its monthly freebies.)

What caught my eye for the purposes here is that they have one monitoring the other:

Duolingo is being more cautious. Bicknell explained to me that, as GPT-4 and the human user generate dialogue in RolePlay, a separate machine-learning model monitors the results, and registers whether they are within the projected range of appropriate conversation. “If it’s out of scope,” he said, “then we just tell the learner, ‘Hey, I think you’re straying a little off topic.’ ”

That could be an answer for ‘hallucinations’, but maybe not ‘emergent behaviors’.

 


1 @Christy :slightly_smiling_face:

2 Likes

I don’t know enough to comment. AI is useful for translation and evaluating natural language only when it has a huge corpus of natural language data to work with and can be programmed to adequately differentiate between registers and social contexts. Situationally appropriate communication is more than grammar and syntax and semantics and that is where computers struggle.

3 Likes

I was wondering about DuoLingo itself.

Is anyone else familiar with it? My wife is using it to learn(/play with? ; - ) some Latin, but I have no clue how effective it is.

My kids have used it. You can learn vocabulary and obtain a certain level of passive comprehension skills. It can be an effective ancilliary tool for language learning. It’s not going to make you a competent speaker on its own though, you need to actually negotiate meaning with real people in social settings.

2 Likes

Few and far between for Latin. :grin:

1 Like

Quod erat demonstrandum.

4 Likes

Junior high school geometry proofs are a few score years behind me (not of course the only place you see that). :slightly_smiling_face:

My favorite though… the Latin anagram answers Pilate’s question:

Quid est veritas? “What is truth?”
Est vir qui adest: “It is the man who is here.”

(John 18:33-38)

1 Like

Yep, that’s my experience. It is a shame, DuoLingo could be great, but the courses often feel like the exercises at the end of the Grammar chapters with the chapter content shoved in the lesson notes.

This has not been my experience. There are whole Reddit threads and Discord groups dedicated to social Latin.

1 Like

I’m not experienced at Discord (except here :grin:) and Reddit.

A little light reading and embedded video: QED is also a initialism for quantum electrodynamics. :slightly_smiling_face: