Stop quoting ChatGPT

I didn’t mean that I have no interest in ever reading ChatGPT output. I have no interest in reading it in online discussions.

No source, just lots of anecdotal data. The quality may have improved and probably depends on the nature of the query. When I just tried asking for references about natural selection for lactase persistence in humans, most (but not all) of them appeared to be correct. When I asked for references to my own studies on natural selection, two were correct, one had a genuine title for a paper (that wasn’t about natural selection) but everything else about the citation, including my authorship, was wrong, and one was wholly fictitious. When I asked for studies by me specifically about natural selection in malaria, one was correct except for a single digit in the DOI identifier, one had the title and journal correct but all other publication details (including author order) wrong, and two were fictitious.

6 Likes

I am a little behind the curve, but why does it give fictitious references? Is it mimicking human constructions? In other words, is it a bug, or is it by intentional design?

I think that the main driving force behind GPT is its ability to “coherently” piece together relevant strings and phrases that are probably sourced from reputable content on the internet. If that’s more-or-less accurate, its “citations” will also be found the same way, and will draw from the real world of citations, which includes erroneous ones. That’s my speculation as an outsider to that software-engineering world; we’ll see what any insiders have to say or correct here.

Last evening I watched the 60 Minutes segment on Google’s several AI endeavors (one of which is ‘Bard’, very comparable to ChatGPT), and a problem they haven’t solved yet is what they label AI ‘hallucinations’, where the model can generate fictitious references as citations (e.g., books that don’t exist), not as ‘intentional’ deception, but as something that just happens for whatever reason.

Another thing they do not have a handle on is emergent behaviors. One that surprised them is that it taught itself the entire Thai language after incidentally coming across only a few Thai words; this was unplanned and not the result of an intended ‘feature’.

4 Likes

As they increase in complexity, some models reveal new biases and inaccuracies in their responses.

Wow! No telling where this is going to go.

And the breakthrough was more recent than I realized.

In 2017, researchers at Google Brain introduced a new kind of architecture called a transformer.


That’s what the T stands for in OpenAI’s ChatGPT.

1 Like

The model is trained to produce text that follows the patterns of human-generated text. One way to do that is to output genuine references that are associated in printed material with text that reads like that of the prompt. Another way is to generate strings that resemble genuine references, including real journal names and real researchers in relevant fields.
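To make that concrete, here’s a minimal sketch of my own (using the small public gpt2 model via Hugging Face’s transformers library, purely for illustration; ChatGPT’s actual model is far larger) showing how citation-shaped text falls out of next-token prediction alone, with no lookup against any database of real papers:

```python
# Minimal sketch: sampling from a small autoregressive language model to
# show how citation-shaped text emerges from next-token prediction alone.
# Nothing here consults a database of real papers. Assumes the Hugging
# Face `transformers` package and the public gpt2 model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "References:\n1. "
out = generator(prompt, max_new_tokens=60, do_sample=True, temperature=0.9)

# The continuation will usually *look* like a citation (authors, a title,
# a journal, a year) because those patterns dominate the training data,
# but nothing constrains it to denote a paper that actually exists.
print(out[0]["generated_text"])
```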

3 Likes

They put limits on it somehow to prevent it from promulgating hate speech, for instance, and maybe conspiracy theories as well. I wonder why it wouldn’t be possible for it to be told, when generating references, to validate them. But I’ve had both ChatGPT and Bard give me links that were real but wrong in the context of the question at hand, and then repeat them even when told they were wrong.
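For what it’s worth, that kind of after-the-fact validation is technically straightforward with an external tool. Here’s a hedged sketch of my own that checks a DOI against Crossref’s public REST API; it is an illustration only, not something ChatGPT or Bard does internally:

```python
# Sketch of the "validate the references" idea: look a DOI up in
# Crossref's public REST API, which returns HTTP 200 with metadata for
# registered DOIs and 404 for unknown ones. Illustration only.
import requests

def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return resp.status_code == 200

print(doi_exists("10.1038/nature14539"))   # real DOI (LeCun et al. 2015)
print(doi_exists("10.1234/made.up.doi"))   # almost certainly fabricated
```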

I’ve seen anecdotal evidence similar to your experience. Here’s one example I posted 3 days ago in another thread:

To quote McGrath’s results (emphasis mine):

Ultimately, I agree with what Kasparov says here about his 1997 loss to IBM’s Deep Blue chess engine:

App by app, use by use. Chat apps (like ChatGPT) are designed to mimic human communication, not replace it. They’re also not search engines.

4 Likes

Kasparov’s sentiment strikes me as a little too sure-footed… then the chat app goes live :grinning:

Many of these emergent behaviors illustrate “zero-shot” or “few-shot” learning, which describes an LLM’s ability to solve problems it has never — or rarely — seen before. This has been a long-time goal in artificial intelligence research, Ganguli said. Showing that GPT-3 could solve problems without any explicit training data in a zero-shot setting, he said, “led me to drop what I was doing and get more involved.”

from the article @Dale shared here
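To make the “few-shot” idea concrete, here’s an illustrative contrast between a zero-shot and a few-shot prompt. The translation task and worked examples are invented for demonstration; the point is that the “learning” happens entirely in the prompt, with no weight updates:

```python
# Zero-shot: the model gets only the instruction.
zero_shot = "Translate to French: The cat sleeps."

# Few-shot: the same request, preceded by a handful of worked examples
# embedded in the prompt itself. The model's weights never change; it
# picks up the task pattern "in context."
few_shot = """Translate to French.
English: Good morning. -> French: Bonjour.
English: Thank you very much. -> French: Merci beaucoup.
English: The cat sleeps. -> French:"""
```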

Meanwhile Kasparov is not considering the coincidence of a technological and a philosophical singularity. I’m not a doomsayer, but as a politically liberal optimistic partial preterist (PLOPP😎) this will undoubtedly be a game changer.

Yes, I read the piece Dale shared. Interesting stuff. But you underestimate Kasparov. Chess and language programs have always been the testing grounds for machine learning and AI. Stockfish is a spiritual descendant of Deep Blue: it searches enormous numbers of possible moves and responses many levels deep and comes up with the best move. Google AI’s DeepMind introduced the chess-playing AlphaZero in late 2017. As detailed in a 2018 paper in Science, given only the rules and 9 hrs of “training” playing itself, AlphaZero mopped the floor with Stockfish. (Stockfish has since caught up, but DeepMind had already left chess for bigger things.) Kasparov was well aware of these developments and was often consulted. He wrote a book about AI in 2017 and co-authored a 2021 paper on the subject in Harvard Business Review.

In 2019 DeepMind released MuZero, which could master board games and Atari games without even being fed the rules. That’s a zero-shot setting a full year before GPT-3 (the model in the quote above) debuted in 2020.

Don’t miss Kasparov’s point. AI isn’t a monolithic thing. ChatGPT is an app built for a specific purpose. More and better will come.

1 Like

One of the things that is wowing people (or freaking them out, depending on your point of view) is the ability of ChatGPT to write computer code. Some people worry that this is going to make all of us software developers redundant, and other people worry that it is going to result in artificial intelligence improving itself in ways that are unsupervised and unpredictable. Welcome to The Matrix, Skynet, or <insert your dystopian singularitarian sci-fi movie of choice here>.

Will this happen? It remains to be seen. But I don’t think that software development as a career is under threat just yet. ChatGPT and GPT-4 seem to do really well at generating simple scripts and code to solve common problems, but if you have specific and detailed requirements (as is always the case when programming in a business context), you have to drill down into so much detail with your prompts to get exactly what you want that you might almost as well just write the code yourself. Certainly I don’t see it taking the job of writing code away from programmers and putting it into the hands of business analysts.

Rather more interesting is GitHub Copilot, which can take comments that you write in your source code and churn out complete functions for you. But again, I don’t think this is going to challenge programmers’ jobs—on the contrary, it’s simply going to make us more ambitious in the kinds of projects that we try to tackle.
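For anyone who hasn’t tried it, the workflow looks something like this. The comment and signature are what you type; the function body here is a hand-written stand-in for the kind of completion a Copilot-style tool drafts, not actual Copilot output:

```python
from collections import Counter
import re

# You write the comment; a Copilot-style assistant proposes the body.
# Return the n most common words in a text file, ignoring case.
def most_common_words(path: str, n: int = 10) -> list[tuple[str, int]]:
    with open(path, encoding="utf-8") as f:
        words = re.findall(r"[a-z']+", f.read().lower())
    return Counter(words).most_common(n)
```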

Another point worth making is that large language models such as ChatGPT are computationally expensive, requiring vast data centres to get the kind of results of which they are capable. Apparently training GPT-4 cost somewhere in the region of $100 million. You can of course run LLMs on your own laptop, but they will naturally be much more limited in what they can achieve and much more prone to making mistakes.
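If you want to see those limitations first-hand, here’s a minimal sketch of running a small model locally with Hugging Face’s transformers; distilgpt2 is just an arbitrary small example that fits comfortably on a laptop CPU:

```python
# Minimal local-LLM sketch, assuming the Hugging Face `transformers`
# package. distilgpt2 (~82M parameters) runs fine on a laptop CPU, but
# its output quality illustrates exactly the gap described above.
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("distilgpt2")
model = AutoModelForCausalLM.from_pretrained("distilgpt2")

inputs = tok("Large language models are expensive to train because",
             return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=40, do_sample=True,
                     pad_token_id=tok.eos_token_id)
print(tok.decode(out[0], skip_special_tokens=True))
```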

3 Likes

I use mine as a search engine lol. Or rather, I use it for recommendations on what to search for. I’ll post a handful of themes, keywords, and a book, and it will often bring up books or films that match better than a Google search usually does.

3 Likes

He is knowledgeable about AI, but there was a false sense of security in his words.

But what would he or Ganguli predict about the future if two singularities were to occur in relative proximity?

Let me address these two drive-by shootings (subtweets, in that venue):

You realize this is a fairly small community of regulars, right? If you want to call me out, then do it. For the record, I made clear my unwillingness to “dialogue” with ChatGPT on April 6 in this post:

It wasn’t until April 14 that I publicly said I’d like to see the mods ban it. Also for the record, as a former mod I don’t get to participate in their deliberations and I don’t get a vote on their decisions. Don’t turn this into some sort of personal vendetta.

@Christy liked it and the next day it was banned.

I have nothing against you and I made my peace with the mods’ decision.

Newsflash: Maybe I said something out loud that Christy and other mods were already thinking. I didn’t initiate this ban and had no say in it. Perhaps you could apologize, or not.

On other unfinished business:

Of course you’re free to skip my posts, and vice versa. But you completely mischaracterize that conversation. (Anyone who wants to can look it up starting here.)

Right off the bat, I asked for those who knew more than me to weigh in. That is in no way, shape or form trying to “correct someone who is more knowledgeable.” I was trying to have a conversation with you, not correct you. I noted at the end of my “wall of text” that the article’s conclusion “seemed” to contradict something you’d quoted. Please understand that everything isn’t a debate. If you want to have a dialogue, just correct my non-expert misunderstanding and explain what I got wrong. I’m happy to learn new things, but a non-answer and a condescending comment in another thread are less than worthless. All it really means is you won’t find much conversation around here about your pet subject of infinity.

1 Like

It’s perfectly believable. At the time, the ‘directive’ felt rushed, there was a small public disagreement between the mods about its scope, and you had mentioned a ban the day before.

For that I apologize. That part of my comment was not in any way associated with you. And in no way do I consider myself an expert, or even freshman-level knowledgeable, in philosophy of math. I sincerely mean this.

This sentence, which was separated by a line break from the second sentence, was in reference to Christy’s comment, which I misread to mean that if you have to search ChatGPT for a reference, then you are not welcome here.

My comment was in reply to Dale’s comment:

This was Christy’s comment which I misread:

Sorry for the mix-up; looking back now, I should have tagged you when making those references to you. That was a lapse of judgement on my part.

1 Like