Stop quoting ChatGPT

One of the things that is wowing people (or freaking them out, depending on your point of view) is the ability of ChatGPT to write computer code. Some people worry that this is going to make all of us software developers redundant, and other people worry that it is going to result in artificial intelligence improving itself in ways that are unsupervised and unpredictable. Welcome to The Matrix, Skynet, or <insert your dystopian singularitarian sci-fi movie of choice here>.

Will this happen? It remains to be seen. But I don’t think that software development as a career is under threat just yet. ChatGPT and GPT-4 seem to do really well at generating simple scripts and code to solve common problems, but if you have specific and detailed requirements (as is always the case when programming in a business context), you have to drill down into so much detail with your prompts to get exactly what you want that you might almost as well just write the code yourself. Certainly I don’t see it taking the job of writing code away from programmers and putting it into the hands of business analysts.

Rather more interesting is GitHub Copilot, which can take comments that you write in your source code and churn out complete functions for you. But again, I don’t think this is going to challenge programmers’ jobs—on the contrary, it’s simply going to make us more ambitious in the kinds of projects that we try to tackle.

Another point worth making is that large language models such as ChatGPT are computationally expensive, requiring vast data centres to get the kind of results of which they are capable. Apparently training GPT-4 cost somewhere in the region of $100 million. You can of course run LLMs on your own laptop, but they will naturally be much more limited in what they can achieve and much more prone to making mistakes.

3 Likes

I use mine as a search engine lol. Or rather, I use it for recommendations on what to search for. Like I’ll post a handful of themes and keywords from a book, and it will often bring up books or films that match them better than Google usually does.

3 Likes

He is knowledgeable about AI, but there was a false sense of security in his words.

But what would he or Ganguli predict about the future if two singularities were to occur in relative proximity?

Let me address these two drive-by shootings (subtweets, in that venue):

You realize this is a fairly small community of regulars, right? If you want to call me out, then do it. For the record, I made clear my unwillingness to “dialogue” with ChatGPT on April 6 in this post:

It wasn’t until April 14 that I publicly said I’d like to see the mods ban it. Also for the record, as a former mod I don’t get to participate in their deliberations and I don’t get a vote on their decisions. Don’t turn this into some sort of personal vendetta.

@Christy liked it and the next day it was banned.

I have nothing against you and I made my peace with the mods’ decision.

Newsflash: Maybe I said something out loud that Christy and other mods were already thinking. I didn’t initiate this ban and had no say in it. Perhaps you could apologize, or not.

On other unfinished business:

Of course you’re free to skip my posts, and vice versa. But you completely mischaracterize that conversation. (Anyone who wants to can look it up starting here.)

Right off the bat, I asked for those who knew more than me to weigh in. That is in no way, shape or form trying to “correct someone who is more knowledgeable.” I was trying to have a conversation with you, not correct you. I noted at the end of my “wall of text” that the article’s conclusion “seemed” to contradict something you’d quoted. Please understand that everything isn’t a debate. If you want to have a dialogue, just correct my non-expert misunderstanding and explain what I got wrong. I’m happy to learn new things, but a non-answer and a condescending comment in another thread is less than worthless. All it really means is you won’t find much conversation around here about your pet subject of infinity.

1 Like

It’s perfectly believable. At the time the ‘directive’ felt rushed, there was a small public disagreement between the mods about the scope, and you had mentioned a ban the day before.

For that I apologize. That part of my comment was not in any way associated with you. And in no way do I consider myself an expert, or even freshman-class knowledgeable, in philosophy of math. I sincerely mean this.

This sentence, which was separated by a line from the second sentence, was in reference to Christy’s comment, which I misread to mean that if you have to search ChatGPT for a reference, then you are not welcome here.

My comment was in reply to Dale’s comment:

This was Christy’s comment which I misread:

Sorry for the mix-up. Looking back now, I should have tagged you when making those references about you. That was a lapse of judgement on my part.

1 Like

Anyone who is established and is good at what they do should be ok. A career in telephone customer support isn’t looking very promising.

It was not a unilateral decision; I just get delegated to do the bossy stuff because other people don’t want to. It’s actually not my preferred role in life, it’s just how it often shakes out here. There was no public disagreement between Liam and me; some people just took it that way.

3 Likes

Not to split hairs, but there was, unintentional as it may be. Anyone can go back and read the part of the comment that was in bold type.

This “program” and all things called “Artificial Intelligence” are not intelligent at all. They are programmed to do certain tasks. They do not learn anything they are not programmed to be able to learn, they do not do anything they are not programmed to be able to do, and they do not think or feel. Artificial Intelligence as portrayed in sci-fi films, or in the way those such as Elon Musk WANT you to view it, is not real and is currently, perhaps forever, beyond man’s ability to create.

True Artificial Intelligence does not exist.

I’m going to ignore the bulk of your post and its purpose in order to hyperfixate on your comment about how sci-fi films want us to view AI. There is a whole range of AI presentations in sci-fi, from synthetics like David in Prometheus to lesser models in some of the novels, so bound up emotionlessly in their programming that they won’t even open a door with seconds to spare until the leading human dies and the power to override finally falls into the final girl’s hands to say “override open.” Often, though less so now, AIs were essentially just voices on a ship, clearly not conscious beings but more like a smart-house program that only becomes an antagonist after some hacker infects it with malware.

There is a contradiction in what you’ve stated. It is certainly true that they can only do what their programming permits, but perhaps you underestimate our capacity to program for learning. It may well be that, having been programmed to learn, they might also learn how to program themselves - though to what ends will be a test of our own wisdom and goodness, not theirs. While they can only go on doing what their programming permits, any apparently malevolent values can only reflect our own, IMO.

Yes, I wouldn’t worry about your job as a software developer just yet as regards ChatGPT :slightly_smiling_face:.

I saw an example of someone asking it to find the next prime number after a given 10-digit number. It gave a number that was not prime. It also gave some Python code that it said was the Miller-Rabin primality test (Miller–Rabin primality test - Wikipedia) but the code was a simple primality test using brute force trial division, which is not Miller-Rabin.
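For anyone curious about the difference: brute-force trial division checks every candidate divisor up to √n, while Miller–Rabin repeatedly squares modular powers of a handful of witness bases against the factorization n − 1 = 2^s · d. Here is a minimal sketch in Python (my own, not the code ChatGPT produced; the fixed witness set used here is known to make the test deterministic for all n below roughly 3.3 × 10^24):

```python
def is_prime_miller_rabin(n: int) -> bool:
    """Miller-Rabin primality test, deterministic for n < ~3.3e24
    when using the first twelve primes as witnesses."""
    if n < 2:
        return False
    witnesses = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]
    if n in witnesses:
        return True
    if any(n % p == 0 for p in witnesses):
        return False
    # Write n - 1 as d * 2^s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in witnesses:
        x = pow(a, d, n)  # modular exponentiation, the heart of the test
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True
```

The giveaway in the code ChatGPT produced would have been a loop over candidate divisors; a genuine Miller–Rabin implementation is built around modular exponentiation with `pow(a, d, n)` rather than division.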

So, ChatGPT is certainly not an “expert” in mathematics. However, as a language model it is very impressive. And it is still impressive that it has enough understanding to attempt an answer like that, even though it didn’t get it right.

I am not a stereotypical “AI-skeptic” - I have been open to discussing with people the possibility that AI will someday achieve something like human intelligence. I just don’t think we are close to that.

2 Likes

@Andy7 mentioned being an AI skeptic. I am too, but not carte blanche. Obviously sufficiently powerful computers can be programmed to solve problems in ways that involve learning aspects of the solution themselves. I’ll grant that but still feel there are limits, some of which were discussed in a very recent PERSPECTIVA video of a discussion between Phoebe Tickell, the author of a book on morality and imagination, and Iain McGilchrist, beginning at about the 27-minute mark. My attempt to transcribe a key part of that discussion:

… at some point some creation of ours might achieve a kind of consciousness. I just can’t rule that out, partly because I believe that consciousness is the original primary stuff of the cosmos. In any case, what you mean there and what I mean by imagination is not what this [bot] is doing. What it is doing is exactly what an imaginative person doesn’t do -which of course a human being couldn’t do- at this speed. What machines do is just do a trolley dash around Wikipedia and come up with an answer, and it will even make stuff up and be quite sure that it’s right, because it sees a pattern and jumps to conclusions …

A machine can be made by human ingenuity to simulate almost anything. … the question is, is it simulating something or is it actually achieving it? So far there’s no evidence that it’s achieving it. It’s the logical conclusion of the Industrial Revolution, which was to take things that took a lot of craftsmanship or occurred in nature, try to imitate them, and then produce thousands of them very, very fast, so it’s exactly what was going on already in the early part of the 19th century. It’s just been heightened to another level of technological sophistication. But it’s … never going to produce works that have profound human meaning, because you have to have a body, emotions, a sense of relationship, the knowledge that you’re going to die, to have suffered, and to be aware of the limitations of what it is that you can experience, and I just don’t think this can be fed by a clever IT expert into a computer that seems to say those things but can’t actually do them.

2 Likes

I haven’t seen ChatGPT; however, it must clearly promote ideas that are problematic for this forum? I would be interested in knowing what those doctrinal problems are exactly… I will have to go over and take a look… thanks for the heads-up on a resource I had not known about before.

And that is exactly the reason why, rather than simply stating “Do not quote it on this forum” (an archaic demand to make in a world that promotes democracy and freedom), we should simply allow all arguments to be made and analysed.

Of course all of us who know anything about the wonderful advent of computer technology are well aware that its programming simply spits out what is programmed in!

The point is, if ChatGPT is producing mainstream views, then rather than hide those views, one should closely examine why such views may be out there in the first place. What evidence supports such views, and where is that evidence problematic? Is that not also part of the scientific method?

1 Like

We’re not performing science here, just trying to have conversations with actual people. If I wanted to talk to bots, I could post something about Ukraine or vaccines on Twitter and talk to them all day.

4 Likes

I’m actually an AI enthusiast. AI has already surpassed human intelligence in chess, just like human physical limits were surpassed by machines in the industrial revolution. But consciousness is a totally different thing than intelligence. I’m a total skeptic that AI can achieve anything close to consciousness. I doubt anyone read Kasparov’s article in the Harvard Business Review that I linked above, but it makes a helpful distinction between AI and human intelligence and suggests a way forward – Augmented Intelligence.

From the article (emphasis mine):

Machine Intelligence vs. Human Intelligence

In general, people recognize today’s advanced computers as intelligent because they have the potential to learn and make decisions based on the information they take in. But while we may recognize that ability, it’s a decidedly different type of intelligence than what we possess.

In its simplest form, AI is a computer acting and deciding in ways that seem intelligent. In line with Alan Turing’s philosophy, AI imitates how humans act, feel, speak, and decide. This type of intelligence is extremely useful in an organizational setting: Because of its imitating abilities, AI has the quality to identify informational patterns that optimize trends relevant to the job. In addition, contrary to humans, AI never gets physically tired and as long as it’s fed data it will keep going.

These qualities mean that AI is perfectly suited to be put to work on lower-level routine tasks that are repetitive and take place within a closed management system. …

Human abilities, however, are more expansive. Contrary to AI abilities that are only responsive to the data available, humans have the ability to imagine, anticipate, feel, and judge changing situations, which allows them to shift from short-term to long-term concerns. These abilities are unique to humans and do not require a steady flow of externally provided data to work as is the case with artificial intelligence.

In this way humans represent what we call authentic intelligence — a different type of AI, if you will. This type of intelligence is needed when open systems are in place. In an open management system, the team or organization is interacting with the external environment and therefore has to deal with influences from outside. Such a work setting requires the ability to anticipate and work with, for example, sudden changes and distorted information exchange, while at the same time being creative in distilling a vision and future strategy. In open systems, transformation efforts are continuously at work and effective management of that process requires authentic intelligence.

Although Artificial Intelligence (referred to as AI1 here) seems opposite to Authentic Intelligence (referred to as AI2 here), they are also complementary. In the context of organizations, both types of intelligence offer a range of specific talents.

Which talents – operationalized as abilities needed to meet performance requirements – are needed to perform best? It is, first of all, important to emphasize that talent can win games, but often it will not win championships — teams win championships. For this reason, we believe that it will be the combination of the talents included in both AI1 and AI2, working in tandem, that will make for the future of intelligent work. It will create the kind of intelligence that will allow for organizations to be more efficient and accurate, but at the same time also creative and pro-active. This other type of AI we call Augmented Intelligence (referred to as AI3 here).

2 Likes

Perhaps you can argue it could have been better communicated, but the idea is that we value human interaction and opinion, not quote mining without comment or interpretation. ChatGPT can be used as a resource or as a reference, but what is important is not what ChatGPT, or Wikipedia, or Google says, but rather what you think, and you sharing with and learning from others. If preferring thoughtful communication over proof-texting quotes is seen as encroaching on freedom, perhaps this is not a good fit.

3 Likes

You are hilarious, Adam. Yeah, that was the problem, the robots were proselytizing in a direction we didn’t appreciate…

6 Likes