Anyone who is established and is good at what they do should be OK. A career in telephone customer support, though, isn't looking very promising.
It was not a unilateral decision; I just get delegated to do the bossy stuff because other people don't want to. It's actually not my preferred role in life, it's just how it often shakes out here. There was no public disagreement between Liam and me; some people just took it that way.
Not to split hairs, but there was, unintentional as it may have been. Anyone can go back and read the part of the comment that was in bold type.
This "program" and all things called "Artificial Intelligence" are not intelligent at all. They are programmed to do certain tasks. They do not learn anything they are not programmed to be able to learn, they do not do anything they are not programmed to be able to do, and they do not think or feel. Artificial Intelligence, as portrayed in sci-fi films, or in the way those such as Elon Musk WANT you to view it, is not real and is currently, perhaps forever, beyond the ability of man to create.
True Artificial Intelligence does not exist.
I'm going to ignore the bulk of your post and its purpose in order to hyperfixate on your comment about how sci-fi films want us to view AI. There is a whole range of AI presentations in sci-fi, from synthetics like David in Prometheus to lesser models in some novels that are so emotionlessly bound up in their programming that they won't open a door with seconds to spare, until the lead human dies and the power to override finally falls into the final girl's hands to say "override open". Often, though less so now, AIs were essentially just voices on a ship, clearly not conscious beings but more like a smart-house program that only becomes an antagonist after some hacker infects it with malware.
There is a contradiction in what you've stated. It is certainly true that they can only do what their programming permits, but perhaps you underestimate our capacity to program for learning. It may well be that, having been programmed to learn, they might also learn how to program themselves, though to what ends will be a test of our own wisdom and goodness, not theirs. While they can only go on doing what their programming permits, any apparently malevolent values can only reflect our own, IMO.
Yes, I wouldn't worry about your job as a software developer just yet as regards ChatGPT.
I saw an example of someone asking it to find the next prime number after a given 10-digit number. It gave a number that was not prime. It also gave some Python code that it said was the Miller-Rabin primality test (Miller–Rabin primality test - Wikipedia), but the code was a simple primality test using brute-force trial division, which is not Miller-Rabin.
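For anyone curious about the difference, here is a minimal sketch of both approaches (my own illustration, not ChatGPT's output; the function names are mine). The fixed witness set makes Miller-Rabin deterministic well beyond any 10-digit input:

```python
# Trial division vs. Miller-Rabin: two very different primality tests.
# Deterministic witness set below is valid for n < 3.3 * 10**24.

def is_prime_trial_division(n: int) -> bool:
    """Brute-force test: try every odd divisor up to sqrt(n)."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

def is_prime_miller_rabin(n: int) -> bool:
    """Miller-Rabin: write n - 1 = d * 2^r with d odd, then test witnesses."""
    if n < 2:
        return False
    r, d = 0, n - 1
    while d % 2 == 0:
        r += 1
        d //= 2
    for a in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if a % n == 0:
            continue  # n is itself one of the small witness primes
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is composite
    return True

def next_prime(n: int) -> int:
    """Smallest prime strictly greater than n."""
    candidate = n + 1
    while not is_prime_miller_rabin(candidate):
        candidate += 1
    return candidate

print(next_prime(1234567890))  # an arbitrary 10-digit example
```

At 10 digits, trial division is still fast (at most about 10^5 divisions), but it becomes hopeless at cryptographic sizes, which is exactly where Miller-Rabin earns its keep.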
So, ChatGPT is certainly not an “expert” in mathematics. However, as a language model it is very impressive. And it is still impressive that it has enough understanding to attempt an answer like that, even though it didn’t get it right.
I am not a stereotypical “AI-skeptic” - I have been open to discussing with people the possibility that AI will someday achieve something like human intelligence. I just don’t think we are close to that.
@Andy7 mentioned being an AI skeptic. I am too, but not carte blanche. Obviously, sufficiently powerful computers can be programmed to solve problems in ways that involve learning aspects of the solution themselves. I'll grant that but still feel there are limits, some of which were discussed in a very recent PERSPECTIVA video of a discussion between Phoebe Tickell, the author of a book on morality and imagination, and Iain McGilchrist, beginning at about the 27-minute mark. My attempt to transcribe a key part of that discussion:
… at some point some creation of ours might achieve a kind of consciousness. I just can't rule that out, partly because I believe that consciousness is the original primary stuff of the cosmos. In any case, what you mean there and what I mean by imagination is not what this [bot] is doing. What it is doing is exactly what an imaginative person doesn't do - which of course a human being couldn't do - at this speed. What machines do is just do a trolley dash around Wikipedia and come up with an answer, and it will even make stuff up and be quite sure that it's right, because it sees a pattern and jumps to conclusions …
A machine can be made by human ingenuity to simulate almost anything. … the question is, is it simulating something or is it actually achieving it? So far there's no evidence that it's achieving it. It's the logical conclusion of the Industrial Revolution, which was to take things that took a lot of craftsmanship or occurred in nature and try to imitate them and then produce thousands of them very, very fast, so it's exactly what was going on already in the early part of the 19th century. It's just been heightened to another level of technological sophistication. But it's … never going to produce works that have profound human meaning, because you have to have a body, emotions, a sense of relationship, the knowledge that you're going to die, to have suffered, and to be aware of the limitations to what it is that you can experience. I just don't think this can be fed by a clever IT expert into a computer that seems to say those things but can't actually do them.
I haven't seen ChatGPT; however, it must clearly promote ideas that are problematic for this forum? I would be interested in knowing what those doctrinal problems are exactly… I will have to go over and take a look… thanks for the heads-up on a resource I had not known about before.
And that is exactly the reason why, rather than simply stating "Do not quote it on this forum" (an archaic demand to make in a world that promotes democracy and freedom), we should simply allow all arguments to be made and analysed.
Of course all of us who know anything about the wonderful advent of computer technology are well aware that its programming simply spits out what is programmed in!
The point is, if ChatGPT is producing mainstream views, then rather than hide those views, one should closely examine why such views may be out there in the first place. What evidence supports such views, and where is that evidence problematic? Is that not also part of the scientific method?
We’re not performing science here. Just trying to have conversations with actual people. If I wanted to talk to bots, I can post something about Ukraine or vaccines on Twitter and talk to them all day.
I’m actually an AI enthusiast. AI has already surpassed human intelligence in chess, just like human physical limits were surpassed by machines in the industrial revolution. But consciousness is a totally different thing than intelligence. I’m a total skeptic that AI can achieve anything close to consciousness. I doubt anyone read Kasparov’s article in the Harvard Business Review that I linked above, but it makes a helpful distinction between AI and human intelligence and suggests a way forward – Augmented Intelligence.
From the article (emphasis mine):
Machine Intelligence vs. Human Intelligence
In general, people recognize today's advanced computers as intelligent because they have the potential to learn and make decisions based on the information they take in. But while we may recognize that ability, it's a decidedly different type of intelligence than the one we possess.
In its simplest form, AI is a computer acting and deciding in ways that seem intelligent. In line with Alan Turing's philosophy, AI imitates how humans act, feel, speak, and decide. This type of intelligence is extremely useful in an organizational setting: Because of its imitating abilities, AI has the quality to identify informational patterns that optimize trends relevant to the job. In addition, contrary to humans, AI never gets physically tired, and as long as it's fed data it will keep going.
These qualities mean that AI is perfectly suited to be put to work in lower-level routine tasks that are repetitive and take place within a closed management system. …
Human abilities, however, are more expansive. Contrary to AI abilities that are only responsive to the data available, humans have the ability to imagine, anticipate, feel, and judge changing situations, which allows them to shift from short-term to long-term concerns. These abilities are unique to humans and do not require a steady flow of externally provided data to work, as is the case with artificial intelligence.
In this way humans represent what we call authentic intelligence — a different type of AI, if you will. This type of intelligence is needed when open systems are in place. In an open management system, the team or organization is interacting with the external environment and therefore has to deal with influences from outside. Such a work setting requires the ability to anticipate and work with, for example, sudden changes and distorted information exchange, while at the same time being creative in distilling a vision and future strategy. In open systems, transformation efforts are continuously at work and effective management of that process requires authentic intelligence.
Although Artificial Intelligence (referred to as AI1 here) seems opposite to Authentic Intelligence (referred to as AI2 here), they are also complementary. In the context of organizations, both types of intelligence offer a range of specific talents.
Which talents – operationalized as abilities needed to meet performance requirements – are needed to perform best? It is, first of all, important to emphasize that talent can win games, but often it will not win championships — teams win championships. For this reason, we believe that it will be the combination of the talents included in both AI1 and AI2, working in tandem, that will make for the future of intelligent work. It will create the kind of intelligence that will allow organizations to be more efficient and accurate, but at the same time also creative and pro-active. This other type of AI we call Augmented Intelligence (referred to as AI3 here).
Perhaps you can argue it could have been better communicated, but the idea is that we value human interaction and opinion, not quote mining without comment or interpretation. ChatGPT can be used as a resource, or as a reference, but what is important is not what ChatGPT, or Wikipedia, or Google says, but rather what you think, and that you share with and learn from others. If preferring thoughtful communication over proof-texting quotes is seen as encroaching on freedom, perhaps this is not a good fit.
You are hilarious, Adam. Yeah, that was the problem, the robots were proselytizing in a direction we didn't appreciate…
With respect, that is far too simplistic a characterization of modern AI. These systems use deep learning algorithms (Deep learning - Wikipedia) which actually learn things that the designers did not "program in".
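To make that concrete, here is a toy sketch of my own (far simpler than any modern system): the only thing the programmer writes is a generic learning rule, and the weights that actually solve the task emerge from the training data. The network size and learning rate are arbitrary choices of mine.

```python
# A tiny neural network learning XOR by gradient descent. Nobody writes
# down the XOR rule; the network discovers weights that implement it.
import numpy as np

rng = np.random.default_rng(0)

# XOR is not linearly separable, so the solving weights are not obvious
# to hand-code; they are learned from these four examples.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 4))   # input -> hidden weights (random start)
b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1))   # hidden -> output weights (random start)
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of the squared error
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # The only "programmed" behavior is this generic update rule:
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

# Typically approaches [0, 1, 1, 0]; exact values depend on the random init.
print(out.round(2).ravel())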
Oh, I am an "AI-enthusiast" too. I was only trying to say that when I say skeptical things about AI, I could be misunderstood as an "AI-skeptic". At the risk of repeating myself from another thread, I think in terms of AGI (Artificial General Intelligence) and ASI (Artificial Specialized Intelligence). I think these roughly correspond to AI2 and AI1, respectively, in your quote from Kasparov's article.
Chess-playing, Jeopardy!-playing, etc., are ASI, and I will not minimize the amazing progress in those areas. I kind of think of AI language models as ASI, but maybe the boundary between AGI and ASI is becoming blurry.
In terms of AGI, no one has convinced me yet that AI systems are capable of feeling pain, or having emotions, which is a big part of being human. But maybe there are deep scientific and philosophical questions there about what constitutes pain and emotion in biological systems.
However, I guess this isn’t the thread for those discussions - the decision at hand is to stop quoting ChatGPT, which I am fine with.
I simply do not believe in the form of AI that those such as Elon Musk and Stephen Hawking warn/ed us about. It does not exist. I maintain that it never will. A machine will never be a person and must never be treated as one, regardless of how convincing it may be. It is still an illusion.
Mankind cannot create entirely new life forms. Humanity can replicate itself, or modify existing life forms, but never create something truly new.
I love the terminology "hallucitations" for what ChatGPT does. It often writes a short essay and completely invents multiple citations, none of which exist at all. It is not a case of copying a bad citation from the internet; it just invents plausible ones! This is not as good as a search engine, which has real references!
Bard has real-time access to the internet, and since it's a Google product, ideally it would have full access to content in books.google without the user restrictions we run into for copyrighted material. (It may not, since I presume they are independent projects.) Then it should be able to fact-check itself against some libraries. I'm not holding my breath.
The problem is that these large language models have no concept of "truth" or "facts," just clusters of words that make plausible connections to communicate some meaning or other. Their fake references are very plausible, often citing real authors. Musk points this out in his (hopeless) effort to develop a "truth" GPT that is better.
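You can see the mechanism in miniature with a toy bigram model (a deliberately crude sketch of my own; real LLMs are vastly more sophisticated, but the generation principle, picking a plausible next word, is the same):

```python
# A toy bigram "language model": it generates text by choosing words that
# plausibly follow each other, with no notion of whether the result is true.
import random
from collections import defaultdict

corpus = (
    "the study was published in the journal of science "
    "the journal was published by the university of chicago "
    "the study was led by the university press"
).split()

# Count which word follows which in the training text.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

random.seed(0)
word = "the"
output = [word]
for _ in range(12):
    nxt = follows.get(word)
    if not nxt:
        break  # dead end: no observed continuation
    word = random.choice(nxt)  # a plausible next word, nothing more
    output.append(word)

# Prints a fluent-looking but ungrounded sequence; the model has no idea
# whether any of it is true. Invented citations arise the same way.
print(" ".join(output))
```

The point of the toy: every word transition it produces really did occur somewhere in the training text, yet the assembled sentence can assert something no source ever said.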
Right. It's sort of like trusting people who "do their own research" on the internet. One reason I'm not in favor of quoting ChatGPT here is that it's just another source of potential misinformation that itself must be fact-checked at every step along the way. Like everyone's Fox News-addicted grandpa, ChatGPT can't sort reputable from spurious sources. It wasn't built for that task.