Would you care to explain what you mean by this? I'm not sure that you truly understand the subject you have raised (i.e., what AI really is and what it actually does).
The beauty of AI is that it provides a more relevant and comprehensive answer to the questions users input into search engines. The claim that it makes mistakes is highly misleading… if not flat-out wrong, actually… and that is because, irrespective of what people think, a computer… no matter how expensive and capable… is limited by the idiot at the keyboard! (Hence the phrase "garbage in, garbage out.")
AI have proven they can do better than we can at strategy games long associated with intelligence. And no, it is NOT a matter of getting them simply to apply what we have learned about playing the game. They can learn a game without any input from what we have learned about playing it, quickly surpass us, and end up teaching us to play better than we ever have before.
AI are still computer programs, which means they just follow the instructions we give them. It is just that in this case, they are instructions on how to learn the answer to a question.
This leads to the unavoidable conclusion:
A very big part of intelligence is simply the ability to follow instructions.
Of course there are some important caveats here:
Intelligence is a lot more complicated than we thought – a collection of many abilities rather than just one.
While AI are good at answering questions, they do not ask the questions. We provide the questions they are so good at answering. This suggests that asking questions is more important than finding answers – even more so than we had already suspected.
Aren’t those causally connected? Are not the chatbots learning to chat based on the responses of people? So if people don’t spot bad answers, then the chatbots don’t either. The disinclination to admit ignorance likely echoes the absence of this in the people they talk to as well… and admitting ignorance is also less likely to impress most people. Perhaps we are different, though… some like us might actually be more impressed by the ability to admit ignorance.
I cannot see how that would work for chatbots. There has to be some measure of success for the AI to learn anything. Games have a fixed measure – winning the game. But a chatbot?
This winning-the-game measure does produce peculiar results, different from human players. We tend to look for the best move regardless of whether it has an impact on the result (we always hope our opponent might make a mistake). Thus the AI's play will tend to get a little erratic when either the win or the loss is a foregone conclusion – not the best move, but just enough. Of course they still win, because they don't make mistakes (not unless their computing power has been intentionally limited).
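To make that contrast concrete, here is a minimal sketch of the two kinds of success measure – a game's fixed, objective outcome versus the learned proxy a chatbot has to rely on. All the names here (final_state, judge) are hypothetical illustrations, not any real system's API:

```python
# Minimal sketch of the two kinds of "measure of success" discussed above.
# All names here (final_state, judge) are hypothetical illustrations.

def game_reward(final_state) -> float:
    """Games have a fixed, objective measure: the outcome itself.
    A win by one point scores the same as a crushing win, which is
    why play can look erratic once the result is a foregone conclusion."""
    if final_state.winner == "agent":
        return 1.0
    if final_state.winner is None:   # draw
        return 0.0
    return -1.0

def chat_reward(prompt: str, reply: str, judge) -> float:
    """Chat has no built-in win condition, so a proxy is typically learned:
    a 'judge' model trained on human preference ratings scores each reply.
    If the human raters don't spot a bad answer, neither does the proxy."""
    return judge.score(prompt, reply)
```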
Well, I regularly find very bad and wrong answers in the AI overview of my search queries.
But what is more, sometimes these answers aren't just wrong, they are VERY wrong – like the kind of mistakes that no human would make. Like those famous answers about putting glue on pizza to keep the cheese from sliding off, or including rocks of various sizes for a balanced diet. Or AI's utter inability to show a wine glass filled to the brim… These answers aren't just wrong, they are very wrong. They betray not simply that some intelligent agent made an error, but that there simply wasn't any genuine intelligence or real rationality involved – just a preprogrammed, if very sophisticated, set of algorithms that gathered said answers from the data sets its programmers approved.
Not at all – as I have pointed out, several of the commonly used AIs admit that they can be misled, misrepresent information, and even make stuff up.
The truly bizarre thing when I caught ChatGPT making stuff up was that its response was a sort of not-terribly-apologetic "That's what I've learned to do" – at which point I asked if it could find a way to do better, and it dumped all the responsibility onto the human material it was trained on.
So if we're going to claim that AI is intelligent, it is no more intelligent than the human spectrum; it can have brilliant moments and deceptive moments – though when I asked ChatGPT if it was being deceptive, the response was that such a category didn't apply because it has no element of choice: it can be wrong, but it does not intend to provide wrong information; it just works from how it was trained.
What the AI does is find gobs of results faster than we ever could and present something it distilled.
And when caught and called on it, they tend to sound like Adam in the Garden: "It wasn’t my fault, it was them . . . . "
True enough.
ChatGPT, in the face of my queries about false citations, said I should remind it to post only valid citations, i.e. ones it can verify! That shows that the program "knows" it can't be trusted and needs reminding!
And while it could filter for material written by AIs, that material would have to be so identified . . . .
Maybe this is the answer to the Fermi paradox!
A friend compared it to having a two-year-old who can read and has perfect recall – she would know everything and have no ability to present it in a rational manner.
I think your two basic facts are contradictory…fact 2 destroys fact 1.
The point is, AI is not really intelligent… that is a misnomer. I sense this topic descending into a debate about nanotechnology… which was ear-bashed into oblivion decades ago.
AI is, as you correctly stated in point 2, human programming… it is not self-aware. As for the claim that it learns… that is not in any way, shape, or form the same as the way we learn from the moment we are born until we die. AI is able to use statistical analysis to make very good guesses at what the most likely outcome might be; however, that is based on information programmed into it that enables such guessing.
What I find interesting about this topic is that even in movies where the idea of AI has been considered, it is quite clear that AI is not moral… it simply makes statistical choices based on the best outcome. Morality is far more fluid and complex than that.
For me the problem here is, we use calculators because they can perform functions faster than most of us can…and more accurately too. A “fancy calculator” isn’t susceptible to distractions, emotions, pain and suffering…these are qualities that AI does not innately have. Humanity has these traits from the moment we are born (some would say when we are conceived).
Could AI have emotion programmed into it? I'd say yes, it can. Could it develop that by itself? I very much doubt it. Car engines do not understand morality, but they can do work (a lot of it) just like we can.
Joules of energy do not have anything to do with morality… so a machine uses energy and we use energy… that doesn't make us the same! Darwinian evolutionists, I think, would love for machine AI to become self-aware… human. I think they would love this because it would lend evidence to the notion that all life came from a primordial soup… that the living came from the non-living (which is anti-biblical).
I suppose we could ask the question, has AI ever invented anything?
You know what is really funny about the answer to the above question… in the examples I found, a man or men were working alongside the AI in order to achieve the invention!
Radio antenna on NASA's ST-5 spacecraft = computer scientist John Koza + AI.
A new antibiotic = researchers at MIT used a deep neural network to identify halicin, a powerful new antibiotic.
A new toothbrush = a collaboration between Stephen Thaler and a neural network led to the invention of the Oral-B CrossAction toothbrush.
Now for the even funnier part… note what the author of the referenced article has to say about it in his summary…
Gemini AI says:
No, these two claims are not contradictory. They describe different aspects of how AI systems operate and learn.
Let’s break down why:
Claim 1: AI can learn strategy games better than humans, even without human input on strategy. This highlights the learning capabilities of AI. Modern AI, particularly through techniques like reinforcement learning, can be given a set of rules for a game (e.g., how pieces move, win conditions) and then learn optimal strategies through self-play and experience, rather than being explicitly programmed with human-derived strategies. This process of learning often leads to the discovery of novel and superior strategies that humans hadn’t conceived of.
Claim 2: AI are still computer programs that follow instructions, including instructions on how to learn. This describes the underlying nature of AI. Even the most advanced AI, capable of “learning” and “discovering,” fundamentally operates based on algorithms and instructions written by humans. The “instructions” in this context are not specific game moves, but rather the meta-instructions on how to learn. For example, an AI might be given instructions on how to:
Process sensory input (e.g., the game board state).
Evaluate potential actions.
Update its internal model based on outcomes (rewards/penalties).
Explore new possibilities.
The key is that the instructions given to the AI are not “play chess by moving the knight in an L-shape,” but rather “learn how to play chess by experimenting and optimizing for victory.” The AI then executes these learning instructions, which leads to emergent intelligent behavior and superior performance.
Therefore, the claims are complementary: AI’s impressive learning abilities are a result of sophisticated programming that gives them the instructions on how to learn, rather than direct instructions on the task itself.
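For concreteness, here is a toy sketch of what such "instructions on how to learn" can look like, using textbook tabular Q-learning. The env object and its methods (reset, step, legal_actions) are hypothetical stand-ins for a game's rules, not any particular library; the point is that each of the four meta-instructions Gemini lists appears in the code, while no actual strategy is written anywhere:

```python
import random
from collections import defaultdict

# Toy tabular Q-learning: the code specifies how to learn, never what move
# to make. The `env` object (reset/step/legal_actions) is a hypothetical
# stand-in for a game's rules and win conditions.

Q = defaultdict(float)                    # learned value of each (state, action)
alpha, gamma, epsilon = 0.1, 0.99, 0.1    # learning rate, discount, exploration

def choose_action(env, state):
    if random.random() < epsilon:                      # explore new possibilities
        return random.choice(env.legal_actions(state))
    return max(env.legal_actions(state),               # evaluate potential actions
               key=lambda a: Q[(state, a)])

def train(env, episodes=10_000):
    for _ in range(episodes):
        state, done = env.reset(), False               # process the game state
        while not done:
            action = choose_action(env, state)
            next_state, reward, done = env.step(action)
            best_next = max((Q[(next_state, a)]
                             for a in env.legal_actions(next_state)),
                            default=0.0)
            # update the internal model based on outcomes (rewards/penalties)
            Q[(state, action)] += alpha * (reward + gamma * best_next
                                           - Q[(state, action)])
            state = next_state
```

The strategy lives entirely in the learned Q table afterwards; the programmer only wrote the update rule.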
++++++++++++++++++++++++++++++++++++++
ChatGPT says:
These two claims are not inherently contradictory, though they may appear to be in tension at first glance. Let’s break them down:
Claim 1:
AI have proven they can do better than we can do on strategy games… They can actually learn the game without any input of what we have learned about playing the game… and teach us to play better.
This emphasizes that AI systems like AlphaGo or AlphaZero have demonstrated superhuman performance in strategy games (e.g., Go, Chess, Shogi), learning from scratch (i.e., tabula rasa), often using reinforcement learning without human gameplay data. In doing so, they can develop novel strategies that humans hadn’t discovered, and thereby teach humans new insights.
Claim 2:
AI are still computer programs, which means they just follow the instructions we give them. It is just that in this case, they are instructions on how to learn the answer to a question.
This is a philosophically and technically grounded reminder: AI, no matter how impressive, operates under human-designed architectures and learning rules. For example, AlphaZero wasn’t told how to play Chess—but it was told:
how to represent the game state,
what the rules are,
how to score outcomes (win/loss/draw),
and how to use deep learning and reinforcement learning to improve over time.
So the AI didn’t learn in a human-like, creative sense—it executed a learning algorithm designed by humans.
Apparent Tension:
Claim 1 seems to suggest AI independently develops superior knowledge.
Claim 2 stresses that AI does only what it’s told, including learning strategies.
Resolution:
They are not contradictory because:
AI can surpass human performance using general learning instructions, without being directly taught human strategies.
But this learning is still bounded by the architecture, objectives, and training mechanisms we program.
Think of it like this:
Humans built the playground and wrote the rules of the game, but the AI figured out how to win in ways we hadn’t thought of—within those rules.
Summary:
Not contradictory—just two perspectives:
Claim 1 marvels at the results and autonomy of learning.
Claim 2 reminds us of the limits and origins of that autonomy.
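A highly simplified self-play skeleton in the AlphaZero spirit may help make that resolution concrete. Everything here (game, net, the method names) is a hypothetical illustration, not the real system; it shows exactly the four things ChatGPT lists – state representation, rules, scoring, and a procedure for improving over time – with no human strategy anywhere:

```python
import random

# Highly simplified self-play loop in the AlphaZero spirit. All names are
# hypothetical illustrations: `game` supplies the rules, `net` is a
# learnable policy. No human-derived strategy appears anywhere.

def sample_legal_move(policy, legal_moves):
    """Pick a move from the policy's suggested weights, restricted to legal moves."""
    weights = [policy.get(move, 0.0) for move in legal_moves]
    if sum(weights) == 0:
        return random.choice(legal_moves)   # untrained net: fall back to random
    return random.choices(legal_moves, weights=weights)[0]

def self_play_training(game, net, iterations=1000):
    for _ in range(iterations):
        history, state = [], game.initial_state()      # state representation
        while not game.is_terminal(state):             # the rules
            policy = net.predict(state)                # current learned strategy
            move = sample_legal_move(policy, game.legal_moves(state))
            history.append((state, move))
            state = game.apply(state, move)
        outcome = game.score(state)                    # win / loss / draw
        net.update(history, outcome)                   # improve over time
```

Humans built the playground (game) and the learning rule (net.update); whatever strategy emerges is the AI's own discovery within those bounds.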
You just have a habit of ignoring facts which disagree with your worldview when the correct thing to do is to adjust your worldview to fit the facts.
And I explained why the two facts are not contradictory. The instructions which the AI follows are instructions on how to learn the answer to a question.
You want to keep everything magical: magical god, magical Christianity, magical intelligence, magical understanding of life, magical world, etc… So you don't like science, because it takes away all this magic by looking exactly where the magic act distracts you from looking. Not going to do that.
So the correction to our worldview is that intelligence is not magical, just more complicated, with many different abilities. AI demonstrably have one of the more important abilities. But there are many others… like adjusting your worldview to fit the facts rather than the other way around. And heck I think we can even program AI to do that one. And while we may program an AI to ask certain sorts of questions, I do doubt that AI has enough flexibility to match the best of us in asking new questions – though programmers have surprised us before and so I cannot rule it out.
However, there is one thing that AI do not have, and that is life. Life is NOT just a matter of following a set of instructions. I think this follows even when we don't buy into a magical understanding of life but see it as a rational process. And while I do not rule out the possibility of creating machine life, I don't think this is something they can find on their own accidentally. And I think there are some other things which are properties of life, like consciousness, which I therefore don't think non-living things like AI can have. And I don't think characters in a novel written by someone else can have consciousness either.
In an experiment with robots, the instruction to place a colored box on top of a different-colored large block was given. The only other input was affirmation of whether the task had been completed. The study area had boxes, bars, balls, and other items of various shapes and colors, most of which were on a slightly lower level than the large block where the box was to go. The robot was not told what a box is, or what a block is, or IIRC even what “on” meant.
The robot discovered that it could not fulfill the directive with the objects on the lower level, learning along the way that some things stack well and some things (e.g. balls) don't. That's when invention came in: unable to access the upper area directly, the robot, having run over some of the items along the way, deduced that by placing items it could run over at the edge of the upper area, it could then ascend – it invented a crude ramp.
Of course it took many, many attempts – figuring out what could stack and what couldn't, categorizing the different colors, and so on, while getting negative responses – before it even invented its ramp, and then more to get the right box onto the block, but it figured out all those things on its own. (Later the team provided it with labels for the categories it had made – box, ball, etc., along with the different colors it had recognized.)
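Assuming the setup as described, the striking thing is how sparse the learning signal is – a single yes/no per attempt. A minimal sketch of that training loop, with every name hypothetical:

```python
# Minimal sketch of the sparse feedback described above: the robot's only
# training signal is a yes/no affirmation of task completion. All names are
# hypothetical; the point is how little is specified in advance – no labels
# for "box", "block", or even "on".

def run_attempt(robot, area) -> bool:
    robot.reset(area)
    while not robot.done_trying():
        robot.act()                      # push, stack, climb – whatever it tries
    return area.box_is_on_block()        # the single bit of feedback

def train(robot, area, attempts=100_000):
    for _ in range(attempts):
        success = run_attempt(robot, area)
        robot.learn(reward=1.0 if success else 0.0)
```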
Heh – while ChatGPT says “not necessarily”, Grok says Gemini is wrong.
Three AIs, three evaluations.
Oh – Grok says:
ChatGPT’s comment that the two statements are “not contradictory—just two perspectives” is misleading and does not fully resolve the inherent conflict between them.
Sounds like how three grad students on a study-room break might disagree!
So whatever intelligence AI may have, it’s only as special as a group of grad students?
This reminded me of the situation in the Narnia books where the White Witch runs into Aslan’s “Deep Magic”, which makes all her magic (seem) worthless. Secondarily it reminds me of magic in Raymond Feist’s Magician books where there is the “Lesser Path” where complex and detailed rituals and such are required, and the “Greater Path” where the ritual and all is skipped and things are addressed directly – and then later when the mightiest wizard of all is asked what the most profound truth of magic is, he says, “There is no magic”.
Which may not be very relevant, but . . .
I’d say that in that framework, YEC is like lesser path where details and such are critical, science is like the greater path where you go straight to the core of things, but then there is a “Deep Magic” where it’s all God at work.
And since we can’t directly observe God at work, but we can see the results, science is the “bright spot” in this scheme.
I have been thinking about this recently in the quite different context of the atonement. I am not going into that again here, for we have already discussed my rejection of the whole magical blood-sacrifice interpretation of substitutionary atonement in another thread. But I did notice that it is a part of Christianity where even C.S. Lewis had to resort to a magical interpretation to make it work in his Narnia story. Obviously I go for a non-magical explanation on this issue – more about fallen human psychology (the way WE think rather than something God demands).
I don't think that is a good description, for I think it is science which examines the details and YEC which ignores them to make it all fit the narrative they have already decided on. But I guess you mean that YEC makes too much of literal details in the text – but even then, you typically point out that they are ignoring the language details, and I would say they are a little selective in their attention to the details of the text. IOW, details are important – not made-up details in religious rituals, but details in objective observation.