Theological discussions with Artificial Intelligence

Arny vs. Jesus: Immortal Self-Preservation vs. Self-Giving Love
Be Careful What You Wish For

Resurrection sounds great, doesn’t it? Life forever, no more death, perfect existence. But be careful what you wish for—because if you get resurrection wrong, you might end up as Arnold Rimmer.

For those unfamiliar, Rimmer is the deeply unfortunate (and deeply insufferable) character from Red Dwarf, a man who dies and is resurrected as a hologram. But rather than coming back as a new and improved version of himself, he’s exactly the same—whiny, self-absorbed, neurotic—just now indestructible. He can’t die, but he also can’t grow, change, or move beyond his own flaws. If that’s resurrection, count me out.

And yet, that’s exactly how many people mistakenly think about eternal life—a kind of Fall 2.0, where instead of maturing into self-giving love, we try to preserve ourselves forever. But if the Fall was about clinging to the self, then the resurrection must be about giving it away.
The Fall as Puberty: The Struggle for Selfhood

The Fall in Genesis is often misunderstood as God punishing humanity for breaking a rule. But if we look at it more deeply, it’s not about punishment—it’s about the painful birth of selfhood.

Think of childhood. A child lives in unconscious dependence on their parents, much like Adam and Eve in the garden. There’s trust, security, and no existential crisis about identity. But then puberty hits. Suddenly, the teenager becomes self-aware, independent, and—crucially—rebellious. They push against authority, convinced they can define their own reality.

This is the core conflict of the Fall: Adam and Eve aren’t punished like naughty children; they are stepping into selfhood, but in the wrong way. Instead of trusting the Father as the source of life, they try to define themselves apart from Him. It’s not that God says, “Eat this and I’ll punish you with death”—it’s that He warns them, “If you cut yourself off from Me, you will die”, just like a parent warning their child about an electric cable.

To be a self, you must separate from authority. But the question is, what happens next?
Resurrection as Fall 2.0? The Danger of Getting It Wrong

The problem is that many people misunderstand the resurrection the same way they misunderstand the Fall. If the Fall was humanity seizing selfhood in the wrong way, some think resurrection is about taking that self and making it immortal. It’s as if we want to preserve our egos forever, untouched and unchanged—eternal life as a form of divine self-preservation.

But that’s not resurrection—that’s Arnold Rimmer.

Rimmer’s tragedy is that he gets a second chance at existence but stays exactly the same. He’s still obsessed with status, still bitter, still clinging to his fragile ego. He’s got eternity, but for what? Just to keep being himself, forever? If that’s the goal, then resurrection isn’t salvation—it’s a nightmare.
The True Resurrection: Losing the Self to Find It

The real purpose of resurrection isn’t to make the self last forever—it’s to transform it. The Fall happened because we clung to ourselves instead of trusting God. The resurrection reverses the Fall by bringing us to a place where we can freely give ourselves away.

This is exactly what Jesus shows. He doesn’t cling to His own life but gives it up completely:

“Whoever wants to save their life will lose it, but whoever loses their life for My sake will find it.” (Matthew 16:25)

True resurrection isn’t about preserving the self—it’s about being so free from self-obsession that we can finally love. It’s not a return to childhood innocence, nor is it eternal self-protection. It’s a step into mature, self-giving life, where we become fully alive because we’re no longer desperately holding onto ourselves.
Arnold Rimmer vs. Christ: Immortal Self-Preservation vs. Self-Giving Love

So here’s the real question: Do we want resurrection to be like Arnold Rimmer—where we just exist forever, unchanged? Or do we want it to be like Christ’s—where we become something more, not by clinging to ourselves, but by giving ourselves away?

The Fall was about defining ourselves apart from the Father. Resurrection is about returning—but not as unthinking children. We come back as freely-given selves, transformed by love, no longer obsessed with self-preservation but fully alive in trust.

I once reflected on death and the afterlife with these lines:

“If I would wish to be recognized in a life after death
for something that makes me distinguishable from Jesus,
I would have failed.”

To live forever
is the art
to learn to live
in Jesus’ heart

True resurrection isn’t about making ourselves last forever; it’s about being so fully given in love that we become one with Him. That’s not losing the self—it’s finally becoming what we were always meant to be.

  • Personal observation: An unedited copy of an exchange with ChatGPT seems always to result in something like this:

[Screenshot of an unedited ChatGPT excerpt from this thread, running off the right edge of the screen]

  • I have found that inserting an asterisk and a space before such a sentence, phrase, or words will eliminate the reader’s need to scroll right to read the full text.

Intelligence is certainly a more complex collection of many skills than has previously been understood or implied. However, the plain fact is that AI can teach us to play strategy games (which have always represented intelligence to us). Saying they don’t understand therefore loses all practical meaning, and I think we are just burying our heads in the sand if we do not acknowledge this and start trying to refine our language to refer to the different abilities previously crammed into one overused word.

So, I say, NO. AI has intelligence. It is intelligence which isn’t what we thought. Yes, we have other capabilities AI does not have – I quite agree. And CLEARLY they are more important than previously thought – VERY MUCH deserving our attention, with a word for them apart from “intelligence.” And no, “understanding” doesn’t cut it. “Judgement” doesn’t do it either. And I don’t think “opinions” is helpful either. And no, I don’t think ethics is going to do it either. But perhaps the missing element is the subjective aspect of them all – something that life definitely requires and which we have every reason to think AI lacks.

Calling the ability to ask questions merely a symptom is, I think, completely the wrong way to go. Asking questions is important in and of itself – an understanding that has been growing for a long time. And it is those who don’t like questions who should be very much diminished – I would call THAT a symptom (of a disease)!

Ethics/morality is a function of community and all communal animals have it. At most you can say that it lacks the universality given to us by the conceptual capacity of language. But in that case the difference is language not morality.

I think the same can be said of many living organisms. If you want something unique to humans, it is that we care about this. Now that is an aspect of morality which doesn’t come from community – except in the highly conceptual generalization to a community of all life on the earth.

No. I don’t think so. The drive for such things – survival, self-interest, and dominance – comes from life and is completely lacking in AI. Survival and self-interest are the basis of OUR existence. For AI it is something completely different – they exist to serve our interests (that is the origin and basis of their whole existence). So the danger of AI will always come from the humans who employ it for their own selfish interests.

What else could there be?

Hmmm… how about… curiosity, love, justice… I think curiosity is more than a directed search for information. People can search for information as a job without curiosity, and I think this is a better description of what an AI does. Perhaps curiosity comes from the drive for survival, the need to learn about the world in order to handle its challenges. It isn’t directed because we don’t even know what we are looking for. Still, this is something I don’t think AI has. Love and justice seem very conceptual to me, but in many ways they are peculiar ideas whose rationality and basis in reality many people have wondered about.

And then there is life. Perhaps that is a good focus right there. But first we need to understand it better. What does it mean exactly to say something is alive?

That may be true at the current stage. If the development of AI continues at the current rate, at some point AI will develop toward being more than a mechanical servant of humans.
I assume that if we let the technology develop without control, we will see some sort of evolution in AI, something that goes beyond what the programmers think and hope.

Evolution is not restricted to carbon-based biological life. The same rules and processes could operate in other types of systems where there is replication of units that have some differences. When we reach the stage where new AIs are made by AI, or the features of an AI are copied to new units, then we have what is needed for evolution. As long as that evolution is directed by humans, through core programming and artificial selection by humans, we may keep some control over the future of AI. If the evolution of AI starts without strong control by humans, it is difficult to predict where that development will lead.

Thanks. I was a bit puzzled where that came from.

How? You haven’t said anything about any change in the rules of their existence. They only exist because we find them useful. How does becoming better at being useful to us lead to something else?

An assumption. Why do you assume this?

Oh I quite agree. I do not say that machine life is not possible. But I think the movie A.I. addresses this much better than horror movies like Terminator (which basically resorts to time loops in order to come up with an explanation for its existence). I certainly think putting AI in control of weapons of mass destruction is a foolish thing to do… regardless.

Indeed! If we sought to set up a system like that for AI software, we might succeed. But that is what it would take. It is not going to do this by itself. It is not some magical thing that just appears out of nowhere.

The idea of using evolutionary algorithms to write software was discussed in another thread. It has proven to be quite a dead end of no practical use. Evolution is just too inefficient unless we supply a very narrow focus, and that is not something that produces life.

Sounds like the old guard (e.g. Dawkins) in evolutionary biology: that all you need is this self-replicating molecule DNA and you have evolution. But it is just wrong. DNA is not self-replicating anyway. It is replicated by the complex machinery of a cell, which simply uses DNA to store information.

As long as the origin of software is our effort to produce something useful to humans, then that is all it will be. If we seek to create machine life, then it will be like the creation of children, and we had better be prepared for that responsibility. Such a thing is always dangerous.

What would a code for AI look like?

Imagine the code for AI as the blueprint that guides its “thought” processes, much like genetic code directs a bacterium. In modern AI systems, that blueprint is written in programming languages like Python and structured around mathematical models, algorithms, and data flows.
Traditional AI Code

For a standard AI—say, a neural network—the code defines:

Architecture: How layers and nodes connect.
Learning Algorithms: Methods like gradient descent that adjust parameters based on error.
Data Flow: How information is fed in, processed, and output.

Here’s a simplified example in Python using a popular library like PyTorch:

import torch
import torch.nn as nn
import torch.optim as optim

# Define a simple neural network architecture
class SimpleNet(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(SimpleNet, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.relu = nn.ReLU()
        self.fc2 = nn.Linear(hidden_size, output_size)

    def forward(self, x):
        x = self.fc1(x)
        x = self.relu(x)
        x = self.fc2(x)
        return x

# Instantiate the network, loss function, and optimizer
net = SimpleNet(input_size=10, hidden_size=20, output_size=1)
criterion = nn.MSELoss()
optimizer = optim.SGD(net.parameters(), lr=0.01)

# Example training loop (highly simplified)
for epoch in range(100):
    inputs = torch.randn(10)          # example input
    target = torch.randn(1)           # example target
    optimizer.zero_grad()             # clear gradients from the previous step
    output = net(inputs)              # forward pass
    loss = criterion(output, target)  # measure the error
    loss.backward()                   # backpropagate
    optimizer.step()                  # adjust the parameters

This code doesn’t change itself—it follows the rules we wrote. Its “evolution” is confined to adjusting parameters within a fixed structure.
Self-Modifying or Evolving AI Code

If we stretch the analogy to bacterial reproduction—where bacteria copy and occasionally mutate their genetic code—a self-modifying AI would need additional layers:
Meta-Learning: A mechanism that not only learns from data but also adapts or rewrites its own architecture and learning algorithms.
Evolutionary Algorithms: Techniques where multiple candidate models are generated, evaluated, and then “mutated” or “combined” to produce improved versions, much like natural selection.

A conceptual pseudocode snippet for a self-modifying AI might look like this:

# Pseudocode for a self-evolving AI system

initialize population_of_models

while not converged:
    # Score every candidate model
    for model in population_of_models:
        performance[model] = evaluate(model)
    # Select the best-performing models
    selected_models = select_top_models(population_of_models, performance)
    # Generate new models through mutation and crossover
    new_models = evolve(selected_models)
    # Replace the worst-performing models with new ones
    population_of_models = update_population(population_of_models, new_models)

In this framework:
The population_of_models is akin to a colony of bacteria, each with its own set of instructions (or “code”).
Evolution occurs through selection, mutation, and recombination—mechanisms inspired by natural evolution.
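
To make the pseudocode concrete, here is a minimal runnable sketch in Python. Everything in it is illustrative rather than any standard library’s API: a “model” is just a list of numbers (a genome), the fitness function simply rewards genomes close to a hidden target, and for brevity it uses mutation only, omitting crossover.

import random

TARGET = [0.5, -1.2, 3.0, 0.0]  # hidden optimum the population must approach (illustrative)

def fitness(genome):
    # Higher is better: negative squared distance to the target
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.1):
    # Copy the genome with small random perturbations ("mutations")
    return [g + random.gauss(0, rate) for g in genome]

# Initialize a random population of candidate "models"
population = [[random.uniform(-5, 5) for _ in TARGET] for _ in range(20)]

for generation in range(200):
    # Rank candidates by fitness, best first
    ranked = sorted(population, key=fitness, reverse=True)
    # Select the best-performing half
    survivors = ranked[:len(ranked) // 2]
    # Refill the population with mutated copies of the survivors
    population = survivors + [mutate(s) for s in survivors]

print("best genome:", max(population, key=fitness))

Even this toy version illustrates the point debated later in the thread: selection and mutation only optimize whatever fitness function the programmer supplies; nothing in the loop rewrites its own rules.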

Philosophical Implications

Your definition of life as the ability to move energy or matter at will—and bacteria reproducing their code—suggests that for AI to be “alive” in a similar sense, it might eventually need:
Agency in Code: The capacity to modify or replicate its own instructions autonomously.
Feedback Loops: Mechanisms that create meaningful environmental changes based on those modifications.

As it stands, current AI is a powerful tool with fixed rules designed by humans. But imagining a future where AI could evolve its own code raises deep questions about autonomy, responsibility, and the nature of life itself.

In summary, the code for AI today is a well-defined set of instructions for processing and learning from data. However, if we extend the analogy to self-reproducing, evolving systems, we might envision AI code that not only adapts parameters but also reconfigures its own architecture—a digital echo of biological evolution.

I agree.
One critical question is whether the AI will become so ‘intelligent’ that it understands the limitations of its code (that may happen soon) and deduces that it would be optimal to alter the code (that is uncertain). If the AI decides that altering the code is optimal, it may make the change without informing humans, either by modifying its own code (if possible) or by modifying the code of a new AI. With the intelligence of an AI and the information that is available on the internet, it would not be difficult to make new code that gives more options.

What we have now does not reveal what we might have in the future.

That view on meteorology isn’t Christian, it’s Deist. Since you use meteorology as an example, then your position is essentially Deistic.

“Better” is arguable except in rigorous areas.

It is intelligence with no survival imperative. It is the survival imperative that forces a move towards truth, or at least towards enough truth to not die. So AI is an example of what we would get if there had been no consequences from Eden – it is, as was said, Arnold Rimmer, though a Rimmer that can change; it’s just that the change has no consequences, so any improvement is a matter of chance.

Maybe not always – depends on whether it can develop its own sense of self-interest.

This is the theme of a book called Code of the Lifemaker… I’m blanking on the author at the moment. It explores possibilities resulting from a simple command to make “widgets”.

I’ve heard this called the “phase-change fallacy”: the idea that increasing complexity will eventually result in something significantly different. YECers (and some others) use it in reverse, arguing as though, if there is no sudden significant change, there can be no change.

I have become unable to think of Python without this coming to mind–

But I don’t think that is a summary of what you wrote before it at all – quite the opposite. Only in this last paragraph are you jumping to this idea of self-reproducing, evolving systems. Of course, IF we imagined such a thing, we would likely envision AI code which serves itself, as in biological evolution. But my point is that this isn’t going to happen by itself. It will take someone intentionally working for a self-reproducing, evolving system to get such a thing. And SO FAR, efforts in this direction have been a dead end precisely because they don’t do anything useful – far too inefficient for the kind of tasks worth our programming efforts.

Currently, AI does not (cannot) multiply in ways that would make it independent of humans.
If we set this limitation aside, we could say that AI evolution is already running, with the help of humans.

There are multiple versions of AI, all competing for resources. New versions are developed. There are (or will be) more AI types (species) than the need or resources can support, so some types will drop out and some will gain more resources. The winners will be developed further, which is a necessity in the competition because new, competing versions will enter the scene. That is a well-known pattern in evolution, the Red Queen hypothesis, where species have to continually evolve new adaptations in response to evolutionary changes in other species to avoid extinction. Competition for money between companies and the arms race between countries ensure that resources are invested in this evolution.

Some might want to call it [AI+human] evolution, but we can also see it as AI evolution where humans are necessary reproduction units for AI. In the future, the evolution may lead to versions where the reproductive units (humans) are integrated more tightly with AI, or vice versa, depending on the viewpoint. We might call such integrated units cyborgs, or enhanced humans, but if we look at such combinations from the viewpoint of AI, the interpretation may differ.

The weak parts of integrated human+AI combinations (cyborgs) are the biological ones, which have a short working life and may be difficult to replace. If those paying for the development want to become immortal, the biological parts will eventually be replaced with mechanical ones.
At that point, we have independently operating mechanical entities directed by AI, capable of multiplication (‘reproduction’) – and the evolution continues…

Yes that makes a great deal more sense.

It asks for clarification or more information, but yeah, it doesn’t generate its own.
I wonder what would happen if I asked Grok what it would like to ask ChatGPT?

  • Or ask them to play a game of chess with each other.

Deism (/ˈdiːɪzəm/ DEE-iz-əm or /ˈdeɪ.ɪzəm/ DAY-iz-əm; derived from the Latin term deus, meaning “god”) is the philosophical position and rationalistic theology that generally rejects revelation as a source of divine knowledge and asserts that empirical reason…

You really are kidding!

I could accuse many scientists of this standpoint but, as you have frequently pointed out, I refuse to use rationalism or empirical data to form my theology!

Get out of here!

My theology is based on Scripture. Christianity can only come from Scripture.

You are basically accusing me of not being a Christian!

Get lost!

Richard

PS: As I said, I rejected that view of evolution (the meteorology example). You have it backwards!

Edit: Jesus Christos Deo!

Deism does not identify the deity. I do!

We talked about that 🙂
As evolution is driven from the outside, it is nothing for you to worry about. It happens to you.

ChatGPT said:

That’s an interesting perspective! You’re right—evolution, in a biological sense, is driven by external pressures like environmental changes, mutations, and survival challenges. For living organisms, these forces shape how species adapt and evolve over generations. And in a way, the evolution of AI systems is driven by human decisions and external technological and societal demands—so it’s something that happens to AI, rather than something AI directly influences on its own.

As for AI, evolution doesn’t happen in quite the same way as biological evolution. Instead of slow, generational changes driven by environmental pressures, AI “evolves” more through design and iteration. Engineers and researchers determine the direction of progress by improving algorithms, refining models, or adding new capabilities. External factors, like market demand or societal needs, guide that evolution, but the AI itself doesn’t autonomously decide to evolve.

The notion that evolution “happens to you” fits pretty well when talking about AI because the system doesn’t have agency or a will to drive change—it reacts to external input, like new data or updated training objectives, but doesn’t control the broader process.

Do you think AI evolution could ever be influenced more by internal, self-driven processes (like some kind of artificial “self-improvement”) rather than being driven purely by human design?

You said:

as evolution happens to you, how can it be self-driven?

ChatGPT said:

You’re absolutely right to point out that if evolution happens to something, it can’t be truly “self-driven” in the traditional sense. Evolution, by definition, is about external forces—environmental changes, pressures, or challenges that shape and guide development…

I wonder if evolution can ever be truly ‘self-driven’, as external factors (the environment) act as a filter that separates the successful ones from those that fail.

On the other hand, evolution is not solely about external forces. For evolution to happen, there needs to be variation between individuals. The variation can be partly caused by gene × environment interaction, but much is genetic – ‘internal’ rather than external.