Questions About Artificial Intelligence

Is anyone with any expertise able to comment on this? Strong AI has always been an intimidating idea for me, not least with the likes of Elon Musk always predicting gloom-and-doom about how we will be overtaken by it.

The deeper questions for me are the implications of this with regards to philosophy of mind and how we view the evolution of consciousness. On the one hand, I don’t think this gives much favor to reductionism because the AI was intentionally designed to do things like this, but I figured a discussion here would be interesting and enlightening! Thanks to all ahead of time.

Hi Noah,
Thanks a lot for sharing! This work is related to what I am doing for my graduate research in neuroscience (strongly AI-based). This is the abstract of the presentation that I am going to give:

A computational study on the evolutionary origin of affective signaling
C. Hesp & R. H. Phaf

The survival of virtually all life forms depends heavily on being able to approach fitness-increasing (i.e., positive) stimuli and avoid fitness-reducing (i.e., negative) stimuli. Through natural selection, therefore, the distinction between positive and negative affect is deeply engrained in the neural systems of organisms of all complexities. This basic quality also appears to be reflected in signaling behaviors across many species. We aimed to investigate the evolutionary origin of such affective signaling with computational methods. Genetic algorithms served to evolve recurrent neural networks of simple virtual organisms (agents) in an environment with food and predators. Previous evolutionary simulations with single agents unexpectedly revealed the emergence of neural oscillations that differed for positive and negative stimuli. First, we tested whether that finding could be replicated. Then, we extended the setup towards a multi-agent environment in which communication could take place. We were particularly interested in the possible exaptation of neural oscillations for communicative purposes. More so than with other types of computational approaches, evolutionary simulations allow for unexpected, serendipitous findings which may suggest new hypotheses for affective communication. The outcomes and possible new insights provided by these simulations will be discussed in the presentation.

I read the original paper of the article you linked to, so here are a few comments:

  1. That futurism article is horribly over-popularized and the actual findings are a lot less dramatic. Depending on the circumstances these agents learned either how to cooperate or how to compete with each other. We can observe this kind of behavior in many animals and people too.

  2. The authors used a direct reinforcement-learning algorithm. This utilitarian principle focuses on maximizing whatever the programmer has defined as “rewarding”. That is what can happen during a lifetime, but it is different from evolution, which can actually shape what is perceived as rewarding. When I read the paper, I was a bit disappointed that the authors did not use genetic algorithms to actually simulate evolution with these agents.
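To make the contrast concrete, here is a toy sketch of a genetic algorithm (my own illustration, not the authors' code, and with a made-up fitness function). In reinforcement learning the reward function is fixed by the programmer; in an evolutionary setup, selection acts on a genome, so the agent's "preferences" themselves can change across generations:

```python
import random

# Toy genetic algorithm. Each agent's genome is a single number: its
# preference for approaching food. Fitness is not a reward the agent
# maximizes during its lifetime; it is imposed by the environment and
# shapes the population over generations.

def fitness(preference):
    # Hypothetical trade-off: approaching food helps survival, but an
    # extreme approach tendency also exposes the agent to predators.
    return preference - 0.5 * preference ** 2

def evolve(pop_size=50, generations=100, mutation_sd=0.1, seed=42):
    rng = random.Random(seed)
    population = [rng.uniform(-2, 2) for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the fitter half, refill the population with mutated offspring.
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        offspring = [p + rng.gauss(0, mutation_sd) for p in parents]
        population = parents + offspring
    return population

pop = evolve()
best = max(pop, key=fitness)
print(round(best, 2))  # the population converges near the optimal preference
```

The point of the sketch is that nothing in the loop tells the agents what to find rewarding; the environment (the fitness function) selects it, which is the step the paper's reinforcement-learning setup leaves out.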

As for the philosophical implications of the evolution of consciousness and emotions, I believe there are many. For us humans there are countless examples of feelings that are directly related to some benefit for the survival of our species. So many of our daily experiences can be categorized in these terms that it's overwhelming. For example, we generally have the tendency to find nutritious things tasty, poisonous things disgusting (rotten eggs), sexual interaction pleasant, and snake-like things scary. All mammals have an ingrained tendency to fear snakes, even monkeys who grow up in isolated laboratory environments. It is no coincidence that the embodiment of evil in Genesis 3 is represented by a snake.

Intuitively speaking, some degree of selfishness would seem to be a requisite for survival in many situations. However, at the same time, selfishness has a destructive force to it that we see at work abundantly in societies throughout history. That’s part of the reason why it is necessary for selfless love to start governing the stage as a much more powerful and sustainable principle. I believe the Kingdom of God has to do with that transformation away from selfishness towards selfless love.

Anyway, these are my ramblings for now!



Thanks for the thoughtful response, Casper!

Most of the stuff about AI is admittedly over my head, so I’ll defer to your expertise (whatever it is, it’s far greater than mine!).

Moving towards the philosophical, how are we to avoid reductionism? It does seem like most of the observations align with common sense: sex is pleasant, snakes are scary, etc. None of them really feel like the science is telling us anything new or unheralded; however, deep down there’s a sense in which evolutionary psychology is trying to undermine our basic assumptions of truth by explaining them away as adaptations: i.e., sex isn’t pleasant because it’s an expression of love, it’s pleasant so we want to do it more to propagate our genes more often. Or, the mother gazing lovingly at her child only does so because caring for her child improves the chances of the child living, etc.

It feels a bit like people in the EP field are reveling in pulling the curtain back on Oz, as it were, and showing us that all our art, musings, and angst are little more than adaptations. Not to mention the issues with the idea of sin (I don’t find many EP-related explanations coherent with the biblical idea of sin).

Thanks again!

I must confess I first checked into here thinking it was going to be Q & A about our very own (@aleo) Al Leo, and I thought --how appropriate that somebody’s finally going to address all our questions about him! :smiley: But okay – "A"rtificial "I"ntelligence – I see that now.

A lot of AI will be over my head too. One book I enjoyed reading some time ago at my younger son’s recommendation was Brian Christian’s “The Most Human Human”. That is a very interesting read … perhaps if somewhat dated by now given the subject matter. But for those of us interested in the old Turing test and its more recent manifestations, he gives a delightful tour of that in his book.


Ha! That gave me a good chuckle, thanks Merv. I think Casper has since tweaked the headline. Perhaps that’s why it took almost 2 weeks for someone to reply! :sweat_smile:

The Turing test has always fascinated me as well, so I’ll have to check the book out!

I guess this is on-topic, but there was a movie that came out about 2 years ago called Ex Machina that explored the Turing test and artificial intelligence in spectacular (though very dark and, by the end, violent) fashion. I can’t recommend it per se (there’s some quite graphic content), but if you’re interested in AI in its many cultural manifestations, I think it’s an important text (and I will refer to films as ‘texts’ until the day I die!). More info, including parental guides, etc., here, if you’re interested!

So he did! And I added an emoji to my half-serious “Al” confusion above, hoping Mr. Aleo doesn’t take it the wrong way … (Really, Al, if you ever do host a Q and A, I’ll drop in!)

Thanks, Noah for the link and the accompanying warning. I’m not sure I’ll take it in. I’m a bit of a light fluff guy when it comes to entertainment. I’ll do “dark” if I think it comes with some great benefit (like eating my veggies) … but the benefit has to be there, since the “dark” doesn’t attract me in and of itself. That said, if this has significant veggie content … I’ll be happy to hear of specific details it raises.

One of my passing musings when I correspond on this site is the thought that some programmer somewhere might be amusing themselves with the challenge of creating a chat-bot AI to simulate somebody advancing a simplistic creationist type of polemic here. The challenge would be to see how long (in exchanges) the thread could get with all the concerned scholars here trying to “educate” their creationist interlocutor, who just keeps coming back with the same replies. (Think of the Cheech and Chong skit with one of them trying to get the other to open the door --speaking of entertainment we can’t really recommend.) Of course we are never permitted to assume this even if we suspected – because how terrible to be wrong and perhaps misbehave accordingly at a real human being’s expense. So it is a kind of low-bar AI challenge I suppose. Makes one wonder if we’ve been spoofed already? Makes one wonder if we’ve been spoofed already? Makes one wonder if we’ve been spoofed already? Makes one wonder if we’ve been spoofed already? [[stack overflow … system halted]]

dang it!

It is interesting how AI (as popularly imagined so far --like Data in Star Trek) always ends up reflecting the sentiments of the culture in which it was created. So if our main AI comes from people working for say, the Pentagon, it will probably be aggressively militaristic in its “default outlook”. (All situations entered ‘guns first’ so-to-speak.) If the programmers at the console were instead from some loving mission organization, the AI will probably have a self-sacrificial service attitude. So maybe the question to ask is … what kind of people are throwing the most resources into all this AI development? Look at them, and you’re probably looking at your next generation of AI. Kind of scary, huh?

Hi Noah!

Phew, good question.

The first thing I would note is that, while there are certainly overall patterns and ingrained tendencies in human emotions, they are in no way completely hardcoded. Beyond certain basic patterns, there’s a lot of leeway in what elicits joy or appetite, sadness or disgust, love or hate in a person. So the claim that everything is “just an adaptation” is demonstrably false. Often when a faulty conclusion is based on common sense, it can also be refuted with common sense :slight_smile: .

In fact, faith and beliefs in general have a very powerful influence on which things we like. I really enjoy bacon, but I have a Muslim friend who’s sincerely disgusted by the thought of eating a pig. In the Bible, the Psalmist writes how the wicked enjoy sin and the righteous rejoice in the law. We also read how Paul considered everything he had deemed of value before as garbage after his encounter with Jesus Christ. Our beliefs heavily shape the specifics of the things we like. It is then a matter of being inspired by the right Source to encourage the liking of things that are truly good (e.g., the love of a mother for her child). However, there are strong powers at work, through the flesh, that pull us towards sin, as in Galatians 5:16-17:

So I say, walk by the Spirit, and you will not gratify the desires of the flesh. For the flesh craves what is contrary to the Spirit, and the Spirit what is contrary to the flesh. They are opposed to one another, so that you do not do what you want.

We can continuously pray for the Holy Spirit to tune our joys in this life in accordance with what is good in God’s eyes. It’s almost like fine-tuning an instrument to play beautiful melodies. It needs to be tuned again and again.

We might also look at the evolutionary aspect of emotions in analogy with the important influence of neutral drift on genetic diversity (although this comparison should be taken with a grain of salt). Beyond the “basic” ingredients for propagation, many changes in population gene frequencies are neutral in terms of survival benefit. They do increase the diversity of the population and are essential for adapting to changing circumstances, but in themselves they do not arise as adaptations to anything in particular. So saying that all such phenomena are “just adaptations” is fallacious. Something similar can be said for the development of variation in our emotional and cultural life, which is largely unconstrained by considerations of “fitness” for survival.
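Neutral drift is easy to see in a toy simulation (again my own sketch, not from any paper): two variants with identical fitness, where frequencies still wander purely by chance:

```python
import random

# Toy Wright-Fisher model of neutral drift. Two alleles, A and B, have
# identical fitness; each generation simply resamples the population.
# Frequencies change anyway, through sampling noise alone, so variation
# can spread or vanish without being an adaptation to anything.

def drift(pop_size=100, generations=200, start_freq=0.5, seed=1):
    rng = random.Random(seed)
    freq = start_freq
    history = [freq]
    for _ in range(generations):
        # Each offspring inherits allele A with probability equal to its
        # current frequency -- no selection involved at all.
        count = sum(rng.random() < freq for _ in range(pop_size))
        freq = count / pop_size
        history.append(freq)
    return history

history = drift()
print(f"start: {history[0]:.2f}, end: {history[-1]:.2f}")
```

Run it with different seeds and the allele frequency ends up in different places each time, despite zero fitness difference, which is the sense in which such changes are not adaptations.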

Sorry for taking so long to reply :slight_smile: .



Thorough response Casper, thanks!

I appreciate you bringing me back to earth on our relative individual uniqueness, and how faith/belief impacts our tastes. A very good point. Today I was pondering what our mind’s connection to our brain meant for free will. Specifically, that something so out of our control as a tumor or infection can so radically change our personalities/tastes/sense of right and wrong.

It brought up the question of whether we can hold people accountable for actions taken under such situations (i.e., a man commits unthinkable acts on a child because a tumor changed his otherwise “normal” personality/tastes/etc.–this happened in real life). This, in turn, brought the further question–if we’re out of control in these situations, what makes us think we’re in control in a “normal” situation, where our brain is functioning properly? But, and this is a more Calvinist (gasp!) route to take, I was reminded of what my theology teacher in high school said–man’s free will is limited to that which is within his nature: cats can’t be dogs, goldfish can’t fly, men, without grace, can’t not sin–that was his illustrative purpose, obviously colored by his theology (total depravity was the specific doctrine he was talking about).

Currently, I don’t believe our lack of the ability to not sin is because of evolution, but perhaps there’s a case to be made that because our minds and brains are so intricately woven, we cannot have libertarian free will in the Enlightenment sense. But we have the freedom of will allowed to us by God–the freedom to be kind, to pay our taxes, to buy a new car, to choose or not choose to follow Christ (your tradition will influence at what point you think this choice comes, etc.). As a matter of faith, we can trust that God’s guidance of evolution has given us the capacity to make these choices truly and freely and that it’s not all illusory, contra Dennett and the like.

These are all thoughts probably better suited for clearer articulation and more rigorous formulation (and I thought all of them while being a little too tired). Sorry about the rambling/length; let me know if I can make it more clear!



AI is, and always will be, just an extension of the designer, just as a hammer is an extension of our muscles. AI is not prone to anything; only its human designers are.

These might help: