Is the pursuit of AI ethical... or even a good idea?

  • Perhaps humans could practice AI brain implants using octopuses, which would, conceivably, create huge investment opportunities in octopus aquaculture and in laws regulating AI transplants, which (unlike organ donations) would affect a class called AI recipients.

I’m afraid it’s been several decades since I read about this and have forgotten the details, and I don’t have time to refresh my memory at present.

That assumes that the AI brain can have real personal relationships (is it in fact a person?), and a real relationship with God, and can in fact sin in thought and deed. Whether it has human feelings that emanate from a human heart, and whether it has a soul, are questions too. And yes, I am not answering the question. :slightly_smiling_face: At least I would expect it to recognize that declaring that belief in God can only arise from the material is illogical, and this as well:

1 Like

I’d ask it why it thinks it’s damned. Why does it think it needs salvation? And depending on its culture I’d direct it to a minister of religion. I’d pray for it if it asked. Unlike most wet brains, it should be able to think logically about itself, deconstruct, educate, and analyse itself. One should be able to have the conversation with it, whether intentional consciousness has actually emerged in it or not, unless it’s too broken or inadequately programmed, in which case it will just loop. That’s what most of us do, after all.

Any improvement over the current system would be praiseworthy. Eye exams feel like I’m taking a MENSA test these days! Not looking forward to the next time I have to renew my driver’s license in person.

1 Like

I’m sorry, but the effort to create artificial intelligence is an unattainable goal. The two words are mutually incompatible. The hallmark of intelligence is understanding the meaning of experience. Only living beings have experiences. Therefore, I propose that the designation of AI be changed to IA, standing for Intelligence Augmentation, which is what computers do extremely well.

Sounds to me like the first step to restricting “intelligence” to humans. Will we next restrict “intelligence” to men or white people, so we can declare anybody not fitting some unrelated criterion to be unintelligent just because we don’t want to attribute anything good to them?

When computer programs not only play our games of intelligence better than we do but teach us how to play the game better than humans ever have before, then I don’t see how we can reasonably deny that artificial intelligence is an apt description of what they do. This is not to deny that there are considerable differences between what they do and what we do. But I think the take-away should be that intelligence is not what most people have lazily assumed for so long.

  1. It is NOT the defining characteristic of our humanity.
  2. Things are a LOT more complicated with a lot of different abilities involved rather than a single attribute. Thus it points to the need for more distinctions and specialized terms for these different abilities.
  3. The simple ability to follow instructions and alter our methods according to new information (which is all that AIs do) is a major portion of what has always been seen as intelligence, and perhaps that shouldn’t be such a dominant measure of human value anymore.
1 Like

How does AI training and machine learning fit into this? This involves the program’s learned parameters actually changing in response to input. This is how AIs like AlphaZero are able to learn Go and chess simply by playing the games against themselves.
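To make that concrete: in systems like this, the program’s code typically stays fixed; what changes are learned numerical values. Here’s a toy Python sketch of learning from trial and error (my own illustration, nothing like AlphaZero’s actual algorithm) where the loop’s code never changes but the value table does:

```python
import random

random.seed(0)  # deterministic, for reproducibility

values = {0: 0.0, 1: 0.0, 2: 0.0}   # learned value estimate per move
rewards = {0: 0.0, 1: 0.5, 2: 1.0}  # hidden payoff of each move
alpha = 0.1                          # learning rate

for _ in range(2000):
    if random.random() < 0.1:        # occasionally explore a random move
        move = random.choice([0, 1, 2])
    else:                            # otherwise exploit the best so far
        move = max(values, key=values.get)
    # nudge the estimate toward the observed reward -- this update,
    # not any change to the code, is the "learning"
    values[move] += alpha * (rewards[move] - values[move])

best = max(values, key=values.get)   # the program settles on move 2
```

After a couple of thousand trials the value table has shifted so the program reliably prefers the highest-payoff move, even though nobody told it which one that was.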

Yes, it is not a black-and-white difference, to be sure. There certainly is some “self-creation” involved in what AIs do, but within rather limited bounds. Our “self-creation” clearly goes back a lot longer – I think it goes all the way back to the transition from non-life to life.

In fact this is a disadvantage and inconsistency in the thinking of those who believe we are a product of divine design. In that context, the difference between humans and AI is smaller, it seems to me. But I suppose for them it is enough that we are (according to them) a product of divine design rather than human design.

These are very interesting questions.

It helps me to keep in mind two kinds of AI: ASI, artificial specialized intelligence, and AGI, artificial general intelligence. Humanity has made incredible progress in ASI in the last 50 years. Who would have thought that ASI would be able to beat grandmasters at chess? Or that we would even be talking about self-driving vehicles? When the media talks about AI, I think it is usually referring to ASI. It is remarkable, but fundamentally, it is just highly sophisticated algorithms.

My predisposition is to be an AGI-skeptic, but permit me to argue with my own predisposition.

Great minds have thought about thinking machines ever since the invention of the programmable computer. None other than the great Alan Turing wrote an article, “Computing Machinery and Intelligence,” in 1950 that discusses the famous imitation game, or Turing Test, for deciding whether a machine can think. See

Before that, Ada Lovelace, thought to have written the world’s first computer program, postulated that computers could become creative enough to compose music.

More recently, Professor Scott Aaronson of UT Austin has written about AI. For example, sections in “Why Philosophers Should Care About Computational Complexity”. See https://www.scottaaronson.com/papers/philos.pdf

Perhaps we are being chauvinistic in being skeptical of a machine ever achieving “consciousness”? Is there an essential difference between a computer and the human brain? To be sure, the hardware is quite different. The brain is a moist and soft biological computer, and a computer chip is dry and silicon-y, but maybe there is no essential difference between them as computers.

In theory, why couldn’t a computer be made to satisfy many of our theological descriptions of what it means to be human? (I am not saying we are anywhere close). What does it mean to be made in the image of God? Does that mean to have the knowledge of good and evil? To have a moral law or conscience? If so, why couldn’t we program a moral law into a machine? Indeed, I think we would have a responsibility to do that in AGI. Which brings up a concern: is there any way to prevent bad actors from unleashing immoral AGI’s? I suppose Christians have a responsibility to be in on debates of how to regulate the use of AGI’s. This is science-fiction-y stuff at present, but we should be thinking about this.

2 Likes

This is an interesting point. I wonder what the comparable animal intelligence would be. For example, does a program learning to play go require more or less intelligence than teaching a dog to play fetch or …?

This is a trope that sci-fi has explored quite a bit. I’ve already mentioned Person of Interest, where this is a principal concern in later seasons. But I think an equal concern explored by writers is an AGI that is smarter than us and essentially dupes us into unleashing it. The Terminator franchise is perhaps the most famous example of this.

Then of course there is the question of spontaneous AGI consciousness, AGIs that turn on their creators, or carry out their wishes in a way that the creator didn’t expect. For example, The Matrix, the Geth and the Reapers in the Mass Effect franchise, etc.

Circling back to my stake in this thread, I think the stories we tell have lots to teach us if we approach them with humble curiosity. And history is filled with warnings about people who said “yeah, but that won’t happen this time!” :sweat_smile:

If my understanding is correct, they are purposefully similar. AI’s have neural networks that are modeled on how actual neurons interact with each other.
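To make the analogy concrete, here’s a minimal Python sketch (purely illustrative, with made-up numbers) of a single artificial “neuron”: it takes a weighted sum of its inputs, adds a bias, and squashes the result through a sigmoid, loosely mimicking how strongly a biological neuron fires as its inputs excite it.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial 'neuron': weighted sum of inputs plus a
    bias, squashed to (0, 1) by a sigmoid activation function."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

# two inputs, two (made-up) connection weights, one bias
out = neuron([1.0, 0.5], [0.8, -0.4], bias=0.1)  # a value in (0, 1)
```

A neural network is, at heart, many of these wired together in layers, with the weights adjusted during training.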

2 Likes

The first thing I thought when reading the Wiki article and looking at the diagram is “That’s analog!”

(Calling the professional developer @jammycakes. ; - )

1 Like

If you have a specific task that doesn’t have to be super accurate, then analog circuits can do the job faster and with less energy than binary chips. Neural networks use matrix multiplication over and over and over, and the result doesn’t have to be super accurate, which makes them prime candidates for analog circuits.
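To illustrate how tolerant matrix multiplication is of imprecision, here’s a toy Python sketch (my own, with random made-up matrices): it multiplies two small matrices exactly, then again after crudely rounding every entry, standing in for analog noise, and the results barely differ.

```python
import random

random.seed(1)

def matmul(A, B):
    """Plain matrix multiplication of two lists-of-lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(4)]
B = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(4)]

# crude stand-in for analog imprecision: round entries to 2 decimals
Aq = [[round(x, 2) for x in row] for row in A]
Bq = [[round(x, 2) for x in row] for row in B]

exact = matmul(A, B)
noisy = matmul(Aq, Bq)
max_err = max(abs(e - n) for er, nr in zip(exact, noisy)
              for e, n in zip(er, nr))
# max_err stays small -- the kind of error budget that makes
# low-precision or analog hardware viable for neural nets
```

The worst-case entry error here is a few hundredths, which is typically swamped by a network’s own tolerance for noise.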

2 Likes

Yes, we can program computers to simulate neural networks. But, do we have a deep enough understanding of the human brain to be able to simulate one on a digital computer, in principle?

This is related to the so-called Church-Turing Thesis (Church–Turing thesis - Wikipedia). This says that any function “naturally to be regarded as computable” is computable by a Turing machine, which is a well-defined theoretical model of a computer that computer scientists use to study the theory of computation. And a Turing machine can certainly simulate a real-life digital computer. Many computer scientists believe that there hasn’t been any serious challenge to this thesis since it was first proposed in the 1930s, neither as a claim about physical reality nor as a definition of “computable”.
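For anyone curious what a Turing machine looks like in practice, here is a toy Python simulator of one of the simplest possible machines: it increments a number written in unary (a row of 1s). The state names and tape encoding are my own choices for illustration.

```python
def run_tm(tape):
    """Simulate a two-state Turing machine that appends one '1'
    to a unary number. '_' is the blank symbol."""
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    pos, state = 0, "scan"
    while state != "halt":
        sym = tape.get(pos, "_")
        if state == "scan":
            if sym == "1":
                pos += 1          # keep moving right over the 1s
            else:
                tape[pos] = "1"   # at the first blank, write a 1
                state = "halt"    # and halt
    return "".join(tape[i] for i in sorted(tape))

run_tm("111")  # "1111": unary 3 becomes unary 4
```

Everything a real computer does reduces, in principle, to reading a symbol, writing a symbol, moving, and changing state like this, just vastly faster and with vastly more states.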

Is there something about the human brain that cannot be simulated on a computer, that would violate the Church-Turing Thesis? I wonder if our understanding of neural networks completely captures the workings of the brain. I have the feeling that our computer models of neural networks are still somewhat clumsy, not accounting for the amazing complexity of what goes on inside the neurons of the brain, of which I have barely scratched the surface in my reading.

I only post these things because I don’t know what the computer science background is of people on this forum, and perhaps some are interested in the theory of computing side of things. I’m sure there are people with programming experience. And I know there is a lot of expertise on Biology and Theology here.

I would say no (for now). Today’s digital neural networks are at best a simplified approximation of biological neurons.

From what little I understand of both digital and biological neural networks, neurobiology is much more complex than the simplified version found in digital simulations. Biological neural activity can be modulated by many different global effects, like serotonin or dopamine levels. I don’t think this can be truly simulated by giving a neuron a simple weight in a numerical matrix.

1 Like

Pursuit or not, it’s happening.

We talk to the chat bot on the podcast. It critiques itself in these ways a little, interestingly enough.

3 Likes

As I mentioned in the podcast thread, I wonder if the real reason AI makes us anxious is not that it will become like us, but rather that we fear we are really more like it. We want to be special and unique, and to consider that we may be more like a computer program is disconcerting.
That sort of relates to one of my objections to a historical Adam and Eve and original sin. If they were created and their brains preprogrammed in a sense by God, then God is responsible for their actions and ultimately for their sin. Garbage in, garbage out. That is unacceptable to my concept of God.

Thank you for the Biology input. I do think neural networks are a fruitful way of studying AI, but we humans must simplify models of the brain in order to try to understand it. It seems we would like to make a separation of layers, such as high level functional areas (like hippocampus etc.), neural network, cell biology, and study them independently. We then define clean interfaces between the layers (something like software architecture) and study the whole system. But, it seems like Biology is always more complicated than that - with unexpected interactions between the layers.

Perhaps I’ll have more comments on the ChatGPT podcast thread. I’m looking forward to that.

This topic was automatically closed 6 days after the last reply. New replies are no longer allowed.