Is the pursuit of AI ethical... or even a good idea?

This is an interesting point. I wonder what the comparable animal intelligence would be. For example, does a program learning to play go require more or less intelligence than teaching a dog to play fetch or …?

This is a trope that sci-fi has explored quite a bit. I’ve already mentioned Person of Interest, where this is a principal concern in the later seasons. But I think an equal concern explored by writers is an AGI that is smarter than us and essentially dupes us into unleashing it. The Terminator franchise is perhaps the most famous example of this.

Then of course there is the question of spontaneous AGI consciousness: AGIs that turn on their creators, or carry out their wishes in a way the creator didn’t expect. For example, The Matrix, or the Geth and the Reapers in the Mass Effect franchise.

Circling back to my stake in this thread, I think the stories we tell have lots to teach us if we approach them with humble curiosity. And history is filled with warnings about people who said “yeah, but that won’t happen this time!” :sweat_smile:

If my understanding is correct, they are purposefully similar. AIs use neural networks that are modeled on how actual neurons interact with each other.
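To make the analogy concrete, here is a minimal sketch of a single artificial neuron, the building block of those networks. It is only a caricature of a biological neuron: inputs are scaled by synaptic “weights”, summed, and squashed through an activation function. The numbers are made up for illustration.

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs (very loosely, the "dendrites")
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Sigmoid activation squashes the sum into (0, 1),
    # vaguely like a neuron's firing rate
    return 1 / (1 + math.exp(-total))

print(neuron([1.0, 0.5], [0.8, -0.3], 0.1))  # a value between 0 and 1
```

Real networks wire thousands of these together in layers and tune the weights by training, but each unit is no more complicated than this.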


The first thing I thought when reading the Wiki article and looking at the diagram is “That’s analog!”

(Calling the professional developer @jammycakes. ;-) )


If you have a task that doesn’t have to be super accurate, analog circuits can do the job faster and with less energy than binary chips. Neural networks do matrix multiplication over and over and over, and the result doesn’t have to be super accurate, which makes them prime candidates for analog circuits.
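A rough sketch of why matrix multiplication dominates: each layer of a network computes something like activation(W·x), so inference is just repeated matrix–vector products. All the weights and inputs below are made-up numbers, purely for illustration.

```python
def matvec(W, x):
    # One matrix-vector product: the core operation of every layer
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def relu(v):
    # A common activation function: clip negatives to zero
    return [max(0.0, vi) for vi in v]

x = [0.5, -1.2, 0.3]                              # input
W1 = [[0.2, -0.4, 0.1], [0.7, 0.05, -0.3]]        # layer 1 weights
W2 = [[0.6, -0.2]]                                # layer 2 weights

h = relu(matvec(W1, x))   # hidden layer
y = matvec(W2, h)         # output layer
print(y)
```

Since the useful signal survives small numerical errors in those products, trading exactness for speed and energy (as analog hardware does) is often an acceptable bargain.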


Yes, we can program computers to simulate neural networks. But, do we have a deep enough understanding of the human brain to be able to simulate one on a digital computer, in principle?

This is related to the so-called Church–Turing Thesis (Church–Turing thesis - Wikipedia). It says that any function “naturally to be regarded as computable” is computable by a Turing machine, a well-defined theoretical model of a computer that computer scientists use to study the theory of computation. And a Turing machine can certainly simulate a real-life digital computer. Many computer scientists believe there hasn’t been any serious challenge to this thesis since it was first proposed in the 1930s, either as a claim about physical reality or as a definition of “computable”.
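For anyone who hasn’t met a Turing machine before, here is a toy simulator to make the idea concrete. A machine is just a transition table mapping (state, symbol) to (symbol to write, direction to move, next state); despite that simplicity, the thesis says this model captures everything “computable”. The example machine and its encoding are my own illustration, not anything standard.

```python
def run_tm(tape, transitions, state="scan", halt="halt"):
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    pos = 0
    while state != halt:
        symbol = tape.get(pos, "_")  # "_" is the blank symbol
        write, move, state = transitions[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    # Read back the non-blank portion of the tape
    return "".join(tape[i] for i in sorted(tape) if tape[i] != "_")

# A trivial machine: flip every bit, halt at the first blank.
flip = {
    ("scan", "0"): ("1", "R", "scan"),
    ("scan", "1"): ("0", "R", "scan"),
    ("scan", "_"): ("_", "R", "halt"),
}
print(run_tm("0110", flip))  # -> 1001
```

The interesting question upthread is whether the brain does anything that a table like this (scaled up enormously) provably cannot.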

Is there something about the human brain that cannot be simulated on a computer, that would violate the Church-Turing Thesis? I wonder if our understanding of neural networks completely captures the workings of the brain. I have the feeling that our computer models of neural networks are still somewhat clumsy, not accounting for the amazing complexity of what goes on in the cells that make up the neurons of the brain, of which I have barely scratched the surface in my reading.

I only post these things because I don’t know what the computer science background is of people on this forum, and perhaps some are interested in the theory of computing side of things. I’m sure there are people with programming experience. And I know there is a lot of expertise on Biology and Theology here.

I would say no (for now). Today’s digital neural networks are at best a simplified approximation of biological neurons.

From what little I understand of both digital and biological neural networks, neurobiology is much more complex than the simplified version found in digital simulations. Biological neurons are modulated by many different global effects, like serotonin or dopamine. I don’t think this can be truly simulated by giving a neuron a simple weight in a numerical matrix.


Pursuit or not, it’s happening.

We talk to the chat bot on the podcast. It critiques itself in these ways a little, interestingly enough.


As I mentioned in the podcast thread, I wonder if the real reason AI makes us anxious is not that it will become like us, but rather that we fear we are really more like it. We want to be special and unique, and to consider that we may be more like a computer program is disconcerting.
That sort of relates to one of my objections to a historical Adam and Eve and original sin. If they were created and their brains preprogrammed in a sense by God, then God is responsible for their actions and ultimately for their sin. Garbage in, garbage out. That is unacceptable to my concept of God.

Thank you for the Biology input. I do think neural networks are a fruitful way of studying AI, but we humans must simplify models of the brain in order to try to understand it. It seems we would like to separate it into layers, such as high-level functional areas (like the hippocampus), neural networks, and cell biology, and study them independently. We then define clean interfaces between the layers (something like software architecture) and study the whole system. But it seems Biology is always more complicated than that, with unexpected interactions between the layers.

Perhaps I’ll have more comments on the ChatGPT podcast thread. I’m looking forward to that.
