Is the pursuit of AI ethical... or even a good idea?

Yeah, and the broader context of this episode was that the robot/droid had been initially programmed to fight but was later “taught” to be a nurse droid and care for others, so some of the main conflict was which line of “programming” the droid would end up following… which seemed like a very human kind of conflict.

1 Like

Currently, the major danger from AI is our failure to check whether it is doing its job correctly. AI is increasingly being assigned tasks without any serious verification that it does them well. Often this comes with a corresponding devaluing of the work that goes into doing the job right: no need to pay you, the computer can do that. Bad automated customer service is perhaps the most familiar example (Customer Service Chatbot - Dilbert Comic Strip on 2023-01-29 | Dilbert by Scott Adams). But it is also a major problem in the analysis of large datasets. The computer doesn’t know whether the data are any good or whether its analyses are accurate, but it makes the results look impressive.
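
To make that last point concrete, here is a minimal sketch (in Python with numpy; the data and numbers are entirely made up for illustration) of how an automated analysis can look impressive regardless of data quality: give a linear model enough free parameters and it will report an excellent in-sample fit on pure noise.

```python
import numpy as np

rng = np.random.default_rng(0)

# 60 samples, 50 columns of pure noise: there is no real signal here.
X = rng.normal(size=(60, 50))
y = rng.normal(size=60)

# An automated pipeline fits a linear model anyway...
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ coef
r2 = 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)

# ...and reports an impressive-looking in-sample fit (typically above
# 0.8 here), because with enough free parameters, noise fits noise
# beautifully. Nothing checks whether the data meant anything at all.
print(f"in-sample R^2 = {r2:.2f}")
```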

The basic problem is that artificial intelligence doesn’t really exist. Artificial pattern detection and calculation exist, but there is no intelligence guiding the decisions, merely rule-following. The rule-following is increasingly complex and sophisticated, but it stays strictly within the box, and there’s no guarantee that it’s the right box. ChatGPT above was wrong: seeds and spores are alive by standard biological definitions. ChatGPT is just a very fancy version of “I googled it” and is no more reliable than its source data.
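
As an illustration of “strictly within the box,” here is a toy rule-based “is it alive?” classifier (the rules are my own invention, not anyone’s real system). The rules are applied flawlessly, yet the answers fail in both directions, because the box the rules define was drawn wrong:

```python
# A toy rule-based "is it alive?" classifier: pure rule-following.
RULES = {
    "moves on its own": 2,
    "metabolizes actively": 2,
    "responds to stimuli": 1,
}

def classify(features: set[str]) -> str:
    # Add up the weights of whichever rules match, then apply a cutoff.
    score = sum(w for rule, w in RULES.items() if rule in features)
    return "alive" if score >= 2 else "not alive"

# A dormant seed: metabolically quiet, yet alive by biological definitions.
print(classify({"responds to stimuli"}))                      # "not alive" (wrong)
# A robot vacuum: mobile and reactive, yet certainly not alive.
print(classify({"moves on its own", "responds to stimuli"}))  # "alive" (wrong)
```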

Might actual intelligence, etc., develop in future artificial systems? Maybe, but currently there are major problems with treating AI as a shortcut without checking whether it actually reaches the desired destination. The focus is too much on “woo, we can make the computer do this” and not enough on “how can we make things more useful and reliable?”

4 Likes

As you may know, Shelley’s book is widely viewed as a metaphorical treatment of birth.

Thanks for your thoughts, @mitchellmckain. This got me thinking:

Why did you choose self-actualisation of destiny and purpose? An earwig or a box jellyfish doesn’t choose its own destiny and purpose, yet I doubt we’d quibble about their being alive. I welcome your thoughts.

I did not know that. Would you be happy to elaborate?

I said self-organization, not self-actualization. The difference is that the former isn’t about some obsession with the individual but about how life comes into existence. The earwig likely doesn’t even have a concept of self. They may be chemically self-aware, as required for repairing damage to the organism, but I don’t think they even have a mind, so I don’t see how “self-actualization” applies. But like other living organisms they ARE a product of self-organization. And that includes choices of survival strategies – no, not mental-conceptual choices like people make, but choices nevertheless.

It is not that I see mental life as qualitatively different. I don’t. But the quantitative difference is considerable, in terms of developmental speed, awareness, and range of responses. If AI had self-organization then they would be quantitatively comparable to the human mind. But they don’t. They are a product of design – only what we have made them to be. Intelligence but not life.

That’s helpful. I think the confusion arose because you talked about the ability to choose purpose and destiny, which sounded more like self-actualisation to me. Thanks for the clarification.

Speaking of Frankenstein, Russell Moore (I think, but I could not find the article) recently wrote something regarding questions put to students that makes a good thought experiment. What if, some time in the future, a robot AI brain could be put into a human body, with all its interactions being seen as human and self-awareness being achieved? What if that entity (person?) with a human heart and body and a robot brain then desired salvation and to be spared from damnation, however you see it? How do you respond, and what does that mean?

2 Likes

Essentially that happens now when you have cataract surgery and a lens implant. The eye is measured by computer and the proper lens is selected to give you the best acuity, without any input from you.

And with X-rays: computers are used in some cases to scan mammograms, and are better in some respects at finding cancers than human eyes. I suspect they also mistakenly flag benign areas, so human judgement is still needed, and no doubt there is a human cost when unnecessary biopsies result.

2 Likes

Brilliant question! What would be your answer? (Totally stalling so I can think about this myself!)

  • Perhaps humans could practice AI brain implants using octopi, which would, conceivably, create huge investment opportunities in octopus aquaculture and in laws regulating AI transplants, which, unlike organ donation, would affect a new class called AI recipients.

I’m afraid it’s been several decades since I read about this and have forgotten the details, and I don’t have time to refresh my memory at present.

That presupposes that the AI brain can have real personal relationships (is it in fact a person?) and a real relationship with God, and can in fact sin in thought and deed. Whether it has human feelings that emanate from its human heart, and whether it has a soul, are questions too. And yes, I am not answering the question. :slightly_smiling_face: At least I would expect it to recognize that declaring that belief in God can only arise from the material is illogical.

1 Like

I’d ask it why it thinks it’s damned. Why does it think it needs salvation? And depending on its culture, I’d direct it to a minister of religion. I’d pray for it if it asked. Unlike most wet brains, it should be able to think logically about itself: deconstruct, educate, analyse itself. One should be able to have the conversation with it, whether intentional consciousness has actually emerged in it or not, unless it’s too broken or inadequately programmed, in which case it will just loop. That’s what most of us do, after all.

Any improvement over the current system would be praiseworthy. Eye exams feel like taking a MENSA test these days! I’m not looking forward to the next time I have to renew my driver’s license in person.

1 Like

I’m sorry, but the effort to create artificial intelligence pursues an unattainable goal. The two words are mutually incompatible. The hallmark of intelligence is understanding the meaning of experience, and only living beings have experiences. Therefore, I propose that the designation AI be changed to IA, for Intelligence Augmentation, which is what computers do extremely well.

Sounds to me like the first step to restricting “intelligence” to humans. Will we next restrict “intelligence” to men or white people, so we can declare anybody not fitting some unrelated criterion to be unintelligent, just because we don’t want to attribute anything good to them?

When computer programs not only play our games of intelligence better than we do but teach us how to play those games better than humans ever have before, I don’t see how we can reasonably deny that artificial intelligence is an apt description of what they do. This is not to deny that there are considerable differences between what they do and what we do. But I think the takeaway should be that intelligence is not what most people have lazily assumed for so long.

  1. It is NOT the defining characteristic of our humanity.
  2. Things are a LOT more complicated with a lot of different abilities involved rather than a single attribute. Thus it points to the need for more distinctions and specialized terms for these different abilities.
  3. The simple ability to follow instructions and alter our methods according to new information (which is all that AI do) is a major portion of what has always been seen as intelligence, and perhaps that shouldn’t be such a dominant measure of human value anymore.
1 Like

How does AI training and machine learning fit into this? This involves a program effectively changing its own behavior in response to input: the code stays fixed, but its learned parameters change. This is how AIs like AlphaZero are able to learn to play Go and chess simply by playing the games.
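
To make that concrete, here is a minimal sketch of learning by self-play: a tabular method on the toy game of Nim, radically simpler than AlphaZero but the same loop in spirit. All the names and numbers below are my own illustrative choices. Note that nothing rewrites its own code; only the table of values changes as the two sides play each other:

```python
import random

# Toy Nim: 10 stones, players alternate taking 1-3, and whoever takes
# the last stone wins. A single value table is shared by both sides.
random.seed(0)
Q = {}                     # (stones_left, take) -> learned value
ALPHA, EPS = 0.1, 0.2      # learning rate, exploration rate

def moves(n):
    return [t for t in (1, 2, 3) if t <= n]

def best(n):
    return max(moves(n), key=lambda t: Q.get((n, t), 0.0))

for game in range(30000):
    n, history = 10, []
    while n > 0:
        # Explore occasionally; otherwise play the current best move.
        t = random.choice(moves(n)) if random.random() < EPS else best(n)
        history.append((n, t))
        n -= t
    # Credit the outcome back through the game, flipping the sign at
    # each step because the two sides alternate moves.
    result = 1.0           # the player who took the last stone won
    for s, a in reversed(history):
        old = Q.get((s, a), 0.0)
        Q[(s, a)] = old + ALPHA * (result - old)
        result = -result

# The code above never changed; the table did. It should now encode the
# classic strategy: leave a multiple of 4 (so from 10 stones, take 2).
print(best(10))
```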

Yes, it is not a black and white difference, to be sure. There certainly is some “self-creation” involved in what AI do, but within rather limited bounds. Our “self-creation” clearly goes back a lot further – I think it goes all the way back to the transition from non-life to life.

In fact this is a disadvantage and inconsistency in the thinking of those who believe we are a product of divine design. In that context, the difference between humans and AI is smaller, it seems to me. But I suppose for them it is enough that we are (according to them) a product of divine design rather than human design.

These are very interesting questions.

It helps me to keep in mind two kinds of AI: ASI - artificial specialized intelligence, and AGI - artificial general intelligence. Humanity has made incredible progress in ASI in the last 50 years. Who would have thought that ASI would be able to beat grandmasters at chess? Or that we would even be talking about self-driving vehicles? When the media talk about AI, I think they usually mean ASI. It is remarkable, but fundamentally it is just highly sophisticated algorithms.

My predisposition is to be an AGI-skeptic, but permit me to argue with my own predisposition.

Great minds have thought about thinking machines since the very invention of the programmable computer. None other than the great Alan Turing wrote an article, “Computing Machinery and Intelligence” (1950), that discusses the famous imitation game, or Turing Test, for deciding whether a machine can think.

Before that, Ada Lovelace, thought to have written the world’s first computer program, postulated that computers could become creative enough to compose music.

More recently, Professor Scott Aaronson of UT Austin has written about AI, for example in sections of “Why Philosophers Should Care About Computational Complexity”. See https://www.scottaaronson.com/papers/philos.pdf

Perhaps we are being chauvinistic in being skeptical of a machine ever achieving “consciousness”? Is there an essential difference between a computer and the human brain? To be sure, the hardware is quite different: the brain is a moist, soft biological computer, while a computer chip is dry and silicon-based. But maybe there is no essential difference between them as computers.

In theory, why couldn’t a computer be made to satisfy many of our theological descriptions of what it means to be human? (I am not saying we are anywhere close.) What does it mean to be made in the image of God? Does it mean to have the knowledge of good and evil? To have a moral law or conscience? If so, why couldn’t we program a moral law into a machine, as the toy sketch below suggests? Indeed, I think we would have a responsibility to do that in an AGI. Which brings up a concern: is there any way to prevent bad actors from unleashing immoral AGIs? I suppose Christians have a responsibility to be in on debates about how to regulate the use of AGIs. This is science-fiction-y stuff at present, but we should be thinking about it.
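
At the most naive level, a rule can certainly be made machine-checkable. Here is a toy sketch of candidate actions being vetoed by hard-coded rules before an agent may take them. Everything in it is my own invention, and real AGI alignment would be incomparably harder than a lookup table, but it illustrates the bare idea:

```python
# A toy, entirely hypothetical "moral law" filter: any action tagged
# with a forbidden category is vetoed before the agent may take it.
FORBIDDEN = {"deceive", "harm_human", "steal"}

def permitted(tags: set[str]) -> bool:
    # Allowed only if none of the action's tags violate the law.
    return not (tags & FORBIDDEN)

plan = [
    ("answer the question truthfully", {"inform"}),
    ("fabricate a flattering answer", {"inform", "deceive"}),
]
for action, tags in plan:
    print(action, "->", "allowed" if permitted(tags) else "vetoed")
```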

2 Likes