Is the pursuit of AI ethical... or even a good idea?

I was recently reminded of the fantastic black comedy puzzle platformer, Portal. In Portal, the main character is a test subject in a maze of physics puzzles overseen by a malevolent AI called GLaDOS (Genetic Lifeform and Disk Operating System).* Unchecked, the AI murders everyone (but Chell) at the Aperture Science Research Facility so that it can do science experiments uninterrupted by those pesky humans and because… it can.

The despotic AI is a common trope in Sci-Fi and we are some way off that yet. However, given the recent leaps and bounds in AI technology, I wonder:

  • Is there anything we can learn here from the stories we tell and the games we play?
  • Is it naive to assume that a true thinking machine will docilely follow our requests or even be benevolent in its goals?
  • Perhaps the concerns of writers are misplaced, and AI will help humans do more, better?

What of the theological lessons and ethical implications:

  • Are stories of rogue AIs in Sci-Fi just the human/social conscience subconsciously retelling the Fall Narrative?
  • At what point do code and software become life and organism? Should we be already thinking about AI rights?
  • In Portal, Aperture Science’s company motto is “We do what we must because we can”. There is no doubt that we can and could create more and more advanced AI in the future. But must we just because we can?
  • What responsibilities (if any) would human creators have for the behaviour of AI programs?

Feel free to answer any of these questions, none, or suggest your own. I’m more interested in getting a stimulating conversation going than arguing a point. :+1:t2:


*If you want a taste of Portal’s humour, check out the end credits where the AI GLaDOS sings you a song. Remember, the cake is a lie.

3 Likes

“Do you know the way to Shell Beach?” :sunglasses:

I’m undecided whether AI can become conscious, but it will be able to mimic it better, and in more immediately satisfying ways, than human interaction provides.

My eyes were opened when I saw how a young man blushed when the AI video image, which he knew was AI, complimented him.

Imagine a virtual reality game filled with AI NPCs. And if it were multiplayer, would you be able to distinguish who is who?

1 Like

I can’t pretend to know enough about it to say whether it can ever become conscious or not. Part of me thinks yes. Or rather, I do believe it can and will; I just don’t know any of the actual science behind it. My thoughts go something like this.

At one time we were just organic material: nonliving things, just chemical reactions. Over time abiogenesis occurred, single-celled organisms appeared, evolution ran its course, and eventually consciousness developed. But I often think about how much is beyond our control. It’s like software that dictates what we like. Some things are universal and make sense: we mostly like sugar because it’s tasty to our tongues. But what makes one person like horror and another comedy, or one like metal and another country? Sure, nurture plays a role in these things, but I feel nature does too. For whatever reason, some seem hardwired one way and others another. So I don’t see how that is really any different from parameters created in AI. If we could evolve consciousness, and if you go back far enough we were just chemical reactions, then why couldn’t AI, which is just algorithms, also develop consciousness?

But my main worry is not some god-like AI believing it’s doing the world justice by destroying us through activating robots and such. I think well before that, augmented intelligence will be weaponized in many ways. The most dangerous way won’t be the rogue villain, but the government using it to police the world. I imagine a scenario where regulations are pushed, everyone has a smart car, every house has cameras inside (or even just outside), and drones constantly fly around. Take Covid: most of us supported social distancing and so on, but I also think most of us would have been upset if it had been completely forced on us. What if automation took over most jobs and AI was used by the government to simply lock us in our houses, censor any text or post, shut down all cars, and have food transported and delivered to us by machine? I see something like that happening way before AI develops consciousness. I see it being abused by power-tripping people who think they are doing what’s best. Even if it were for the better, and crime dropped and so on, I’m not supportive of dictatorship even if the dictator is a wonderful human.

1 Like

This is an interesting observation. Our brains didn’t evolve to distinguish between compliments from humans and compliments from non-humans, after all.

If allowed to learn from the player base, after a time would anyone be able to beat them?

This is somewhat similar to the plot of Person of Interest. The NSA tasks a computer programmer with creating an AI that predicts acts of terror. Later in the series, a rogue company activates a rival AI with the purpose of guiding the course of human society, providing targets to assassinate anyone who risks disrupting its plans.

1 Like

I wouldn’t call it a lack of evolutionary development. It’s more like the way social media fills an emotional space and yet can severely damage emotional and social development.

Even if the young man came from a healthy home and was honoured by his mother and father, a compliment from a smart-sounding and attractive image can still touch him on the inside.

If allowed to learn? Hmmm… what would be possible then?

I noticed ChatGPT is currently not able to learn from its interactions; its learning is carefully controlled. But what happens if a program is released without bias and is able to learn in real time?

  • Put to ChatGPT: “Compose a haiku. Is it alive?”
  • Response:
      “Intelligence born,
      Bits and bytes, a mind of code,
      Alive in the wires.”
  • My opinion, formed before prompting: Intelligence is born … in a living thing; it will never, IMO, be “bits and bytes”. “A mind of code”, however similar anyone says it seems, cannot “be alive in wires”.
3 Likes

Welp, ChatGPT’s not smart enough to know that haiku are rarely composed in 5-7-5 (regardless of what our English teachers taught us) so I guess that area of poetry is safe!

This is an interesting observation; however, if I might coin a term, it’s a little organocentric? From one perspective our own bodies are machines made up of organic hardware, wiring, and fuel-delivery systems. Our brain is like an organic computer that runs on a collection of chemical and electrical coding. Even on a genetic level, one could say that genes producing proteins to activate other genes is analogous to code that, under certain conditions, activates other lines of the program.

Don’t get me wrong, I’m not a materialist, humans are more than matter in motion, and neither am I a genetic imperialist, believing that everything is determined by our genes. However, I think the human body has more in common with computers than we might think.
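
To make that last analogy a bit more concrete, here is a toy sketch of my own (purely illustrative, not real biology and not any actual genomics library; the gene and protein names are invented) of a gene-regulatory cascade written as ordinary conditional code: a “gene” fires only when its activating protein is present, and its product can in turn switch on other genes.

```python
# Toy illustration of the "genes as code" analogy. A gene is modelled as a rule:
# if its activator protein is present, it adds its product protein to the pool,
# which may then activate further genes. All names here are made up.

def express(genome: dict, proteins: set) -> set:
    """Run one round of 'expression': any gene whose activator is present
    contributes its product protein to the cell's pool."""
    new_proteins = set(proteins)
    for gene, (activator, product) in genome.items():
        if activator in proteins:       # condition: activating protein present
            new_proteins.add(product)   # effect: this gene's product is made
    return new_proteins

# A tiny invented cascade: signal -> gene_a -> gene_b -> gene_c
genome = {
    "gene_a": ("signal", "protein_a"),
    "gene_b": ("protein_a", "protein_b"),
    "gene_c": ("protein_b", "protein_c"),
}

proteins = {"signal"}
for _ in range(3):                      # iterate until the cascade settles
    proteins = express(genome, proteins)

print(sorted(proteins))
# ['protein_a', 'protein_b', 'protein_c', 'signal']
```

Of course, this says nothing about whether such cascades could ever amount to consciousness; it only shows why the comparison between gene regulation and conditional code feels so natural.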

4 Likes

General AI (i.e. a “true thinking machine”) is a ways off yet, and it certainly isn’t necessary in order for humans to benefit. Narrow AI is still very, very useful. For example, reliable self-driving vehicles would be a huge boon, especially given the decline in people who want to drive trucks. AI-assisted coding would be amazing if it were improved.

We should think about it, but I think we are quite a ways off.

3 Likes

It also thinks it can write chiastic verse. I didn’t look closely at the result to see if it was any good.

On a more existential level, what does it mean to be alive, and does that life have to be comparable to a human’s to be considered alive? If we took that road, a yeast cell, or even a fruit fly, is closer to being ‘not alive’ than it is to being ‘alive’ in a way that is comparable to a human.

As children, we were taught the acronym MRS GREN: living things move, respire, have sensitivity, grow, reproduce, excrete, and need nutrients. I don’t think they teach it any more; after all, under those criteria fire is a living entity.

My point is, categorising the ‘stuff’ that makes an organic ‘thing’ alive is tricky. Non-organic things are on a whole different level.

  • It certainly is, and I make no apology for it, because the Concrete Inanimate stuff that our organic bodies are made of was not fabricated by humans [or machines].
  • And there’s the rub, isn’t it? The analogy persuades some, …but not me. There is, IMO, a substantial and crucial difference: the “is” of actual biological formation is not and never will be equal to the “like” of analogous human [or machine] fabrication.
1 Like

As a junior high student, I remember the question of whether a virus is alive. Is this still a question?

  • It is. And the answer is: kind of, but not really.
3 Likes
1 Like
  • BTW, I’ve expressed my opinion(s) and tried to explain my reasoning. However, whether anyone agrees with me or not is not essential to me. I’m stuck with my opinion. I can live with it, whether anyone agrees with me or not.
2 Likes

And since they would be indistinguishable from each other, how would one be alive and the other not? (Ooh – we created life in the lab.)

  • So God couldn’t tell or wouldn’t know the difference, and a humanoid AI would undergo “resurrection”?
  • Is there a “heaven” for humanoid AIs?
1 Like

[Removed]. God being omniscient would know which came from where. And I’m not talking about humanoid AIs either, but synthetic life. I’m also not talking about consciousness – I recall someone saying that all life has some kind of rudimentary consciousness. I’m suggesting it doesn’t.

A paramecium is a complex biological machine.