I was recently reminded of the fantastic black comedy puzzle platformer, Portal. In Portal, the main character is a test subject in a maze of physics puzzles overseen by a malevolent AI called GLaDOS (Genetic Lifeform and Disk Operating System). Unchecked, the AI murders everyone (but Chell) at the Aperture Science Research Facility so that it can do science experiments uninterrupted by those pesky humans, and because… it can.
The despotic AI is a common trope in Sci-Fi and we are some way off that yet. However, given the recent leaps and bounds in AI technology, I wonder:
Is there anything we can learn here from the stories we tell and the games we play?
Is it naive to assume that a true thinking machine will docilely follow our requests or even be benevolent in its goals?
Perhaps the concerns of writers are misplaced, and AI will help humans do more, better?
What of the theological lessons and ethical implications:
Are stories of rogue AIs in Sci-Fi just the human/social conscience subconsciously retelling the Fall Narrative?
At what point do code and software become life and organism? Should we be already thinking about AI rights?
In Portal, Aperture Science’s company motto is “We do what we must because we can”. There is no doubt that we can and could create more and more advanced AI in the future. But must we just because we can?
What responsibilities (if any) would human creators have for the behaviour of AI programmes?
Feel free to answer any of these questions, none, or suggest your own. I’m more interested in getting a stimulating conversation going than arguing a point.
I can’t pretend to know enough about it to really say whether it can ever become conscious or not. Part of me thinks yes. Or rather, I do believe it can and will, I just don’t know any of the actual science behind it. My thinking goes something like this.
At one time we were nonliving matter, just chemical reactions. Over time abiogenesis occurred, single-celled organisms appeared, evolution took its course, and eventually consciousness developed. But I often think about how much is beyond our control. It’s like software that dictates what we like. Some things are universal and make sense; we mostly like sugar because it tastes good to our tongues. But what makes one person like horror and another comedy, or one like metal while another likes country? Sure, nurture plays a role in these things, but I feel like nature does too. For whatever reason some seem hardwired one way and others another. So I don’t see how that is really any different from parameters created in an AI. If we could evolve consciousness, and if you go back far enough we were just chemical reactions, then why can’t AI, which is just algorithms, also develop consciousness?
But my main worry is not some god-like AI believing it’s doing the world justice by destroying us through activating robots and such. I think that, well before that, augmented intelligence will be weaponized in many ways. The most dangerous case won’t be the rogue villain but the government using it to police the world. I imagine a scene where regulations are pushed and everyone has a smart car, every house has cameras inside of it, or even just outside of it, and drones constantly fly around. Take Covid. Most of us supported social distancing and so on. I also think most of us would have been upset if it had been completely forced on us. Like, what if automation took over most jobs and AI was used by the government to simply lock us in our houses, censor any text or post, shut down all cars, and have food transported and delivered to us by machine? I see something like that happening way before AI develops consciousness. I see it being abused by power-tripping people thinking they are doing what’s best. Even if it’s for the better, and crime drops and so on, I’m not supportive of dictatorship, even if the dictator is a wonderful human.
This is somewhat similar to the plot of Person of Interest. The NSA tasks a computer programmer with creating an AI that predicts acts of terror. Later in the series, a rogue company activates a rival AI with the purpose of guiding the course of human society, providing targets to assassinate anyone who risks disrupting its plans.
Welp, ChatGPT’s not smart enough to know that haiku are rarely composed in 5-7-5 (regardless of what our English teachers taught us) so I guess that area of poetry is safe!
This is an interesting observation; however, if I might coin a term, it’s a little organocentric? From one perspective our own bodies are machines made up of organic hardware, wires, and fuel delivery systems. Our brain is like an organic computer that runs on a collection of chemical and electrical coding. Even on a genetic level, one could say that genes producing proteins to activate other genes is analogous to code that, under certain conditions, activates other lines of programming.
Don’t get me wrong, I’m not a materialist, humans are more than matter in motion, and neither am I a genetic imperialist, believing that everything is determined by our genes. However, I think the human body has more in common with computers than we might think.
General AI (i.e. “true thinking machine”) is a ways off yet, and it certainly isn’t necessary in order for humans to benefit. Narrow AI is still very, very useful. For example, reliable self driving vehicles would be a huge boon, especially given the downturn in people who want to drive trucks. AI assisted coding would be amazing if it were improved.
We should think about it, but I think we are quite a ways off.
On a more existential level, what does it mean to be alive, and does that life have to be comparable to a human to be considered alive? If we took that road, a yeast cell, or even a fruit fly, is closer to being ‘not alive’ than it is to being ‘alive’ in a way that is comparable to a human.
As children, we were taught the acronym MRS GREN: living things move, respire, have sensitivity, grow, reproduce, excrete and need nutrients. I don’t think they teach it any more; after all, under those criteria fire is a living entity.
My point is, categorising the ‘stuff’ that makes an organic ‘thing’ alive is tricky. Non-organic things are on a whole different level.
It certainly is, and I make no apology for it, because the concrete, inanimate stuff that our organic bodies are made of was not fabricated by humans [or machines].
And there’s the rub, isn’t it? The analogy persuades some, but not me. There is, IMO, a substantial and crucial difference: the “is” of actual biological formation is not and never will be equal to the “like” of analogous human [or machine] fabrication.
BTW, I’ve expressed my opinion(s) and tried to explain my reasoning. Whether anyone agrees with me is not essential to me, though. I’m stuck with my opinion, and I can live with it either way.
God, being omniscient, would know which came from where. And I’m not talking about humanoid AIs either, but synthetic life. I’m also not talking about consciousness – I recall someone saying that all life has some kind of rudimentary consciousness. I’m suggesting it doesn’t.