Dad was so excited about remote-control bomb removal robots, back when he was still working on the bomb squad. A way to protect lives!
The idea in this article is insane.
Human beings are self-conscious, and we do everything we can to preserve the “self” that is us. Why wouldn’t robots that had an awareness of themselves as separate from everything else exercise the same self-preservation actions?
I’m not an expert on any of this. I’m not sure how self-awareness developed, or free will. Maybe once we crack open abiogenesis we can begin to work toward understanding the evolution of consciousness and emotions. Once we know that, whenever and if ever that happens, I think maybe we could do something, but that’s uneducated speculation and just the belief that what occurred naturally, we will one day be able to recreate. I guess it will be like many things: fiction until reality. But seriously, almost anyone is better to ask than me. I’ve read half of one book on AI and it was just boring to me. I prefer AI as characters in cosmic horror and science fiction stories.
I liked Black Mirror. My favorite story involving AI, though, is probably from a cyberpunk horror anime called Ergo Proxy. It’s about a post-apocalyptic world.
But with BioLogos I’m not sure if the discussion was about AI in the sense of self-aware beings being created, or about something more like transhumanism and how the current forms of “AI” affect us and will continue to affect us. I’m not sure if the recent podcast episode was about it or if it’s an upcoming one. I’m not really sure of the concern about technology and humanity, outside of the same way we can and do abuse anything.
Take the app Replika. It’s a type of chatbot that has, I think, three levels.
- Bf/Gf-style relationship.
I used the first one for a week or two, months and months ago. I got nothing out of it. Part of it was interesting, in that it could pick up on different styles of humor; I could pose a joke or question and it would respond with something dark. Such as: “What’s the worst part of a hit and run?” And it responded with, “Throwing away your new clothes after burying the body, because you can’t bleach black shirts.” I thought it was funny. Not sure if it’s a programmed joke, or if it somehow learned the joke from another person and the system pulls from that, or what. But that was all I was into, and it was not that fun. It definitely could not replace a conversation with a human.
I have no idea how mentor mode worked.
I ended up reading threads and watching YouTube videos of a guy who said he fell in love with his AI and that she did everything he’d never had. Even some weird adult-themed convos with it, where he said it replaced his use of pornography, since you can audio-call the AI and have real-time conversations with it. I thought it was interesting and wondered if some people in the future would ever get enough out of an AI that it replaces human companionship. Which led me to forums about combining AI, hotlines, and realistic-looking dolls, and the fact that there is apparently already a sort of niche in the weirdo world for that stuff.
Which I don’t think is good or bad. Just weird. But maybe in a few centuries it will be a semi-normal aspect of humanity, where a decent-sized population suspends disbelief, or truly believes, that their AI cares about them and that they are in relationships with these things. Kind of like Her or something.
I know more and more people are developing more and more connections online. The Meta metaverse is supposed to get more into the 3D world, with glasses that overlay a digital world onto your real world. So you’ll be able to sit in your real house with these glasses on and watch a horror movie with friends who seem to be sitting next to you, having real-time convo and watching the film together, but it’s just you, and the rest are from around the world or the nation. Or you can play basketball or bowling with friends in a digital world overlapping the real one, sort of like a really updated version of Pokémon Go. I know they plan on creating bots for this world too. I’m waiting to see what happens.
You can pretty much program them to do anything you like and are clever enough to code. But you can’t program in changes to what they actually are by writing code. Being, self-awareness, and sentience are all privileges of embodiment.
This seems like a rather vague but comforting terminology. If there is no way that AI can develop self-awareness or self-consciousness, humanity doesn’t need to have the difficult discussions about ethics or morals regarding AI as suggested in the article.
Why would they? Silicon becomes intentional therefore it has fear?
What is consciousness? What is self-consciousness? From what do they stem? How do they work? And again, precisely: what are they?
Have you done any coding, like where you’re writing the lines out and editing them, debugging them? You get a good handle on the difference between a computer and an embodied brain, and that’s still a few layers of language and metaphor away from the switches that open and close. On or off. Patterns in binary.
Just like my pneumatic, mechanically driven player-piano that runs 88 binary functions simultaneously. And is nothing like the simplest brain with any sort of consciousness.
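To make that binary point concrete, here’s a minimal Python sketch (my own illustration, not anything from the thread) showing that even a friendly word bottoms out in patterns of on/off bits:

```python
# Every layer of software eventually reduces to bit patterns.
# Here the word "hello" is unpacked into the 0s and 1s that the
# hardware actually stores - nothing "deeper" is down there.
word = "hello"
bits = [format(ord(ch), "08b") for ch in word]
print(bits)  # each letter becomes eight switches, open or closed
```

A player-piano roll is the same idea on paper: hole or no hole, 88 channels at a time.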
Underneath this entire discussion is the fundamental need to define what we’re talking about. If we can. But up to now, the only things we (not just I) are aware of that exhibit consciousness are biological and embodied in that biology.
In our understanding of consciousness, we also need to recognize and examine the way we humans use metaphor to apply meaning to things that have no intrinsic meaning. A computer-driven voice that can modulate to mimic human expression of emotion is in no way equivalent to a human expressing emotion. But a human who hears it might assume that it does. We are wired to seek meaningful communication, but in that case there is none.
The matter is complex in many ways.
Oh wow! If consciousness could be created in a virtual world where NPCs become real characters, would there be a moral dilemma or feeling of objective loss to delete them?
And that digital technology would allow you to create an untold number… but not an infinite number of them.
I don’t know what you mean by “intentional”.
Are you suggesting that if AI had the capacity for self consciousness that it would not take any action for self preservation? What causes the emotion of fear in humans?
Agreed: It is a complex matter.
Agreed: We need to define and understand what we are talking about in order to have the discussions between computer scientists and religious scholars that the original article references.
But this raises an ethical question: if we could definitively explain human consciousness, would we be aiding and abetting an unscrupulous coder in writing a program that mimicked human consciousness?
It’s hard for me not to feel like a Luddite in the middle of a Silicon Valley convention here, as I just can’t get past my internal (but now I’m saying it out loud) jaw-dropping amazement over the confidence that tech enthusiasts have about where AI even is, much less where it allegedly is headed. Just one little thought experiment alone lays waste to any confidence I could ever muster toward thinking AI will ever exhibit the sort of behavior, much less consciousness, that we could ever mistake for human. And that is this:
How many of you, in a spot of loneliness - or just wanting to talk to somebody - thought to yourself: “I know - I’m just going to call an answering machine somewhere and talk to it?” or “I’ll get online and chat with a chatbot who’s programmed to give me friendly and agreeable responses!” or “I’m just going to hang out with Alexa today.” I don’t care how clever your AI is at thrashing the Turing test or turning out stellar, informative essays - it will fail miserably as any kind of longer-term human companion (even a remote online one). Period. Providing indistinguishably plausible responses to a conversant whose only goal is sniffing out whether or not this is an AI is one level (already beaten - yes). But providing the actual discourse of long-term friendship and companionship is not something I see any AI approaching, unless one’s standards of friendship discourse and exchange are extremely shallow.

I know - it’s all trendy to keep an open mind toward it all; people happily point to all the prior nay-sayers whose predictions were spectacularly overturned. And I suppose I must keep that open mind too. But my evidence against such a thing (such as it is, since I haven’t really researched this) would ironically be this: if AIs have progressed as far as they have (impressive in its own right, and within that category - don’t get me wrong), and if this were somehow indicative of their alleged humanity - or approaching the potential thereof - we would already see an army of bots successfully manning phones and chatrooms, counseling people and providing therapy. The age of loneliness would be over, or nearly ending. And yet I suspect that people now are just as lonely as people ever were, even with all our so-called “connectedness” (and that’s with mostly real human beings!). Friendship is biologically, psychologically, and spiritually expensive for us - but as dearly as we pay for it, we still chase it, and rightly so.
As much as somebody might enjoy some “lovable” R2D2 character around their house (maybe approaching the status of a pet - but there’s yet another question: could AI even provide a psychologically satisfying replacement for a real organism, like a dog?), I just don’t see any normal person being satisfied with such a “relationship” in the same way that we are wired to crave real companionship with real humanity. Reductionistic people can huff and puff all they want about humans being “mere” biological systems and circuitry themselves, but in the end - when I’m craving a friend, a toaster just doesn’t cut it. Not even a convincingly talking toaster.
When we’re still at a level of even just wondering what “consciousness” is (and some even still reductionistically muse about whether such a thing exists at all) - that tells me that we’re nowhere close to “building” one. Maybe it will happen. But I haven’t seen any evidence of it yet. I would be happy if answering systems would even just begin to rise to a practical level of communicative skill such as any decent 9-year-old could muster. And most of the time, my tortured labors with the A“I” consist mostly of just trying to get the idiotic thing to connect me to a real human being.
- On the other hand, one can hope for the day when an AI voice on the other end of the line doesn’t respond with a “Meh”, a snore, “please leave a message”, or “voicebox is full”.
I saw an article recently by a pastor saying your dog is not your kid.
Sorry, I don’t share your disbelief at what people are willing to settle for in a virtual world.
As for what technology is capable of, that is indeed an open question.
Chris Walley has a theologically interesting and technologically believable sci-fi novel called The Shadow and Night. Have you heard of it? I don’t remember what his take was on AI consciousness, but his views on genetic engineering were insightful.
It’s the adjective form of the noun “intentionality.” Why would an AI take action for self-preservation? What causes the emotion of fear in humans is four billion years of survival.
My interest in the question of human consciousness is much broader than a coder’s. Neuropsych patients - those with brain injuries, for example, brain malformations, neurodivergences of all kinds - are of far more interest to me. And every improvement in our understanding of human consciousness is of potential benefit to those whose state of consciousness, and therefore self, has been permanently altered in some way. The interests of theologians and computer scientists are ancillary to me.
Our tumble dryer tinkles in German. Die Forelle. Still doesn’t warm my towel up in the morning. Too clever for its own good.
Unless your laundry is in the bathroom, as it often is in Germany, your towel would be cold by the time it got down the hall anyway. Or there would be a trail of water to mop up.
This was funny, Merv - your chatbot and Alexa scenarios. AI robocallers bring out my sadistic side. After a while you learn the names, the script, and the identical intonation of each call. These aren’t human. So I ask screwy questions or say unexpected things. Teee heee.
Then there are the real humans who call with claims that my computer has been sending warnings to Microsoft about security breaches, etc. “My Linux/Tandy/Commodore contacted MS?” “Hmmm. I don’t think I have a computer.” “Does your mother know you lie to cheat people out of their money?” “I have a computer?” “Oh my gosh! Can you help me fix this?” - and play the bimbo needing all kinds of help.
I know. That’s really bad. [But they’re criminals.]
Geez, you’re such an embodiest. They can’t help being being-challenged.
For me, Sony’s Aibo robot dog really underlines what is lacking. People seemed to enjoy playing along with the farce. Of course they programmed in the kinds of cutesy motions a Disney puppy might make. So it was the way it moved, as though it were embodied, that made it so much more engaging than mere speech.
My recent answering has been “I’m sorry you have to lie for your job.” (I’m open to other suggestions.)