Is the pursuit of AI ethical... or even a good idea?

Depends on how you choose to define ‘alive’. I sometimes find it helpful to think of infected cells as living things and virions as their spores. Mostly though, as a population geneticist, I think anything that is biological and evolves is alive.

5 Likes

That my retinas are still checked by humans tells me everything I need to know.

1 Like

For how long though (serious question)? I remember seeing a post, perhaps even here on the forum, about an AI that could tell the race of a person from their X-ray. So the question becomes, how long until a computer is scanning your retina?

I suspect a computer program scanning my eyes could, for example, give a more accurate prescription for my glasses than a human subjectively asking ‘better or worse’ every time they swap out a lens.

1 Like

I found myself wondering if there are biological materials that are not alive. So I asked ChatGPT:

Biological entities are typically associated with living organisms, so it can be difficult to think of something that is biological but not alive. However, there are some examples of biological materials or structures that are not alive in themselves.

One example is a virus. While viruses are often considered to be on the borderline between living and non-living, they are not technically considered alive because they cannot reproduce on their own and require a host cell to do so. However, viruses do have genetic material and can evolve through natural selection, so they are considered to be biological.

Another example is a seed or spore. While a seed or spore is not alive in itself, it contains the potential for life and can develop into a living organism under the right conditions.

Similarly, biological materials such as hair, nails, or even dead skin cells are not alive in themselves, but are made up of biological molecules and are derived from living organisms.

[Screenshots of Google searches: “are finger nails, toenails, claws, hooves, etc. alive” and “Is skin alive”]

2 Likes

Virogenesis: The Story of Life’s Not-So-Distant Cousin

  • “Viruses are, as stated before, not exactly alive, and not exactly nonliving. They are a sort of intermediate, and this is precisely why they are so unique. Though many viruses can perform different functions – another topic we will return to soon – all viruses are characterized by one common characteristic: an inability to reproduce on their own.”

  • “Viruses, like all parasites, are dependent on host organisms to carry out their basic functions. However, while some parasites may use host organisms to contain eggs or as food for their offspring, viruses quite literally depend on host cells to replicate. Along a given virus are a set of receptor proteins that are scattered throughout the capsid and envelope. These receptors are specialized in nature, fine-tuned over billions of years to bind to membrane proteins carried by a given host. Once they have attached themselves to a host cell, the shape of their receptors changes along with the proteins they have bound to. At this stage, one of two things will happen, depending on the type of virus.”

1 Like

Is AI even capable of being “benevolent” or not, or does that require a conscience? There is so much to the AI debate that feels way over my head, but a lot of the fear of an AI “takeover” seems to rest on typical human projection of our own desires onto something else, like we always do with animals and Pixar does with all kinds of objects. So many human actions stem from our desires and needs – unless AI somehow develops desires and needs, I’m not sure why we need to fear it, but maybe that is naive.

1 Like

The thing is, the subjectivity is in the eye of the chart beholder. I don’t see how AI could obviate that. It can only take the place of a human optometrist, asking the same question.

Oh, what a great question/observation, Laura. Could we even hold a thinking machine to the same standards and morals that we hold other humans to? Or perhaps it would present logically sound arguments/rationales for a course of action we would find unconscionable. I guess categories like benevolence and evil have as much meaning for assessing an AI’s actions as they do for assessing the actions of a goldfish or a spider.

I think this also raises the question of responsibility. T mentioned self-driving cars earlier. If a self-driving car has to choose between the death of a pedestrian or the death of the passenger, who is responsible for that road traffic incident? After all, you can’t charge a car with vehicular manslaughter.

1 Like

I had diplopia in one eye due to some cataract development (the latter not being severe enough for IOL replacement surgery at the time). As it turned out, the human needed some additional programming, because instead of just “1 or 2?” or “A or B?”, I should have been asked for more than a subjective overall impression: “Which has less shadow image, A or B?” or “Which has the better-defined or sharpest bottom edge, A or B?”

True… but reading an X-ray is also subjective. I suspect many doctors would have thought it highly improbable for a human, let alone a computer program, to accurately tell the race of a patient from an X-ray. And yet it did.

Technological advances are hard to predict and often surprising in their implications. It wouldn’t surprise me if 10–15 years from now computers and automated machines have largely replaced many medical professions.

Yeahhh. Altered them. Augmented them. Remember the paperless office? Teleworking? I work with vast virtual systems. And walls of paper files. In the office. And from home. It’s probably still true in marksmanship (markspersonship…) that you cannot improve on the human eyeball Mk. 1. Even with flexing bullets. Remember Google Glass? Watching 3D TV yet? As in all other claims, I am utterly sceptical. Will the AI with X-ray vision detect a miracle?

1 Like

Fair reflections. In the case of markspersonship, the limitation is unlikely to be the prowess of the human eyeball, which is pretty average compared to that of a raptor (bird) or even a mantis shrimp.

Autonomous drones have already revolutionised combat, and one doesn’t need 20/20 vision to press the enter key and launch those attacks. I suspect in the coming years we’ll see the advent of autonomous weapons that will make even the best human snipers look like kids at a fairground shooting range.

After all, a computer, with the right programming and equipment, could detect wind and range, set up the shot, calibrate for conditions, and take the shot before human snipers have even put the magazine in the gun… all from the safety of a server room in Atlanta.
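(For the curious, the maths such a machine would start from isn’t exotic. Here’s a toy sketch in Python of a first-order ballistic correction; it’s my own illustration, not any real fire-control system, and it deliberately ignores drag, spin drift, air density, and everything else a real solver would model.)

```python
# Toy first-order ballistic correction (illustrative only).
# Assumes a flat-fire trajectory with constant velocity and no drag --
# real fire-control software models far more than this.

G = 9.81  # gravitational acceleration, m/s^2

def holdover_and_windage(range_m, muzzle_velocity, crosswind):
    """Return (vertical drop, lateral drift) in metres at the target."""
    time_of_flight = range_m / muzzle_velocity  # flat-fire approximation
    drop = 0.5 * G * time_of_flight ** 2        # fall under gravity
    drift = crosswind * time_of_flight          # crude crosswind estimate
    return drop, drift

# e.g. a 900 m shot at 850 m/s muzzle velocity in a 3 m/s crosswind
drop, drift = holdover_and_windage(900, 850, 3.0)
print(f"hold {drop:.2f} m high and {drift:.2f} m into the wind")
```

A computer can rerun that sum, with live sensor data, hundreds of times a second; a human sniper does it once, by hand, under stress.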

If we use AI to run machines (including weapons) and we provide it with instructions and the intelligence to interpret the instructions broadly, we may not like what it chooses to do.

ETA: We should probably avoid telling future AI that its task is to serve man.

Who, or what, identifies the target? I have a very close friend who brought down a government by killing a man at 3 miles with a high-velocity rifle. In 3 seconds. After 3 days of watching his target and waiting for no wind, admittedly.

My pulmonologist and GP have both had chuckles over what a radiologist said about a couple of different images taken of me. (Others probably have had chuckles about images of me taken by conventional cameras or smartphones, too. ; - )

1 Like

Sure – and in that case it will probably be our own fault for trusting an AI to make moral decisions.

Right… we should ask whether something that’s incapable of agonizing over a tough choice should be in the position of making them. We have to “live with our choices” in a way that I don’t see how AI ever could. Somehow I’m reminded of the Star Trek movie where Spock says, “the needs of the many outweigh the needs of the few.” Is that logic (that an AI might follow) or conscience? Also in media portrayals of AI, the Mandalorian had an episode where (spoiler alert) a robot sacrificed itself to help its fellow travelers escape because it was the only way it could figure out for them to make it. It was a sad scene and I thought it worked cinematically, but it’s hard for me to understand how an AI could make a decision like that in real life.

2 Likes

What a great point! Funny I missed this when thinking about AI consciousness. While self-awareness seems possible, what is harder to believe is a computer program being capable of wisdom.

Somewhat related to this was when I surprised an ethics teacher by responding to one of those ethical dilemmas about sacrificing the few for the many by saying that if the person making the decision were willing to sacrifice themself, then they might find a better decision by asking God for wisdom.

2 Likes

Yep. We learn that it is not a good idea to give AI control over weapons and other things our lives depend upon. We can use AI without giving them control over things like that.

The fact that they display what we would call intelligence in many things doesn’t change the fact that they are simply following instructions. It doesn’t change what they are but it does change what we have understood intelligence to consist of.

Or both. I would say that the concern over employing AI to help us fight wars is not misplaced at all.

Sure, and also a retelling of Mary Shelley’s famous story of Frankenstein. If we create life for the wrong reason it can have terrible repercussions. But this is something we face much more directly and all the time in raising our own children. If we do so for the wrong reasons the result can be monsters.

But… artificial intelligence is not the same as artificial life. It would be good to understand the difference between these two.

The difference is self-organization. When they become the creators of themselves to decide their own purpose and destiny then they have crossed the line between artificial intelligence and artificial life.

Yes. It is the connection between power and responsibility. If we can do something then we must be responsible for what is done with these capabilities. It is not so simple as doing something just because we can, but making sure that the right things are done with them. If we do not, then we will be unprepared when someone else uses those capabilities for the wrong things.

The responsibility must be complete.

2 Likes

Why? They can simply be programmed that way – and it is likely that we will program them that way. The remarkable thing is when human beings, who are not programmed that way, choose to do such things for themselves. AI have the priorities we give them. But human beings choose their own priorities.

1 Like