Pope G7 Speech - artificial intelligence at the same time an exciting and fearsome tool

I am reading the Pontiff's speech transcript on artificial intelligence. It's an interesting angle, and I think it is one of the first times I have been refreshingly surprised that religion is capable of producing relevant speeches that are not simply redneck conspiracy ranting.

The Pope makes the point that

artificial intelligence is above all else a tool. And it goes without saying that the benefits or harm it will bring will depend on its use…it has been this way with every tool fashioned by human beings since the dawn of time

Our ability to fashion tools, in a quantity and complexity that is unparalleled among living things, speaks of a techno-human condition: human beings have always maintained a relationship with the environment mediated by the tools they gradually produced. It is not possible to separate the history of men and women and of civilization from the history of these tools. Some have wanted to read into this a kind of shortcoming, a deficit, within human beings, as if, because of this deficiency, they were forced to create technology.[5] A careful and objective view actually shows us the opposite. We experience a state of “outwardness” with respect to our biological being: we are beings inclined toward what lies outside-of-us, indeed we are radically open to the beyond. Our openness to others and to God originates from this reality, as does the creative potential of our intelligence with regard to culture and beauty. Ultimately, our technical capacity also stems from this fact. Technology, then, is a sign of our orientation towards the future.

I think as Christians it's important that we recognise that the talents God has given us as individuals are not inherently evil, and that we need to ensure we further His will by using those talents for good and for furthering the Gospel. Too often the conservative inside us resists change…even though it is quite clear that technology has allowed an explosion of “spreading the gospel to the nations”.

I was only talking with someone yesterday about how much difference fax machines made to the way businesses communicate. My friend had an engineering business; prior to fax machines, he had to wait for drawings for jobs to come in the mail!

Whilst I may not agree with the Catholic religion's inherent heresies, on this [AI] the pope and I agree. We need to embrace knowledge, embrace technology…and of course, as the age-old saying goes (one that I believe was first coined by the French author Voltaire, who was, ironically, trained by Jesuits at the Collège Louis-le-Grand), remember, “with great power comes great responsibility”. As the pope warns, ultimately, decision-making is a human process (also one of conscience, I think).

I think the pope probably misses the point on banning the use of AI in weapons…unfortunately, convenience and efficiency far outweigh moral reasoning in this fundamental area…the use of AI to determine who lives and who dies is inevitable, and that is a sad fact, I think. The pope said in this regard:

Being classified as part of a certain ethnic group, or simply having committed a minor offence years earlier (for example, not having paid a parking fine) will actually influence the decision as to whether or not to grant home-confinement. In reality, however, human beings are always developing, and are capable of surprising us by their actions. This is something that a machine cannot take into account.

Probably the most worrying implication of AI that the pope talks about is its effect on young people in the arena of education…

the more it finds a repeated notion or hypothesis, the more it considers it legitimate and valid. Rather than being “generative”, then, it is instead “reinforcing” in the sense that it rearranges existing content, helping to consolidate it, often without checking whether it contains errors or preconceptions.

In this way, it not only runs the risk of legitimising fake news and strengthening a dominant culture’s advantage, but, in short, it also undermines the educational process itself

In a more or less explicit way, this constitutive power dimension of technology always includes the worldview of those who invented and developed it. (Pope Francis)

link to speech



AI is still only as good as its programming. Even if it can “learn”, there still have to be parameters. It is almost impossible to program ethics (despite Data in Star Trek).



Salva Magazine has some great articles on this:

Many articles are free to read.

There was a news story on our local Sydney radio station yesterday that I'm not surprised by; however, it does pretty much sum up the state of things…

Australia, Canada, New Zealand, the US and the UK all share the private data of their citizens (such as facial recognition from public security cameras, biosecurity information from medical records, dental records etc)…

AI almost certainly has access to all of the above (and more)…

I am not one who is particularly interested in conspiracy theories; however, even I must admit, our lives are not our own anymore, that's for sure. We are at the mercy of technology.

I guess it's a price we pay for wanting a TV remote (one of life's many conveniences).

I was encouraged a few months ago by an article which reported that some country was considering a law that would make all data about a person the property of that person, so that anyone using it would have to pay for the use of that data. I've been advocating that approach for a bit over thirty years, because to me it seems like common sense: I'm the one whose life generates that data, so it is mine.

The more so in the U.S. now that we have a Supreme Court with a majority that does not recognize a right to privacy.


While we may not agree with adamjedgar's heresies, I think we can agree with him about several points he has made, such as recognizing that the gifts God has given us are not inherently evil. I also think it is important to mention, as he does, the foolishness of using AI in weapons.


I expect that is imminent. Drone warfare has become a deciding factor in the current conflict, and signal jamming is the countermeasure. AI recognition cannot be jammed, so some measure of drone autonomy is probably already being developed.

The point is that the target has already been determined; the AI just has to accomplish it. The intelligence is in avoiding or coping with whatever is trying to stop it.

Did you see the Voyager episode that dealt with the ultimate smart bomb? (“Warhead”) Alright, they persuaded it to self-destruct, but that is really TV justice. The fact of the matter is, when you give over power to technology you are in the hands of the programmers, and they cannot think of everything, as with the other AI bomb in Voyager, the Dreadnought. At the end of the day, AI cannot make the determinations we can.


In the U.S. everyone should find that particularly scary, because under the Insurrection Act the president could deploy drones to “pacify” gatherings of his political opponents, and while Congress could theoretically stop him, that would require a 2/3 majority in both houses; plus, the action would not be subject to judicial review.


The idea of “obedient as dogs but not as faithful” popped into my head.

Unfortunately, that is not true anymore. AI-guided missiles (or drones) can make decisions about the target within given parameters. I do not know how much such autonomy is utilized; perhaps more often than authorities are ready to confess. A few years ago, I watched a documentary about some modern weapons used by the US army. The documentary was some years old, but even at that point there were missiles you could launch towards the enemy area and forget: the missiles scanned the enemy roads, selected the best possible target using a priority list of potential targets, and attacked the target they selected.

Target selection in the Gaza war has been based on suggestions given by AI. This has led to some seriously bad choices, causing unnecessary civilian casualties. The AI estimates the probability that a given location hosts enemy activity, without considering collateral damage. Humans should exclude targets with a high number of civilians, but it seems that in a war it is easier to accept the suggestions than to do the time-consuming checking of how many civilians are within the blast area.

In conflicts that demand very rapid decisions, humans are simply too slow to make them. AI will make those decisions, whether it is defence against missiles or an attack against a fast target. In fighters, humans cannot tolerate as much G-force as the equipment. Autonomous fighters are planned and coming into use, whatever the official statements say. I listened to an interview with two retired Israeli generals, and they took it for granted that autonomous fighters are coming and will be used in future wars, because humans are too slow to make decisions in aerial warfare.


I wonder how many people remember Buck Rogers in the 25th Century?
In it the fighters relied on AI and were always losing; it was only when they went back to human control that the tables turned. OK, so it is fiction, but there are many who think that technology is a great servant but a poor master (see also “Booby Trap”, TNG).
I guess that no matter how powerful we make AI, it can never match the brains we were born with.


AI is clearly superior to humans in some tasks. It can do calculations, and make decisions based on those calculations, much faster and more accurately than humans. It can also find patterns or solutions to problems that would be too time-consuming for an ordinary human, because it can go through huge amounts of data in a very short time. AI does not miss or forget details as humans do.

There are still tasks where humans are better than AI. As technology develops, AI will reach human skill in some areas, but it is not evident whether it can ever surpass the human brain in everything. Maybe AI is too limited to mathematical calculation to reach the level of intuition and compassion of the human brain. Intuition-like problem solving may become possible for AI, but will AI ever have feelings like humans? Time will tell, but an AI with feelings might be an unpredictable and terrifying creature.


I was talking with my nephew, and he stated that AI could replace all the employees in his computer-security department if desired, and has pretty much replaced the accounting department. Sort of scary when choosing a profession these days.


As in Terminator.

Or I Robot.

And there are many more.

Speed has its advantages, but where humans excel is in thinking outside the box. AI cannot think outside its parameters.


AI can think outside the box, which is kind of creepy at times. For example, AlphaGo (the Go-playing AI that can beat human Go players) will place stones in really strange and unintuitive positions on the board, places no human would put them. And it works. These are solutions the AI found that no human player would play.

I would say that one of the advantages of AI is that it does think outside of the box. Humans have biases, and they color which pathways and strategies we take. An AI trained outside of those biases can find new solutions that we would not easily reach.

But as you say, the tool is only as good as its manufacturer and only as good as its user.

