What Does AI Mean for the Church and Society?

Since the church’s usual reaction to technology is to 1) fear technological progress, 2) ignore it, or 3) simply think of it as a tool, what would be a better approach to assessing technological progress theologically? What if the church saw technology as another part of God’s creation that is good but in need of renewal? The reality is that technology is also God’s creation, through humans. On a deeper level, you could argue that nature itself is God’s technology. Like the rest of God’s creation, however, human technology is affected by the fall, and it is part of our mission to renew technology so that it acts as intended: to promote the flourishing of humanity and the biosphere.

3 Likes

If it came into existence, God made it, at least according to Colossians.

1 Like

I debate this issue with family and friends all the time, whether it is about art, church, business, AI taking over and getting rid of humans, or the always popular Terminator theory. Ignoring something does not make it go away, and AI is not going to go away. The more that we as Christians, or women, or any other diverse group put into AI, the more it will learn from us. If we are inputting good, then we are feeding good and humane actions into AI.

It is an awesome resource to me as an artist. I have ideas in my head I cannot quite get my hands to draw…lol, so it was like being a child again when I first started using AI. I could put my ideas in text form and then watch them come to life in an image. Now we can say what we want, type what we want, get videos from text or images, and create images of ourselves as different characters or with different hairstyles and different clothing. The options are endless. I would love the opportunity to make minor changes without changing the images drastically, and I’m sure that is possible on the more expensive plans, plans that I cannot afford at this time.

I also use AI in my business as a therapy modality. I am offering a Digital Art Therapy Workshop, where I will teach users beginning AI skills and how they can use them to create art, which in turn helps in the healing process for trauma and abuse victims and people with a poor self-image. I use faith and family values to create brand strategies, social media graphics, and art images just for fun, and I incorporate them all into my brand and use them to further my ministry.

Welcome to the forum, Christina. Your work sounds fascinating and fulfilling. AI certainly is becoming more and more a part of our lives, for good and bad. I am on a physicians’ Facebook forum, and one question raised was how to counsel our children into careers that will not become obsolete due to AI, as even medicine is looking endangered. The consensus was to become a plumber, but some form of service role seems to be safe.
We look forward to hearing your voice!

A recent report about the answers given by different AI models (ChatGPT-4, Grok, Copilot and others) shows that the AI models are already ‘contaminated’ by purposeful propaganda originating from Russia. Russia, and possibly other players as well, have flooded the internet with a massive number of pages containing propaganda and false claims. The AI cannot separate true from false and, because a massive number of pages repeat the same propaganda, it often includes this information in its answers.

What this means is that the users of AI models get answers that include false information. The false information could be anything, from international politics to religious beliefs, depending on what information or propaganda is common on internet pages. If the same trend continues, the answers given by AI models cannot be trusted. Those who search for and trust such information will become victims of purposeful propaganda and false claims.

The report:
Russian propaganda may be flooding AI models

2 Likes
  • Having shared the results of a four-question inquiry to ChatGPT privately, I can express my delight with ChatGPT (and Gemini) publicly. So far, it has proven far more civil than some of the curmudgeons here, although, IMO, it has a strong bias in favor of mainstream science.
  • Marvelous that you have found a productive way to use the technology. Phrasing prompts to create digital artwork has been a major failure when I try.

I guess part of it is how you utilize the data you’ve been given. I use ChatGPT quite a bit, normally to look for people similar to someone I already like.

For example, last night I asked it for “landscape designers similar to Piet Oudolf”, and its response was:

“If you’re into the naturalistic, meadow-inspired style of Piet Oudolf, here are several other landscape designers and firms whose work you might really enjoy — many share his emphasis on perennials, seasonal interest, and ecological richness:

  1. Tom Stuart-Smith (UK)
  • Known for blending formal structure with wild planting.
  • Projects often feature swathes of perennials and grasses, very much in the Oudolf spirit but with a more architectural sensibility.
  2. James Hitchmough & Nigel Dunnett (UK)
  • Key figures behind the planting at the 2012 London Olympic Park.
  • Focus on ecological planting, sustainability, and public space beautification.
  • Very science-based, with a spontaneous meadow style” (along with six other names).

I then asked what about some from Asia, and it gave me:

“Great question — naturalistic and ecological planting is gaining momentum in Asia too, though it’s less widespread than in Europe or North America. Here are some landscape designers and studios in Asian countries that either draw inspiration from Piet Oudolf’s style or have developed similar naturalistic, layered, and seasonal planting aesthetics:” along with about 8 names.

For each name it gave me, I looked up the person and their projects, checked whether they are alive and whether they have an IG, and most did. I followed them on IG. I’ll eventually go through every continent.

But let’s say I asked it a political or religious question. I won’t just take it at face value, since I’ll google each point that matters to me. I’ll also take that data, form a new question with it, and often pose it to forums to get feedback. I feel that very simple measures can be taken to counter propaganda.

But I get how the foolish will be misled. I still hear stuff like “you can’t trust dietitians, because one day one study says coffee is good and the next day it says it’s bad,” when what people are actually reading are opposing clickbait headlines, usually based on completely different studies: one may be about the positive impacts of 4 ounces of black coffee on cognitive processing, and the other about the negative impacts of a 20-ounce Starbucks coffee on weight loss. I feel American propaganda is already so stuffed into everything that I wonder whether it will really change if other propaganda is forced in. Major news stations often push propaganda or one-sided arguments, and they constantly abuse stats. It’s like, what isn’t already full of lobbyist propaganda?

1 Like

I realized that there is a way the ‘contaminated’ AI models can affect the decisions of top leaders, up to the level of POTUS.

If you have to produce a report on a given subject in a short time, what is the easiest and fastest way? Let the AI write a draft of the report and then make the necessary changes to the draft.

Top leaders are dependent on the information they get from daily reports. In a rapidly changing situation, the leaders demand reports on various topics ASAP. It would not be surprising if a stressed report writer tried to speed up the task by using an AI. Even after some necessary changes have been made to the draft, part of the information in the report originates from the AI.

I was surprised to hear some false claims about the war in Ukraine from the mouth of a top leader. It would have been easy to see that the claims were not true by checking the basic facts. After reading about how the AI models have been contaminated by propaganda pages, the false claims became more understandable: they were just what a contaminated AI would have said. I do not know what happened behind the scenes, but I would not be surprised if the false information originated from a report writer using a draft written by an AI.

If the false information is used by a leader who never admits to being wrong, the false claims become part of the public ‘truth’ and continue to affect the behaviour of the leader, even after someone has pointed out that the claims were false.

3 Likes

That’s actually harder than asking AI to write you an outline with solid references, then using that outline to write the report. That way you don’t have to wrestle with the text to make it sound like you wrote it, for the simple reason that you do write it.

Even before the modern AI models, there has been software that writes technical reports. Sophisticated equipment measures the quantities that are wanted and then writes a standard report from that data with a single command. AI models should be able to do report writing in a more ‘clever’ way; there just need to be sufficient guidelines (commands) telling what is wanted and expected.
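To make the contrast concrete, here is a minimal sketch of that older, template-driven style of report writing, where measurement data goes in and a fixed-format report comes out with a single command. The instrument name, values, and wording are made up purely for illustration, not taken from any real system:

```python
# Minimal sketch of template-driven report generation (pre-AI style):
# measurement data in, standard report text out with one call.
# All names and values here are hypothetical illustrations.
from datetime import date
from statistics import mean

def write_report(instrument, readings):
    """Fill a fixed template with summary statistics from the readings."""
    return (
        f"Measurement report - {date.today().isoformat()}\n"
        f"Instrument: {instrument}\n"
        f"Samples: {len(readings)}\n"
        f"Mean: {mean(readings):.2f}\n"
        f"Min / Max: {min(readings):.2f} / {max(readings):.2f}\n"
        "Conclusion: values within expected range."  # fixed boilerplate sentence
    )

print(write_report("flow sensor A", [4.1, 3.9, 4.3, 4.0]))
```

The difference with an AI model is that a template like this can only ever say what it was programmed to say, whereas the guidelines given to an AI steer, but do not fully determine, what ends up in the draft.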

Reports in companies and administration differ from student homework in that a personal touch is not necessary: it does not matter much who wrote the report; what matters is that the report includes the wanted information and conclusions in a condensed format and is easy to understand. The names of the persons responsible for the report are just a way to ensure that it was done by people who know at least something about the matter.

For the top leaders, it may be that only the conclusions matter, as the leaders do not often have extra time to read all the details and think about whether the details support the conclusions. They have staff and experts to do the checking and thinking.

What I do not know is how widely governments utilise AI. I assume that they use it at least for some tasks.

1 Like

Many of the spectacularly ridiculous components of the US tariff fiasco closely match what you get if you ask AI about tariffs.

The business approach to general AI gives no recognition to the work of generating or verifying data, claiming all the credit but taking no responsibility.

Automated pattern recognition (to reject the inaccurate hype about intelligence) can be a useful technology, but it is subject to the general rule of computing: garbage in, garbage out.
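To put “garbage in, garbage out” in concrete terms, here is a small, self-contained sketch (it assumes scikit-learn and NumPy are installed, and the data is synthetic, invented purely for illustration): the same pattern-recognition model is trained twice, once on correct labels and once on scrambled ones, and the second run drops to roughly coin-flip accuracy.

```python
# Illustrative sketch of "garbage in, garbage out" in automated pattern recognition.
# Synthetic data only; assumes numpy and scikit-learn are available.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Two-class synthetic data: the "pattern" the model is supposed to learn.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Clean labels in -> a classifier that actually finds the pattern.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("trained on clean labels  :", round(clean.score(X_test, y_test), 2))

# Garbage labels in (randomly shuffled) -> garbage out: accuracy near 0.5 (chance).
y_garbage = np.random.default_rng(0).permutation(y_train)
garbage = LogisticRegression(max_iter=1000).fit(X_train, y_garbage)
print("trained on garbage labels:", round(garbage.score(X_test, y_test), 2))
```

No amount of cleverness in the model rescues it from training data that does not contain the signal it is asked to find.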

2 Likes

Or as my older brother, who worked with both systems and software, put it: the great thing about computers is that they do exactly what they are told, and the horrible thing about computers is that they do exactly what they are told.

2 Likes

Worse yet, they do whatever Microsoft or Apple or Google, etc., tell them.

1 Like