Have you seen this new statement? It was spearheaded by Russell Moore, President of the Ethics and Religious Liberty Commission of the Southern Baptist Convention, and signed by some 70 faith leaders – most of whom appear to be in Southern Baptist circles. But not all: BioLogos board member, Richard Mouw, also signed it. He just agreed to write an article for us, “Why I signed the new AI statement”. Hopefully we’ll get that next week.
In the meantime, what do you agree with, what do you disagree with in the statement?
I fully support the statement and haven't found an aspect I would disagree with. At a time when there is more confusion than clarity about what AI actually means and what it will or will not ever be able to do, I appreciate the church having an open mind and encouraging its members to engage with it.
It looks as though, while the list is dominated by Baptists — Russell Moore's main group — it is actually pretty varied, even including an Amazon executive among the signatories.
I would agree with it in general, though I might quibble over the inclusion of "the Fall" in the wording, as that has implications I don't fully agree with. But that's a minor point.
The big point of concern to me was Article 10, on the use of AI in war. I agree with its insistence that moral culpability remains with people, but the thought of autonomous killing machines roaming around is still a scary one.
I found it a very well thought out article, and they certainly give the impression that they know what they’re talking about. It’s good to see the Amazon executive’s signature on the list – it suggests that they probably had at least some input from subject matter experts (computer scientists or AI researchers). We need more high-quality articles like that in the Church: there’s far too much clueless dreck knocking around that just adopts a sensationalist ear-tickling Hal Lindsey/Jack van Impe kind of approach to these matters.
One point that I think they could have made that they didn’t is something along the lines of: “Silicon Valley needs to realise that just because they can, doesn’t mean to say that they should.” There’s a mentality in many tech circles – and AI is no exception – that “the world is full of fascinating problems waiting to be solved.” This very often leads to people working in fields such as this losing sight of the moral and ethical implications of their cool inventions. And other people suffer as a result.
I mostly agree with it, but I have some questions about two sections.
Article 4: Medicine
We deny that death and disease—effects of the Fall—can ultimately be eradicated apart from Jesus Christ. Utilitarian applications regarding healthcare distribution should not override the dignity of human life. Furthermore, we reject the materialist and consequentialist worldview that understands medical applications of AI as a means of improving, changing, or completing human beings.
I deny that death and disease are effects of the fall. The next two sentences desperately need unpacking. What are “utilitarian applications regarding healthcare distribution”? Likewise, what is “the materialist and consequentialist worldview,” and what does that have to do with “improving, changing, or completing human beings”? To me, the last sentence, in particular, is complete nonsense. Perhaps Dr. Mouw can explain what the commission intended to say, or what he understands by it.
Article 8: Data & Privacy
AI should not be employed in ways that distort truth through the use of generative applications.
What are “generative applications,” and how can they be used to distort truth?
I have ignored them, to be honest. My only fear is that those two sentences will prevent this paper from spreading more widely. I think you can circulate such a paper with a Christian tone without making such bold statements.
"Utilitarian" describes a focus on the benefit of society as a whole. I think in this context it calls for a focus on the health of the individual rather than on the gain society gets from investing in the healthcare of individuals. I think the AI advances in medicine mustn't be applied with such exclusivity.
I would say the first describes, for example, the false reduction of psychological problems to the biomedical field, while the "consequentialist" part is about the application of ethics in medicine, so that the end doesn't justify every means. It seems to me that in the last part the paper calls for caution about transhumanist movements.
I think it probably wants to remind us that there is no objective way for us to look at all individuals and have an exhaustive picture of them. The same or similar data collected about two individuals may look very much alike to an outsider, yet have a different meaning in context.
Thanks for taking a stab at it. Before I forget, if @Chris_Falter has time, I’d like to hear his opinion of the statement. I believe he has some relevant experience.
Anyway, in the middle of a statement written for the layperson, suddenly the authors break out the incomprehensible jargon. That would be fine if they unpacked it with some examples or translated it into ordinary language, but as it stands now, folks like me can only guess at the intended meaning.
Okay. But I can imagine an AI program that could show us how to spend our limited healthcare resources more efficiently, so that less money is wasted and more is spent where it is needed. It almost sounds to me like the statement would rule out such an application. Does it? I honestly can’t tell, judging from what they’ve written.
But we shouldn’t have to guess at their meaning. What if an AI program devises an exercise and diet regimen that requires half the time and effort with double the results? Wouldn’t that be “improving” and “changing” human beings? Similarly, what if AI is able to improve the formulas and manufacture of current drugs? It seems to me that every medical intervention “changes” or “improves” us. The sentence starts off so dense that it’s opaque, and it ends so vaguely that it’s practically meaningless. That’s what I meant when I said the sentence was “nonsense.”
That’s a pretty good guess. I wish we didn’t have to guess, though.
I don't think the author was addressing that particular issue, but rather the ethics of, or the focus on, the individual when the AI is in direct contact with the patient, so to speak. For example, in the field of neurology, AI is used to support the doctor in determining a patient's chance of waking up from a coma, and when.
Sure, and it certainly wouldn't fall under the issue I mentioned, because it probably wouldn't offend any particular ethics. If progress requires ignoring ethics, another way has to be found. And with regard to the fans of transhumanism, I think the author was primarily calling attention to whether and how much robotic enhancement of a human is desirable, and at what point it becomes a loss of human aspects.
Once again, not an ethics issue. And I fully expect that at companies like Bayer, that is already the case.
And now imagine me trying to interpret this at-times very technical language while being neither a native English speaker nor particularly familiar with the standpoints of the Baptists.
Thanks for thinking of me, Jay. I can't do more than casual comments for the next three weeks, as I have two major papers due for my graduate studies, plus the usual work grind. I am interested in this topic and hope to take it up as soon as feasible.