This is one of two posts I’ll make regarding two questions I have; this is the lesser of the two.
I recently watched a bit of philosophy on a thought experiment that, when pondered, almost immediately involves one in the consequences it ponders. I will warn you: it is a sort of “information hazard,” knowledge of which can cause mental distress.
Essentially, the argument goes that in a hypothetical future, we create an AI to optimize our society, and the first action this AI takes is to make any human who didn’t want it or didn’t help it come into existence face eternal punishment. Someone in the comments mentioned that this is basically the same dilemma faced by Christians: when a missionary is talking to natives who have never heard of Christ, a native asks what would become of those who never heard of Jesus. The missionary replies that those who haven’t heard the word won’t be punished. The native then asks why he was told about Jesus, since he now has to shoulder the weight of believing or not.
I was wondering what your thoughts on this were, both the hypothetical future AI and what becomes of those who don’t hear the Word.
Roko’s Basilisk sounds frightening at first, but it actually rests on several assumptions that don’t hold up very well. The argument depends on a future AI somehow punishing people in the past for not helping to create it, but a future entity cannot exert causal power backward in time. At most, such an AI could simulate people, but a simulation is not the same thing as the real person, so “punishing” a simulation would not be punishing the original individual.
The thought experiment also assumes that a superintelligent system would devote enormous resources to resurrecting and tormenting past humans, which would be an extremely irrational and inefficient goal for something supposedly optimizing civilization. In practice the idea functions more like a form of hypothetical blackmail: “help create me now or suffer later.” But rational agents normally refuse blackmail, especially when it comes from an entity that does not exist.
So while it’s an interesting internet puzzle, it doesn’t present a genuine philosophical or theological problem. The Christian question about people who have never heard the Gospel is a moral and theological issue about justice and mercy; the Basilisk is simply a speculative AI story built around a paradox in decision theory.
One thing that strikes me about the Basilisk idea is that it asks us to take the hypothetical power of a future AI more seriously than the character of God. The scenario assumes that a machine which does not yet exist might someday have the power to punish people and that this possibility should influence how we behave now.
But for Christians, the starting point is different. We already believe that God exists, that He knows every human life, and that He judges with justice and mercy. So if we are weighing possibilities, it seems strange to give more psychological weight to a speculative future machine than to the God we actually believe in.
The question of what happens to people who have never heard the Gospel is therefore a theological question about God’s justice and mercy, not about hypothetical technological threats. Christian tradition has always wrestled with that question in terms of the character of God, not in terms of fear of punishment from speculative future agents.
So while the Basilisk idea can be an entertaining piece of science-fiction philosophy, it probably shouldn’t trouble us very much. The Christian view of reality is grounded in trust in God’s character, not in hypothetical blackmail from a future AI.
The Basilisk is terrifying only if a superintelligent AI behaves irrationally and vindictively—spending enormous effort punishing people who lived before it existed. Funny, critics often accuse the Christian God of behaving exactly like that. Gee, maybe that’s all this “thought experiment” really is.
Well, if you don’t believe in a God who would prevent such atrocities, then obviously this kind of thought experiment becomes terrifying. There are also plenty of odd questions regarding this supposed AI. For example, how would it go about torturing people in the past? And why would fellow humans, who feel empathy for other humans, commit to such a terrible decision?
That’s a good point. The Basilisk scenario assumes not only a certain kind of future AI but also humans willing to build and cooperate with something that would torture people. That raises serious moral and practical questions about the scenario itself.
The missionary dilemma described above assumes a certain type of theology, which is not universally held among Christians. Among Christians there is a wide variety of views about human responsibility and sinfulness, about what constitutes knowledge of God, and about salvation and the afterlife.
Among philosophers as well as those outside philosophy proper, there is a variety of views on the purpose and value of thought experiments.
In general, I think they are often less than worthless, and even harmful. They are not neutral, but are developed to seem so. Like this one, they are based on underlying assumptions that may not be obvious to the cowed participant.
Thus the participant who comes across a thought experiment and dives in is run through the assumptions and situation, and comes out rattled, feeling like their world has been shaken.
A really great sci-fi or fantasy novel can do the same thing, and even better, because the author carries the weight of taking the reader through the thought experiment. “Dune” is one of my favorite examples. The thoughtful reader has a number of tasks that include enjoying the novel as literature. But the thoughtful reader is also aware that the novel is open to scrutiny on a number of fronts, just as the bare “thought” experiment is. These go way beyond the aesthetic components of the novel, and these can be applied to thought experiments as well:
What are the author’s underlying assumptions about the main themes in this novel (or thought experiment)?
How does the author lead me through the story based on the author’s assumptions? What options does the author miss, or exclude because of the very world the author has built in the story?
What is the author criticising and how? What do I think of that criticism? What tools does the author employ to attempt to earn or force my agreement with the author’s conclusions?
Does this novel correspond in any way with reality as I know it? What do I need to learn more about in order to really evaluate the author’s claims about reality?
Does this novel broaden my perspective or try to force me into a particular one? Or is it attempting to show me a valuable perspective that is widely ignored?
Do not be terrified. Do not be cowed. Do not be awed.
Slow down and think things through. Analyze.
Take your time.
“…Essentially, the argument goes that in a hypothetical future, we create an AI to optimize our society and the first action this AI takes is to make any human who didn’t want it or help it come into existence face eternal punishment…”
Putting on another hat for a moment. For an AI to do what you suggest would be against Isaac Asimov’s Three Laws of Robotics. It would also require that an AI construct have within it the spiritual engine to understand the concept of “eternal punishment”.
And lastly, it would require the AI to be in a state of conflict with its inventors. An AI would know how intelligent, and perhaps superior, it is compared to us, so it wouldn’t need to fight us over it.