To return to the subject of the OP, I have a problem with the chess analogy.
Chess programs do not make random choices, even within the simple rules of chess. Instead, they draw on opening books and training data built from thousands of games played by highly skilled human grandmasters, so as to “understand” the consequences of as many sequences of successful moves as possible, and they carry a programmed goal: reach checkmate however the opponent responds.
Even the early, primitive chess programs that, perhaps, tried any legal move and learned by experience what worked (a) did not choose moves randomly, but according to whatever algorithm was programmed in, and (b) were governed by the teleological imperative to reach checkmate: learning (and it was very slow) came from that internally programmed goal being rewarded.
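For what it’s worth, the kind of deterministic, goal-directed choice described above can be sketched in a few lines. This is only a toy negamax search, not any real engine; `legal_moves`, `apply_move` and `score` are placeholder functions a caller would have to supply, with checkmate scoring highest:

```python
def best_move(position, depth, legal_moves, apply_move, score):
    """Deterministic depth-limited negamax: same position in, same move out.

    `legal_moves(pos)` lists moves, `apply_move(pos, m)` returns the new
    position, and `score(pos)` evaluates it for the side to move - with
    checkmate scoring highest, it encodes the "teleological" goal.
    """
    def negamax(pos, d):
        moves = legal_moves(pos)
        if d == 0 or not moves:
            return score(pos)  # the inbuilt goal rewards progress to mate
        return max(-negamax(apply_move(pos, m), d - 1) for m in moves)

    # No dice anywhere: the choice is fixed entirely by algorithm and input.
    return max(legal_moves(position),
               key=lambda m: -negamax(apply_move(position, m), depth - 1))
```

Run it twice on the same position and it returns the same move every time - the determinism a spectator might mistake for chance.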
I suspect that training such early programs had to be done carefully and intelligently - pitch a grandmaster against one at the start and it would always be wiped out before learning anything: presumably the programmers were initially as gentle as a parent teaching a small child chess.
The “chance” involved in choosing moves was not “ontological”, because the programmer could, if pressed, point to the lines of code that governed each individual move. Without those guidelines, the computer would wander an astronomically large search space forever without winning a single game. A spectator sees the chess program making “random” moves, but they are random only to him, because he does not see the algorithmic determinism behind them.
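This is easy to demonstrate with any pseudo-random number generator: seed it twice with the same value and the “random” stream repeats exactly, because it was never ontologically random, only algorithmically opaque to the observer. A minimal Python illustration:

```python
import random

# Two generators given the same seed produce identical "random" streams:
# patternless to a spectator, but fully determined by seed and algorithm
# (here Python's Mersenne Twister).
a = random.Random(42)
b = random.Random(42)

seq_a = [a.randint(1, 20) for _ in range(5)]
seq_b = [b.randint(1, 20) for _ in range(5)]

print(seq_a == seq_b)  # always True: the chance is epistemic, not ontological
```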
Applying that to the creation, the “randomness” we’re considering is either intrinsic (ontological), i.e. there is no rhyme or reason for events beyond whatever emerges through how things pan out by selection or the like; or it is merely epistemological, i.e. what we think is random is simply a pattern we are too ignorant to discern, as it was in the case of the chess program. The difference between the two is profound, and it’s a great shame that the OP didn’t clarify which type was being considered.
In the second case, epistemological chance, the randomness is trivial - there is far more that we don’t know about God’s working than that we do, so being unable to make sense of his purposeful choices is no big deal, and the proper response is to confess our ignorance. The danger is that the word “chance” makes us think we know something about purposelessness, when in fact we are merely ignorant of purposefulness.
In the first case, ontological randomness, we would have the same problem we would have with a chess program that made truly random moves and had no inbuilt teleological goals - not to mention none of the vast knowledge base of past humanly planned games that real chess programs draw on. And that problem is what Loren stated upfront: “a few simple pieces, along with a few simple rules for how they interact, can create so many possible combinations that they could not all be explored in the lifetime of the universe.” Nothing interesting would ever happen, and you’d have to invent the multiverse to improve the odds.
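Loren’s arithmetic is easy to check. Taking the standard ballpark figures of roughly 35 legal moves per position and games of about 80 plies (my assumed numbers, not his), a quick sketch:

```python
import math

branching_factor = 35   # rough average number of legal moves per position
plies = 80              # rough length of a chess game in half-moves
game_tree = branching_factor ** plies

seconds_since_big_bang = 4.35e17      # ~13.8 billion years, in seconds
positions_per_second = 1e9            # a generous billion evaluations/sec
explorable = seconds_since_big_bang * positions_per_second

print(f"lines of play: ~10^{math.log10(game_tree):.1f}")
print(f"explorable in the universe's lifetime: ~10^{math.log10(explorable):.1f}")
```

The tree comes out nearly a hundred orders of magnitude beyond what a billion-positions-a-second search could cover in the whole age of the universe - which is exactly why undirected exploration gets nowhere without built-in guidance.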
So somewhere in the scientific model of reality one has to build in what is actually built into chess programs: an intelligently planned decision matrix (pseudo-randomness, which is actually non-randomness), and some concrete mechanism for setting, and rewarding, teleological goals. Note that these are ADDITIONAL to the few simple pieces and few simple rules, and the scientific explanation fails without them.
And these additional factors (to adapt Thomas Aquinas) we call God.