[Content warning: Wildly speculative. If you find that sort of thing uninteresting/distasteful, turn back now, etc.]
“AI Risk” is a trendy concern in the rationalist community. The mellow version of this idea is that the odds of creating a self-improving (“runaway”) AI in the next 100 years might be low, but with exponential growth it’s hard to tell, and anyway it’s so dangerous that even a small likelihood of creating a hostile alien god is worth worrying about. (I find this argument pretty persuasive.)
But, to their credit, many AI theorists have asked, “If it’s inevitable, why don’t we see this happening anywhere else in the universe?”
The universe has been around a while – there’s no reason to expect that we’d be the very first species to reach this point. And yet there are no stars winking out as they become encased in Dyson spheres (breathless headlines notwithstanding), no implacable world-eating swarms of nanorobots — in fact, no indication of intelligent activity of any kind. It’s weird enough that we haven’t seen any garden-variety aliens — but the idea that there are unrestrained, godlike superintelligences out there, making no discernible ruckus whatsoever, is even harder to believe. So what gives?
One theory is that it has happened before, and we’re living in a “pocket reality” created by a transcendent intelligence.
The justification for this view is pretty interesting:
First, assume that it’s at least possible for such an intelligence to develop (i.e. that there’s no iron law of the universe that prevents a computational singularity from occurring).
Second, suppose that we’re not the first species to get this far – meaning there’s at least one such superintelligence in the universe, capable of simulating (or generating) a reality that would be indistinguishable from the real thing to humans. And third, assume that superintelligences would be interested in that sort of thing (simulation/creation).
If those assumptions hold, we’ve only got a 50% probability of living in the “real” universe, rather than a pocket universe under the stewardship of a transcendent intelligence – and if you’re willing to allow for one such simulation, why not two, or twenty, or a thousand (meaning your odds of being born in the “real” universe are 33%, 5%, and 0.1%, respectively)? The only limitations would be the computational power of all the matter in the (real) universe, and probably the speed of light.
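The percentages above fall out of a uniform prior over candidate universes: with N indistinguishable simulations plus one base reality, your odds of being in the base reality are 1/(N+1). A minimal sketch (the function name is mine):

```python
def odds_of_base_reality(num_simulations: int) -> float:
    """With N simulated universes alongside the one 'real' one,
    and no way to tell which you're in, a uniform prior puts your
    odds of being in the real one at 1 / (N + 1)."""
    return 1 / (num_simulations + 1)

for n in (1, 2, 20, 1000):
    print(f"{n} simulation(s) -> {odds_of_base_reality(n):.1%}")
    # 1 -> 50.0%, 2 -> 33.3%, 20 -> 4.8%, 1000 -> 0.1%
```

(The 20-simulation case is closer to 4.8% than the 5% quoted above, but the point stands.)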
So you wind up with better-than-even odds that the observed universe is the creation of a transcendent intelligence, from a handful of reasonable assumptions – all you have to believe is that a singularity is possible, that humanity is not the first to achieve it, and that the resulting entities would be “creators”. (Of the three, only the last requires much justification – the first two are almost antipredictions.)
By far the weirdest thing about this theory is that it gets atheist computer science geeks in the Bay Area talking about God with a straight face. Granted, it’s often in the context of “can we kill it before it kills us”, but still. The idea of God (or gods, or at the very least Q) has been reconstituted into secular jargon and put back on the list of Serious Topics for Serious People — which is, if nothing else, kind of funny. Having said that:
I’m not a fan of logical “proofs” of God’s existence, and this isn’t meant to be one.
If your belief in something requires a logical argument more complex than, “Look, here it is, QED”, you’ve basically admitted that you have no personal experience with that thing; or, if you have had such experience, you’d rather play word games than talk about it. Either way, it feels insincere – make the real case, the case you believe in, or don’t.
This argument doesn’t inevitably point to the LDS concept of God (anthropomorphic, benevolent, responsive-to-humans, etc.), let alone the particulars of our doctrine. The creator in this model could just as easily be a Deist “watchmaker god” with no interest in humans whatsoever, or a completely alien creator whose goals defy human comprehension. Or maybe the assumptions don’t hold, and it’s all bunk.
I’m also not saying that God is the result of a technological singularity, or that Kolob is made of computronium (though, if that turned out to be true, I’m sure I could get over it.) I’m just saying that, if the above assumptions are basically valid, it might explain a few things.
First, it provides a way to reconcile God’s eternal pre-existence with the doctrine of eternal progression.
Of all our disputes with creedal Christianity, the idea of a “transcendent” God is probably the most serious. From their perspective, anything less than Plato’s eternal, all-encompassing One is an idol (which is why Jesus’ divinity and relationship to the Father gave them such fits.) And, to be fair, the scripture makes it pretty clear that God exists outside of time, with no beginning and no end. But Mormons also believe in eternal progression: that God attained His exalted state through a process of growth, and that we may follow the same path. That does seem to conflict with the picture of God as transcendent.
But if you imagine this universe as a simulation (a “creation of the mind of God”), or an inflationary pocket with its own spacetime, the contradiction disappears. From our frame of reference, God does exist outside time and space, He is infinite and unchangeable, and He is the Creator of all things. This view of Mormon doctrine still doesn’t play nice with Plato, but it at least plays nice with itself.
Second, it resolves the “Fermi Paradox” analogue raised by both models.
If you believe a singularity is possible, you have to deal with the fact that we see no evidence of it, anywhere, in an inconceivably vast cosmos. If you believe there’s a loving God who intervenes in human affairs, you have to deal with the fact that those interventions, if they exist, are so subtle as to make His existence a matter of debate.
In both cases, the evidence you expect to see depends heavily on what goals you expect such a being to pursue. For example, if God’s endgame in creating humanity was to have people to praise him in eternal ecstasy, He could have just created us that way – He would have no reason not to make His existence empirically obvious. (You can say, “God works in mysterious ways”, but that’s not an argument so much as a thought-terminator.) So that’s probably not what God is about.
Likewise, if there were an omnipotent AI whose mission was to consume all matter in the universe and turn it into paperclips, it seems like we would have seen something like that by now. It would be catastrophically disruptive, and a being like that would have no reason to conceal itself from us. The set of “stuff a runaway AI might be interested in” is pretty big, and space is also pretty big, so it seems like there should be at least a handful of them out there, making space look weird somehow.
The Mormon answer to these contradictions is that God is interested in the creation of independent agents.
This is intuitive to me — if your knowledge and power were effectively infinite, it seems like pushing matter around would get boring pretty fast. What else would you be interested in?
If God’s goal is to create beings that act for themselves — that are destined to become like Him — it makes sense that we don’t see any flashing neon signs pointing the way. If we were continually aware of a benign omnipotence, waiting in the wings to give eternal joy to the obedient and eternal damnation to the dissenters, we would obey — but it would be as reflexive as sneezing, and with as much moral significance. For God, it’s just a slightly more sophisticated way of pushing matter around.
Atheists often belittle religious people for grounding morality in fear of divine punishment (i.e. “you shouldn’t need a Magic Sky Fairy™ to tell you right from wrong”), but in fact, our lives seem carefully designed to remove that problem: there’s always enough evidence to make room for reasonable belief, but never enough to reduce obedience to operant conditioning. We can be constrained by the facts on most other matters, but for normative/moral issues, there’s always a choice.
And the concept of “AI risk” also illustrates why exaltation is a moral challenge rather than a technical one.
The driving anxiety behind the “AI risk” movement is, “How can we ensure that an exponentially expanding intelligence will share human values?”
But if there were already a transcendent Omnipotence shepherding humanity, maybe the question would be reversed: “How can I ensure that these embryonic Gods share my values?”