Perfection is not a straitjacket

I attack hard problems the same way that I assume everybody else does.

When I chose my major in college, I tried to imagine the most likely outcomes that would result from each choice, and then weighed them against each other on a basically linear scale of “total expected happiness”. It wasn’t really as formal as that — but I spent a lot of time thinking about what was most important to me in a job (time, money, risk, intellectual stimulation, social impact), and I tried to choose a major that would give me all those things, maximized in the right order of priority. Basically I made pro/con lists.

But these comparisons never seemed to reach a satisfying conclusion, and still haven’t. I love the degree that I chose, and really do believe it was the best fit overall — but when I have a dull day at work, I think longingly about international relations or journalism, and when money’s tight, I daydream about computer science or accounting. (Seriously.)

I assumed, like Dr. Chang did, that this anxiety was rooted in my own ignorance: maybe I’m bad at predicting which choice will give me the things I care about most, or maybe I care about the wrong things. I assumed it was, fundamentally, an information problem; that there was one correct choice, and if I could have all the ramifications of all the possible options laid out in front of me, it would be possible (if not trivial) to spot the “best” outcome, and take it.

This kind of thinking also colors my intuitions about God.

If all our decisions are analytically solvable, a perfect being never faces any decisions at all. God is always perfectly aware of the most correct course, and he always takes it. Perfection might allow him a few aesthetic liberties, but otherwise he’s straitjacketed.

And if our goal is to approach that perfection, then we’ll be straitjacketed too; only worse, because we’ll be identical and redundant: all responding to the same perfect information with the same perfect morality, and supplying the same optimal output. (Not to belabor this point, but it’s just one more nightmarish consequence of Plato’s insistence on philosophical tidiness, his need for everything to converge on a single point.)

Leaving aside religious questions, the idea that hard choices always have One Right Answer is a troubling one, because it implies that your “identity” is just your peculiar configuration of stupidity and moral brokenness — the unique way in which you fail to arrive at the optimal conclusion. And in the paradisiacal post-singularity future, when all computational problems become trivial, we’ll all be slaves to our perfectly defined utility functions.

But Chang rejects the idea that all hard choices are “solvable”.

In other words, there’s no way of breaking down the various results of each decision into “happiness points” and deciding which pile is bigger, because they involve measures of value that simply aren’t comparable. It’s not just computationally difficult to determine whether marrying one person or another is your optimal decision — it may well be impossible.

This doesn’t mean there are no sub-optimal choices you could make — there certainly are, both for moral decisions and pragmatic ones. But there may be multiple “best” choices, each of which maximizes a different set of values, none of which are comparable to each other. (For what it’s worth, this appears to be the Church’s official position, at least on the question of “soul mates”.)

Of course, utilitarianism (and by extension, rationalism) is built around the opposite assumption. If we can boil down all the possible consequences of a decision into quantifiable, comparable units, then there is an analytically “correct” choice — or at least, a choice that is the clear best fit to one’s preferences.
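To make the contrast concrete, here’s a toy sketch in code (the dimensions, weights, and numbers are all invented, not anything from Chang). The utilitarian move collapses every option into a single scalar, so a comparison always comes out; treating the values as incommensurable leaves only a partial order, which can legitimately answer “neither”:

```python
# Toy contrast (all values invented): utilitarian scoring vs. incommensurable values.
from typing import Optional

def scalar_utility(scores: dict[str, float], weights: dict[str, float]) -> float:
    """The utilitarian assumption: collapse every value into one number."""
    return sum(weights[k] * v for k, v in scores.items())

def pareto_compare(a: dict[str, float], b: dict[str, float]) -> Optional[str]:
    """The incommensurability view: compare dimension by dimension, and
    declare a winner only if one option concedes nothing on any dimension."""
    a_better = any(a[k] > b[k] for k in a)
    b_better = any(b[k] > a[k] for k in a)
    if a_better and not b_better:
        return "a"
    if b_better and not a_better:
        return "b"
    return None  # each wins on some dimension: no fact of the matter about "best"

journalism = {"money": 3.0, "meaning": 9.0}
accounting = {"money": 9.0, "meaning": 3.0}
weights = {"money": 1.0, "meaning": 1.0}

print(scalar_utility(journalism, weights))     # 12.0 -- a number always comes out
print(scalar_utility(accounting, weights))     # 12.0 -- comparable even when tied
print(pareto_compare(journalism, accounting))  # None -- genuinely incomparable
```

The point of the sketch is just that the second function has a return value the first one can’t express.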

Intuitively, it seems wrong (or maybe just distasteful) for there to be exactly one optimal response to all the decisions that make us who we are — but it’s a hard idea to disprove.

No matter how complex a problem is, it’s always possible that the answer could be “moar data” or a more sophisticated analysis. So far, every objection I’ve come up with could be answered with, “Well, that’s just one more thing to factor into your utility function”.

But when I go back to the thought experiment — imagining all the possible outcomes laid out before me — it seems like it really would be impossible to compare the different lives I might live. This seems especially obvious when I consider the granular details of spending my life with one person versus another, or raising one set of children versus another.

I simply wouldn’t trade my wife and daughter for any other family, no matter how ideally suited to my preferences. It’s a choice with an effectively infinite opportunity cost. But that doesn’t mean it was the only right answer. If I could see all those hypothetical forgone relationships with as much clarity — maybe a dozen different happy lifetimes of intimacies and old stories and shared struggle, with a dramatically different cast of characters — it seems like each of them would also become incomparable and irreplaceable to me.

There are certainly a lot of sub-optimal outcomes; but it’s also easy enough to imagine more than one happy outcome, and all of them different. (I know Tolstoy said all happy families are alike, but he was also a 19th-c. Russian novelist and therefore a huge buzzkill.)

This is possible partly because preferences are dynamic, and (at least to a certain extent) within our control.

In other words, I don’t just have the freedom to live in New York City or Omaha. I also have at least some capacity to deliberately fall in love with my choice — to make it right for me (or more accurately, make myself right for it.)

Likewise, as long as you find the right sort of person (one of the many “right” people for you), you can have that incomparable, irreplaceable happiness — though you are making a decision about who you will become in order to achieve it. And I’ve never met a couple so perfect for each other that their love wasn’t at least partly a conscious choice.

To some extent, these preference changes happen whether you want them or not. If I had majored in journalism or computer science, I would have been constantly surrounded by other wannabe journalists/programmers, and I would have chosen mentors and role models in “my” industry, and my definition of success and what’s cool and what matters would soak in from a completely different social ecosystem. The effect would be even stronger if I chose to nurse and encourage it.

In short, I could work hard to become the sort of person for whom journalism/computer science is the right choice. That doesn’t mean it would work out, but the point is that your choices can change your preferences — so that an “optimal” career (or spouse, or community, or whatever) is one that is close enough for you to manually nudge your preferences into harmony with it. And people who master that skill enjoy a much larger scope of possibilities.
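To put that in toy-model terms (my numbers, nothing rigorous): imagine a preference profile that drifts a little toward the chosen life every year.

```python
# Toy model of choices reshaping preferences (all numbers invented).
# Each year the preference profile drifts a fraction of the way toward
# the life actually chosen -- the "nurse and encourage it" effect.
chosen_life = {"prestige": 0.3, "stability": 0.9, "novelty": 0.4}
preferences = {"prestige": 0.8, "stability": 0.4, "novelty": 0.7}  # at age 22

ADAPTATION_RATE = 0.25  # how deliberately you cultivate the shift

for year in range(10):
    preferences = {k: p + ADAPTATION_RATE * (chosen_life[k] - p)
                   for k, p in preferences.items()}

print(preferences)
# After a decade the profile sits nearly on top of the chosen life's:
# a choice that was merely "close enough" has become the right one.
```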

Needless to say, if we can liberate the idea of perfection from the idea of uniformity, that’s a pretty big deal.

If nothing else, it can make life a little less terrifying for folks like me who are constantly re-litigating their choices, always wondering if they did the exact right thing, or did enough. It makes happiness an ongoing choice, rather than a train you might miss; and it helps explain why greater freedom to make the “right” choice doesn’t seem to make people happier.

It also resolves a few theological paradoxes: for example, it might explain why God gave Adam what seems like conflicting commandments — he could either keep the commandment to stay away from the fruit, or he could keep the commandment to cleave unto his wife. It wasn’t a trap, or a test for him to pass or fail — it was a choice. (And it became the right choice after the fact.)

For that matter, it explains why God would be interested in free will and consciousness in the first place (since it’s more than just “the ability to get the wrong answer”). It explains why he would aspire to create “Saints; Gods; things like himself” — since those Saints and Gods will be more than just an army of redundant, identical, fully optimized utility zombies.

More importantly, it makes theosis, or heaven, or singularity, or whatever eschaton you’re anticipating, seem like something worth having. Instead of merging with the undifferentiated divine (extinguishing choice and consciousness and identity), or becoming wirehead gods on lotus thrones, perfection looks more like infinite diversity, in infinite combinations.


How God handles AI risk

[Image: Ultron]

[Content warning: Wildly speculative. If you find that sort of thing uninteresting/distasteful, turn back now, etc.]

“AI Risk” is a trendy concern in the rationalist community. The mellow version of this idea is that the odds of creating a self-improving (“runaway”) AI in the next 100 years might be low, but with exponential growth it’s hard to tell, and anyway it’s so dangerous that even a small likelihood of creating a hostile alien god is worth worrying about. (I find this argument pretty persuasive.)

But, to their credit, many AI theorists have asked, “If it’s inevitable, why don’t we see this happening anywhere else in the universe?”

The universe has been around a while – there’s no reason to expect that we’d be the very first species to reach this point. And yet there are no stars winking out as they become encased in Dyson spheres (breathless headlines notwithstanding), no implacable world-eating swarms of nanorobots — in fact, no indication of intelligent activity of any kind. It’s weird enough that we haven’t seen any garden-variety aliens — but the idea that there are unrestrained, godlike superintelligences out there, making no discernible ruckus whatsoever, is even harder to believe. So what gives?

One theory is that it has happened before, and we’re living in a “pocket reality” created by a transcendent intelligence.

The justification for this view is pretty interesting:

First, assume that it’s at least possible for such an intelligence to develop (i.e. that there’s no iron law of the universe that prevents a computational singularity from occurring).

Second, suppose that we’re not the first species to get this far – meaning there’s at least one such superintelligence in the universe, capable of simulating (or generating) a reality that would be indistinguishable from the real thing to humans. And third, assume that superintelligences would be interested in that sort of thing (simulation/creation).

If those assumptions hold, we’ve only got a 50% probability of living in the “real” universe, rather than a pocket universe under the stewardship of a transcendent intelligence – and if you’re willing to allow for one such simulation, why not two, or twenty, or a thousand (meaning your odds of being born in the “real” universe are 33%, 5%, and 0.1%, respectively)? The only limitations would be the computational power of all the matter in the (real) universe, and probably the speed of light.
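The arithmetic here is just uniform credence: one “real” universe plus n simulations means odds of 1/(n+1) that you were born in the real one. A quick sketch (the equal-credence assumption isn’t stated above, but it’s the standard way this argument gets run):

```python
# Naive version of the simulation odds: one real universe plus n simulated
# ones, with equal credence assigned to being born into any one of them.
def odds_of_being_real(n_simulations: int) -> float:
    return 1 / (1 + n_simulations)

for n in (1, 2, 20, 1000):
    print(f"{n:>4} simulation(s): P(real) = {odds_of_being_real(n):.1%}")
# -> 50.0%, 33.3%, 4.8%, 0.1% -- the figures quoted above, after rounding
```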

So you wind up with better-than-even odds that the observed universe is the creation of a transcendent intelligence, from a handful of reasonable assumptions – all you have to believe is that a singularity is possible, that humanity is not the first to achieve it, and that the resulting entities would be “creators”. (Of the three, only the third requires much justification – the first two are almost antipredictions.)

By far the weirdest thing about this theory is that it gets atheist computer science geeks in the Bay Area talking about God with a straight face. Granted, it’s often in the context of “can we kill it before it kills us”, but still. The idea of God (or gods, or at the very least Q) has been reconstituted into secular jargon and put back on the list of Serious Topics for Serious People — which is, if nothing else, kind of funny. Having said that:

I’m not a fan of logical “proofs” of God’s existence, and this isn’t meant to be one.

If your belief in something requires a logical argument more complex than, “Look, here it is, QED”, you’ve basically admitted that you have no personal experience with that thing; or, if you have had such experience, you’d rather play word games than talk about it. Either way, it feels insincere – make the real case, the case you believe in, or don’t.

This argument doesn’t inevitably point to the LDS concept of God (anthropomorphic, benevolent, responsive-to-humans, etc.), let alone the particulars of our doctrine. The creator in this model could just as easily be a Deist “watchmaker god” with no interest in humans whatsoever, or a completely alien creator whose goals defy human comprehension. Or maybe the assumptions don’t hold, and it’s all bunk.

I’m also not saying that God is the result of a technological singularity, or that Kolob is made of computronium (though, if that turned out to be true, I’m sure I could get over it.) I’m just saying that, if the above assumptions are basically valid, it might explain a few things.

First, it provides a way to reconcile God’s eternal pre-existence with the doctrine of eternal progression.

Of all our disputes with creedal Christianity, the idea of a “transcendent” God is probably the most serious. From their perspective, anything less than Plato’s eternal, all-encompassing One is an idol (which is why Jesus’ divinity and relationship to the Father gave them such fits.) And, to be fair, the scripture makes it pretty clear that God exists outside of time, with no beginning and no end, the eternal and unchangeable Creator of all things. But Mormons also believe in eternal progression (a God who became what He is), and that does seem to conflict with the picture of God as transcendent.

But if you imagine this universe as a simulation (a “creation of the mind of God”), or an inflationary pocket with its own spacetime, the contradiction disappears. From our frame of reference, God does exist outside time and space, He is infinite and unchangeable, and He is the Creator of all things. This view of Mormon doctrine still doesn’t play nice with Plato, but it at least plays nice with itself.

Second, it resolves the “Fermi Paradox” analogue raised by both models.

If you believe a singularity is possible, you have to deal with the fact that we see no evidence of it, anywhere, in an inconceivably vast cosmos. If you believe there’s a loving God who intervenes in human affairs, you have to deal with the fact that those interventions, if they exist, are so subtle as to make His existence a matter of debate.

In both cases, the evidence you expect to see depends heavily on what goals you expect such a being to pursue. For example, if God’s endgame in creating humanity was to have people to praise him in eternal ecstasy, He could have just created us that way – He would have no reason not to make His existence empirically obvious. (You can say, “God works in mysterious ways”, but that’s not an argument so much as a thought-terminator.) So that’s probably not what God is about.

Likewise, if there were an omnipotent AI whose mission was to consume all matter in the universe and turn it into paperclips, it seems like we would have seen something like that by now. It would be catastrophically disruptive, and a being like that would have no reason to conceal itself from us. The set of “stuff a runaway AI might be interested in” is pretty big, and space is also pretty big, so it seems like there should be at least a handful of them out there, making space look weird somehow.

The Mormon answer to these contradictions is that God is interested in the creation of independent agents.

This is intuitive to me — if your knowledge and power were effectively infinite, it seems like pushing matter around would get boring pretty fast. What else would you be interested in?

If God’s goal is to create beings that act for themselves — that are destined to become like Him — it makes sense that we don’t see any flashing neon signs pointing the way. If we were continually aware of a benign omnipotence, waiting in the wings to give eternal joy to the obedient and eternal damnation to the dissenters, we would obey — but it would be as reflexive as sneezing, and with as much moral significance. For God, it’s just a slightly more sophisticated way of pushing matter around.

Atheists often belittle religious people for grounding morality in fear of divine punishment (i.e. “you shouldn’t need a Magic Sky Fairy™ to tell you right from wrong”), but in fact, our lives seem carefully designed to remove that problem: there’s always enough evidence to make room for reasonable belief, but never enough to reduce obedience to operant conditioning. We can be constrained by the facts on most other matters, but for normative/moral issues, there’s always a choice.

And the concept of “AI risk” also illustrates why exaltation is a moral challenge rather than a technical one.

The driving anxiety behind the “AI risk” movement is, “How can we ensure that an exponentially expanding intelligence will share human values?”

But if there was already a transcendent Omnipotence shepherding humanity, maybe the question would be reversed: “How can I ensure that these embryonic Gods share my values?”


Pattern-matching and Mormonism

[Image: sample output of Google’s Deep Dream algorithm]

Above is an example of Google’s “Deep Dream” pattern-matching algorithm.

They trained it to recognize dogs and pagodas (among other things), and no matter where you tell it to look, it will damn well find you some dogs and pagodas. Which makes it a pretty handy metaphor for delusion.

Scott Alexander suggests that dreams, drugs, religion, and schizophrenia all involve essentially the same “failure mode”.

Human brains are hard-wired to look for patterns, too, and we get big neurochemical rewards when we find (or create) one. But while we’re dreaming, or on DMT, or schizophrenic, or religious (the theory goes), this pattern-matching ability is completely disinhibited — so that patterns seem to emerge from meaningless noise. If you’re asleep, random brain activity gets stitched into a (bizarre) narrative. If you’re a paranoid schizophrenic, innocuous social interactions become loaded with secret signals, and interpreted through the lens of a vast conspiracy. People on hallucinogens find the underlying fractal architecture of the universe in the spackling on the ceiling. And if you’re religious, you see the Blessed Virgin in your toast.

That’s a flippant summary, of course, but there’s got to be at least some truth to it. Powerful religious experiences usually do involve a deep sense of clarity and connection and meaning that is, like a dream, very difficult to recapture after the fact. You can kind of remember how it felt, but the memory feels hollow somehow — like you forgot the central Truth that tied it all together. All of which are reasonable things to expect, if your pattern-matching algorithm was going temporarily bananas.

Psychologists would say that your conscious self has the more accurate interpretation of the dream. You haven’t forgotten some important fact that tied it all together — it never was tied together. One assumes that the same is true of profound religious experiences — the way you feel about them after you’ve “come down from the mountain” is what really happened, and that fading sense of contact with the divine is just your mental “immune system” mopping up erroneous patterns now that it’s functioning properly.

This model is pretty internally consistent — but it’s consistent because it’s recursive.

Once you’ve settled on a baseline “reality” (in this case, a mechanistic, materialistic, random universe), anything that calls that model into question can be dismissed as runaway pattern-matching. Almost no experience, no matter how powerful or vivid or “real”, could possibly falsify this model (unless God or your DMT spirit animal were deliberately trying to falsify it.)

It ignores the fact that conscious human minds also impose patterns on their observations, and that those patterns are not more obviously correct — they’re just harder to get away from. If you’re raised in a secular materialist society, your default pattern will be “random meaningless static”, and your waking mind will impose that pattern on all your experiences, whether it’s an obvious fit or not (just like the Peruvian peasant imposes Jesus on her breakfast.)

That doesn’t mean that it’s a bad pattern, necessarily; if you think the risk of a “false positive” (imposing meaning on a meaningless experience) is much more serious than the risk of a “false negative” (incorrectly discarding a meaningful experience), then a strong memetic defense against weird beliefs is probably a good thing. As a Mormon, I come down pretty clearly on one side of that question, but I can understand the appeal.
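In signal-detection terms, what’s being tuned here is a decision threshold. A toy sketch (the distributions and numbers are invented) of how a “disinhibited” detector trades false negatives for false positives:

```python
# Toy signal-detection sketch (all parameters invented): a pattern detector
# "sees" a pattern whenever noisy evidence crosses a threshold.
import random

random.seed(0)
noise  = [random.gauss(0.0, 1.0) for _ in range(10_000)]  # meaningless static
signal = [random.gauss(2.0, 1.0) for _ in range(10_000)]  # a real pattern present

def rates(threshold: float) -> tuple[float, float]:
    false_pos = sum(x > threshold for x in noise) / len(noise)
    false_neg = sum(x <= threshold for x in signal) / len(signal)
    return false_pos, false_neg

for t in (2.5, 1.0, -0.5):
    fp, fn = rates(t)
    print(f"threshold {t:+.1f}: false positives {fp:5.1%}, false negatives {fn:5.1%}")
# A high threshold almost never finds faces in the toast, but discards real
# signals; a low (disinhibited) threshold misses nothing -- and finds meaning
# blooming out of pure static.
```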

So what is happening to the brain when we have spiritual experiences?

I don’t think we can dismiss the role of hyperactive pattern-matching out of hand. Clearly something strange is happening in our brains when we feel that surge of intelligence and clarity — there must be some sensitivity that is getting cranked up to eleven. Some have speculated that Joseph Smith’s seer stone worked by inducing this state of heightened sensitivity (essentially serving as a focus for meditation) and allowing him to receive revelations that he could not receive in a conscious state. I’m not convinced that that’s the whole story, but it probably helped.

The fact that these experiences feel a bit like dreaming is neither problematic nor surprising — we’re not obligated to accept the assumption that dreaming is wholly internal and random.

The fact that they have so much in common with hallucinogenic drug experiences, on the other hand, is a bit of a puzzle.

The Holy Spirit’s influence is both infallible and unmistakable — that’s what makes it a safe heuristic for testimony. If we’ve discovered a drug that can “fake” the experience of revelation, we’re in trouble. Admittedly, because I haven’t taken these drugs, I can’t say how analogous the experiences really are — but from a Mormon perspective, the weirdest thing about these experiences is how doctrinally sound they seem to be (despite being obtained in a seriously doctrinally-unsound way). It isn’t just that people think they’re hearing the voice of God — it’s that the voice seems to be saying basically true, good, wholesome, Mormony things. These guys aren’t being told to sacrifice their pets, or hold orgiastic black masses, or even vote Democrat. So, I dunno — it’s a pretty weird case, but if truth is being taught, the Spirit bears witness, right?

I don’t have any idea what to do with that information. It’s not impossible to imagine receiving true revelation from an iffy source (cf. the witch of En-dor); and it seems likely that these drugs would make you more sensitive to both friendly and hostile influences, or allow you to receive things you’re not really ready for. There’s probably a good reason that these states of consciousness require some discipline and spiritual maturity to come by “honestly”.
