My p(eutopia)
Updated: 2/3/25
Credences
I believe there is a 25% chance that we create a eutopia (a “real” utopia) within the next 1-15 years. If we avoid extinction or disempowerment from AI and other existentially risky technologies, I think we will develop technology that radically improves our civilization. Our first eucatastrophe will likely be solving human flourishing extraordinarily quickly, which I define as reaching high levels on the following scales (see details):
- Holistic Flourishing Scale
- Total Well-being Adjusted Life Years
- Total Life Congruence
- Life Goal Value Attainment
- Life Goal Attainment
But measures of well-being will likely change dramatically as we start to “upgrade” our species. Roughly speaking, I expect a galaxy full of blissful life. A sublime post-humanity.
My rough timeline is:
- 2025: 5% cumulative
- 2026: 10% cumulative
- 2027: 25% cumulative
- 2028: 40% cumulative
- 2029: 50% cumulative
- 2030: 60% cumulative
- 2035: 75% cumulative
- 2040: 100% cumulative
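The timeline above can be sanity-checked by converting the cumulative credences into the probability mass assigned to each interval. This is a minimal sketch; the year/value pairs come from the list above, and the helper name is my own:

```python
# Cumulative credences from the timeline above (year -> probability).
cumulative = {
    2025: 0.05,
    2026: 0.10,
    2027: 0.25,
    2028: 0.40,
    2029: 0.50,
    2030: 0.60,
    2035: 0.75,
    2040: 1.00,
}

def marginal_mass(cum):
    """Probability mass added between consecutive milestone years."""
    prev = 0.0
    out = {}
    for year in sorted(cum):
        out[year] = round(cum[year] - prev, 10)
        prev = cum[year]
    return out

print(marginal_mass(cumulative))
```

For example, the 2030-2035 interval carries 0.15 of the total mass, and the increments sum to 1.0, matching the 2040 endpoint.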
My p(eutopia) by 2075 rises to 90% if we don’t go extinct, become permanently disempowered or voluntarily choose a less eutopian existence before then. The latter might include a semi-permanently constrained civilization to avoid extinction risks (e.g., an Amish-level of global technological development).
I’m more optimistic about our future if we immediately create effective international governance and enact a nearly universal global pause of frontier AI development. This amounts to developing civilizational-level wisdom. Immediately.
If you still can’t visualize the extreme upside we’re fighting for, try exploring the existential hope repository. I’ve yet to ever get compelling answers to why these futures aren’t plausible given a long enough timescale. Instead, I’ve gotten an enormous amount of denial, rationalization, and closed-minded thinking from people who nearly always fail the holistic understanding test.
Context
I’ve been making predictions for decades, though not rigorously. When I’ve written them down, I’ve been more right than wrong. But no one can predict the future extraordinarily well, least of all me.
For eutopia, in ~1998 I loosely predicted it for ~2030 (shortly after my prediction for the invention of artificial general intelligence) and have roughly held the same view since.
Since ~1998 I’ve known that AI would be humanity’s most important and transformational invention. I have been quietly mystified ever since as to why others didn’t also come to this conclusion. I actually spent tens of thousands of hours trying to teach others to see this and other important truths about reality more clearly. And even more time trying to train myself to be able to shape this future toward eutopia and away from dystopia. I mostly failed.
Change Your or My Mind
If you have a credible background and want to share your reasons for optimism or pessimism, I would love to speak with you. I will pay $1,234 USD to anyone who moves my credences up or down by >20%. Credibility usually means:
- Has evidence of understanding social and technological development (e.g., a PhD in a related field, a startup that accurately anticipated a major change, etc.)
- Has evidence that they have processed most of their subconscious defense mechanisms (e.g., denial, repression, rationalization, wishful thinking, etc.)
- Has evidence of making calibrated and accurate predictions over a 10+ year period
It’s harder for me to update from people who have not understood the near-inevitability of eutopia (or of extinction or disempowerment) and arranged their lives accordingly. See basic rationality tests most of us fail and how I evaluate expert trustworthiness.