things should get weirder (2026)
Epistemic Status: You probably should not believe any of this. It is just for fun. It assumes many-worlds, the strong anthropic principle, and AI x-risk.
As time goes on, we should expect things to get weirder.
It’s possible that, through the development of AI, we accidentally invent something that causes human extinction (i.e., we draw “a black ball” from the urn of possible inventions).
Additionally, since we only observe worlds in which we don’t go extinct, we will never observe a world where AI kills us. That is, our observations are always subject to survivorship bias.
But can we say something a bit stronger?
As time goes on and we continue inventing technologies that cause extinction in parallel universes, we eventually find ourselves in more and more improbable worlds: worlds where you go “huh, it’s odd how that worked out,” or “wow, if this technology hadn’t had this one inherent quality, we would probably be dead!”
Why exactly has nobody invented a recursively improving seed AI in LISP? Why did Eurisko not foom?
But survivorship bias dissolves these puzzles. In many branches, those inventions probably did cause extinction. And there were likely loads of other inventions that human civilizations developed and that killed them. We just weren’t there to experience them.
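The survivorship-bias argument can be sketched numerically. Here is a minimal, purely illustrative toy model (the numbers are made up, not drawn from the post): if each risky invention wipes out some fraction of branches, the measure of worlds that still contain observers shrinks geometrically, so surviving observers find themselves in ever less probable histories.

```python
# Toy model of anthropic survivorship under many-worlds.
# Assumption (hypothetical, for illustration only): each risky invention
# independently causes extinction in a fraction q of branches.

def surviving_measure(n_inventions: int, q: float = 0.5) -> float:
    """Fraction of the original branch measure still containing observers
    after n_inventions risky inventions, each killing a fraction q of
    branches. Observers can only ever find themselves inside this sliver."""
    return (1 - q) ** n_inventions

# The sliver observers inhabit shrinks geometrically with each invention.
for n in (1, 10, 50):
    print(n, surviving_measure(n))
```

From the inside, every surviving branch looks like an unbroken streak of luck, however small its measure.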
And eventually the strangeness of these events compounds, leaving us in ever more improbable, ever weirder worlds.
Until, of course, there are no remaining worlds for us to observe.