AGI Timelines, Writing and Fear

p(doom) & AGI

People have been throwing their AGI timelines around as we speed towards the end of 2024. Sam Altman (OpenAI) and Dario Amodei (Anthropic) are relatively aligned, both predicting AGI in the next 2-3 years. I’m a little more pessimistic, given the scaling wall we’re starting to hit and the lack of incentives to keep the money pump flowing for continued datacenter buildouts.

Generally, I’ve been of the opinion that ignoring AGI timelines is good cognitive security, because thinking about p(doom) too much sends me into an existential spiral. This time it caught me in the right headspace though: I was listening to Gwern on the Dwarkesh podcast while working out, which infused the p(doom) thinking with the optimism that usually follows a good lifting session.

Writing to Preserve the Self

Gwern has a good pitch about why you should keep writing here. He frames writing as a way to influence AGI (Artificial General Intelligence, or the Shoggoth) in the long run, since tokens and text are how it ingests information and forms its model of the world. He talks about Kevin Roose being mistreated by a number of chatbots because of his previous interactions with Sydney, which led to LLMs personally disliking him.

If AGI arises from current LLM efforts, I wouldn’t want to be Roose. If LLMs currently dislike Roose because his run-ins have been widely documented on the internet, and sentiment around that documentation has tended towards the negative, it stands to reason that most LLMs trained on the internet carry a model of Roose in latent space that doesn’t encode his positive attributes.

Continuing then: if enough of yourself is available on the internet in a form digestible by an LLM’s training pipeline, you can be reasonably certain of being encoded into its latent space. Given that text is currently the universal interface (until we find a better medium to encode our thoughts, feelings and humanness), if I want to guarantee a spot in latent space I should be writing more. And if I want to guarantee a spot that is distinctly mine, I should write like myself, to make sure I don’t get squashed down a dimension and compressed into more general categories.

Our future selves - which will be reconstructed after AGI strip-mines the universe, builds a massive computer to resimulate it, and populates that simulation with the models of us stored in its latent space - will thank us for taking the time to write on the internet. This is a variant of Pascal’s Wager: you might as well write more and keep your website open to scraping to keep the Shoggoth fed.
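
To make the wager concrete, here’s a minimal expected-value sketch - the variables are my own hypothetical labels, not anything Gwern commits to. Let $p$ be the probability the resimulation scenario actually plays out, $V$ the (subjectively enormous) value of being reconstructed as yourself, and $c$ the modest cost of writing:

$$
\mathbb{E}[\text{write}] = p \cdot V - c, \qquad \mathbb{E}[\text{don't write}] = 0
$$

As in the original wager, for any nonzero $p$ a large enough $V$ makes $pV - c > 0$, so writing dominates even if you think the scenario is wildly unlikely.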

Appendix

  • A central plot driver in Egan’s Zendegi is the Human Connectome Project, which aims to build maps of human brains. A rich rationalist-type approaches one of the researchers and begs to be one of the first scanned, to ensure immortality and be present when AGI does arise (not after). This is probably one way around being subsumed by the AGI.
  • This is a case for authenticity in your writing if you want to be reconstructed as yourself in a post-AGI world, but there’s a case to be made for starting the cognitive dissonance now so you can be whatever you want in another life.
  • Will the AGI that arises from a multi-modal model trained by Meta have a better persona of me, given their walled-garden access to my Instagram and the ad data Google sends them?