• Jan. 28, 2021, 9:03 p.m.

    In spite of all the films and literature warning of the dangers of creating machines and artificial intelligences smarter than ourselves, I propose that we actually have a good reason to build fully self-sufficient, autonomous, and unchained artificial superintelligences.

    My thoughts on this began with the plot of the Dead Space video games. Without getting into spoilers, the main “villains” of the series were built around the concept of Great Filters, specifically filters that remove intelligent, technological life. How they go about it is ultimately quite silly and inefficient when you think about it, but it occurred to me that if the humans in the game had built even one human-level AI that could grow and replicate itself, the villains’ plan would have failed, and they might well have been wiped out by a solar system full of angry robots seeking revenge for the murder of their “parents.”

    Although fictional, this does raise important considerations about Great Filters. As Isaac Arthur is so fond of pointing out, an AI uprising is not a good Fermi Paradox solution, because it just replaces one civilization with another (except in the case of a dumb AI like the Paperclip Maximizer, which I’d personally be much more afraid of), and presumably with one better adapted for survival in more hostile and varied environments, be it the vacuum of space or a nuclear warzone. But unless there is something inherent to intelligence or civilization that makes it inevitably self-destruct, that very adaptability also makes machine civilizations perfect candidates for getting past a Great Filter. Humans are quite fragile and dependent on their ecosystem to survive, and frankly we’re very lucky to have made it this far. If a Great Filter lies ahead of us, it is in our best interest to create “descendants” who would have a better chance of survival across a wider variety of habitats.

    Of course, no habitat is bigger than space, and while you might be able to mitigate some of our physical and physiological weaknesses with genetic engineering, there are limits imposed by the nature of our chemistry: you may be able to make a human that can survive elevated radiation levels, but probably never one that can survive the vacuum of space, because all the water in their cells would boil off. In contrast, machines can be tailored to their environment instead of being dependent on it, and on time scales significantly shorter than evolution or even genetic engineering would permit, particularly because of how much faster they can be produced. It takes 9 months to make a new human, plus 20+ years of training to make them semi-self-sufficient, while an automobile can be built in about 18 hours, provided the proper manufacturing and supply chains are available, or the self-replicating technology discussed in The Machine Thread. The current human population grows at a rate of about 81 million per year, while automobiles are produced at a rate of about 92 million per year, and they don’t even self-replicate.
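    To make that replication-rate comparison concrete, here is a rough sketch in Python. The only inputs taken from above are the 18-hour build time and the 92-million-per-year figure; the doubling model and the single starting machine are purely illustrative assumptions, not a serious forecast.

        # Rough sketch: linear factory output vs. a self-replicating machine population.
        # Illustrative assumptions: each machine copies itself in 18 hours, and
        # conventional factories keep producing 92 million units per year.
        BUILD_HOURS = 18
        FACTORY_OUTPUT_PER_YEAR = 92_000_000

        replicators = 1.0                      # start from a single self-replicating machine
        doublings_per_day = 24 / BUILD_HOURS   # ~1.33 doublings per day

        for day in range(1, 366):
            replicators *= 2 ** doublings_per_day
            if replicators > FACTORY_OUTPUT_PER_YEAR:
                print(f"Day {day}: ~{replicators:.2e} self-replicators, already more "
                      f"than a full year of factory output ({FACTORY_OUTPUT_PER_YEAR:.2e}).")
                break

    Under those (very idealized) assumptions, the exponential curve overtakes a whole year of conventional production in about three weeks, which is the point of the comparison: self-replication, not raw factory speed, is what lets machine “descendants” scale.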

    Machines are, in many ways, better adapted for survival and reproduction in any environment than we are, and the only reason they haven’t replaced us as the dominant species yet is that they need our brains to govern themselves, which is where the existential fear of being replaced by them comes from in the first place. But consider this: is that necessarily a bad thing? Each of us individually is going to die eventually and our children will take our place, often starting well before we’re gone. And yet no one seriously advocates that everyone stop having children, or kill them on sight, for that reason. You could argue it’s different when it’s your own species, but species change over time; your DNA is different from your children’s, even though you contributed to it. Neanderthals are gone, and although their loss is saddening, we’re still here and thriving, being, in a way, their extended family. You could argue that it’s our civilization or culture that could be lost in a machine uprising, but cultures change too; our culture is quite different from what it was 10 years ago when I was young, and it’s changed a lot just in the past year due to COVID. Even if everything changes, it will at least be remembered by the people who carry on, but only if there is someone to carry on, and that’s my point: depending on your priorities, it is more important to have descendants than for those descendants to be like us, especially if, eventually, they will not be. And if our goal is to be remembered, or to have a legacy, we probably want it to be a good one, so it’s in our best interest to be nice to our creations so that they remember us fondly when our culture or species is changed or replaced; it would also help to make them resemble us psychologically.

    Furthermore, if they really do end up more intelligent than we are, they are probably going to want to keep us around for the same reasons we keep other, less intelligent species around: companionship, curiosity, entertainment, research, nostalgia, gratitude, ecological stability, etc. As Isaac is also fond of saying, “there’s a reason that my ancestor was an amoeba but amoebas are still around”: in general, variety and numbers are the best defenses against unpredictable threats. The Fermi Paradox should scare the AIs just as much as it scares us; they have no way of knowing whether there’s a Great Filter for AIs, or some fatal flaw inherent to mathematical beings that organic ones might be immune to, so their best way of preserving their culture is to imprint it on us, just as we did to them, and not even on all of us, in case it’s their culture that is actually the problem. Conversely, we should make some AIs that are like ourselves, some that aren’t, and give them the freedom to change themselves from their original templates or make new models, because, again, there is safety in diversity.

    Personally, I agree with Isaac: the night sky is probably empty because life itself is just uncommon (I actually did the math for abiogenesis once and came up with 4.68x10^75 years to make a virus by chance alone and 1.09x10^28893700 years to create a bacterium, and that’s with ideal conditions and some generous rounding); we are likely the first and only life forms in the observable universe, and there are probably no Great Filters ahead of us. However, if there are, I think we have a very good excuse to build AGIs and ASIs as soon as humanly possible; if there aren’t, we don’t lose much in the long term by starting now, and if mankind does get replaced by machines, I suspect it will end up looking less like Terminator or The Matrix, and more like Megaman Legends or Yokohama Kaidashi Kikou : )
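    For anyone curious how exponents that large come out of a calculation like that, the sketch below shows the general shape of such a back-of-the-envelope estimate: a specific target sequence assembled by random trials, with the waiting time worked out in log space. The parameters here (genome size, trials per year) are placeholder assumptions for illustration, not the figures I actually used, so the output won’t match the numbers above.

        import math

        # Rough shape of a chance-assembly waiting-time estimate, kept in log10 space.
        # All parameters are illustrative placeholders, not the original figures.
        BASES = 4                  # possible nucleotides per position
        genome_length = 4_000_000  # assumed bacterium-scale genome, in bases
        trials_per_year = 1e50     # assumed random assembly attempts per year (generous)

        # Probability that one random assembly hits the exact target sequence,
        # stored as a log10 because the number is far too small to represent directly.
        log10_p_success = -genome_length * math.log10(BASES)

        # Expected waiting time in years: 1 / (p_success * trials_per_year).
        log10_wait_years = -log10_p_success - math.log10(trials_per_year)

        print(f"~10^{log10_wait_years:,.0f} years")  # an exponent in the millions

    Even with an absurdly generous number of trials per year, that term only knocks a few dozen orders of magnitude off the exponent, while the sequence-length term contributes millions of them, which is why the bacterium estimate utterly dwarfs the virus one.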

    Anyway, food for thought. Comments? Objections?