Artificial General Intelligence and Maximally Advanced Technology
Here, I would like to propose a definition of maximally advanced technology that connects to artificial general intelligence in ways that put constraints on possible long-term outcomes for alien civilizations. In short, I define maximally advanced technology as the final theory of physics plus the final theory of production.
While the final theory of physics is the better-defined of the two, by the final theory of production I mean the actual means of manufacturing the goods necessary to use maximally advanced technology. For example, what is the minimum possible computer size? What is the maximum possible damage achievable from the minimum necessary level of energy and intelligence?
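Physics already hints at what such production limits might look like. As a toy illustration (my own example, not a claim about what the final theory contains), Landauer's principle gives a thermodynamic lower bound on the energy required to erase one bit of information, which in turn bounds how efficient any computer, however small, could ever be:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant in joules per kelvin (exact SI value)

def landauer_limit(temp_kelvin: float) -> float:
    """Minimum energy in joules to erase one bit at the given temperature,
    per Landauer's principle: E = k_B * T * ln(2)."""
    return K_B * temp_kelvin * math.log(2)

# At room temperature (~300 K), the bound is roughly 2.87e-21 joules per bit.
room_temp_bound = landauer_limit(300.0)
```

Today's hardware dissipates many orders of magnitude more energy per bit than this bound, which gives a rough sense of how much headroom a "final theory of production" might still have to exploit.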
I believe the advent of artificial general intelligence will quickly accelerate the race towards maximally advanced technology. It could literally be milliseconds between the time that an AI achieves superintelligence and the time it achieves maximally advanced technology.
As such, we need to consider the societal implications of such technology and its arms race dynamics long before it becomes a reality. Assuming we aren’t already in this reality, that is…
There is no easy way to define the “final” theories of physics and economics. Nevertheless, we can attempt to gain some insight into what those theories are and why they would have societal and especially national security implications.
We can think of this just like the nuclear bomb. The nuclear bomb was invented due to a deeper understanding of physics. What if the next theory reveals a simple way of creating a black hole at the center of the Earth, immediately ending all sentient life?
This mere possibility places constraints on civilizations. Even without artificial general intelligence, the final theory of physics could have national security implications. What if cloaking, teleportation, and time travel are possible when we discover the theory of everything?
Even if there is no final theory of everything, just an infinite chain of theories that get progressively closer to reality, we could still face existential risk from discovering the next theory of physics.
What this all means is that it would be extremely irresponsible of national security officials to wait for academics to discover the next or final theory of physics; the same goes for economics.
But governments might not be the only organizations attempting to discover the theory of everything; corporations may chase it in pursuit of profits. Capitalism would predict as much, I’m afraid.
Any sufficiently advanced technology is allegedly indistinguishable from magic. But is it really? Many have claimed that we might not even be able to communicate with aliens because it would be like us trying to communicate with ants.
But the difference between ants and us is that we humans are sufficiently intelligent to communicate physics and economics to each other. So I simply don’t buy that aliens would have no way of communicating with us puny humans.
I believe we have reached a minimum level of intelligence that we could learn maximally advanced technology given enough time. Why? Because I believe we will soon create artificial general intelligence, and that will result in maximally advanced technology.
Let us model an AGI + max adv tech scenario as the result of a fight between a world’s leading superpower and its Group I. In this scenario, we can imagine that both organizations achieve maximally advanced technology at approximately the same time.
This might include, but is not limited to: brain-computer interfaces, cloaking, teleportation, time travel, shapeshifting, nanotechnology, synthetic biology, artificial general intelligence, simulation technology, and advanced physics and economic models.
What this all means is that we could be in a simulation created by either the government or Group I, but probably not neither!