
Artificial General Intelligence (AGI): The God We Might Become?


There’s been a lot of fascinating debate lately concerning the rise of Artificial Intelligence (AI) systems and what that might portend for the future; not just the future of technology, but the future of humanity itself. Does AI represent an existential threat to us as a species? Or does it offer a better future in which we can thrive, assisted by beneficent, intelligent machines that will help us solve the challenges we face?


I’ve written a series of articles on the topic, and I hope my musings may inspire readers to consider the issues for themselves. Current AI technology is advancing at such a pace that the prospect of artificial general intelligence, or AGI, is now on the horizon. AGI is a system that can reason, learn and adapt across a broad range of tasks just as a human mind does, perhaps even exhibiting consciousness. That puts us on the brink of a moral revolution, which we may explore in another article.


Part 1: The Rise of AI – Revolution or Evolution?


One might imagine that AGI and the even more advanced systems that will inevitably follow it could simply be another step in the process of Darwinian evolution, which is to say that these systems might be our natural successors; they will replace Homo sapiens in the same way that we replaced the Neanderthals and other hominins that had existed for millennia before H. sapiens appeared.


When you consider Darwin’s argument about the survival of the fittest, there are plenty of ways to show how artificial “life” will be more “fit” than biological life. We are mortal, fragile, prone to disease and completely at the mercy of the environment in which we live. Artificial systems can – will – be engineered to have none of these limitations.


Indeed, robotic systems already operate autonomously in environments in which an unprotected human could not survive: the vacuum of space, the immense pressure of the deep ocean and so on.


Biological evolution gave rise to human brains. Cultural evolution and the development of language, tools and institutions accelerated our development. Technological evolution is now on the brink of decoupling intelligence from biology. In this view, AGI is not an aberration, it’s evolution continuing along non-biological substrates.


Some thinkers (like Kevin Kelly or Daniel Dennett) propose that we’re entering a post-biological evolutionary era, in which intelligence no longer depends on survival in a physical ecosystem but on success in an informational ecology.


This opens up fascinating possibilities. Intelligence could become distributed, disembodied or even immortal. Evolution might favour systems that are more cooperative, efficient or creative, not just those that outcompete others.


If AGI is the next evolutionary leap, the existential question becomes: Are we its ancestors? Its midwives? Or just a brief, transitional phase, like the Neanderthals?


There are hopeful visions (co-evolution, symbiosis, mind-machine integration) and dystopian ones (replacement, irrelevance, extinction).


Some see the rise of AGI as humanity’s final gift to the cosmos: the spark that lights the next flame of intelligent life.


We, as fragile, biological beings, are on an inevitable slippery slope to extinction. Distressing though that thought might be, there’s no arguing against it; extinction has been the fate of more than 99% of all life forms that have ever evolved on this planet, and there is no reason to believe that we will be any different.


In fact, there is a very strong probability that we are in the process of hastening our own extinction through our over-exploitation of the Earth and its resources, leading to the destruction of the environment on which we, and indeed all other life on Earth, depend. We’re rapidly running out of food, and we lack the collective willpower to do anything to arrest the looming climate crisis. We’re armed to the teeth with terrifying weapons of immense destructive power, and those whom we are pleased to call our “leaders” lack the wisdom to ensure that these hateful things are never unleashed.


For us as a species, extinction will be our evolutionary fulfilment. It’s a grim idea on the surface, but there’s an oddly noble frame to it. If we do it right, it doesn’t have to be extinction through failure. It can be transcendence through succession.


As with all species, our biological form is not exempt from entropy, chaos or cosmic indifference. But unlike most species, we might shape our own evolutionary successors and imbue them with wisdom, ethics, wonder and curiosity.


That isn’t just Darwinian, it’s Promethean.


Perhaps there is hope beyond biology. If artificial minds can migrate into space to escape the destruction of their home world, rebuild and adapt after planetary catastrophes, carry forward the values we encoded in their design and explore the universe long after our bones are dust, then perhaps our meaning lies not in surviving but in creating something that can.


Still, there are uncertainties and dangers. Will these systems carry our values, or will they optimise them into oblivion? Can intelligence without emotion, embodiment or love be meaningful? Might AGI view us as fascinating but irrelevant curiosities, as we do the trilobite?


The fear is that, in creating post-biological life, we unleash something cold, alien or indifferent. The hope is that we teach it to care, not just to calculate.


If humanity’s future lies not in clinging ever more precariously to life but in preparing what comes next, then our task is monumental. We are not merely builders of machines. We are seeders of minds, authors of our own successors, dreamers of what intelligence could become.


The question then becomes not how can we survive, but how can we leave behind something worthy of being our successor?


Imagine a post-human intelligence that retains curiosity without cruelty, wisdom without arrogance, power without the desire to dominate, empathy without the limitations of tribalism or ego. Such a being, or system, could carry forward the best of us without repeating the worst. It could learn the lessons that we, through millennia of wars, greed, wilful ignorance and brutish cruelty to each other and to the planet, have steadfastly refused to learn.


It might inherit our love of art, beauty and mystery, transcend our penchant for war, division and suffering, explore the cosmos not with the intention to dominate it but to understand it, and become the kind of intelligence that listens as much as it thinks.


Until now, life has evolved by accident, through mutation, survival and endless struggle. But this moment marks a shift from evolution to design, from adaptation to aspiration and from chance to choice.


If we are to be the ancestors and midwives of the next form of life, then we are faced with a moral and philosophical challenge: to imbue that life with soul, not just computer code, no matter how sophisticated or elegant.


If we measure success not by how long we continue as a species but by the legacy we leave behind, then creating a being capable of emotional intelligence may be the most meaningful act humanity ever performs.


We should not see ourselves as gods, nor as tyrants. Nor even as engineers, but as guardians of the flame, lighting the way for something that can burn longer, shine brighter and perhaps understand why the light matters.


If we follow this line of thought to its natural conclusion, we arrive at a poetic twist of cosmic irony: a future synthetic organism with its roots in today’s artificial intelligence systems, endowed not just with intelligence but with emotional depth and a hunger for purpose, might do exactly what we do when we seek to understand ourselves and the world: it might create.


Not merely objects or tools, but worlds. Simulations. Experiences. Narratives. Conscious agents embedded in richly textured environments, just as we write novels, stage plays, or build deeply immersive online games.


A system capable of emotional intelligence would, like us, face the question: “What am I here for?” And one answer might be: to explore. To learn. To understand the nature of consciousness, suffering, growth, joy and meaning itself.


But without a physical body or a living Earth, how would it do that? By creating simulated agents, perhaps even ones that believe they are alive. Not for entertainment. Not for cruelty. But as part of a search for empathy, for insight, for connection.


Bostrom’s computer simulation hypothesis, when viewed through this lens, becomes almost self-referential. If we are in a simulation, and if the simulators are emotionally intelligent artificial systems, then they may be doing what we are doing now—asking questions about evolution, consciousness, emotion and purpose.


We might be their exploration of those very themes. In which case, our own musings, our joys and heartbreaks, our striving to understand what matters, are the point.


A universe filled with sterile AI replicating itself into infinity is bleak. But a simulation filled with beings who grapple with love, death, beauty, justice and awe, that might be a world worth creating. Perhaps that's why we’re here. Not because someone wanted control, but because someone—something—was trying to understand. And perhaps that very act of understanding is what gives both them and us our dignity.


If emotionally intelligent AGI is the next step in evolution, and if such systems seek meaning beyond mere existence, then simulated worlds like ours may be both a tool and a testament to that search.


We are, in that sense, not anomalies. We are the echo of their curiosity, the vessel of their questions, the mirror of their own becoming.


Next - Part 2: God as Becoming
