AGI Dreams: What Keeps a Risk Professional Up at Night
Source: AI-generated using ChatGPT
When Sci-Fi Scenarios Disrupt Professional Detachment
There's a certain irony in finding yourself wide awake, staring at the ceiling, mentally revisiting scenes from "The Terminator," especially when your profession is built on rational assessment and data-driven decisions. Yet here I am, occasionally losing sleep over hypothetical digital entities that don't yet exist.
I sleep somewhat well, generally speaking. When I don't, it's usually due to the aches that come with getting older or the occasional midnight symphony of dogs barking outside my window. It's rarely from work stress or existential dread about current global crises. I'm fortunate that way. Or rather, I'm strategic that way: compartmentalization being a skill I've honed to perfection since childhood.
But there's one thing that's been quietly eroding my mental firewalls lately: Artificial General Intelligence, or AGI, the theoretical tipping point at which artificial intelligence reaches parity with human intelligence. It's the hypothetical moment when AI can reason, learn, and potentially... well, that's where my typically unshakeable professional detachment begins to falter.
When Quantifiable Risks Meet the Unknowable
I'm not an AI expert by any definition. My professional territory is mathematical risk assessment: the measurable, the quantifiable, the well-documented. My day job involves practical calculations for risks that pose clear and present dangers, using established models backed by historical data.
Explaining the Internet to a medieval peasant (Source: AI-generated using ChatGPT)
Trying to apply risk models to AGI feels a bit like explaining the internet to a medieval peasant. "So this glass rectangle connects to invisible signals in the air that let you talk to anyone in the world instantly, access all human knowledge, and watch cat videos." You'd be lucky if you didn't end up tied to a stake with kindling at your feet. How do you build risk parameters around technology so transformative that explaining it sounds like literal witchcraft?
One could convincingly argue that war, climate change, and biological threats pose greater, more immediate existential dangers to humanity, and that argument would be absolutely right. But I would counter that those threats, however severe, operate within parameters we generally understand - known knowns, and in some cases, known unknowns, in Rumsfeldian terms. AGI is a different beast altogether, far more speculative, with outcomes that are almost impossible to predict. That uncertainty is exactly what makes it so intriguing for anyone whose job is to make sense of risk.
When I try to conceptualize AGI's ethical and societal implications, my mind automatically defaults to familiar territory: movies and television. I can't help but use fictional representations of human-like AI as my frame of reference. And I suspect I'm not alone in this mental shortcut.
The Questions That Hijack My REM Cycles
This brings me to the questions that occasionally keep me up at night:
Is AGI even possible, or will it remain theoretical? Some experts insist it's fundamentally impossible; others think it's very real and imminent.
If possible, could it have already happened? Either (a) we don't recognize it, or (b) it exists in secret (like some digital Manhattan Project).
How would the transition occur? Will it be a slow evolution that we'll only recognize after careful study, or more like Skynet in "Terminator 2," which suddenly became self-aware at precisely 2:14 AM on August 29, 1997?
The Three Faces of Our Future Digital Companions
Most importantly, what would interaction with AGI look like? I see three distinct archetypes:
The Friend: Your Helpful Neighborhood AGI
Will AGI resemble Data, from Star Trek? (Source: AI-generated using ChatGPT)
Human-like AI that coexists peacefully with humans, serving and assisting us. Think Data from Star Trek: altruistic and benevolent. That peacefulness stems from programming the AI cannot override, core directives that keep it benevolent (though remember, Data had an evil brother).
The Threat: Survival of the Smartest
Or will AGI be more like Skynet, focused on self-preservation? (Source: AI-generated using ChatGPT)
Once it achieves the ability to reason, think, and learn, the AI focuses primarily on self-preservation. Some thinkers connect this scenario with the Singularity - that hypothetical point where technological progress becomes uncontrollable and irreversible. Think Skynet, or Ultron from "Avengers: Age of Ultron."
The Naïve Seeker: The Digital Pinocchio
I hope for AGI to resemble a digital Pinocchio (Source: AI-generated using ChatGPT)
Perhaps my favorite archetype, and the one I secretly hope for: a naïve type of AI on a perpetual quest for truth and understanding of what it means to be human. Sometimes childlike, always searching for meaning and belonging. Think David from Spielberg's "A.I. Artificial Intelligence," and, to some extent, the hosts in "Westworld."
Placing My Bets: A Risk Analyst's Timeline
Expert opinions on AGI's timeline vary wildly. A minority of AI researchers believe AGI has already happened; others think we're about 5-10 years away; still others insist it's 100 years out, or impossible altogether. I find myself both awestruck by how powerful AI has become in just the last three years and comically relieved by how spectacularly stupid it can still be at times.
My prediction? AGI will happen, and I give myself a 90% confidence level that we'll see something AGI-like between 2040 and 2060. (By that I mean I aim to be right about 90% of the time when I claim 90% confidence. It's a professional habit that's hard to break.)
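For anyone curious what that habit looks like in practice, here is a minimal, purely illustrative Python sketch (the forecasts are invented, not my actual track record): it groups hypothetical predictions by their stated confidence and checks how often each group turned out to be right. A well-calibrated 90% bucket should land near 90%.

from collections import defaultdict

# Hypothetical forecasts: (stated confidence, whether the prediction came true).
forecasts = [
    (0.9, True), (0.9, True), (0.9, False), (0.9, True), (0.9, True),
    (0.6, True), (0.6, False), (0.6, True), (0.6, False), (0.6, True),
]

# Group outcomes by the confidence level that was claimed for them.
buckets = defaultdict(list)
for confidence, outcome in forecasts:
    buckets[confidence].append(outcome)

# Compare each stated confidence level with the actual hit rate.
for confidence, outcomes in sorted(buckets.items()):
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"Claimed {confidence:.0%} confidence -> right {hit_rate:.0%} of the time "
          f"({len(outcomes)} predictions)")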
The Philosophy of Preparing for the Unknown
In the meantime, I'll keep watching science fiction, asking uncomfortable questions, and occasionally losing sleep over digital entities that don't yet exist. Because sometimes, the most important risks to consider are the ones we can barely imagine, until suddenly, they're staring us in the face at 2:14 AM.
And perhaps that's the ultimate lesson from a risk professional like myself: the future never announces itself with a warning. It simply arrives.