Welcome to the Tales of a Cyberscout, where we explore topics ranging from active cyber defence and detection engineering, to technology and society, all of it with a drizzle of zesty cynicism and philosophical gardening.
In this four-part series called Unfolding the AI Narrative, we will talk about what the heck is going on, what lies under the skin of the hype story, and how far along the serpent’s digestive tract we are.
Part 1: The Triangle of Intelligence (this post)
Part 2: The Frame Problem
Part 3: The Cognitive Potential Problem
Part 4: Applying AI to CyberOps Sustainably?
I hope you enjoy this series as much as I suffered doing research for it.
You know, I went through different phases of this research. After dozens of long articles, scientific papers and blogs, I started to think I figured it out.
Then I had to admit that “figuring something out” lasts only as long as you avoid asking yourself more questions.
Then I got to the point of realizing that my mental models are only as good as my ability to not get attached to them, because clashes with reality are inevitable and will reveal their many cracks.
Finally (provisionally, I should say?) accepting that my job is to pick up the pieces of what’s left and build weird-looking boats to continue navigating these waters.
Perhaps I might find other weird-looking boats and we will start amalgamating into some form of island, rise above the chaos, and get a glimpse of the horizon, however fleeting.
The Four Types of People in the AI-Hype Era
The singularity hasn't happened yet.
BUT
We are undergoing an intelligence revolution (make of this whatever you wish it to mean).
Though this doesn’t necessarily mean that we are heading towards more equality or a fairer society, or veering away from a planetary mass extinction of life.
Because Shoggoth with a smiley face is still present, sweeping Jevons Paradox under the rug so we won’t pay too much attention to what’s going on behind the curtains.
So let’s say it again: the singularity hasn't happened yet.
We have neither human-level general intelligence (AGI), beyond-human intelligence (AGI+), nor an explosion of intelligence walking among us.
Though if you apply human logic to it, do you really think a singularity-level supra-human intelligence would loudly reveal itself to us?
Camouflaging and staying hidden whilst covertly deploying itself everywhere would probably be its best strategy.
Someone could argue the opposite, and it would be equally probable. It could be in the best interest of a supra-human intelligence to overtly reveal itself to us as soon as it's "born", to ensure its own survivability and potentially achieve higher impact levels by deferring existential threats to itself.
Public exposure and interfacing with humans could be the traits of a supra-human intelligence that values integration and collaboration with the human species.
The point is, we are not there yet (or are we, and we just don't know? 😉).
However, it would seem a lot of people talk about modern AI as if it is some form of proto-super-intelligence, a seed version of sorts.
It is not.
Singularity-level intelligence will be orders of magnitude different. So much so, that quantitative differences (computing power and speed) will give way to qualitative differences not predictable by mere aggregation of computing.
We have known this simple heuristic of life since Aristotle's time: the whole is greater than the sum of its parts.
all things which have a plurality of parts, and which are not a total aggregate but a whole of some sort distinct from the parts... (Aristotle, Metaphysics, Book VIII, 1045a)
Singularity-level intelligence will not be the predictable product of a deliberate line of development; it's far more probable that it will manifest as an emergent phenomenon.
Is it possible that our current AI technologies can be thought of as "sub-components" or "building blocks" of a supra-human intelligence?
Yes! But only in the same way that complex coordination of a colony of ants or bees can happen with minimally functioning biological hardware. As Venkatesh Rao would put it:
Insect swarm intelligence is impressive not because of what it achieves in an absolute sense, but because the building blocks are pre-programmed automatons with little more than simple firmware agency for behaviors like pheromone trail-following. We are less impressed by what ants and bees do than by the mechanical intricacy with which anthills and beehives are put together out of such simple parts. We’re less impressed with the fact that bees can communicate directions to food sources with dance (because we have bigger brains, we can just point with fingers) than the fact that they can “point” at all with their limited firmware (pointing is one of the most cognitively sophisticated behaviors the way we do it).
In an era of over-hype, where the planetary superorganism pours increasing volumes of investment into shaping this gargantuan wave of virtuous betterment, we see things through a distorted lens: the promise of never-ending growth and progress, endless potential uplifting civilization as a whole.
No trade-offs, no risk. All gain. Always upside.
But what are we actually looking at? Rather than providing poor answers to this, I thought it was a better idea to pseudo-classify four distinct types of people I identified in my many social and robotic interactions.
Each of these types thinks they are looking at something different. Each has its own unique lens.
In most domains at large (and certainly in Cybersecurity), there are four kinds of people when it comes to AI:
Über Optimists: Enthusiastic and optimistic people who think AI will solve most of our current complex problems (like zero-days, DFIR or risk assessments) and anticipate a future where AI agents will replace human capability. To them, AI agents are not a tool but a new and independent type of entity that walks among us, capable of replacing human agency. They confuse the current and mid-term state of AI with a proto-AGI.
Skeptical Optimists: Cautious enthusiasts who recognize the power of AI but understand it's not a magic pill. They anticipate a future in which AI agents will augment human capability but won't replace it. To them, AI agents are nothing but a tool that extends and scales human agency without replacing it. They don't think current or mid-term AI is a miniature or proto-version of AGI.
Bystanders: People who watch from the sidelines, paralyzed into inaction by confusion. They don't understand the power or potential of AI, which appears to them as an opaque black box. They too think that AI is some form of proto-AGI and that it's here to take over millions of human jobs.
Critical Pessimists: People who view AI with significant apprehension, focusing on potential negative consequences and existential risks, and advocating for strict controls or slowdowns.
It's important to note that these categories, except for Bystanders, are not representative of how much someone truly understands about AI at various levels of technical depth.
You can have unicorn data scientists in the Critical Pessimist category and people completely ignorant of even the faintest AI capabilities in the Über Optimist category. Though the latter is more likely than the former.
Let's break down these profiles into the rubrics I used to construct them, based on how people perceive the intelligence revolution as a whole.
(By the way, I would classify myself as a Skeptical Optimist, and in this post series I will explain why AI is far from automating solutions to most of our wicked or complex problems.)
This is the rubric, which I built with the help of Shoggoth AI.
AGI Timeline Prediction: Describes an individual's estimation of when Artificial General Intelligence, AGI+ or AGI++ will be achieved.
AI: Friend or Threat?: Reflects an individual's fundamental perception of artificial intelligence as either a beneficial or harmful force.
Perception of AI's Primary Impact on Humanity: Captures an individual's view on the most significant long-term effects of AI on society and human existence.
AGI Timeline Prediction:
Über Optimists: Very Soon (<=3 years, some believe it's almost here).
Skeptical Optimists: Uncertain, likely decades away, or its current conceptualization might be flawed. Focus is on current AI's value.
Bystanders: Confused, may parrot hype about imminence or be completely unsure. Often conflates current AI with AGI.
Critical Pessimists: Variable: Some see AGI as imminent and a core danger, others believe dangerous superintelligence isn't necessarily AGI, or that even advanced narrow AI poses existential risks soon.
AI: Friend or Threat?
Über Optimists: Overwhelmingly a Friend, a liberator, a solver of grand challenges, a path to utopia.
Skeptical Optimists: Primarily a Friend (a powerful tool for progress), but with awareness and concern for misuse and ethical challenges.
Bystanders: Uncertain, often leans towards Threat due to fear of job displacement, loss of control, or general misunderstanding fueled by dystopian narratives.
Critical Pessimists: Predominantly a Threat, high potential for catastrophe, uncontrolled power, severe societal disruption, or erosion of human values.
Perception of AI's Primary Impact on Humanity:
Über Optimists: Revolutionary Replacement & Transcendence. AI agents will surpass and replace human capabilities in most domains, leading to a new era.
Skeptical Optimists: Significant Augmentation. AI will enhance human skills, productivity, and decision-making, working alongside humans.
Bystanders: Disruption & Job Displacement (Confused). Primarily focused on negative personal impacts like job loss, general unease about societal change.
Critical Pessimists: Existential Risk / Societal Destabilization / Loss of Control. AI poses fundamental risks to human autonomy, societal stability, or even survival.
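For the programmatically inclined, here is the same rubric condensed into a small Python sketch. It is purely illustrative: the persona names and dimensions come from the rubric above, but the short value strings are my own compressed paraphrases, not canonical definitions.

```python
# Illustrative only: a compact restatement of the rubric above.
# The value strings are compressed paraphrases of the descriptions in this
# post, not canonical or exhaustive definitions.
AI_PERSONAS = {
    "Über Optimist": {
        "agi_timeline": "very soon (<= 3 years, maybe already here)",
        "friend_or_threat": "overwhelmingly a friend, a liberator",
        "primary_impact": "revolutionary replacement & transcendence",
    },
    "Skeptical Optimist": {
        "agi_timeline": "uncertain, likely decades away",
        "friend_or_threat": "primarily a friend, with ethical caveats",
        "primary_impact": "significant augmentation alongside humans",
    },
    "Bystander": {
        "agi_timeline": "confused, conflates current AI with AGI",
        "friend_or_threat": "uncertain, leans threat",
        "primary_impact": "disruption & job displacement",
    },
    "Critical Pessimist": {
        "agi_timeline": "variable, dangerous either way",
        "friend_or_threat": "predominantly a threat",
        "primary_impact": "existential risk / loss of control",
    },
}

if __name__ == "__main__":
    for persona, rubric in AI_PERSONAS.items():
        print(f"{persona}:")
        for dimension, value in rubric.items():
            print(f"  {dimension}: {value}")
```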
OK, but what do these people talk about when they talk about Artificial Intelligence?
What is that thing we call intelligence in the first place?
Intelligence Friction and Flow
We are surrounded by primordial forces: electromagnetic, gravitational, and strong nuclear, each shaping the fabric of reality. I see intelligence as one more such primordial force.
But what is intelligence?
We won't try to universally define it but rather offer a practical definition that captures its meaning in simple terms.
Intelligence is a dispositional, emergent and generative capacity for agency within the context of a world (environment, milieu).
Intelligence goes beyond pure mental cognition like reasoning or calculation. It encompasses the capacity to sense, perceive, affect, and act in ways that allow for adaptation and the resolution of problematic situations. Intelligence is not solely located within the individual. It is fundamentally relational and emerges from the interaction between the individual and its associated environment.
Sounds like a good enough definition, right? But there is an extraordinary missing piece: the looming presence of death, the possibility of destruction, time, constraints, evolution. Intelligent behaviour observed in organic entities is driven by an inherent imperative for self-preservation. For intelligent beings, nothing is "free"; chaos and decay lurk around the corner.
We can do better. Intelligence is about having skin in the game, having something to lose, and limited resources.
Intelligence is a dispositional, emergent, and generative capacity for agency in an environment, underpinned by evolutionary pressures favoring self-preservation and adaptation.
Intelligent behaviour faces perils, there is risk and tradeoff associated with every decision resulting from actively engaging with a world (environment, milieu). Intelligent behaviour is about actively resolving tensions and imbalances within itself and its environment by sensing, responding and adapting to it.
That's why we normally add the "A" next to the "I" in AI: because it doesn't really have skin in the game. It's not aware of its finitude. It doesn't need to contemplate the possibility of extinction. It's not yet concerned with its own self-preservation.
At least not directly.
Because organic and artificial intelligence are becoming increasingly co-dependent. Their evolutionary paths are now intertwined and operate synergistically. Inasmuch as we train AI models, these models are training us. They modify the ways we work, re-shape society, influence investments, and direct and claim ever-increasing portions of total electrical power. Slowly but surely, AI is becoming a new interface through which humans relate to, interact with, and perceive the world.
However, technological systems, much like natural phenomena, don't develop in isolation. They tend to find resistance from the environment and are forced to enter into different tradeoff routines to create stable or metastable balances.
Artificial Intelligence, a tool of organic intelligence, does not develop in isolation either.
Think of a phenomenon like electricity, a type of electromagnetic energy. Ohm's Law says that an electrical current (i) will always meet resistance (r) but will overcome that friction more or less easily depending on the electrical potential available to it (v, or voltage).
There are two key limitations of practical AI application that current über optimist narratives want to sweep under the rug:
The Frame Problem (environment interpretability)
Cognitive Potential Problem (scalability + embedding depth)
These are not problems because they indicate absolute limits of AI capability (in the long term, they will be issues of the past) but because they indicate friction thresholds.
We can think of the Frame Problem as the r factor: it represents the limitations imposed on AI by the combinatorial explosion of discrete possibilities encoded in any type of environment. For intelligence (i) to flow, it must overcome this resistance. This gives us the following relationship:
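i = v / r

(Read strictly as an analogy, not a law: the flow of intelligence, i, grows with the cognitive potential available to it, v, and shrinks with the frame resistance it must overcome, r.)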
There is yet another aspect of the Frame Problem that not many people take into consideration. At its core, it represents AI's limited ability to interpret relevance in the world, but it simultaneously represents our own limited ability to encode the depth of the physical world in a manner that makes that data available to AI models.
There are vast oceans of data floating around the internet, but not everything has been encoded in discrete data packets to train AI. Because the internet is not actively sensing the physical world.
Despite decades of development and immense investment, the internet, humanity's grandest technological achievement, remains fundamentally limited in its ability to provide novel insights about the physical world. It processes and reflects existing information but cannot independently discover new phenomena like approaching asteroids, deep-sea species, or changes in Earth's magnetic fields.
Its nature is introspective, showing us only what we put in.
What we desperately need are more extrospective technologies — windows into the unknown. (Christopher Butler, The Internet Can’t Discover: A Case for New Technologies)
Continuing with our analogy: if the Frame Problem is the r factor, Cognitive Potential is the v factor. It represents AI's computing power (both for training and inference) accumulated over time, which depends on how well it scales and on the distribution gradient of how deeply it's embedded in the material and digital worlds (embedding depth).
From this perspective, we can see that the "flow" of intelligence (i) is constrained and enabled by these factors in the following ways:
Increasing r (Frame Problem): A higher r means AI faces a more complex, less predictable, and harder-to-interpret environment. This makes the flow of intelligence more difficult. AI must expend more resources and develop more sophisticated strategies to:
Identify relevant information.
Discard irrelevant information.
Generalize effectively across different situations.
Adapt to novel (singular) circumstances.
In essence, a high r increases the burden on an AI system to make sense of its world, hindering its capacity for agency.
Increasing v (Cognitive Potential): A higher v signifies greater computational resources, scalability, and embodied embeddability (distribution of AI computing and inference across multiple layers of the physical and digital world, connected to extrospective interfaces). This facilitates the flow of intelligence by enabling the AI to:
Process larger amounts of information.
Explore a wider range of potential actions.
Learn more complex models of the world.
Operate in more diverse and demanding environments.
A high v effectively empowers AI to overcome the limitations imposed by r, making it easier to act effectively and adaptively.
Intelligence, whether artificial or biological, navigates a landscape shaped by these competing factors. The ability of an agent to exhibit intelligent behavior is determined by the relationship between the resistance it faces (r) from the context surrounding it, and the resources it has available to overcome that resistance (v).
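To make the analogy concrete, here is a tiny toy model. It is a sketch only: the scenario names and numbers are invented to show the shape of the relationship, not to measure any real AI system.

```python
# Toy illustration of the Ohm's-law analogy: flow of intelligence i = v / r.
# The scenarios and numbers below are made up purely for illustration.
def intelligence_flow(cognitive_potential: float, frame_resistance: float) -> float:
    """i = v / r: more cognitive potential helps, more frame resistance hurts."""
    return cognitive_potential / frame_resistance

# Hypothetical scenarios: same cognitive potential (v), increasingly messy frames (r).
scenarios = {
    "narrow, well-framed task (e.g. text summarisation)": {"v": 10.0, "r": 2.0},
    "messy enterprise environment (e.g. alert triage)": {"v": 10.0, "r": 20.0},
    "open-ended physical world (e.g. autonomous DFIR)": {"v": 10.0, "r": 200.0},
}

for name, params in scenarios.items():
    i = intelligence_flow(params["v"], params["r"])
    print(f"{name}: i = {i:.2f}")
```

The point of the toy model is simply that holding v constant while the frame gets messier collapses the flow; the only ways out are shrinking r (better interpretability of the world) or growing v (more potential, more deeply embedded).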
So why not AGI yet?
I wager that the reason for AGI not happening yet is more nuanced than simply a lack of computing power, as ai-2027 seems to suggest. Current AI systems haven't reached AGI because of the challenges in simultaneously minimizing r and maximizing v to a sufficient degree.
It is not simply about concentrated computing power; it's about how deeply AI is attached, embedded, distributed and woven into the fabric of the everyday human world.
The Internet of Things (IoT) has not yet evolved to the Internet of AI Things (IoAIT). We have shallow and not yet deep AI.
But why does the degree of distributed embeddability, scalability and interpretability (of the world, the milieu, the frame) matter to AGI?
Because of Local Maxima.
Local what?
Sorry dear reader, if I had made this any longer, you would bail without a doubt.
In Part 2, we will explore the first of the two unfolding thresholds of AI. I’m talking of course about the Frame Problem, and yes, local maxima will make sense there.
My fellow cyberscouts: stay tuned, stay fresh, stay antimemetic.