Waiting for the Barber
Why artificial intelligence may be trapped in Russell's Paradox, and why we can't build what we can't define
If you ask five AI practitioners to define Artificial General Intelligence (AGI), you might get six answers. From “median human” to “generally capable” systems, there’s been a lot of hand-wavy vagueness. Some even reject the term entirely.
Yet the definitions all point toward the same North Star: AGI is a system that possesses human-level intelligence.
It is an intuitive idea. It might also be, I believe, a logical impossibility.
It’s not that machines are incapable of greatness. Rather, “Human-Level Intelligence” is not a valid target. It’s a concept trapped in a logical loop, much like division by zero. By framing our technological goals around a paradox, we are creating confusion, fear, and hype, rather than clarity.
To understand why, we have to look back at the foundational crisis of mathematics in 1901, and the trouble with the “set of all sets.”
The Trap of Unrestricted Comprehension
In the early days of set theory, mathematicians operated under a principle called Unrestricted Comprehension. The idea was simple: if you can describe a property, you can create a set of all things that have that property.
You can have a set of all red things. A set of all even numbers. A set of all ideas.
Bertrand Russell, however, found the fatal flaw in this freedom. He asked: What about the set of all sets that do not contain themselves?
If this set contains itself, it contradicts its own definition. If it does not contain itself, then by that same definition it must contain itself, which is again a contradiction. This is Russell's Paradox. It proved that you cannot define a collection loosely and expect it to exist logically.
It further revealed that the “Set of all Sets” is an incoherent concept. When a definition becomes self-referential or boundless, logic collapses.
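As a toy illustration (a loose analogy, not real set theory), we can model a "set" in Python as a membership predicate and watch Russell's definition destroy itself:

```python
# Toy model: a "set" is a membership predicate; s(x) is True iff x is in s.
def russell(s):
    """Russell's set: contains exactly the sets that do NOT contain themselves."""
    return not s(s)

# Asking whether Russell's set contains itself never terminates:
# russell(russell) = not russell(russell) = not not russell(russell) = ...
try:
    russell(russell)
except RecursionError:
    print("undefined: the question has no answer")
```

The program does not return True or False; the self-reference simply recurses until the stack gives out. That is the computational shape of "undefined."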
I propose that “Human Intelligence” is exactly this kind of paradox.
Intelligence is the “Set of all Sets”
We treat human intelligence as a fixed bar to be cleared. But strictly speaking, we have no definition for it.
In practice, human intelligence operates like a "Set of all Sets." It is an unbounded, self-referential capacity that cannot be fully specified. We don't just solve problems; we think about how we solve problems. We don't just learn; we learn how to learn. This reflection itself becomes a new capability, and we can reflect on that too.
It is self-reference all the way down.
We’re essentially saying: “We will create a machine that can do all the things in a set that cannot be fully enumerated, that includes within it the capacity to transcend any enumeration we make.”
We are chasing a ghost. We are trying to define the undefined.
We define AGI as “Human Intelligence.”
Human Intelligence is boundless and self-referential (a “Set of all Sets”).
Therefore, AGI is a paradox.
Debating when AGI will arrive is like debating the value of 1/0.
The answer isn’t “infinity,” and the answer isn’t “zero.” The answer is undefined. It is a syntax error in our thinking.
The Way Out: The Axiom of Specification
Mathematicians resolved Russell's Paradox by abandoning Unrestricted Comprehension and replacing it with the Axiom of Specification (also called Restricted Comprehension).
The rule changed. You could no longer say, “I want a set of everything that has property P.”
Instead, you had to say: “From an existing, well-defined set A, I want to select the subset of items that have property P.”
In formal notation, we moved from:

{x : P(x)}

to:

{x ∈ A : P(x)}
This shift saved mathematics. I believe it is the only way to save AI discourse.
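The same restriction shows up in programming languages (a loose analogy, not a formal proof): a set comprehension must iterate over something that already exists.

```python
# Unrestricted comprehension has no code equivalent: there is no
# "for x in everything" ranging over all possible objects.

# Restricted comprehension: select from an existing, well-defined set A.
A = set(range(20))                      # a well-defined set
evens = {x for x in A if x % 2 == 0}    # the subset of A with property P
print(sorted(evens))  # [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```

You never ask for "all even things"; you ask for the even members of a set you already have.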
Specifying the Set
If we stop trying to simulate “The Human Mind” (an undefined set) and start applying the Axiom of Specification, the path becomes clear.
We start with a domain that is actually defined. Instead of "human intelligence," consider "knowledge work": the set of cognitively demanding tasks that drive modern economies. This set is tractable. We know what software engineers do, what researchers do, what analysts do, what writers do. We have job descriptions, task lists, and professional standards. Our economy is a massive, well-documented collection of tasks. It is a defined set A.
When we build AI, we are not creating a new consciousness. We are building a machine that captures a growing subset of Set A.
In this framework, an AI is not a “mind.” It is a union of specific, specified capabilities derived from the set of economic activity. The union becomes impressive without ever needing to be “everything human intelligence can do.”
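A minimal sketch of this framing, with hypothetical task names standing in for a real task taxonomy (actual job-task inventories would be far larger):

```python
# Hypothetical task sets for illustration only.
knowledge_work = {
    "write unit tests", "draft a memo", "review a contract",
    "summarize research", "build a dashboard", "triage bug reports",
}
ai_capabilities = {
    "write unit tests", "draft a memo", "summarize research",
    "generate marketing copy",  # capabilities outside set A don't count
}

# Progress measured as set overlap, not as distance to an undefined "mind".
covered = ai_capabilities & knowledge_work
coverage = len(covered) / len(knowledge_work)
print(f"{coverage:.0%} of knowledge-work tasks covered")  # 50%
```

The question "is this AGI?" dissolves into the answerable question "what fraction of set A does the system cover, and which elements?"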
Why This Matters
This is not just semantic pedantry. Having the words to talk about things is the prerequisite for clear thinking.
It lets us have conversations that converge. When someone says "We achieved AGI," and someone else says "That's not AGI," they aren't disagreeing about facts; they are using an undefined term. But if we say "This AI can now perform 80% of tasks in software engineering roles," we can actually debate whether that is true and what it means for the labor market.
It lets us identify real risks. The risks from AI don't require "general intelligence." They come from specific capabilities: the ability to generate persuasive misinformation, to exploit security vulnerabilities, or to optimize for goals misaligned with human values. Naming these capabilities specifically lets us address them, and gives benchmark evaluations something concrete to measure.
It reduces both hype and fear. Hype comes from claiming we are near “AGI” when we mean something more modest. Fear comes from imagining “AGI” as an unknowable, boundless superintelligence. Specification punctures both. “AI that can do all knowledge work” is both more concrete and more actionable than “AGI.”
Let us speak with more precision instead of orbiting Russell’s paradox without realizing it. We are trying to talk about “the set of all sets” as if it is a meaningful target. We are debating the properties of 1/0.
Conclusion
As long as we remain stuck in the "Unrestricted Comprehension" of AGI, we will oscillate between delusion and terror. We will fear that the machine will suddenly "wake up" or "desire" control. We will project a "Set of all Sets" onto a statistical model.
If we switch to “Specification,” we can measure progress. We can see the Venn diagram of economic tasks and AI capabilities. We can watch the circles overlap. We can talk about displacement, efficiency, and leverage without falling into metaphysical traps.
We need to stop waiting for the barber who shaves everyone who doesn’t shave themselves. He doesn’t exist. But the razor does, and it’s getting sharper. Let’s focus on what we’re cutting.
Footnote:
This argument isn’t about “human exceptionalism.” I would argue that that animal intelligence is also likely a “set of all sets”: boundless, self-referential, open-ended, and ultimately resistant to simple computational definitions.


