The Retreating Human: God of the Gaps Logic in AI Resistance
I first came across the "God of the Gaps" concept when reading Richard Dawkins as an angsty teenager. Dawkins dedicates a whole chapter of The God Delusion to the pattern: whenever something mysterious and previously seen as divine (lightning, disease, the complexity of the human eye) is finally explained by science, some religious apologists retreat to the next unexplained frontier as evidence of the existence of God. But this piece is not about religion, and the tendency to locate God in what we cannot yet explain has also bothered the pious. From his prison cell during World War II, the German theologian Dietrich Bonhoeffer wrote:
(...) How wrong it is to use God as a stopgap for the incompleteness of our knowledge. If, in fact, the frontiers of knowledge are being pushed further and further back (and that is bound to be the case), then God is being pushed back with them, and is therefore continually in retreat. We are to find God in what we know, not in what we don't know.
This counter-argument, which was deeply embedded within the 2000s/2010s neo-atheism movement, couldn’t be more timely.
First, because AI is becoming “the hot” debate topic these days. There are debate subreddits dedicated to it (r/aiwars), visceral reactions abound, and intellectual careers are being built around the topic. But I can’t help thinking that this debate is far more nuanced, and carries far greater consequences, than the one about God! People’s perspectives on AI are shaping geopolitics, infrastructure investments, and regulatory frameworks that will define the next decades. And unlike the question of God’s existence, which is fairly binary (at least from the perspective of an atheist), the question is not whether AI will influence society (it already does), but how.
Second, because bad arguments are rampant. Whether you are a pro or an anti (in r/aiwars lingo) is, in Gen Z speak, a vibe. And here we should note that hyping AI up can be very lucrative, and this was already the case well before GenAI (see Arvind Narayanan and Sayash Kapoor’s book for plenty of examples): a timeline where AI proves revolutionary is good news for the valuation of companies, from startups to behemoths. In that context, every breakthrough gets breathlessly covered as “AGI is here!” and every limitation gets dismissed as “just a scaling problem.”
But this post is a criticism of a specific pattern of AI skepticism that runs in the opposite direction: seeing humanness in the next frontier not yet conquered by AI. Before “delving” further, let me make clear why I think criticizing “anti-AI” takes is essential. AI is a genuinely consequential technology that is already reshaping labor markets, educational systems, and creative industries in measurable ways. This means we urgently need clear thinking about how society functions in an AI-saturated world. And we do not get to help shape that world for the better if debates center on what AI cannot yet do.
Human of the Gaps and the Stochastic Parrot
Back in 2021, Bender et al. published the very influential “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜”. The paper provides essential lingo for what I’m calling the “human of the gaps” argumentative strategy. The authors describe language models as systems that “haphazardly stitch together sequences of linguistic forms” based on probabilistic patterns in vast training data. But they do so “without any reference to meaning”; the text they generate is not grounded in “communicative intent, any model of the world, or any model of the reader's state of mind.”
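To make that picture concrete, here is a toy sketch of the kind of system the critique describes: a bigram model that stitches tokens together purely from co-occurrence statistics, with no reference to meaning. This is an illustration of the “stochastic parrot” idea, not of how modern LLMs are actually built (they use neural networks over far richer contexts), and every name and string in it is made up for the example.

```python
import random
from collections import defaultdict, Counter

# A tiny, made-up "corpus" just for illustration.
corpus = "the parrot repeats the phrase the parrot heard before".split()

# Count which token follows which (a bigram "language model").
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    """Stitch together a sequence by repeatedly sampling the next token."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break
        tokens, counts = zip(*options.items())
        out.append(random.choices(tokens, weights=counts, k=1)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the parrot repeats the phrase the parrot heard"
```

The sketch produces fluent-looking sequences while manipulating nothing but token statistics, which is exactly the property the paper's metaphor points at.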
The paper’s arguments are much less convincing today. Current LMs can solve unseen Math Olympiad problems through long “reasoning chains” (see Google’s official IMO gold), they definitely can build world models of the underlying data (see the Othello paper), and they can act as agents that autonomously use tools to accomplish goals (see ReAct). Yet “true” understanding, intelligence, and consciousness are so hard to define (and measure) that their achievement can always be moved to the next unresolved frontier.
This is where the “Human-of-the-Gaps” move emerges (or perhaps “the Marcus maneuver”?). Every time a frontier falls, skeptics retreat to the next one, insisting that this is where “real” humanness lies. Machines beat us at chess? Well, chess is just calculation. They learn to translate across dozens of languages? Translation is pattern-matching. They generate music, code, and art, and even pass professional licensing exams? These, we’re told, are still not “true” creativity or “true” reasoning. The essence of being human always lives one step further into the fog, wherever technology has not yet penetrated.
The problem with this move is not that it is wrong to notice gaps. Gaps are real and important. The problem is that it becomes a kind of motte-and-bailey: it defines “human” precisely as “whatever machines cannot do (yet).” That definition is, by construction, unfalsifiable. It is also unhelpful for guiding real-world decisions about governance, labor, or ethics. If we only anchor our self-worth and policy frameworks in what remains unachieved, then every technical advance destabilizes the foundation of the argument.
And this matters because while we bicker over whether models “really” understand, their measurable consequences accumulate. Generative models are not waiting for philosophical consensus to reshape education, journalism, software engineering, and creative industries. They are already altering workflows, wages, and the distribution of expertise. To insist that these systems are “mere parrots” is to miss the obvious: parrots that can code, draft contracts, or prove theorems are already socially transformative, regardless of whether their internal processes pass your metaphysical bar for “understanding.”
Another Path Forward
What should we do instead? Let’s turn again to Bender et al. (2021) and not throw the baby out with the bathwater. The paper made important contributions on bias amplification, environmental costs, and deployment risks that remain highly relevant. Crucially, those contributions were framed around what AI was at the time of writing, not around what it was not.
This is the crucial difference. A productive critique does not define itself around what the technology currently lacks, but rather around the concrete harms, limitations, and externalities that can be measured and governed. We don’t need to speculate about “true” consciousness to see that large-scale models are shaping energy policies around the world, that training data often encodes social prejudices, or that downstream deployment can exacerbate inequality in labor markets. Those are tractable, empirical claims, and they remain urgent regardless of whether models someday cross into capacities we would consider “understanding.”
Another path forward, then, is to build an evaluative vocabulary around what systems demonstrably do, not what they metaphysically are. This aligns with Narayanan and Kapoor’s proposal to view AI as “normal technology” like electricity or the internet: transformative, yes, but not a sui generis alien species. In their words, “to view AI as normal is not to understate its impact—even transformative, general-purpose technologies such as electricity and the internet are ‘normal’ in our conception. But it is in contrast to both utopian and dystopian visions (…) which treat it akin to a separate species.” Framing AI in this way shifts the focus from ontological riddles (whether models “really” understand or “truly” reason) to practical governance. It means centering debates on capabilities, externalities, and failure modes: what tasks these systems can perform and how reliably, what costs they impose on labor markets and the environment, and how they fail under distributional shifts or adversarial pressure.
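As a concrete illustration of that stance, here is a minimal sketch of a capability evaluation: score a system on observable pass/fail tasks and report how reliably it performs, rather than asking whether it “truly” understands. The Task type, the run_model function, and the example tasks are all hypothetical stand-ins for this sketch, not any particular benchmark or API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Task:
    prompt: str
    passes: Callable[[str], bool]  # an observable pass/fail check

def evaluate(run_model: Callable[[str], str], tasks: list[Task]) -> float:
    """Return the fraction of tasks whose output passes its check."""
    results = [task.passes(run_model(task.prompt)) for task in tasks]
    return sum(results) / len(results)

if __name__ == "__main__":
    # Purely illustrative tasks with checkable answers.
    tasks = [
        Task("What is 12 * 7?", lambda out: "84" in out),
        Task("Name the capital of France.", lambda out: "paris" in out.lower()),
    ]

    # A trivial stand-in "model" so the sketch runs end to end.
    def run_model(prompt: str) -> str:
        return "84" if "12 * 7" in prompt else "I am not sure."

    print(f"pass rate: {evaluate(run_model, tasks):.0%}")  # pass rate: 50%
```

The point of the sketch is the framing: a pass rate on defined tasks is something we can measure, track over time, and argue about with evidence, whatever we believe about what is happening inside the model.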
Anchoring debates this way helps us avoid the endless retreat into “Human-of-the-Gaps” territory, and instead grounds our choices in observable consequences. It also gives us a way to govern these technologies: to measure, benchmark, red-team, and legislate, rather than waiting for some metaphysical threshold to be crossed. Prediction and speculation are not useless—they help us scan the horizon—but they are no substitute for evidence. When speculation hardens into certainty, it distorts priorities: risks that are near and measurable get neglected, while imagined futures soak up attention.
Bonhoeffer’s advice scales: find our footing in what we know. For AI, that means resisting both the theological impulse to search for a sacred remainder machines can never touch and the eschatological impulse to declare transcendence at each leaderboard bump. The frontier will keep moving; that is the nature of science and technology. What must not move is our sense of value. Instead of tying human worth to whichever frontier machines have not yet crossed, we should tie it to responsibility, justice, and the institutions we build.


The practical measure of AI's function in society, then, is what it does or does not do to the quality of existing human life. I don't need to speculate about whether I've been displaced by AI and lost my job. I don't need to speculate about whether my child can read a difficult text without a robot helper, judge the helper's "advice" well when it's available, or think through a problem independently. And no speculation is needed to know that the noise pollution, among other environmental harms, of a giant AI data center is driving nearby residents to desperate measures.
The human-of-the-gaps could be reframed as the human as a living, breathing creature (animal, organism, however shorn of "human" essence we want to make it), but still a creature that acts and reacts in society: susceptible to powerful propaganda and ideological networks, and prone to acts of violence, or to abject passivity, when its existence is threatened, or when it is made to believe its existence is threatened. That propaganda can be instituted through AI as it algorithmically shapes and conditions thought and behavior, creating algorithmically attuned creatures, as dictated by the corporate interests that create it, drive it, and that it ultimately serves.