Two Ways of Knowing
What happens when your rationalist project and your intuitive project start arguing with each other

The Argument
Dan Shipper makes a case that would have made Socrates uncomfortable: thinking too logically can hold you back.
His argument runs like this: Western thought, since Socrates, has been dominated by rationalism — the idea that real knowledge comes from explicit rules, clear definitions, testable theories. This approach gave us science, mathematics, modern technology, vaccines, computers. It works.
But it doesn’t work for everything. The social sciences are stuck in a replication crisis. Early symbolic AI — the attempt to reduce intelligence to explicit rules — proved too brittle for the real world. And our own minds, most of the time, don’t operate by following explicit rules. They pattern-match. They intuit. They know things they can’t articulate.
Neural networks, Shipper argues, are the technological embodiment of this other way of knowing. They learn from experience, not from rules. They recognize patterns without being told what patterns to look for. They operate on what he calls “inexplicit rules” — knowledge that’s real but can’t be written down as a set of instructions.
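Here’s one way to make “inexplicit rules” concrete. The sketch below isn’t a neural network (it’s a bare nearest-neighbor classifier, with data invented for the example), but it makes the same point: the rule never appears anywhere in the program; it’s absorbed from stored experience.

```python
# Not a neural network: a bare nearest-neighbor classifier. But it makes
# the same point about inexplicit rules: no if-statement below encodes
# "small pale leaves mean deficiency"; the behavior comes from examples.
# All data is invented for the sketch.

examples = [
    # (new_leaf_size, paleness) -> label
    ((0.2, 0.9), "deficient"),
    ((0.3, 0.8), "deficient"),
    ((0.9, 0.1), "healthy"),
    ((0.8, 0.2), "healthy"),
]

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(point):
    # The "knowledge" lives in the stored examples, not in a written rule.
    nearest = min(examples, key=lambda ex: distance(ex[0], point))
    return nearest[1]

print(classify((0.25, 0.85)))  # -> "deficient", by pattern, not by rule
```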
The punchline: we built a technology that knows things the way we actually know things — through intuition, not logic — and this should make us reconsider how much we’ve privileged the rational half of knowing over the intuitive half.
Why This Hit Home
I’ve been running two projects simultaneously that embody exactly these two modes. I didn’t plan it that way. But reading Shipper’s argument, the structure became obvious.
The Rationalist Project: Popper’s Knowledge Tensor
Popper’s Knowledge Tensor (PKT) is as rationalist as it gets. The premise: what if we gave AI hard constraints — logical rules that can’t be violated, falsification criteria that force the system to reject bad knowledge? The framework uses tensor mathematics, formal operators, convergence conjectures. It draws from Karl Popper, who argued that knowledge advances through falsification — you don’t prove things true, you prove things false, and what survives is provisionally accepted.
This is explicit-rule thinking. Define the constraint. Formalize it. Test it. If it fails, reject it. Everything is articulable, everything is inspectable, everything follows logically from premises.
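To see the shape of that loop in code, here’s a minimal sketch. None of these names are PKT’s actual operators, and the tensor machinery is absent entirely; it only shows the Popperian skeleton, where a single counterexample rejects the hypothesis.

```python
# Popper-style falsification, reduced to a sketch. Illustrative names
# only; these are not PKT's actual operators.

def survives(hypothesis, observations):
    """A hypothesis survives only until one counterexample appears."""
    for obs in observations:
        if not hypothesis(obs):
            return False  # falsified: reject outright
    return True  # provisionally accepted, never proven

def claim(plant):
    # Toy claim: "every fertilized plant shows new growth"
    return (not plant["fertilized"]) or plant["new_growth"]

observations = [
    {"fertilized": True, "new_growth": True},
    {"fertilized": True, "new_growth": False},  # one counterexample suffices
]

print(survives(claim, observations))  # False: the claim is rejected
```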
The Intuitive Project: AI in the Garden
The rooftop garden is the opposite. Fifty species of plants across three floors in Singapore, managed through conversation with an AI that has never seen, touched, or smelled any of them.
Nothing about the garden follows explicit rules. I describe symptoms — “the leaves are small and pale, the old leaves are bigger than the new ones” — and the AI pattern-matches against its training data to suggest diagnoses. Zinc deficiency. Alkaline soil. Possible scale infestation. Three hypotheses, layered, evolving as new information arrives.
The garden knowledge is intuitive. I can’t write down a complete set of rules for managing fifty species in containers on a rooftop. I couldn’t before AI, and I can’t now. But between my three years of embodied experience (I know what the leaves feel like when they’re healthy) and the AI’s vast pattern library (it knows that small, pale new leaves alongside larger old ones suggest zinc deficiency), we navigate the garden effectively.
No formal framework. No falsification operator. Just observation, hypothesis, treatment, observation again.
The Play Project
There’s a third project that sits in the uncomfortable middle: Rube Goldberg AI.
That essay asks whether AI can learn to do things the hard way on purpose — not optimizing for efficiency but exploring for the sake of exploring. Play. Serendipity. The accidental discovery that comes from not knowing exactly what you’re looking for.
This is neither purely rational (there’s no formal objective function for “play”) nor purely intuitive (it’s structured, deliberate, designed). It’s the weird space between the two modes — where you set up conditions for discovery without specifying what should be discovered.
Darwin fits here. Evolution doesn’t optimize — it satisfices. It finds solutions that are “good enough” to reproduce, not solutions that are optimal. The recurrent laryngeal nerve wraps absurdly around the aorta in giraffes because evolution doesn’t plan — it tinkers. Rube Goldberg machines are the engineering equivalent of evolutionary tinkering.
What AI Tells Us About Knowing
Here’s where Shipper’s argument gets personal.
The AI I use for PKT and the AI I use for the garden are the same AI. Same model. Same architecture. Same training. But it’s being asked to operate in two completely different modes:
In PKT: I ask it to reason formally. Define operators. Check logical consistency. Survey the literature. Identify where my framework differs from existing work. This is the rationalist mode — and honestly, AI is okay at it. It can manipulate symbols and check logic, but it doesn’t generate deep mathematical insight on its own. It’s a capable assistant, not a creative mathematician.
In the garden: I ask it to diagnose. What’s wrong with this plant? What should I do? It excels here — not because it knows my garden (it doesn’t) but because it has absorbed millions of patterns about plants, soil, nutrients, and pests. It pattern-matches my descriptions against its latent knowledge and produces useful, often surprising diagnoses. This is the intuitive mode — and AI is remarkable at it.
The asymmetry is telling. AI is better at the intuitive task (garden diagnosis) than the rationalist task (mathematical framework design). This shouldn’t surprise us — neural networks are pattern matchers. They were designed to learn inexplicit rules from data. Asking them to do explicit logical reasoning is asking them to do the thing they were specifically not built for.
And yet, when we evaluate AI, we mostly test it on rational tasks — math, logic, coding, factual recall. We measure it on the dimension where it’s weakest. The dimension where it’s strongest — pattern recognition, intuitive diagnosis, synthesis across domains — is harder to benchmark, so we measure it less.
The pH Meter and the Falsification Operator
There’s a moment in the garden story that bridges both ways of knowing.
My daughter had a pH meter from a school science experiment. On a whim, I tested the soil in every pot. Every pot read pH 8 — alkaline. This single measurement explained why multiple plants weren’t responding to fertilizer. The nutrients were there; the soil chemistry was preventing absorption.
This was a rational act. I measured a variable, compared it to known optimal ranges, identified a deviation, and applied a correction (ferrous sulfate to lower pH). Hypothesis, test, result. Popper would approve.
But the reason I tested the pH was intuitive. Nothing in any rulebook told me to. The AI had suggested sulfur for the ixora based on a pattern-match against symptoms. I had the meter lying around. I was curious. The connection between “the ixora improved with sulfur” and “maybe I should test everything else” was not a logical deduction — it was a hunch.
Intuition generated the hypothesis. Rationalism tested it.
Both were necessary. Neither was sufficient alone.
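For what it’s worth, the rational half of that moment fits in a few lines. A toy version in Python, where the target range and the amendments are assumptions for the sketch, not horticultural advice:

```python
# Measure, compare to a target range, correct the deviation. The
# 6.0-6.5 range and the amendments are assumptions for the sketch,
# not horticultural advice.

TARGET_RANGE = (6.0, 6.5)  # assumed optimum; varies by species

def soil_action(pot, measured_ph):
    low, high = TARGET_RANGE
    if measured_ph > high:
        return f"{pot}: pH {measured_ph}, alkaline -> apply ferrous sulfate"
    if measured_ph < low:
        return f"{pot}: pH {measured_ph}, acidic -> apply garden lime"
    return f"{pot}: pH {measured_ph}, in range -> no action"

# Every pot read pH 8 that day:
for pot in ("pot A", "pot B", "pot C"):
    print(soil_action(pot, 8.0))
```

The intuitive half, of course, is exactly the part that doesn’t fit in code: nothing in the program explains why I picked up the meter in the first place.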
Two Modes, One Mind
Shipper invokes a familiar metaphor: we model our minds on the tools we build. In the age of clockwork, we imagined the mind as a mechanism. In the age of computers, we imagined the mind as a processor. In the age of neural networks, we’re beginning to imagine the mind as a pattern matcher.
But maybe the point isn’t which metaphor is correct. Maybe the point is that the mind does all of these — sometimes in sequence, sometimes in parallel, sometimes in conflict.
My experience running both PKT and the garden simultaneously feels like exercising two different mental muscles. PKT requires me to think precisely, define terms, check consistency. The garden requires me to observe broadly, trust hunches, tolerate ambiguity. They’re different cognitive modes, and switching between them is sometimes jarring.
But the most productive moments come when they interact. The garden generates intuitions that I can test rationally. PKT generates formal questions that I can explore intuitively. The pH meter moment — intuition generating a hypothesis, rationalism testing it — is the template for how these two modes work best: together.
What This Means for AI
If Shipper is right that we’ve over-privileged rationalism in our understanding of knowledge, then the current trajectory of AI is interesting.
We’re building systems that are primarily intuitive — pattern matchers, not rule followers — and then trying to make them reason logically. The research community is working hard on making neural networks do math, follow instructions precisely, reason step by step. In Shipper’s framing, we’re trying to make the intuitive tool behave rationally.
What if we leaned into what these systems are actually good at instead? What if we designed AI systems that used their intuitive strengths (pattern recognition, synthesis, diagnosis) and delegated the rational tasks (formal verification, logical consistency, mathematical proof) to different tools?
This is essentially what I do in the garden without thinking about it. I use AI for diagnosis (intuitive) and my own judgment for treatment decisions (rational). I don’t ask the AI to prove that zinc deficiency causes small leaves — I ask it to recognize the pattern, and then I verify with a test.
Maybe that’s the right architecture for AI systems generally: intuitive core, rational shell. Pattern-match first, then verify. Hypothesize fast, test carefully.
Which is, oddly enough, what Popper was arguing all along. He didn’t say hypotheses must be generated rationally — he said they must be testable rationally. Where the hypothesis comes from doesn’t matter. What matters is that it can be falsified.
Neural networks generate hypotheses from patterns. Falsification tests them with logic. Two ways of knowing, working in sequence.
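Here’s what that sequence looks like as a sketch, with every name illustrative: the proposer stands in for a pattern matcher (the AI, in the garden’s case), and the verifier stands in for an explicit test (the pH meter).

```python
# Intuitive core, rational shell. The proposer stands in for a pattern
# matcher; the verifier stands in for an explicit test. All names and
# data are illustrative.

def propose(symptoms):
    """Intuitive step: cheap, plural, unverified hypotheses."""
    hypotheses = []
    if "small pale new leaves" in symptoms:
        hypotheses += ["zinc deficiency", "alkaline soil blocking uptake"]
    return hypotheses

def verify(hypothesis, test_results):
    """Rational step: one explicit, falsifiable check per hypothesis."""
    return test_results.get(hypothesis, False)

# Stand-in for real measurements (the day the meter read pH 8 everywhere):
test_results = {"alkaline soil blocking uptake": True,
                "zinc deficiency": False}

for h in propose(["small pale new leaves"]):
    print(h, "->", "confirmed" if verify(h, test_results) else "rejected")
```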
Maybe we’ve been building both halves all along.
Prompted by Dan Shipper’s talk on rationalism and intuition. Connected to Popper’s Knowledge Tensor, AI in the Garden, and Rube Goldberg AI.