The Limits of Knowledge, Part IV: Too Clever for Its Own Good

(Last in a series; see Part I, Part II, and Part III)

“Clever Hans” was a mathematician who lived a century ago. He was also a horse, a genius horse who could add, subtract, multiply, and even calculate dates, giving the answer by tapping his hoof.

But in 1907 it was discovered that Clever Hans was in fact no better at arithmetic than any other horse. Hans was just reading subtle, unconscious cues from his owner, tapping until he reached the expected answer.


Is science real? Is it objective knowledge of a world independent of us? Or is it just a cultural invention, an arbitrary game, something we project onto the world, with scientists tapping out results until, like Clever Hans, we get the result we unconsciously want?

Some postmodernists think the latter is true, and point to experiments swayed by unexamined assumptions, including the “Clever Hans” effect in animal intelligence experiments.   They then conclude that all science is equally rigged.

The critiques have some validity, but the examples are heavily weighted towards the sociological and anthropological sciences; that is, we easily fool ourselves concerning issues that touch upon us as humans.

But on the other end of the spectrum the story is different.  Out among the cold reaches of the galaxies and nestled in the hearts of atoms, we have found disturbing truths so contrary to human experience that they can’t be the result of some Very Clever Hans, trying to please our subconscious prejudices.

In previous essays I’ve written about some of these disturbing truths, focusing on surprising discoveries on the limits of our knowledge: the elasticity of time and space in relativity, the precision blur of the subatomic world, and even, in chaos theory, how butterflies can add up to hurricanes.  In this, the final essay in the series, I want to discuss how the rigorous pursuit of truth itself has been tripped up by mathematical ambition.


Mathematics is the purest and most universal truth. 1+1 = 2, always.  And while mathematics can be beautiful, like a poem, we don’t want our theorems to be as fleeting as a metaphor. We want them to be inexorable.

Or, as the great German mathematician David Hilbert (who in 1915 almost beat Einstein to the punch in deriving general relativity) proposed in the 1920s, we want a set of theorems that are complete and consistent.

Completeness means that every mathematical statement can be either proved or disproved, while consistency means that no contradictions can be derived.

Alas, Hilbert’s program turned out to be impossible–indeed, in an act of mathematical judo, it was proved to be impossible, in the work of a neurotic Austrian mathematician named Kurt Gödel.


A curious fact is all sentences with an even number of words are wrong.

(I’ll wait while you count.)

Ah, paradox. The bane and the joy of science.

Sometimes a paradox–such as the twin paradox in relativity, whereby two people can age at different rates–simply indicates the limitations of our intuition.

At other times a paradox indicates a serious logical misstep. By dividing by zero, one can obtain 1 = 7 or virtually any result. Mathematicians and theoretical physicists deliberately try to provoke such paradoxes, in order to test the soundness of a theory.
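One classic version of the divide-by-zero trick, for anyone who hasn’t seen it (the fatal step is the division):

```
a = b                      (start with any two equal numbers)
a² = ab                    (multiply both sides by a)
a² − b² = ab − b²          (subtract b²)
(a + b)(a − b) = b(a − b)  (factor both sides)
a + b = b                  (divide by a − b ... but a − b = 0)
2b = b, hence 2 = 1        (since a = b)
```

From 2 = 1 you can manufacture 1 = 7 or any equation you like; one illegal division poisons the whole system.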

And sometimes the paradox is the result.

This is not a pipe

In logic, most troubling are variants of the liar’s paradox, like the one given above. It’s easy to dismiss as a trick, a string  of words that is grammatically correct but ultimately nonsensical.

The problem comes when you import the liar’s paradox into mathematics. Consider Russell’s paradox: the set of all sets that are not members of themselves. (I’ll wait again while you think about it.) It’s a paradox, but how do you set up the rules to exclude it? Mathematicians are too scrupulous to just say “…nah.” Russell’s own solution was an unwieldy theory of types that disallowed self-referential statements. But recursion is a powerful and important mathematical tool, so self-reference is difficult to exclude, and excluding it can leave you with an incomplete theory.
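To see how stubborn the paradox is, here is a minimal sketch in Python (my own illustration, not Russell’s formalism), modeling a “set” as its membership test:

```python
# Python sets cannot contain other sets, so model a "set" as its
# membership test: a function that answers "is x a member?"

def russell(s):
    # "The set of all sets that are not members of themselves":
    # s belongs to russell exactly when s does NOT belong to s.
    return not s(s)

# Asking whether russell belongs to itself forces the paradox:
# russell(russell) = not russell(russell), a regress that never bottoms out.
try:
    russell(russell)
except RecursionError:
    print("paradox: the question never terminates")
```

The computer, asked the paradoxical question, simply chases its own tail until it gives up.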

Gödel was a scrupulously honest mathematician, reluctant to publish or give talks until he had made his proofs airtight, but he turned the liar’s paradox into–well, I was going to say, an art form, but in fact he achieved the exact opposite. He turned the liar’s paradox into a mechanical inevitability.


There are numbers–one, two, forty-eight–and there are statements about numbers. Three is one more than two. Seven is a prime number. Six is the sum of its proper divisors. Eight is not a prime number. Gödel’s trick was to close the loop, to turn a statement about a number into a number.

Today in the computer age this seems a modest idea. After all, computers fundamentally work with ones and zeroes, and letters are represented by ASCII codes — the letter ‘A’ is represented by 65 (1000001 in binary), ‘B’ by 66, and so on.
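To make the analogy concrete, here is a toy encoding in Python in the ASCII spirit described above, one big integer per statement. (Gödel’s actual scheme used products of prime powers, but the idea is the same.)

```python
def encode(statement: str) -> int:
    # Pack the statement's ASCII bytes into a single big integer.
    return int.from_bytes(statement.encode("ascii"), "big")

def decode(number: int) -> str:
    # Recover the statement from its number.
    length = (number.bit_length() + 7) // 8
    return number.to_bytes(length, "big").decode("ascii")

print(encode("A"))  # 65, matching the ASCII code above
statement = "Seven is a prime number."
g = encode(statement)          # the statement, now a (large) number
print(decode(g) == statement)  # True: nothing is lost in translation
```

Once statements are numbers, arithmetic can talk about statements, including statements about arithmetic itself.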

Code talkers

But Gödel’s introduction of the idea was a revelation (and his work, along with Turing’s, helped to set the stage for modern computers).  This allowed him to create (the details of which the margins of this post are too small to contain) a self-referential statement:

This theorem cannot be proved.

Let’s think about this: if it were false, then it could be proved; but anything that can be proved must be true, a contradiction. So the only option is that the theorem is true…and unprovable.

Gödel showed that if you have a sufficiently powerful mathematical system, something strong enough to prove general statements about arithmetic, you can always play this trick.

This means that most interesting mathematical systems are incomplete: there are “true” statements that, nonetheless, cannot be proved. (His second theorem shows that such a system can never prove its own consistency.)

Gödel is like a boy who sets up his model train to crash head-on. Not only that, he showed that any train set can and will, eventually, crash head-on.


Of course, we see that “This theorem cannot be proved” must be true and unprovable. We can recognize truths that mechanized mathematical systems–i.e. computers–do not. Physicist Roger Penrose even turned this into a series of books, beginning with The Emperor’s New Mind, arguing that artificial intelligence is intrinsically impossible, that humans are intrinsically superior to machines. He also, implausibly, invokes quantum mechanics as the source of human intuition.

Penrose is a brilliant, brilliant man, but he is wrong about the implications of Gödel’s theorems. Gödel’s theorems apply to closed, deterministic systems working from a fixed set of axioms. Humans are open systems, adding new information all the time; our brains are jittery, stochastic machines, and our thoughts can be serendipitously bumped from one track to another.

Also, Penrose wrongly assumes that a single human can, or could, apprehend any and all mathematical truth. But there is no evidence to support that view. It is quite possible we are all Gödelian systems, but with different starting axioms. Some of us can apprehend some mathematical truths but not others; others apprehend a different set of truths. (Many of us, admittedly, have difficulty with math altogether.)

Unsurprisingly, Gödel’s theorems have long fascinated SF authors. In the classic Star Trek episode “I, Mudd,” Kirk and Harry Mudd defeated their android captors by using the liar’s paradox; in D. F. Jones’ novel The Fall of Colossus, the world-controlling computer (an ancestor of Skynet, sans Terminators) is overthrown the same way. Greg Egan’s stories “Luminous” and “Dark Integers” have the protagonists discover, communicate with, and ultimately wage war against a nebulous alternate universe, all through theorems.


Philip Pullman, in his “His Dark Materials” trilogy, envisions the alethiometer, a device that can answer any question. But some questions cannot be answered; others still have answers that we do not like.

The ultimate message of Gödel’s theorems is not that we humans are smarter than a pocket calculator. It is the judo quality of our reasoning: that we can understand so much, yet so much is out of our reach, and yet again we can know, without doubt, that the boundary exists.

Indeed, I would claim that our understanding of the limits of our understanding–not only from Gödel’s theorems, but also Einstein’s relativity, Heisenberg’s quantum mechanics, and Lorenz’s chaos–is the greatest achievement of the human mind.

(Pictures: René Magritte, La trahison des images; xkcd comic)
