The Limits of Knowledge, Part III: Big Surprises in Little Packages

(Part of a continuing series: part I and part II)

I love tales of serendipitous scientific discovery. A spot of mold in a Petri dish leads to penicillin; a spill on the stovetop becomes vulcanized rubber. The true hero is penetrating curiosity: instead of dumping a ruined experiment in the trash, the keen-eyed scientist frowns and wonders, what does this mean?

Most stories of serendipity occur among test tubes and Bunsen burners, but today computers allow numerical experiments and computational “accidents.” And one of the most paradigm-shattering accidents of the twentieth century involved neither dawdling clocks moving near the speed of light, nor slippery electrons dancing around an atom, but humble calculations of the weather.

Newton’s mechanics engendered a view of a steadily ticking universe, as intricate but as ultimately predictable as clockwork. In the century following Newton, physicists and mathematicians devised powerful conceptual tools with which to track the cosmic machinery.

This is not easy. Humans have a hard time thinking of more than one thing at a time. Most tools on a physicist’s utility belt are tricks to make a large, complex problem look smaller and simpler.

For example, total energy and total momentum must balance out at each step. (Please note that the conservation of energy, etc., is not something idly assumed just because Grandpa said so, but is based upon enormous amounts of empirical evidence and years of experiments designed to test said conservation.) Following the energy helps a physicist to intuit the broad sweep of mechanical evolution.

But there have always been ambitions to understand everything. In 1814 the French mathematician Pierre-Simon, marquis de Laplace, postulated an intellect that, knowing at one moment the positions and velocities of all particles in the universe, could predict all future events. Such a thought sent a chill through philosophers, for it shattered the concept of free will, shackling us to Newton’s implacable cogs of force and momentum.


Of course, quantum mechanics places fundamental limits on simultaneous knowledge of the position and velocity of any particle, as devoted readers of this blog know. But even without the uncertainty principle, Laplace’s intellect faces an uphill battle–and a little butterfly leads the opposition.

During the 1940s and 1950s, general-purpose electronic computers, just the kind of tireless calculating intellect Laplace envisioned, began to appear: ENIAC in 1946, UNIVAC in 1951. By the early 1960s computers were being applied to a wide variety of problems, including meteorology.

The meteorologist Edward Lorenz was one such researcher, running his models on a Royal McBee LGP-30. Hoping to unravel the master principles behind weather, Lorenz developed a simplified set of differential equations with only twelve variables and solved them numerically.

At one point Lorenz wanted to reproduce some earlier results, and so re-ran his program. Comparing old and new results, he found they quickly diverged, giving wildly disparate predictions.

The source of the discrepancy was something seemingly innocuous: when restarting the calculation, Lorenz had typed in only three of the six decimal places the computer carried, for example approximating 0.506127 as 0.506.

But this small difference, a few parts in ten thousand, grew exponentially until the two runs looked utterly unlike.
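You can watch the same snowballing happen in a few lines of code. The sketch below uses the logistic map x → 4x(1−x), a one-line chaotic system, as a stand-in for Lorenz’s weather equations (the map is my choice of example, not his model), feeding it the full and the rounded starting values from his printout:

```python
# Two copies of the logistic map x -> 4x(1-x), started from values that
# differ only in the fourth decimal place -- mimicking Lorenz's restart.
full = 0.506127      # the value the computer carried internally
rounded = 0.506      # the three-decimal value typed back in

gaps = []            # |difference| between the two runs at each step
for _ in range(50):
    full = 4 * full * (1 - full)
    rounded = 4 * rounded * (1 - rounded)
    gaps.append(abs(full - rounded))

# The initial gap of about 0.0001 grows by orders of magnitude within
# a few dozen iterations, and the two runs stop resembling each other.
print(gaps[0], max(gaps))
```

The gap does not grow smoothly: it shrinks on some steps and is amplified on others, but on average it roughly doubles each iteration, which is exactly the exponential growth Lorenz saw.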

Many scientists would have simply concluded that the set of equations was “pathological” and started over from scratch. But Lorenz saw deeper.

He had discovered a “sensitive dependence on initial conditions.” Originally Lorenz used the image of a seagull flapping its wings and changing the weather around the world, but later settled on the more poetic idea of a butterfly as the revolutionary.

Lorenz’s genius was to realize that this sensitive dependence on initial conditions, rechristened “chaos” by James Yorke in 1975, was not pathological or rare but in fact common. It occurs in even simpler systems: Lorenz eventually boiled his equations down to just three variables, with the same result.
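That three-variable system is small enough to write out in full: dx/dt = σ(y−x), dy/dt = x(ρ−z) − y, dz/dt = xy − βz, with Lorenz’s classic parameters σ = 10, ρ = 28, β = 8/3. Here is a minimal sketch that integrates two copies of it from starting points differing by one part in a million (the integrator, step size, and starting points are illustrative choices, not Lorenz’s originals):

```python
# Lorenz's three-variable system, integrated with fourth-order Runge-Kutta.
def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(state, dt):
    """Advance the state by one RK4 time step."""
    def nudge(s, k, h):
        return tuple(si + h * ki for si, ki in zip(s, k))
    k1 = lorenz(state)
    k2 = lorenz(nudge(state, k1, dt / 2))
    k3 = lorenz(nudge(state, k2, dt / 2))
    k4 = lorenz(nudge(state, k3, dt))
    return tuple(s + dt / 6 * (a + 2 * b + 2 * c + d)
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

# Two trajectories whose starting x-coordinates differ by 0.000001.
a = (1.0, 1.0, 1.0)
b = (1.000001, 1.0, 1.0)
dt, steps = 0.01, 3000
max_gap = 0.0
for _ in range(steps):
    a, b = rk4_step(a, dt), rk4_step(b, dt)
    gap = max(abs(p - q) for p, q in zip(a, b))
    max_gap = max(max_gap, gap)

# The millionth-part difference grows until it is as large as the
# attractor itself, even though both runs remain perfectly bounded.
print(max_gap)
```

Both trajectories stay forever on the same two-lobed attractor; they simply stop agreeing about where on it they are.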

As it turns out, Lorenz was not the first to discover the principle of chaos. In the 1880s Henri Poincaré (who, always the bridesmaid, also helped lay the foundations for relativity) realized that divergent solutions would exist for systems of three or more bodies. But most physicists shied away from Poincaré’s finding, not believing the universe to be so obstinately unpredictable. It required large-scale number-crunching to rub our noses in the truth.


Chaos is not the randomness found in quantum mechanics. Poincaré and Lorenz had discovered deterministic chaos, which, it turns out, arises easily in Newton’s clockwork. Laplace’s intellect is doomed to failure the moment it rounds to a fixed number of decimal places.

And, importantly, chaos is more than just the slogan “a sensitive dependence on initial conditions.”  It is a rigorous topic, bristling with many powerful mathematical tools. (This can be contrasted with so-called “complexity,” a field devoid of rigor but brimming with meaningless aphorisms, making it beloved of consultants and other charlatans.)

The Lyapunov exponent, for example, is a precisely defined measure of just how sensitive a system is to initial conditions.
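Concretely, the Lyapunov exponent is the long-run average of ln|f′(x)| along an orbit: the average rate, per step, at which nearby trajectories separate. For the logistic map x → r·x·(1−x) at r = 4 the exponent is known to equal ln 2, so a numerical estimate can be checked against theory (the map, starting point, and iteration counts below are my illustrative choices):

```python
import math

# Estimate the Lyapunov exponent of the logistic map at r = 4 by
# averaging ln|f'(x)| along a long orbit; f'(x) = r - 2*r*x.
r, x = 4.0, 0.123456
for _ in range(1000):            # discard a transient so the orbit
    x = r * x * (1 - x)          # settles onto typical behavior

total, n = 0.0, 200_000
for _ in range(n):
    total += math.log(abs(r - 2 * r * x))
    x = r * x * (1 - x)

lyapunov = total / n
# A value near ln 2 (about 0.693): nearby orbits separate by roughly
# a factor of two per iteration, on average.
print(lyapunov)
```

A positive exponent like this is the quantitative signature of chaos; a negative one means nearby orbits converge and the system is predictable.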

And some systems are not chaotic, but “integrable”–for each variable there is a conserved quantity like energy or momentum, reducing the problem to a set of independent, one-dimensional systems. But most systems are chaotic, or have interwoven chaotic and nonchaotic regimes (in technical terms–and I know, Dear Reader, you love technical terms–interwoven regions with positive and negative Lyapunov exponents).

And then there is Lorenz’s three-variable system, whose two-lobed trajectory, a staple of screensavers everywhere and sometimes called the Lorenz butterfly, traces similar yet never quite identical paths over and over and over.

Lorenz' Butterfly, a Strange Attractor


The butterfly effect has been used many times in both genre and non-genre fiction.

Ray Bradbury’s 1952 short story, “A Sound of Thunder,” anticipated the butterfly effect: a time-traveler crushes a Cretaceous butterfly and changes history.

It’s unknown if Lorenz was aware of Bradbury’s story.

Kim Stanley Robinson, whose sf often explores the vagaries of history, even wrote an essay titled “A sensitive dependence on initial conditions,” and his new and brilliant novel, Galileo’s Dream, envisions alternate paths history might have taken, hinging on Galileo’s trial before the Inquisition.

The butterfly effect also gives us pause as to whether Asimov’s psychohistory (invented well before Lorenz, to be sure) would ever have a chance; Asimov, with a Ph.D. in biochemistry, was clearly reasoning by analogy with thermodynamics: statistical averages over the energy and momentum of trillions of random gas particles.

But a better paradigm for history might be the Lorenz butterfly. Each lobe of the attractor could be one swing of the historical pendulum…except that when and where the trajectory will jump from one lobe to the other, no one can predict.

And that’s part of the fun for sf writers. Let’s be honest: sf’s track record in predicting the future is poor. We got communication satellites and cell phones right, but where are the jetpacks? The orbiting Hiltons? The AIs plotting genocide of the human race? Sf cannot, and should not, try to play the role of Laplace’s intellect.

Instead, as Kim Stanley Robinson emphasizes, we write morality plays about how skittish and fickle history is. Down one path lies nuclear Armageddon, down another galactic utopia. And sitting at the crossroads, resting her wings for a moment, is our heroine: a little yellow butterfly.
