Archive

Archive for July, 2012

That Great Big Wave Pool In The Sky

When we imagine the distant future, we tend to envision some combination of industrial-strength social order on starships and preindustrial-strength chaos in exoplanetary exploration. Star Trek, Star Wars, City Mouse and Country Mouse, etc. There’s still a lot of technology-versus-organics going on out there in the big wide multiverse of fiction.

What’s more interesting is the fact that humans will probably never survive very long away from the tiny wet marble we evolved on if we’re unable to forge a successful marriage between those two influences upon our bodies and minds.

No, there is no living system on Earth evolving in such a way that we can simply encapsulate it and use it to fly ourselves to other stars. Tardigrades seem to do alright for themselves in space in spite of the radiation, cold, and total lack of food, water and air. But humans aren’t that hardy. Or that cute.

Cuter than a tardigrade? ('Wild Thing' by Kay Holt)

Yes, it’ll take unprecedented degrees of human cooperation and organization and invention to realize the technologies and infrastructure we’ll need to support ourselves off-Earth. We’re very good at gadgets; maybe someday there’ll even be an app for that. But all our best engineers working together for generations will never be able to fix what’s wrong on a starship devoid of wildlife and wide open spaces.

If we don’t want to self-destruct on our way to the stars, we’re going to need a bigger ‘boat.’ One big enough to carry an ocean inside. And a bit of forest. Some lovely crags. An icy brook here and there…

Bearing in mind that we essentially need to build small inside-out planets to sustain us on our [hopefully] inevitable deep space treks, the question I have for the writers among us is this: What’s in your interstellar terrarium?

Science Fiction Fails Immunology

Immunology is a rapidly developing science, but it still isn’t well-known or well-understood. And it shows, especially in science fiction. Time travelers and aliens easily adjust to life on present-day Earth without the need for vaccinations. Humans explore unknown planets with breathable atmospheres without any protection from the millions of potentially disease-causing microbes all around them. Or they willingly remove what protection they have…*cough*Prometheus*cough*. In science fiction it’s practically treated as a tautology: an atmosphere that can support life is an atmosphere safe for humanoids to breathe. H. G. Wells more accurately portrayed the consequences of making this assumption in War of the Worlds. When the invading aliens were exposed to Earth’s atmosphere, they were eventually killed off by microbial infections.

This glaring oversight is sort of understandable, though. Immunology, the study of the immune system, is a relatively new field of science. The germ theory of disease wasn’t even validated until the late 19th century. Antibodies, complex proteins that enable the immune system to adapt to and remember past infections so that you generally don’t get sick again from the same thing, weren’t fully characterized until the 1960s. Gerald Edelman and Rodney Porter won a Nobel Prize for finally putting the pieces together, with collaborators like Joseph Gally contributing along the way. It was sort of a big deal.

If there is any life in an alien environment, there will be microbes. Higher forms of life may show up as well, but microbes are a guarantee if we are assuming that life is involved. If these microbes are from an alien environment, then every single one of them could be completely foreign to your immune system. There are a couple of different ways this could play out.

Microbes that cause disease generally do so entirely by accident. A survival mechanism of the invading organism just happens to interact with your bodily systems in such a way as to cause sickness or death. This is why our ability to recognize something as ‘foreign’ and get rid of it quickly is so important. Immune cells ‘see’ microbes by recognizing proteins or structures that are microbe-specific, either through specialized receptors, antibodies or other immune proteins. Given that your immune system is being exposed to something it has never encountered before, it may be entirely unable to recognize these new microbes as microbes. This would basically mean that you would be defenseless against any incidentally lethal effects they might have on your cells.

Some of the mechanisms humans have evolved for dealing with foreign microbes are particularly unpleasant. The symptoms you experience when you have a cold or the flu, for example, are actually caused by your immune system trying to rid your body of the virus. The additional mucus protects your vulnerable membranes. The increase in body temperature makes for an environment inhospitable to viral replication. Your cells that are already infected are targeted and destroyed to prevent the spread of infection. The immune response, if activated strongly enough, has the potential to kill you outright…as those with severe allergies are well aware. The prospect of your immune system overreacting to foreign microbes on an alien world is at least as terrifying as the prospect of it not responding at all.

Of course, it is also entirely possible that nothing would happen at all. Whatever microbes developed on this hypothetical planet may not be able to survive inside a human body. They may not have the potential to negatively affect human cells while simultaneously being alien enough that the immune system doesn’t recognize them. The point is that whenever someone breathes in an alien atmosphere they are taking a serious risk, and that risk is often ignored or glossed over in science fiction.

This problem applies even more strongly to human time travelers, who are already susceptible to human diseases but simply do not have the immunity or the vaccinations to protect them against the latest strain of measles or influenza. They could also introduce diseases from their own time, like smallpox or multi-drug resistant bacteria, which present-day Earth is poorly equipped to deal with. The results could be devastating.

Humans are bacteria factories. Our mouths, guts and skin are absolutely coated with microbes. Our immune system has been adapting to all of these microbes since we were born and recognizes them as ‘normal’ rather than foreign. They generally don’t cause disease unless something goes wrong. In fact, we would quite literally die without them. But introducing our normal bacteria into a foreign environment could have disastrous consequences, and I’m pretty sure that accidentally causing a plague or wiping out the local population would violate the Prime Directive as well as significantly alter the course of history.

I am not suggesting that any piece of science fiction will receive a fail for *gasp* scientific inaccuracy if it doesn’t acknowledge the immunological implications of every decision being made. Requiring exposition describing vaccinations or atmospheric safety precautions would be boring and honestly irrelevant in many cases. But sometimes it is relevant and could really add something to the story. Good science fiction makes you think…about the future, and about how we might deal with extraordinary circumstances. Even if most people don’t know how their immune system works, disease is something that we all know and fear. Acknowledging the risk of personal contamination or plague in the right context creates character anxiety and drama because it is instantly relatable, even in an alien environment.

The absurd complexity of the immune system itself, combined with the infinite variety of microbial life, makes for a wellspring of potential creativity. You can’t remove microbes from the picture. They are part of us, and they will be a part of any ecosystem that supports life. These are obstacles that humanity will have to overcome if we ever hope to explore new worlds. What better way to inspire their solutions than through science fiction?

“Arsenic” Life, or: There Is TOO a Dragon in My Garage!

Note: This article was originally posted at Starship Reckless.

GFAJ-1 is an arsenate-resistant, phosphate-dependent organism — title of the paper by Erb et al, Science, July 2012

Everyone will recall the hype and theatrical gyrations which accompanied NASA’s announcement in December 2010 that scientists funded by NASA astrobiology grants had “discovered alien life” – later modified to “alternative terrestrial biochemistry” which somehow seemed tailor-made to prove the hypothesis of honorary co-author Paul Davies about life originating from a “shadow biosphere”.

As I discussed in The Agency that Cried “Awesome!”, the major problem was not the claim per se but the manner in which it was presented by Science and NASA and the behavior of its originators. It was an astonishing case of serial failure at every single level of the process: the primary researcher, the senior supervisor, the reviewers, the journal, the agency. The putative and since-disproved FTL neutrinos stand as an interesting contrast: in that case, the OPERA team announced their result to the community as a puzzle, and asked everyone who was willing and able to pick the results apart and find whatever error might be lurking in the methods of observation or analysis.

Those of us who are familiar with bacteria and molecular/cellular biology techniques knew instantly upon reading the original “arsenic life” paper that it was so shoddy that it should never have been published, let alone in a top-ranking journal like Science: controls were lacking or sloppy, experiments crucial for buttressing the paper’s conclusions were missing, while other results contradicted the conclusions stated by the authors. It was plain that what the group had discovered and cultivated were extremophilic bacteria that were able to tolerate high arsenic concentrations but still needed phosphorus to grow and divide.

The paper’s authors declined to respond to any but “peer-reviewed” rebuttals. A first round of eight such rebuttals, covering the multiple deficiencies of the work, accompanied its appearance in the print version of Science (a very unusual step for a journal). Still not good enough for the original group: now only replication of the entire work would do. Of course, nobody wants to spend time and precious funds replicating what they consider worthless. Nevertheless, two groups finally got exasperated enough to do exactly that, except they also performed the crucial experiments missing in the original paper: for example, spectrometry to discover if arsenic is covalently bound to any of the bacterium’s biomolecules and rigorous quantification of the amount of phosphorus present in the feeding media. The salient results from both studies, briefly:

– The bacteria do not grow if phosphorus is rigorously excluded;
– There is no covalently bound arsenic in their DNA;
– There is a tiny amount of arsenic in their sugars, but this happens abiotically.

The totality of the results suggests that GFAJ-1 bacteria have found a way to sequester toxic arsenic (already indicated by their appearance) and to preferentially ingest and utilize the scant available phosphorus. I suspect that future work on them will show that they have specialized repair enzymes and ion pumps. This makes the strain as interesting as other exotic extremophiles – no less, but certainly no more.

What has been the response of the people directly involved? Here’s a sample:

Felisa Wolfe-Simon, first author of the “arsenic-life” paper: “There is nothing in the data of these new papers that contradicts our published data.”

Ronald Oremland, Felisa Wolfe-Simon’s supervisor for the GFAJ-1 work: “… at this point I would say it [the door of “arsenic based” life] is still just a tad ajar, with points worthy of further study before either slamming it shut or opening it further and allowing more knowledge to pass through.”

John Tainer, Felisa Wolfe-Simon’s current supervisor: “There are many reasons not to find things — I don’t find my keys some mornings. That doesn’t mean they don’t exist.”

Michael New, astrobiologist, NASA headquarters: “Though these new papers challenge some of the conclusions of the original paper, neither paper invalidates the 2010 observations of a remarkable micro-organism.”

At least Science made a cautious stab at reality in its editorial, although it should have spared everyone — the original researchers included — by retracting the paper and marking it as retracted for future reference. The responses are so contrary to fact and correct scientific practice (though familiar to politician-watchers) that I am forced to conclude that perhaps the OPERA neutrino results were true after all, and I live in a universe in which it is possible to change the past via time travel.

Science is an asymptotic approach to truth; but to reach that truth, we must let go of hypotheses in which we may have become emotionally invested. That is probably the hardest internal obstacle to doing good science. The attachment to a hypothesis, coupled with the relentless pressure to be first, original, and paradigm-shifting, can lead to all kinds of dangerous practices – from cutting corners and omitting results that “don’t fit” to outright fraud. This is particularly dangerous when it happens to senior scientists with clout and reputations, who can flatten rivals and who often have direct access to pop media. The result is shoddy science and a disproportionate decrease of scientists’ credibility with the lay public.

The two latest papers have done far more than “challenge” the original findings. Sagan may have said that “Absence of evidence is not evidence of absence,” but he also explained how persistent lack of evidence after attempts from all angles must eventually lead to the acceptance that there is no dragon in that garage, no unicorn in that secret glade, no extant alternative terrestrial biochemistry, only infinite variations at its various scales. It’s time to put “arsenic-based life” in the same attic box that holds ether, Aristotle’s homunculi, cold fusion, FTL neutrinos, tumors dissolved by prayer. The case is obviously still open for alternative biochemistry beyond our planet and for alternative early forms on Earth that went extinct without leaving traces.

We scientists have a ton of real work to do without wasting our pitifully small and constantly dwindling resources and without muddying the waters with refuse. Being human, we cannot help but occasionally fall in love with our hypotheses. But we have to take that bitter reality medicine and keep on exploring; the universe doesn’t care what we like but still has wonders waiting to be discovered. I hope that Felisa Wolfe-Simon remains one of the astrogators, as long as she realizes that following a star is not the same as following a will-o’-the-wisp — and that knowingly and willfully following the latter endangers the starship and its crew.

Relevant links:

The Agency that Cried “Awesome!”

The earlier rebuttals in Science

The Erb et al paper (Julia Vorholt, senior author)

The Reaves et al paper (Rosemary Redfield, senior author)

Images: 2nd, Denial by Bill Watterson; 3rd, The Fool (Rider-Waite tarot deck, by Pamela Colman Smith)

Landing on Other Planets: Seven Minutes of Terror

In less than a month, NASA’s Mars Science Laboratory (MSL) will land on Mars. But to nonchalantly say “it will land on Mars” overlooks just how hard it is to land on another planet, especially one with an atmosphere. In science fiction, it’s commonplace to see ships land on planets like it’s no big deal, so I thought it would be worth taking a look at what NASA has to do to land MSL safely on Mars next month.

MSL launched on the day after Thanksgiving last year and has been drifting through space on a collision course with Mars ever since. The spacecraft will hit the top of the martian atmosphere at around 8000 miles per hour, and it has to walk a fine line as it slows down and descends to the surface. Slow down too fast and you burn up in the atmosphere. Slow down too slowly… well, then you’re just another crater.
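
To put a number on the problem, here is a back-of-the-envelope estimate of the kinetic energy that has to go somewhere during the descent. (The entry mass of roughly 3300 kg is my assumed figure for illustration, not one from the mission; 8000 mph is about 3600 m/s.)

\[
E_k = \tfrac{1}{2} m v^2 \approx \tfrac{1}{2}\,(3300\ \mathrm{kg})\,(3600\ \mathrm{m/s})^2 \approx 2\times10^{10}\ \mathrm{J}
\]

That is on the order of five tons of TNT, nearly all of which must be shed as heat and drag between the top of the atmosphere and the ground.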

And it’s not just a matter of killing all that excess speed safely. You also want some control over where you end up on the surface. MSL has the most precise landing system ever used for a Mars mission, allowing us to drop the rover onto the floor of Gale Crater at the base of an 18,000 ft tall mountain of layered rocks. To do this, MSL can actually steer itself as it hurtles through the upper atmosphere. Early in the descent, the capsule drops a couple of tungsten bricks, offsetting its center of mass. The shifted center of mass tilts the capsule so that it generates lift as it decelerates. Computer-controlled jets fire to adjust the trajectory, giving us pinpoint landing capability.

Image credit: NASA/JPL

Once the capsule has slowed to a mere 1000 mph, it releases a supersonic parachute and, no longer in danger of burning up, jettisons the heat shield. The parachute claws at the thin atmosphere and slows the rover down to a few hundred miles per hour.
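
To get a feel for what “claws at the thin atmosphere” means, the standard drag equation with some stand-in numbers is instructive. (Every value below – air density, speed, drag coefficient, canopy area – is an assumption chosen for illustration, not a mission spec.)

\[
F_d = \tfrac{1}{2}\,\rho\,v^2\,C_d\,A \approx \tfrac{1}{2}\,(0.006\ \mathrm{kg/m^3})\,(400\ \mathrm{m/s})^2\,(0.6)\,(350\ \mathrm{m^2}) \approx 1\times10^{5}\ \mathrm{N}
\]

Even in air less than a hundredth as dense as Earth’s, a canopy that size hauls back on a descent stack of roughly two tonnes with about 100 kN of force – several g of deceleration – which is why the parachute has to survive a supersonic opening without shredding.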

For previous missions, once the parachute brought the rover close enough to the surface, the rover would disconnect from the parachute and inflate a tetrahedron of giant airbags, allowing it to bounce and roll to a stop. (If you ever had to do an egg-drop project in physics class, it’s like that, but the egg costs hundreds of millions of dollars and if it breaks you will have destroyed a decade’s worth of work by more than a thousand people.)

MSL is too heavy to land on airbags, so the engineers decided to use rockets. The problem is, rockets kick up dust, which can damage the rover’s delicate moving parts and scientific instruments. The solution? Wear the rockets like a jetpack, and then lower the rover on a winch when it gets close enough to the ground.

When the wheels finally touch down, explosive bolts cut the bridle and the jetpack blasts away to crash safely in the distance.

Image Credit: NASA/JPL

All of this takes about 7 minutes. Mars will be about 14 light-minutes away, so the rover’s computer does it all on its own. All of us on the mission will just be watching helplessly. They call it the “seven minutes of terror.”
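
The arithmetic behind that helplessness is simple. At landing, Mars will be roughly 250 million kilometers from Earth, so a radio signal takes

\[
t = \frac{d}{c} \approx \frac{2.5\times10^{8}\ \mathrm{km}}{3.0\times10^{5}\ \mathrm{km/s}} \approx 830\ \mathrm{s} \approx 14\ \mathrm{min}
\]

to cross the gap. The entire descent lasts half that long, so by the time we hear that the spacecraft has hit the top of the atmosphere, it will already have been on the ground – intact or otherwise – for seven minutes.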

So, next time you are watching or reading (or writing) science fiction, spare a moment to consider: How are the spacecraft going to solve the problem that NASA’s engineers had to solve for MSL? What are the requirements for the landing? Does it have to be precise, or do they just need to get down safely? How fast is the ship going? How is it going to kill off all of that kinetic energy without killing off the crew? Is it safe for the propulsion system to kick up dust and contaminate the surface, or is a bit more creativity called for? And then, if the ship is like most in science fiction, how is it going to do all of that again and again as it hops from planet to planet? Does it need its heat shield replaced every time? What about parachutes? Fuel?

There’s a reason all Mars missions so far have been one-way trips. It’s hard enough to survive the seven minutes of terror once. Launching from the surface and surviving it again when returning to Earth is not possible. Yet.


Battlestar Galactica burns as it enters the atmosphere.


A Recipe for Sentience: The Energetics of Intelligence

“No man can be wise on an empty stomach.”

- Mary Anne Evans, under the pseudonym George Eliot


We humans have been suffering from a bit of a self-image problem for the last half century.

First we were Man the Tool-Maker, with our ability to reshape natural objects to serve a purpose acting to separate us from the brute beasts.  This image was rudely shattered by Jane Goodall’s discovery in the 1960s that chimpanzees also craft and use tools, such as stripping leaves from a twig to fish termites out of their nest to eat, or using the spine of an oil palm frond as a pestle to pulverize the nutritious tree pulp.

Then we were Man the Hunter.  We’d lost our tool-making uniqueness but we still had our ability to kill, dismember, and eat much larger animals with even simple tools, and it was thought that this ability unlocked enough energy in our diet to fuel the growth of larger body size and larger brains [1].  This idea rather famously bled into popular culture and science fiction of the time, such as the opening to the movie 2001: A Space Odyssey.  However, we would later find out that although it is not a large component of the diet, chimpanzees eat enough meat to act as significant predators on other primates in their forest homes.  We would also find out that the bone piles we had once attributed to our ancestors belonged to ancient savannah predators, and that the whole reason hominid bones showed up in the assemblage at all is because we were occasionally lunch.

So meat eating by itself doesn’t seem to make us as distinct from our closest living relatives as we had previously thought, and the argument of what makes us special has since moved on to language.  That does leave a standing question, though: if it wasn’t meat-eating that allowed us to get bigger and more intelligent, what was it?

While there is evidence in the fossil record that eating raw meat allowed humans to gain more size and intelligence, it is unlikely both that we were the hunters and that this behavioral change was enough to unlock a significant jump in brain size.  Instead, there is another hypothesis and human identity that has been gaining more traction as of late: the concept of Man the Cooking Animal, the only animal on Earth that can no longer survive on a diet of raw food because of the energy demands of its enormous brain [2].

Napoleon is famously said to have declared that an army marches on its stomach (at least, after what may be a loose translation).  That is, the power of an army is limited by the amount of food that a society can divert to it.  What we have come to realize more recently is that this same limitation exists inside the body, be it human, animal, or speculative alien species.  No matter what the diet, a creature will only have a fixed amount of energy available to divert to activities such as maintaining a warm-blooded body temperature (homeothermy), digestion, reproduction, and the growth and maintenance of tissues.  We can track some of these changes in the human line in the fossil record, but others must remain more speculative due to the difficulty of preserving evidence of behavioral changes (which, of course, do not fossilize) as well as limited research on modern examples.  We’ll start by looking at the evolutionary pathway of humans to see what information is currently available.


The Woodland Ape and the Handy Man


Size comparison of Australopithecus afarensis and Homo sapiens (by Carl Buell)

Some of the oldest human ancestors that we can unequivocally identify as part of our line lie in the genus Australopithecus.  These have been identified by some authors as woodland apes, to distinguish these more dryland inhabitants from the forest apes that survive today in Africa’s jungles (chimpanzees, bonobos, and gorillas).  They are much smaller than a modern human, only as tall as a child, but they have already evolved to walk upright.  They still show adaptations for climbing that were lost in later species, suggesting they probably escaped into the trees at night to avoid ground predators, as modern chimps do.  Their brains were not much larger than a modern chimpanzee’s, and their teeth are very heavy, even pig-like, as an adaptation to a tough diet of fibrous plant material – probably roots, tubers, and corms, perhaps dug from plants growing at the water’s edge [2,3].

The hominid thought to have first started eating meat is Homo habilis, the “handy man”, and the distinctions between it and the older Australopithecus group from which it descended are not very large.  The two are close enough that it has been suggested Homo habilis might be more properly renamed Australopithecus habilis, while the variation among habilis specimens suggests to some researchers that what we now call habilis may represent more than one species [4].  Whatever its proper taxonomic designation, H. habilis shows a modest increase in brain size and evidence that it was using simple stone tools to butcher large mammals, probably those left behind by the many carnivorous mammals that lived on the savannahs and woodlands alongside it.

The transition between H. habilis and H. erectus is far more distinctive, with a reduction in tooth size, jaw size, and gut size, and an increase in brain volume.  H. erectus individuals are also believed to have been larger, but the small number of available hominid fossils makes this difficult to verify.  H. erectus is also the first human to have been found outside of Africa.  While the habilis-erectus split has been attributed to the eating of significant amounts of meat in the Man-the-Hunter scenario (recall that habilis, despite its tool-using ability for deconstructing large animals, does not appear to have hunted them), the anthropologist Richard Wrangham has suggested that the turnover instead marks the point at which humans began to cook [2,3].  Because the oldest solid evidence of cooking is far younger than the oldest known fossils of erectus, what follows is largely based on linking scraps of evidence from modern humans and ancient fossils using what is known as the Expensive-Tissue Hypothesis.


Brains versus Guts: The Expensive-Tissue Hypothesis

The Expensive-Tissue Hypothesis was first proposed by Leslie Aiello and Peter Wheeler in 1995 [5], and it goes something like this.  Large brains evolve in creatures that live in groups because intelligence is important to creating and maintaining social groups.  This is known as the social brain hypothesis, and it helps to explain why animals that live socially have larger brains than their more solitary relatives.  However, not all social primates, or even social animals, have particularly large brains.  Horses, for example, are social animals not known for their excessively large brain capacity, and much the same can be said for lemurs.  Meanwhile, apes have larger brains than most monkeys.  This can’t be accounted for purely by the social brain hypothesis, since by itself it would suggest that all social primates and perhaps all social animals should have very big brains, rather than the variation we see between species and groups.  What does account for the difference is the size of the gut and, by extension, the quality of the diet.

Both brains and guts fit the bill for expensive body tissues.  In humans, the brain uses about 20% of the energy we expend while resting (the basal metabolic rate, or BMR) to feed an organ that only makes up 2.5% of our body weight [2].  This number goes down in species with smaller brains, but it is still disproportionately high in social, big-brained animals.  Aiello and Wheeler note that one way to get around this lockstep rule is to increase the metabolic requirements of the species [5] (i.e., throw more calories at the problem), but humans don’t do this, and neither do other great apes.  Our metabolic rates are exactly what one would expect for primates of our size.  The only other route is to decrease the energy flow to other tissues, and among the social primates only the gut tissue shows substantial variation in its proportion of body weight.  In fact, the correlation between smaller guts and larger brains lined up quite well in the data then available for monkeys, gibbons, and humans [5].  Monkeys and other animals that feed on low-quality diets containing significant amounts of indigestible fiber or dangerous plant toxins have very large guts to handle the problem, must expend a significant amount of their BMR on digestion, and have less extra energy left to operate a large brain.  Fruit-eating primates such as chimpanzees and spider monkeys have smaller guts to handle their more easily-digested food, and so have larger brains.  Humans spend the least amount of time eating of any living primate, with equally short digestion times as food speeds through a relatively small gut.  And ours, of course, are the largest brains of all [2].
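
Those two percentages are worth setting side by side.  Taking the figures above at face value:

\[
\frac{20\%\ \text{of BMR}}{2.5\%\ \text{of body mass}} = 8
\]

Gram for gram, brain tissue burns roughly eight times as much resting energy as the body’s average tissue, which is exactly why the Expensive-Tissue bookkeeping matters: every extra gram of brain has to be paid for by savings somewhere else.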

These tradeoffs are not hard-linked to intestinal or brain size, and have been demonstrated in other species.  For example, there is a South American fish species with a tiny gut that uses most of its energy intake to power a surprisingly large brain, while birds with smaller guts often use the energy savings not to build larger brains, but larger, stronger wing muscles [2].  Similarly, muscle mass could be shed instead of gut mass to grow a larger brain or to cut overall energy costs.  The latter strategy is the one taken up by tree-dwelling sloths to survive on a very poor diet of tough, phytotoxin-rich leaves, and although it makes them move like rusty wind-up toys it also allows them to live on lower-quality food than most leaf-eating mammals.

Modern humans have, to a degree, taken this approach as well.  When compared to one of the last of our relatives to survive alongside us, H. neanderthalensis, humans have a skeletal structure that paleontologists describe as “gracile”: light bones for our body size, anchoring smaller muscles than our shorter, heavier relatives.  Lower muscle and bone mass in H. sapiens gives us an average energy cost on the order of 1720 calories a day for males and 1400 calories a day for females in modern cold-adapted populations, which are thought to have metabolic adaptations for cold weather similar to those of the extinct Neanderthals.  By contrast, H. neanderthalensis has been estimated to need 4000-7000 calories a day for males and 3000-5000 calories for females, with the higher costs reflecting the colder winter months [6].
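
Setting those estimates side by side is crude, since they come from different methods, but the scale of the gap is the point:

\[
\frac{E_{\text{neanderthalensis}}}{E_{\text{sapiens}}} \approx \frac{4000\text{–}7000\ \mathrm{kcal/day}}{1720\ \mathrm{kcal/day}} \approx 2\text{–}4
\]

A Neanderthal male needed something like two to four times the daily energy of his gracile modern counterpart – a ruinous overhead whenever food ran short.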


Cooked versus Raw


Tribe of Homo erectus cooking with fire (from sciencephoto.com)

At the point where human brain size first increases dramatically (H. erectus, as you might recall), both guts and teeth shrink significantly while the brain expands.  The Expensive-Tissue Hypothesis explains the tradeoff between guts and brains, but cooking provides a possible explanation for how both the teeth and the guts could shrink so much while still feeding a big brain.

Data on the energetics of cooked food are currently limited, but the experiments that have been performed so far indicate that the softer and more processed the food, the more net calories are extracted, since fewer calories need to be spent on digestion.  A Japanese experiment showed that rats gained more weight on laboratory blocks that had been puffed up like a breakfast cereal than rats on normal blocks, even though the total calories in the food were the same and the rats spent the same amount of energy on exercise [2].  Similarly, experiments with pythons show that they expend about 12% more energy breaking down whole meat than either meat that has been cooked or meat that has been finely ground.  The two treatments reduce energy cost independently of each other, meaning that snakes fed ground, cooked meat used almost 24% less energy than pythons fed whole raw meat or rats [2].
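
That “almost 24%” is just the two independent savings stacked.  If cooking and grinding each cut the python’s digestive costs by about 12%, then combining them multiplicatively leaves

\[
0.88 \times 0.88 \approx 0.77
\]

of the original cost – a saving of roughly 23%, or 24% if the effects simply add.  Either way, processing the food twice nearly doubles the benefit of processing it once.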

There are even fewer data on how humans utilize cooked food versus raw food.  Because it only recently occurred to us that we might not be able to eat raw food diets like other animals, only a few studies exist.  So far the most extensive is the Giessen Raw Food study, which used questionnaires to collect data from 513 raw foodists in Germany who eat anywhere from a 75% to 100% raw food diet.  The data are startling.  Modern humans appear to do extremely poorly on diets that our close relatives, the forest apes, would get sleek and fat on.  Body weights fall dramatically when we eat a significant amount of raw food, to the point where almost a third of those eating nothing but raw food had body weights suggesting chronic energy deficiency.  About half of the women on total raw food diets had so little energy to spare that they had completely ceased to menstruate, and 10% had such irregular cycles that they were likely to be completely unable to conceive at their current energy levels [2].  Mind you, these are modern first-world people with the advantage of high-tech processing equipment to reduce the energy cost of eating whole foods, far less energy expenditure required to gather that food, and a cornucopia of modern domestic plants that have been selectively bred to produce larger fruits and vegetables with lower fiber and toxin contents than their wild counterparts.  The outcome looks even more dismal for a theoretical raw-food-eating human ancestor living before the dawn of civilization and supermarkets.


Fantastic Implications

What this all ultimately suggests is that there are tradeoffs in the bodies of intelligent creatures that we may not have given much consideration: namely, that to build a bigger brain you either need a much higher level of caloric intake and burn (a high BMR) or the size and energy costs of something else in the body have to give.  Certain organs do not appear to have much wiggle room for size reduction, as Aiello and Wheeler discovered; hearts for warm-blooded organisms need to be a certain size to pump enough blood throughout the body, and similarly lungs must be a particular size to provide enough surface area for oxygen to diffuse into the blood.  However, gut size can fluctuate dramatically depending on the requirements of the diet, and musculature can also be reduced to cut energy costs.

Humans seem to have done an end-run around some of the energy constraints of digestion by letting the cultural behaviors of cooking and processing do the work for them, freeing up energy for increased brain size following social brain hypothesis patterns.  This is pretty classic human adaptive behavior, the same thing that lets us live in environments ranging from arctic to deep desert, and should therefore not come as a great surprise.  It does, however, give us something to think about when building intelligent races from whole cloth: what energy constraints would they run up against, and assuming they didn’t take the human path of supplanting biological evolution with culture, how would they then get around them?

You're going to need to cook that first. (From http://final-girl.tumblr.com/)

Fantasy monsters and evil humanoids in stories tend to be described as larger and stronger than humans (sometimes quite significantly so) and as raw meat eaters, particularly of humanoid meat.  There’s a good psychological reason for doing so – both of these characteristics tap into ancient fears, one of the time period not so long ago when humans could end up as prey for large mammalian predators, and the other a deep-seated terror of cannibalism without a heavy dose of ritualism to keep it in check.  However, both the Neanderthal example and the Expensive Tissue Hypothesis suggest that such a species would be very difficult to produce; there’s a very good reason why large mammalian predators, whatever their intelligence level, are rare.  It wouldn’t be a large shift, however, to take a monstrous race and model them after a hybrid of Neanderthal and grizzly bear, making them omnivores that can supplement their favored meat diet with plant foods and use cooking to reduce the energy costs of digestion.  Or perhaps their high caloric needs and obligate carnivory could become a plot point, driving them to be highly expansionistic simply in order to keep their people fed, and to view anything not of their own race as a potential meal.

On the science fiction front, it presents limitations that should be kept in mind for any sapient alien.  To build a large brain, either body mass has to give somewhere (muscle, bone, guts) or the caloric intake needs to increase to keep pace with the higher energy costs.  Perhaps an alien race more intelligent than humans would be able to do so by becoming even more gracile, with fragile bones and muscles that may work on a slightly smaller, lower-gravity planet.  Or perhaps they reduce their energy needs by being an aquatic race, since animals that swim generally use a lower energy budget for locomotion than animals that fly or run [7].

From such a core idea, whole worlds can be spun: low-gravity planets that demand less energy for terrestrial locomotion; great undersea empires in either a fantastic or an alien setting, where water buoys the body and reduces energy costs enough for sapience; or creatures driven by hunger and a decidedly human propensity for expansion that spread, locust-like, across continents, much as we did long ago when we first left our African cradle.

Food for thought, indeed.


References

1. Stanford, C.B., 2001. The Hunting Apes: Meat Eating and the Origins of Human Behavior. Princeton, NJ: Princeton University Press.

2. Wrangham, R., 2009. Catching Fire: How Cooking Made Us Human. New York, NY: Basic Books.

3. Wrangham, R., 2001. “Out of the Pan, into the Fire: From Ape to Human.” In Tree of Origin: What Primate Behavior Can Tell Us About Human Social Evolution, ed. F.B.M. de Waal. Cambridge, MA: Harvard University Press. 119-143.

4. Miller, J.A., 1991. “Does brain size variability provide evidence of multiple species in Homo habilis?” American Journal of Physical Anthropology 84(4): 385-398.

5. Aiello, L.C. and P. Wheeler, 1995. “The Expensive-Tissue Hypothesis: The Brain and the Digestive System in Human and Primate Evolution.” Current Anthropology 36(2): 199-221.

6. Snodgrass, J.J. and W.R. Leonard, 2009. “Neanderthal Energetics Revisited: Insights into Population Dynamics and Life History Evolution.” PaleoAnthropology 2009: 220-237.

7. Schmidt-Nielsen, K., 1972. “Locomotion: Energy cost of swimming, flying, and running.” Science 177: 222-228.