Archive

Posts Tagged ‘evolution’

Everybody Knows That The Dice Are Loaded, Everybody Rolls With Their Fingers Crossed

Every parent I know – particularly those with daughters – laments the near-impossibility of finding gender-neutral toys, clothing, and even toiletries for their young children. From bibs to booties, children’s items from birth onward are awash in a sea of pink and blue. Lucky the baby shower attendee, uncertain of the gender of an impending infant, who can find a green or yellow set of onesies as a present!

At the same time, I can’t count the number of times someone has told me, quite earnestly, how they discovered that boys “naturally” prefer blue and girls pink. Everyone knows this is true. No matter how hard parents try to keep their children clothed and entertained with carefully-selected gender-neutral colors and toys, girls just gravitate to princesslike frills, while boys invent the idea of guns ex nihilo and make them with their fingers or with sticks if they’re not provided with toy firearms by their long-suffering progenitors. There must be some genetic component to gunpowder/frill preferences¹.

This is nonsense, of course, and obviously so; but there’s a pernicious strain of thought that insists that all of human behavior must have some underlying evolutionary explanation, and it’s trotted out with particular regularity to explain supposed gender (and other) differences or stereotypes as biologically “hard-wired”. These just-so stories about gender and human evolution pop up with depressing regularity, ignoring cultural and temporal counterexamples in their rush to explain matters as minor as current fashion trends as evolutionarily deterministic.

A phrenology chart from 1883, showing the areas of the brain and corresponding mental faculties as believed to exist by phrenologists of the time.

In the 19th century, everybody knew that your intellectual and personal predispositions could be read using the measurements of your head.
Image obtained via Wikimedia Commons

For example, a 2007 study purported to offer proof of, and an evolutionary explanation for, gender-based preferences in color – to wit, that boys prefer blue and girls prefer pink. Despite the fact that the major finding of the study was that both genders tend to prefer blue, the researchers explained that women evolved to prefer reds and pinks because they needed to find ripe berries and fruits, or maybe because they needed to be able to tell when their children had fevers².

One problem with this idea is that currently-existing subsistence, foraging, or hunter-gatherer societies don’t all seem to operate on this sort of division of labor. The Aka in central Africa and the Agta in the Philippines are just two examples of such societies: men and women both participate in hunting, foraging, and caring for children. If these sorts of divisions of labor were so common and long-standing as to have become literally established in our genes, one would expect those differences to be universal, particularly among people living at subsistence levels, who can’t afford to allow egalitarian preferences to get in the way of their survival.

Of course, a much more glaring objection to the idea that “pink for boys, blue for girls” is the biological way of things is the fact that, less than a hundred years ago, right here in the United States from which I am writing, it was the other way around. In 1918, Earnshaw’s Infants’ Department, a trade publication, advised that “The generally accepted rule is pink for the boys, and blue for the girls. The reason is that pink, being a more decided and stronger color, is more suitable for the boy, while blue, which is more delicate and dainty, is prettier for the girl.” Pink was considered a shade of red at the time, fashion-wise, making it an appropriately manly color for male babies. Were parents in the interwar period traumatizing their children by dressing them in clothes that contradicted their evolved genetic preferences? Or do fashions simply change, and with them our ideas of what’s appropriate for boys and girls?

A photograph of US President Franklin Delano Roosevelt, age 2. He is wearing a frilled dress, Mary Jane sandals, and holding a feather-trimmed hat - an outfit considered gender-neutral for young children at the time.

This is FDR at age 2. No one has noted the Roosevelts to be a family of multigenerational cross-dressers, so take me at my word when I say this was normal clothing for young boys at the time.
Image obtained via Smithsonian Magazine

More recently, researchers at the University of Portsmouth published a paper reporting that wearing high heels makes women appear more feminine and increases their attractiveness – a result they established by asking participants to view and rate videos of women walking in either high heels or flat shoes. The researchers don’t appear to have considered it necessary to test their hypothesis using videos of men in a variety of shoes³.

Naturally, articles about the study include plenty of quotes about the evolutionary and biological mechanisms behind this result⁴. But as with pink-and-blue, these ideas just aren’t borne out by history. In the West, heels were originally a fashion for men. (In many non-Western societies heels have gone in and out of fashion for at least a few thousand years as an accoutrement of the upper classes of both genders.) They were a sign of status – a way to show that you were wealthy enough that you didn’t have to work for your living – and a way of projecting power by making the wearer taller. In fact, women in Europe began wearing heels in the 17th century as a way of masculinizing their outfits, not feminizing them.

Studies like these, and the way that they reinforce stereotypes and cultural beliefs about the groups of people studied, have broader implications for society and its attitudes, but it’s also useful to think about them from a fictional and worldbuilding standpoint: the things we choose to study, and the assumptions we bring with us, often say more about us than about the reality of what we’re studying — particularly when the topic we’re studying is ourselves. Our self-knowledge is neither perfect nor complete. What are your hypothetical future-or-alien society’s blind spots? What assumptions do they bring with them when approaching a problem, and who inside or outside of that society is challenging them? What would they say about themselves that “everybody knows” that might not be true?

FOOTNOTES
1. Our ancestors, hunting and gathering on the savannah, evolved that way because the men were always off on big game safaris while the women stayed closer to home, searching out Disney princesses in the bushes and shrubs to complete their tribe’s collection. Frilly dresses helped them to disguise themselves as dangerous-yet-lacy wild beasts to scare off predators while the men weren’t there to protect them.
2. Primitive humans either lacked hands or had not yet developed advanced hand-on-forehead fever detection technology.
3. Presumably because everyone knows that heels are for girls and that our reactions to people wearing them are never influenced by our expectations about what a person with a “high-heel-wearing” gait might be like.
4. On the savannah, women often wore stiletto heels to help them avoid or stab poisonous snakes while the men were out Morris dancing.

A Question of Culture

I’ve been in quite a few museums in the last few weeks, and of all types: art, science, history, natural history.

[Photo: Smithsonian Institution]

Looking at the exhibits and watching the way that people respond to them got me thinking about science fiction, of course. How do people choose what to preserve and display, whether it be paintings, fossils, historic objects, or military technologies?

[Photo: Sackler Gallery, Washington DC]

[Photo: Art Institute of Chicago]

How do others react to those exhibits, and what does it say about them as individuals and as members of the exhibiting or a different culture?

[Photo: Korean War Memorial]

On an alien planet, would the fossils follow similar stages? How does the evolutionary history of an entirely different world shape the way humans respond to it, and how is that response reflected in what we choose to present?

[Photo: Field Museum, Chicago]

Not every science fiction novel needs an explicit museum, but I think it’s a good world-building exercise for the writer. Character-building too, potentially.

[Photo: Natural History Museum, Washington DC]

If you had an alien museum, in a world of your own creation or one you enjoy, what would be in it? Why? Would people come on their vacations? For school trips? Happily or not?


Metamorphosis, Transformation and Evolution

In the huge, crisp cocoon, extraordinary processes began.
The caterpillar’s swathed flesh began to break down. Legs and eyes and bristles and body-segments lost their integrity. The tubular body became fluid.
The thing drew on the stored energy it had drawn from the dreamshit and powered its transformation. It self-organized. Its mutating form bubbled and welled up into strange dimensional rifts oozing like oily sludge over the brim of the world into other planes and back again. It folded in on itself, shaping itself out of the protean sludge of its own base matter.
It was unstable.
It was alive, and then there was a time between forms when it was neither alive nor dead, but saturated with power.
And then it was alive again. But different.
~ Perdido Street Station, China Miéville

The metamorphosis of caterpillars into butterflies (either beautiful or terrifying) is an amazing process.

The larva encases itself in a chrysalis or cocoon and enzymes begin to break down its tissues. Eventually all that is left of the original larva are clusters of cells known as imaginal discs.  The digested tissue from the remainder of the caterpillar supplies nutrients to the imaginal discs which rapidly grow and differentiate into the wings, antennae, legs and other parts of the adult butterfly.  The adult emerges from the chrysalis fully formed.

Amazingly, a recent study has shown that behavior learned as a larva can be retained in the adult, suggesting that the neurons involved in memory also survive metamorphosis and are integrated into the adult nervous system.

There are a number of hypotheses to explain how such a complicated system might have evolved. But the oddest comes from zoologist Donald Williamson, who suggests that the larval caterpillar and adult butterfly evolved from two completely different organisms, whose genomes somehow fused together. He proposes that the transformation of a caterpillar into a butterfly is more one creature turning into another than a juvenile turning into an adult.

Williamson’s idea has been pretty thoroughly debunked in light of what’s known about butterfly and moth biology and evolution. It’s especially hard to explain in light of the experiments showing the persistence of memory through the process. But I think it’s a great science fictional idea.

In Orson Scott Card’s Speaker for the Dead, the alien Pequeninos (or piggies) go through metamorphosis from animal to plant, which never seemed very biologically plausible to me.

So are there good science fiction examples of hybrid lifeforms that shift from one to the other during their lifetime?  What do you guys think?


Top image: Manduca sexta (tobacco hornworm) larva devouring a tomato plant in preparation for metamorphosis. Photo by me.

Bottom image: Adult butterfly, species unknown. Photo by me.

A Recipe for Sentience: The Energetics of Intelligence

“No man can be wise on an empty stomach.”

- Mary Ann Evans, writing under the pseudonym George Eliot


We humans have been suffering from a bit of a self-image problem for the last half century.

First we were Man the Tool-Maker, with our ability to reshape natural objects to serve a purpose setting us apart from the brute beasts. This image was rudely shattered by Jane Goodall’s discovery in the 1960s that chimpanzees also craft and use tools, such as stripping leaves from a twig to fish termites out of their nest to eat, or using the spine of an oil palm frond as a pestle to pulverize the nutritious tree pulp.

Then we were Man the Hunter. We’d lost our tool-making uniqueness but we still had our ability to kill, dismember, and eat much larger animals with even simple tools, and it was thought that this ability unlocked enough energy in our diet to fuel the growth of larger body size and larger brains¹. This idea rather famously bled into popular culture and science fiction of the time, such as the opening to the movie 2001: A Space Odyssey. However, we would later find out that although it is not a large component of the diet, chimpanzees eat enough meat to act as significant predators on other primates in their forest homes. We would also find out that the bone piles we had once attributed to our ancestors belonged to ancient savannah predators, and that the whole reason hominid bones showed up in the assemblage at all was that we were occasionally lunch.

So meat eating by itself doesn’t seem to make us as distinct from our closest living relatives as we had previously thought, and the argument of what makes us special has since moved on to language.  That does leave a standing question, though: if it wasn’t meat-eating that allowed us to get bigger and more intelligent, what was it?

While there is evidence in the fossil record that eating raw meat allowed humans to gain more size and intelligence, it is unlikely both that we were the hunters and that this behavioral change was enough to unlock a significant jump in brain size. Instead, there is another hypothesis and human identity that has been gaining more traction as of late: the concept of Man the Cooking Animal, the only animal on Earth that can no longer survive on a diet of raw food because of the energy demands of its enormous brain².

Napoleon is famously said to have declared that an army marches on its stomach (at least, after what may be a loose translation). That is, the power of an army is limited by the amount of food that a society can divert to it. What we have come to realize more recently is that this same limitation exists inside the body, be it human, animal, or speculative alien species. No matter what the diet, a creature has only a fixed amount of energy available to divert to activities such as maintaining a warm-blooded body temperature (homeothermy), digestion, reproduction, and the growth and maintenance of tissues. We can track some of these changes in the human line in the fossil record, but others must remain more speculative, both because behavioral changes do not, of course, fossilize, and because research on modern examples is limited. We’ll start by looking at the evolutionary pathway of humans to see what information is currently available.

The Woodland Ape and the Handy Man

Size comparison of Australopithecus afarensis and Homo sapiens (by Carl Buell)

Some of the oldest human ancestors that we can unequivocally identify as part of our line lie in the genus Australopithecus. These have been identified by some authors as woodland apes, to distinguish these more dryland inhabitants from the forest apes that survive today in Africa’s jungles (chimpanzees, bonobos, and gorillas). They were much smaller than a modern human, only as tall as a child, but they had already evolved to walk upright. They still showed adaptations for climbing that were lost in later species, suggesting they probably escaped into the trees at night to avoid ground predators, as modern chimps do. Their brains were not much larger than a modern chimpanzee’s, and their teeth were very heavy, even pig-like, as an adaptation to a tough diet of fibrous plant material – probably roots, tubers, and corms, perhaps dug from plants growing at the water’s edge²,³.

The hominids thought to have first started eating meat are Homo habilis, the “handy man”, and the distinction between them and the older Australopithecus group from which they descended is not very large. The two are close enough that it has been suggested Homo habilis might be more properly renamed Australopithecus habilis, while the interspecies variation suggests to some researchers that what we now call habilis may represent more than one species⁴. Whatever its proper taxonomic designation, H. habilis shows a modest increase in brain size and evidence that it was using simple stone tools to butcher large mammals, probably scavenging the kills left behind by the many carnivorous mammals that lived on the savannahs and woodlands alongside it.

The transition between H. habilis and H. erectus is far more distinctive, with a reduction in tooth size, jaw size, and gut size, and an increase in brain volume. H. erectus is also believed to have been larger, but the small number of available hominid fossils makes this difficult to verify. It is also the first human to have been found outside of Africa. While the habilis-erectus split has been attributed to the eating of significant amounts of meat in the Man-the-Hunter scenario (recall that habilis, despite its tool-using ability for deconstructing large animals, does not appear to have hunted them), the anthropologist Richard Wrangham has suggested that the turnover instead marks the point at which humans began to cook²,³. Because the oldest solid evidence of cooking is far younger than the oldest known fossils of erectus, what follows is largely based on linking scraps of evidence from modern humans and ancient fossils using what is known as the Expensive-Tissue Hypothesis.

Brains versus Guts: The Expensive-Tissue Hypothesis

The Expensive-Tissue Hypothesis was first proposed by Leslie Aiello and Peter Wheeler in 1995⁵, and it goes something like this. Large brains evolve in creatures that live in groups because intelligence is important to creating and maintaining the social groups. This is known as the social brain hypothesis, and it helps to explain why animals that live socially have larger brains than their more solitary relatives. However, not all social primates, or even social animals, have particularly large brains. Horses, for example, are social animals not known for their excessively large brain capacity, and much the same can be said for lemurs. Meanwhile, apes have larger brains than most monkeys. This can’t be accounted for purely by the social brain hypothesis, since by itself it would suggest that all social primates and perhaps all social animals should have very big brains, rather than the variation we see between species and groups. What does account for the difference is the size of the gut and, by extension, the quality of the diet.

Both brains and guts fit the bill for expensive body tissues. In humans, the brain uses about 20% of the energy we expend while resting (the basal metabolic rate, or BMR) to feed an organ that only makes up 2.5% of our body weight². This number goes down in species with smaller brains, but it is still disproportionately high in social, big-brained animals. Aiello and Wheeler note that one way to get around this lockstep rule is to increase the metabolic requirements of the species⁵ (i.e., throw more calories at the problem), but humans don’t do this, and neither do other great apes. Our metabolic rates are exactly what one would expect for primates of our size. The only other route is to decrease the energy flow to other tissues, and among the social primates only the gut tissue shows substantial variation in its proportion of body weight. In fact, the correlation between smaller guts and larger brains lined up quite well in the data then available for monkeys, gibbons, and humans⁵. Monkeys and other animals that feed on low-quality diets containing significant amounts of indigestible fiber or dangerous plant toxins have very large guts to handle the problem; they must expend a significant fraction of their BMR on digestion, and have less energy left over to operate a large brain. Fruit-eating primates such as chimpanzees and spider monkeys have smaller guts to handle their more easily-digested food, and so have larger brains. Humans spend the least amount of time eating of any living primate, with equally short digestion times as food speeds through a relatively small gut. And ours, of course, are the largest brains of all².
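To make the budget arithmetic concrete, here is a toy version of that fixed-budget logic in Python. Every number in it (the round 1500-kcal resting budget and the tissue shares) is an illustrative placeholder rather than a figure from Aiello and Wheeler; the point is only that with BMR pinned, a bigger brain must be paid for out of some other tissue.

```python
# Toy energy budget under a fixed basal metabolic rate (BMR).
# All numbers are illustrative placeholders, not measured values.

BMR_KCAL_PER_DAY = 1500.0  # assumed resting budget for a human-sized primate

# Assumed shares of the resting budget spent on each tissue class.
budget = {"brain": 0.20, "gut": 0.15, "other": 0.65}

def grow_brain(budget, extra_share):
    """Shift a slice of the fixed budget from gut to brain.

    Since BMR is roughly fixed for a primate of a given size, any
    increase in brain spending has to come out of another tissue.
    """
    if budget["gut"] < extra_share:
        raise ValueError("gut budget exhausted: raise BMR or shed muscle instead")
    new_budget = dict(budget)
    new_budget["brain"] += extra_share
    new_budget["gut"] -= extra_share
    return new_budget

for tissue, share in budget.items():
    print(f"{tissue:>5}: {share * BMR_KCAL_PER_DAY:5.0f} kcal/day")

# A 5% shift toward the brain forces an equal cut to the gut.
print(grow_brain(budget, 0.05))
```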

These tradeoffs are not hard-linked to intestinal or brain size, and have been demonstrated in other species. For example, there is a South American fish species with a tiny gut that uses most of its energy intake to power a surprisingly large brain, while birds with smaller guts often use the energy savings not to build larger brains, but larger, stronger wing muscles². Similarly, muscle mass could be shed instead of gut mass to grow a larger brain or to cut overall energy costs. The latter strategy is the one taken up by tree-dwelling sloths to survive on a very poor diet of tough, phytotoxin-rich leaves, and although it makes them move like rusty wind-up toys it also allows them to live on lower-quality food than most leaf-eating mammals.

Modern humans have, to a degree, taken this approach as well. When compared to one of the last of our relatives to survive alongside us, H. neanderthalensis, humans have a skeletal structure that paleontologists describe as “gracile”: light bones for our body size, anchoring smaller muscles than our shorter, heavier relatives. Lower muscle and bone mass in H. sapiens gives us an average energy cost on the order of 1720 calories a day for males and 1400 calories a day for females in modern cold-adapted populations, which are thought to have metabolic adaptations for cold weather similar to those of the extinct Neanderthals. By contrast, H. neanderthalensis has been estimated to need 4000-7000 calories a day for males and 3000-5000 calories for females, with the higher costs reflecting the colder winter months⁶.

Cooked versus Raw

Tribe of Homo erectus cooking with fire (from sciencephoto.com)

At the point where human brain size first increases dramatically (H. erectus, as you might recall), both the guts and the teeth shrink significantly. The Expensive-Tissue Hypothesis explains the tradeoff between guts and brains, but cooking provides a possible explanation for how both the teeth and the guts could shrink so dramatically while still feeding a growing brain.

Data on the energetics of cooked food are currently limited, but the experiments that have been performed so far indicate that the softer and more processed the food, the more net calories are extracted, since fewer calories need to be spent on digestion. A Japanese experiment showed that rats gained more weight on laboratory blocks that had been puffed up like a breakfast cereal than on normal blocks, even though the total calories in the food were the same and the rats spent the same amount of energy on exercise². Similarly, experiments with pythons show that they expend about 12% more energy breaking down whole raw meat than meat that has been either cooked or finely ground. The two treatments reduce energy cost independently of each other, meaning that snakes fed ground, cooked meat used almost 24% less energy than those fed whole raw meat².
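As a quick check on how those figures combine: if each treatment saves roughly 12% on its own and the two act independently, the savings compound multiplicatively rather than simply adding. A few lines of Python with the rounded 12% values (the unrounded experimental figures would land a bit closer to the “almost 24%” above):

```python
# Independent savings multiply: remaining cost = (1 - s1) * (1 - s2).
cooking_saving = 0.12   # rounded saving from cooking alone
grinding_saving = 0.12  # rounded saving from grinding alone

remaining_cost = (1 - cooking_saving) * (1 - grinding_saving)  # 0.88 * 0.88
print(f"combined saving: {1 - remaining_cost:.1%}")  # ~22.6% with rounded inputs
```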

There is even less data on how humans utilize cooked food versus raw food. Because it only recently occurred to us that we might not be able to eat raw food diets like other animals, only a few studies exist. So far the most extensive is the Giessen Raw Food study, performed in Germany, which used questionnaires to collect data from 513 raw foodists eating anywhere from a 75% to 100% raw food diet. The data are startling. Modern humans appear to do extremely poorly on diets that our close relatives, the forest apes, would get sleek and fat on. Body weights fall dramatically when we eat a significant amount of raw food, to the point where almost a third of those eating nothing but raw food had body weights suggesting chronic energy deficiency. About half of the women on total raw food diets had so little energy to spare that they had completely ceased to menstruate, and 10% had such irregular cycles that they were likely to be completely unable to conceive at their current energy levels². Mind you, these are modern first-world people with the advantage of high-tech processing equipment to reduce the energy cost of eating whole foods, far less energy expenditure required to gather that food, and a cornucopia of modern domestic plants that have been selectively bred to produce larger fruits and vegetables with lower fiber and toxin contents than their wild counterparts. The outcome looks even more dismal for a theoretical raw-food-eating human ancestor living before the dawn of civilization and supermarkets.


Fantastic Implications

What this all ultimately suggests is that there are tradeoffs in the bodies of intelligent creatures that we may not have given much consideration: namely, that to build a bigger brain you either need a much higher caloric intake and burn (a high BMR) or the size and energy costs of something else in the body have to give. Certain organs do not appear to have much wiggle room for size reduction, as Aiello and Wheeler discovered; hearts for warm-blooded organisms need to be a certain size to pump enough blood throughout the body, and similarly lungs must be a particular size to provide enough surface area for oxygen to diffuse into the blood. However, gut size can fluctuate dramatically depending on the requirements of the diet, and musculature can also be reduced to cut energy costs.

Humans seem to have done an end-run around some of the energy constraints of digestion by letting the cultural behaviors of cooking and processing do the work for them, freeing up energy for increased brain size following social brain hypothesis patterns.  This is pretty classic human adaptive behavior, the same thing that lets us live in environments ranging from arctic to deep desert, and should therefore not come as a great surprise.  It does, however, give us something to think about when building intelligent races from whole cloth: what energy constraints would they run up against, and assuming they didn’t take the human path of supplanting biological evolution with culture, how would they then get around them?

You're going to need to cook that first. (From http://final-girl.tumblr.com/)

Fantasy monsters and evil humanoids in stories tend to be described as larger and stronger than humans (sometimes quite significantly so) and as raw meat eaters, particularly of humanoid meat. There’s a good psychological reason for doing so – both of these characteristics tap into ancient fears, one of the time period not so long ago when humans could end up as prey for large mammalian predators, and the other a deep-seated terror of cannibalism without a heavy dose of ritualism to keep it in check. However, both the Neanderthal example and the Expensive-Tissue Hypothesis suggest that such a species would be very difficult to produce; there’s a very good reason why large mammalian predators, whatever their intelligence level, are rare. It wouldn’t be a large shift, though, to take a monstrous race and model them after a hybrid of Neanderthal and grizzly bear, making them omnivores that can supplement their favored meat diet with plant foods and use cooking to reduce the energy costs of digestion. Or perhaps their high caloric needs and obligate carnivory could become a plot point, driving them to be highly expansionistic simply in order to keep their people fed, and to view anything not of their own race as a potential meal.

On the science fiction front, the hypothesis presents limitations that should be kept in mind for any sapient alien. To build a large brain, either body mass has to give somewhere (muscle, bone, guts) or the caloric intake needs to increase to keep pace with the higher energy costs. Perhaps an alien race more intelligent than humans would manage the tradeoff by becoming even more gracile, with fragile bones and muscles that might work on a slightly smaller, lower-gravity planet. Or perhaps they would reduce their energy needs by being an aquatic race, since animals that swim generally use a lower energy budget for locomotion than animals that fly or run⁷.

From such a core idea, whole worlds can be spun: low-gravity planets that demand less energy for terrestrial locomotion; great undersea empires in either a fantastic or an alien setting, where water buoys the body and reduces energy costs enough for sapience; or creatures driven by hunger and a decidedly human propensity for expansion that spread, locust-like, across continents, much as we did long ago when we first left our African cradle.

Food for thought, indeed.


References

1. Stanford, C.B., 2001. The Hunting Apes: Meat Eating and the Origins of Human Behavior. Princeton, NJ: Princeton University Press.

2. Wrangham, R., 2009. Catching Fire: How Cooking Made Us Human. New York, NY: Basic Books.

3. ———, 2001. “Out of the Pan, into the Fire: From Ape to Human.” Tree of Origin: What Primate Behavior Can Tell Us About Human Social Evolution. Ed. F.B.M. de Waal. Cambridge, MA: Harvard University Press. 119-143.

4. Miller, J.A., 1991. “Does brain size variability provide evidence of multiple species in Homo habilis?” American Journal of Physical Anthropology 84(4): 385-398.

5. Aiello, L.C. and P. Wheeler, 1995. “The Expensive-Tissue Hypothesis: The Brain and the Digestive System in Human and Primate Evolution.” Current Anthropology 36(2): 199-221.

6. Snodgrass, J.J., and W.R. Leonard, 2009. “Neanderthal Energetics Revisited: Insights into Population Dynamics and Life History Evolution.” PaleoAnthropology 2009: 220-237.

7. Schmidt-Nielsen, K., 1972. “Locomotion: Energy cost of swimming, flying, and running.” Science 177: 222-228.


That Shy, Elusive Rape Particle

Note: This article originally appeared on Starship Reckless

[Re-posted modified EvoPsycho Bingo Card]

One of the unlovely things that has been happening in Anglophone SF/F (in line with resurgent religious fundamentalism and erosion of democratic structures in the First World, as well as economic insecurity that always prompts “back to the kitchen” social politics) is the resurrection of unapologetic – nay, triumphant – misogyny beyond the already low bar in the genre. The churners of both grittygrotty “epic” fantasy and post/cyberpunk dystopias are trying to pass rape-rife pornkitsch as daring works that swim against the tide of rampant feminism and its shrill demands.

When people explain why such works are problematic, their authors first employ the standard “Me Tarzan You Ape” dodges: mothers/wives get trotted out to vouch for their progressiveness, hysteria and censorship get mentioned. Then they get really serious: as artists of vision and integrity, they cannot but depict women solely as toilet receptacles because 1) that has been the “historical reality” across cultures and eras and 2) men have rape genes and/or rape brain modules that arose from natural selection to ensure that dominant males spread their mighty seed as widely as possible. Are we cognitively impaired functionally illiterate feminazis daring to deny (ominous pause) SCIENCE?!

Now, it’s one thing to like cocoa puffs. It’s another to insist they are either nutritional powerhouses or haute cuisine. If the hacks who write this stuff were to say “Yeah, I write wet fantasies for guys who live in their parents’ basement. I get off doing it, it pays the bills and it has given me a fan base that can drool along with me,” I’d have nothing to say against it, except to advise people above the emotional age of seven not to buy the bilge. However, when they try to argue that their stained wads are deeply philosophical, subversive literature validated by scientific “evidence”, it’s time to point out that they’re talking through their lower digestive opening. Others have done the cleaning service for the argument-from-history. Here I will deal with the argument-from-science.

It’s funny how often “science” gets brandished as a goad or magic wand to maintain the status quo – or bolster sloppy thinking and confirmation biases. When women were barred from higher education, “science” was invoked to declare that their small brains would overheat and intellectual stress would shrivel their truly useful organs, their wombs. In our times, pop evopsychos (many of them failed SF authors turned “futurists”) intone that “recent studies prove” that the natural and/or ideal human social configuration is a hybrid of a baboon troop and fifties US suburbia. However, if we followed “natural” paradigms we would not recognize paternity, have multiple sex partners, practice extensive abortion and infanticide and have powerful female alliances that determine the status of our offspring.

I must acquaint Tarzanists with the no-longer-news that there are no rape genes, rape hormones or rape brain modules. Anyone who says this has been “scientifically proved” has obviously got his science from FOX News or knuckledraggers like Kanazawa (who is an economist, by the way, and would not recognize real biological evidence if it bit him on the gonads). Here’s a variation of the 1986 Seville Statement that sums up what I will briefly outline further on. It goes without saying that most of what follows is shorthand and also not GenSci 101.

It is scientifically incorrect to say that:
1. we have inherited a tendency to rape from our animal ancestors;
2. rape is genetically programmed into our nature;
3. in the course of our evolution there has been a positive selection for rape;
4. human brains are wired for rape;
5. rape is caused by instinct.

Let’s get rid of the tired gene chestnut first. As I’ve discussed elsewhere at length, genes do not determine brain wiring or complex behavior (as always in biology, there are a few exceptions: most are major decisions in embryo/neurogenesis with very large outcomes). Experiments that purported to find direct links between genes and higher behavior were invariably done in mice (animals that differ decisively from humans) and the sweeping conclusions of such studies have always had to be ratcheted down or discarded altogether, although in lower-ranking journals than the original effusions.

Then we have hormones and the “male/female brain dichotomy” pushed by neo-Freudians like Baron-Cohen. They even posit a neat-o split whereby too much “masculinizing” during brain genesis leads to autism, too much “feminizing” to schizophrenia. Following eons-old dichotomies, people who theorize thusly shoehorn the two into the left and right brain compartments respectively, assigning a gender to each: females “empathize”, males “systematize” – until it comes to those intuitive leaps that make for paradigm-changing scientists or other geniuses, whereby these oh-so-radical theorists neatly reverse the tables and both creativity and schizophrenia get shifted to the masculine side of the equation.

Now although hormones play critical roles in all our functions, it so happens that the cholesterol-based ones that become estrogen, testosterone, etc. are two among several hundred that affect us. What is most important is not the absolute amount of a hormone, but its ratios to others and to body weight, as well as the sensitivity of receptors to it. People generally do not behave aberrantly if they don’t have the “right” amount of a sex hormone (which varies significantly from person to person), but they may if there is a sudden large change to their homeostasis – whether this is crash menopause from ovariectomy, post-partum depression or heavy doses of anabolic steroids for body building.

Furthermore, as is the case with gene-behavior correlation, much work on hormones has been done in mice. When similar work is done with primates (such as testosterone or estrogen injections at various points during fetal or postnatal development), the hormones have essentially no effect on behavior. Conversely, very young human babies lack gender-specific responses before their parents start to socialize them. As well, primates show widely different “cultures” within each species in terms of gender behavior, including care of infants by high-status males. It looks increasingly like “sex” hormones do not wire rigid femininity or masculinity, and they most certainly don’t wire propensity to rape; instead, they seem to prime individuals to adopt the habits of their surrounding culture – a far more adaptive configuration than the popsci model of “women from Venus, men from Mars.”

So on to brain modularity, today’s phrenology. While it is true that there are some localized brain functions (the processing of language being a prominent example), most brain functions are diffuse, the higher executive ones particularly so – and each brain is wired slightly differently, dependent on the myriad details of its context across time and place. Last but not least, our brains are plastic (otherwise we would not form new memories, nor be able to acquire new functions), though the windows of flexibility differ across scales and in space and time.

The concept of brain modularity comes partly from the enormously overused and almost entirely incorrect equivalence of the human brain to a computer. Another problem lies in the definition of a module, which varies widely and as a result is prone to abuse by people who get their knowledge of science from new-age libertarian tracts. There is essentially zero evidence of the “strong” version of brain modules, and modular organization at the level of genes, cells or organ compartments does not guarantee a modular behavioral outcome. But even if we take it at face value, it is clear that rape does not adhere to the criteria of either the “weak” (Fodor) or “strong” version (Carruthers) for such an entity: it does not fulfill the requirements of domain specificity, fast processing, fixed neural architecture, mandatoriness or central inaccessibility.

In the behavioral domain, rape is not an adaptive feature: most of it is non-reproductive, visited upon pre-pubescent girls, post-menopausal women and other men. Moreover, rape does not belong to the instinctive “can’t help myself” reflexes grouped under the Four Fs. Rape does not occur spontaneously: it is usually planned with meticulous preparation and it requires concentration and focus to initiate and complete. So rape has nothing to do with reproductive maxima for “alpha males” (who don’t exist biologically in humans) – but it may have to do with the revenge of aggrieved men who consider access to women an automatic right.

What is undeniable is that humans are extremely social and bend themselves to fit context norms. This ties to Arendt’s banality of evil and Niemöller’s trenchant observations about solidarity – and to the outcomes of Milgram’s and Zimbardo’s notorious experiments, which have been multiply mirrored in real history, with the events in the Abu Ghraib prison prominent among them. So if rape is tolerated or used as a method for compliance, it is no surprise that it is a prominent weapon in the arsenal of keeping women “in their place” and also no surprise that its apologists aspire to give it the status of indisputably hardwired instinct.

Given the steep power asymmetry between the genders ever since the dominance of agriculture led to women losing mobility, gathering skills and control over pregnancies, it is not hard to see rape as the cultural artifact that it is. It’s not a sexual response; it’s a blunt assertion of rank in contexts where dominance is a major metric: traditional patriarchal families, whether monogamous or polygynous; religions and cults (most of which are extended patriarchal families); armies and prisons; tribal vendettas and initiations.

So if gratuitous depictions of graphic rape excite a writer, that is their prerogative. If they get paid for it, bully for them. But it doesn’t make their work “edgy” literature; it remains cheap titillation that attempts to cloak arrant failures of talent, imagination and just plain scholarship. Insofar as such work has combined sex and violence porn as its foundation, it should be classified accordingly. Mythologies, including core religious texts, show rape in all its variations: there is nothing novel or subversive about contemporary exudations. In my opinion, nobody needs to write yet another hack work that “interrogates” misogyny by positing rape and inherent, immutable female inferiority as natural givens – particularly not white Anglo men who lead comfortable lives that lack any knowledge to justify such a narrative. The fact that people with such views are over-represented in SF/F is toxic for the genre.

Further reading:

A brief overview of the modularity of the brain/mind
Athena Andreadis (2010). The Tempting Illusion of Genetic Virtue. Politics Life Sci. 29:76-80
Sarah Blaffer Hrdy, Mothers and Others: The Evolutionary Origins of Mutual Understanding
Anne Fausto-Sterling, Sex/Gender: Biology in a Social World
Cordelia Fine, Delusions of Gender
Alison Jolly, Lucy’s Legacy: Sex and Intelligence in Human Evolution
Rebecca Jordan-Young, Brain Storm: The Flaws in the Science of Sex Differences
Kevin Laland and Gillian Brown, Sense and Nonsense: Evolutionary Perspectives on Human Behaviour
Edouard Machery and Kara Cohen (2012). An Evidence-Based Study of the Evolutionary Behavioral Sciences. Brit J Philos Sci 63: 177-226

Worldbuilding wonders

So far I’ve written about worldbuilding science for laying out entire worlds, and some unusual types of habitats. Most fiction doesn’t take place on a whole planet or somewhere truly odd, though; it happens in a smaller region of an ordinary world. Even for a nearly-normal place, there are biological principles that the SFF author could use to make a locale distinctive and still plausible.

Island biogeography is one of my favorites, combining as it does migration, evolution, and frequently-bizarre plants and animals.

Take an isolated habitat like an island. It might start with plants and animals on it, but once the island becomes separated from the original home ranges of those species they can’t mix with the larger population.

New species can come in through immigration, but the more isolated the island is the fewer kinds of species can get there. Isolation is relative: terrestrial mammals are a lot easier to stop than birds or marine mammals.

Original species or newly-introduced species can also go extinct on the island even if they are thriving elsewhere. Since new individuals can’t easily get there, the island populations are on their own. A small population can go extinct just by chance. Habitat change over time can also force species “off the island.”

The “island” doesn’t have to be land in the middle of an ocean, either. A lake in the middle of a continent might be an isolated habitat if you’re interested in fish. A mountaintop can be an “island” if the species can’t climb down one mountain and back up the next. Isolation is relative.

The size of the island is important too. A larger island offers more habitats so it can support a larger number of different species. A larger island might be able to support larger populations of each species, making it less likely that an individual species would go extinct.
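Those two pressures, isolation throttling new arrivals and area buffering against extinction, are the heart of the classic MacArthur-Wilson picture of island biogeography: species richness settles where immigration balances extinction. Here is a minimal Python sketch of that balance; the species-pool size and every coefficient are invented placeholders, not estimates for any real island.

```python
# Toy MacArthur-Wilson island model. Immigration falls with isolation
# and with how "full" the island already is; extinction falls with area.
# All parameters are invented placeholders.

def equilibrium_richness(pool, distance_km, area_km2, years=10_000):
    species = 0.0
    for _ in range(years):
        immigration = (pool - species) * 0.1 / distance_km  # fewer arrivals when remote
        extinction = species * 0.5 / area_km2               # fewer losses on big islands
        species += immigration - extinction
    return species

for dist, area in [(10, 100), (1000, 100), (10, 5), (1000, 5)]:
    s = equilibrium_richness(pool=500, distance_km=dist, area_km2=area)
    print(f"distance {dist:>4} km, area {area:>3} km^2 -> ~{s:.0f} species")
```

Near, large islands end up rich in species; remote, small ones end up nearly empty, which is the gradient New Zealand’s history illustrates below.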

Now comes the fun part: closed habitat (more or less), limited types and/or numbers of species, limited or abundant resources. Let the natural selection begin!

New Zealand makes a great example of what can happen on isolated islands. New Zealand separated from Gondwanaland between 80 and 100 million years ago. It started out with plenty of plants and reptiles, but if any very early mammals were present they quickly died off. (Mammals didn’t start to become dominant until about 65 million years ago.)

New Zealand still has the descendant species of some of the reptiles that were already there, like the tuatara – the only surviving species in its entire family. All its cousins died off about 65 million years ago.

Tuatara
(via Wikipedia)

Birds could fly in, though it was a long trip. Without pesky mammals to eat eggs and compete for food, many of the bird species didn’t even need to fly once they settled in. Flying is an expensive skill, so it was entirely lost in some species. Because all of these birds evolved in isolation, they are found only on New Zealand. Many have gone extinct since European colonization: at least 43 species, or a third of the total number of birds.

Kiwi bird
(via Wikipedia)

The kiwi is a national symbol, but there were many more distinctive birds. The harpagornis, the largest eagle known to have ever existed, preyed on flightless birds like the moa, and died out not long after Maori settlement, when the moa were hunted out.

Birds evolved to fill the roles that mammals would occupy in most other places.

Giant eagle
(via Wikipedia)

Since New Zealand started out without mammals, the only mammals to arrive were species that could travel long distances through or over the ocean. There are plenty of aquatic mammals, like whales, dolphins, seals, and sea lions. These species are not isolated from their parent populations.

Short-tailed bat
(via Wikipedia)

The only terrestrial mammals present on the island before humans arrived were three species of bats. The birds don’t fly, but the mammals do. Like the birds, the short-tailed bats adapted to the lack of ground mammals and evolved to crawl around on the ground as well as fly. They could tuck their wings away and scamper around like mice.

Once people started coming to New Zealand, another pathway for intentional and unintentional plant and animal introductions opened up. The Maori brought some species with them from about 1250 on, most notably the Polynesian rat and the domesticated dog.

After Europeans began arriving in 1769, they brought all sorts of animals: pigs for food, mice and rats accidentally, weasels and ferrets to catch the rats, possums for fur. These adaptable mammals have driven many of the native New Zealand species to extinction, and have threatened most of the rest. Conservation efforts are trying to reduce or eliminate these invaders. One such project led to the removal of 30 tons of dead possums from one of the smaller islands.

Back from science to fiction: I see all sorts of possibilities here. Some part of your created world could be isolated from the rest, and thus distinctive. One human colony or space ship could be isolated from the rest, allowing the humans and their attendant species to evolve in ways unlike the main human population. Or the entire human population could fragment into smaller isolated populations.

And what about accidental introductions, either by human settlers or alien visitors? What if we accidentally introduce rats to a space station? Or aliens introduce their scavenger species to London? What species go along with colonists, intentionally or accidentally, and what effects do those have on the native populations?

My brain is fizzing with ideas for alien ecosystems. I hope yours is too.

Adapting to Humanity

Recently I read an article on Cracked.com (Thanks to Roger Ebert for the link) titled “7 Animals That Are Evolving Right Before Our Eyes”. It’s a fascinating list, ranging all over the animal kingdom. (And before you make a crack – sorry, bad pun – about scientific accuracy from a comedy site, every item on the list has multiple sources cited and links provided.)

As I read the list, I noticed something a bit disturbing: almost every animal mentioned is evolving (apparently) in response to the effects of humans on their habitat. Tuskless elephants are being born with greater frequency (most likely due to poaching of tusked elephants); peppered moths apparently changed color to better hide from predators amidst the pollution of the Industrial Revolution; and some of Moscow’s stray dog population appears to have learned to ride the subway.

The one item that most gave me pause was the discovery of the grolar bear – that’s a hybrid between grizzly and polar bears. Amazingly, in April 2010 one bear was killed by a native Inuvialuit hunter, and DNA tests confirmed that not only was it a grizzly/polar hybrid, it was second generation – meaning that unlike some other hybrids (like the mule), grolar bears are fertile. This means that, as the polar bears’ habitat melts from global warming, there may be more and more meetings between polars and grizzlies, resulting in more hybrids, which could continue to reproduce – creating a large population of grolar bears (or brolar bears, as polar bears can also successfully mate with brown bears).

As Cracked pointed out, polar bears are carnivorous, while 80-90% of the grizzlies’ diet is plants. So by nature, polar bears tend to be more aggressive. And this means that a hybrid grolar could be, in essence, a highly aggressive carnivorous beast that’s better suited to warmer climates than its grizzly parent or grandparent.

It occurred to me to wonder whether our damaging the environment has led to the creation of a predator that will intrude on our territory before long. And that got me thinking: what other crazy or unexpected animals might our influence on the planet jumpstart into existence?

So let’s hear your theories! What animal will evolve in the near future as a result of our effect on the world that we’ll come to regret? And please be realistic – no matter how entertaining it sounds in theory, there will never be a sharktopus.

Co-Dependency, the Natural Way

Species don’t exist in a vacuum. That is, if you go nearly anywhere on this planet, you’re not going to find just one form of life. You’re going to find several, all filling different niches and frequently interacting with each other. (I say nearly because I don’t know whether the extremophile bacteria in the Earth’s crust are one species or several. I’m betting there are a range of them in any given location, given how resilient and diverse life is.)

The most familiar relationship between species is probably that of predator and prey. There’s the lion and gazelle, the wolf and the caribou, the anteater and the ant. We’re familiar with parasitism too—one species feeding off another without killing it first. It’s as easy to cast Earth parasites as villains as it is to cast predators. Parasites are often widely known, affect humans, have historical impact, or get handy evil-sounding names. Examples of the first three categories are fleas, mosquitoes, intestinal worms, ringworm, lice, and insect-borne diseases such as malaria. Examples of the last one include strangler figs, vampire bats, and the zombie ant fungi that have been in the news lately.

Stargate’s Goa’uld, Spider-man’s Venom, and the xenomorph from Alien are examples of fictional parasitic antagonists. There’s a list of other made-up parasites on Wikipedia, though it’s probably incomplete. That said, I think we could go further. I’m not sure I’ve seen a bio-apocalypse or bio-thriller with protozoa or insects as a vector, though I’ve seen them with bacteria and viruses. And we shouldn’t forget the parasites that don’t affect humans. Some insects, such as wasps, lay eggs in other animals. A number of vines choke the life out of their supporting plants. Who knows what other kinds of parasites might evolve on other planets? Or if an alien parasite could use the strangling or egg-seeding techniques on humans?

Discussion of parasites leads us into other types of symbiotic relationships. (Yes, parasitism is a form of symbiosis.) There’s mutualism, where both species benefit. There’s commensalism, where one species benefits and the other is neither helped nor harmed. There’s also amensalism, where one species inhibits or kills off another but isn’t affected itself. Penicillium mold does this with some bacteria, for instance, and some plants produce substances that kill off competing plant life.

Mutualism can involve trading resources (think of nitrogen-fixing bacteria in plants), trading a service for a resource (pollinators, remora and sharks, animals dispersing seeds), or trading services (clownfish and anemones, ants nesting in trees). Humans are in mutually beneficial relationships with the bacteria in their intestines, and with domesticated animals.

Examples of commensalism include the cattle egret, which feeds off the insects stirred up by grazing cattle; barnacles, which attach to animals and plants as well as rocks and ships; plants that use other plants for support, such as orchids or moss; and hermit crabs, which use shells as housing.

Of course, the plants, animals, fungi, protozoa, and bacteria that engage in symbiotic relationships continue to evolve. They become better parasites, or better killers of parasites, or better nitrogen producers, or better protectors of their symbiotes. They’ll change size or shape or color or biochemistry. A change in one often means a change in the other. Symbiotes may even suffer a disability if their counterpart is removed. Lichen wouldn’t even exist if you separated the fungi and algae that form it, and removing a symbiote from an ecosystem could cause a cascade of species deaths and ultimately destroy the ecosystem.

Some questions that may spark a story or two:

  • What happens if you introduce an alien (let’s say, truly alien) species that becomes a parasite or other symbiote to a native organism?
  • Could you bioengineer a lifeform to enter into a symbiotic relationship with a plant/animal that needed a boost? Could you turn parasitism into mutualism?
  • Could you alter a symbiote’s genes to give it freedom? Would there ever be a circumstance where you’d want to do that?
  • What if one intelligent species was oppressing its equally intelligent symbiote for, say, eating insects instead of plant matter or having a strange physiology?
  • Since symbiotes tend to co-evolve, pick a possible resource or service that a species could provide, the crazier the better, then create a species that would make use of it. Remember that it will likely also be providing a service or resource for the other species. (E.g., a mollusk that feeds off electricity produced by electric eels; a plant that grows on a herbivore’s head and acts as a sound amplifier; an insect that cultivates a particular plant so that its eggs can hitch a ride on the seeds)

Non-Conformist Aliens Wanted

It occurred to me the other day that many fictional alien species conform to a small number of body plans: humanoid, insectoid, feline, robot, and reptilian. There’s a huge amount of creativity in appearances and cultures, admittedly, but most races out there fit one of those body plans. The Wikipedia list of fictional aliens is a good overview, for the above and other body plans, though it’s definitely not complete.

I realize there are reasons why most aliens are humanoid or nearly so. It’s easier to sympathize with something that looks human. It’s easier to conceive of aliens based on familiar Earth species. It’s easier to put make-up on an actor than to deal with CGI, or it was until recently. Still, why doesn’t more science fiction push the envelope? Why don’t we see more unusual body plans? It’s not as though we’d have to create entirely new physiologies, though we need those too. Earth has a whole host of creatures that have been underutilized in science fiction, including a few with proven intelligence.

Sharks, for instance, are ancient. They have cartilage instead of bone. They sense electricity and have an excellent sense of smell. They have problem-solving and social skills. There are documented cases of parthenogenesis. They’re built for predation and we’re already conditioned to cast them as villains.

Octopuses, corvids, parrots, and dolphins also have intelligence, or at least use tools and solve problems. Ravens can mimic sounds and have a wide range of calls, often for social purposes. Parrots are capable of communicating with humans. Dolphins have proto-language as well and are highly social. When was the last time you saw them (or parrots, or octopuses, or sharks) cast as aliens? Well, except for Hitchhiker’s Guide to the Galaxy…. Other possible species include: horseshoe crabs, trilobites, rabbits, elephants, slime molds, moss, bacterial hive-minds, marsupials, and monotremes.

And as I mentioned above, we need more entirely new physiologies too. Species that don’t match up with the Earth life we’re familiar with, or even with our extremophiles. If we make the environment first, the species second…

One possible environment, close to home: the diamond oceans of Uranus and Neptune. This would be a hot, high-pressure place to live. There probably wouldn’t be a lot of gas mixed into the diamond, let alone oxygen, so either the aliens wouldn’t breathe at all, they’d use a photosynthesis-like system to break down carbon for energy, or they’d be like whales, surfacing to breathe hydrogen, helium, or methane (those being the abundant gases). The aliens would almost certainly be carbon-based, and would consume other carbon-based life for energy. They’d likely evolve something like fins or flagella to propel themselves. Maybe they’d use jet propulsion.

There’s no reason why these aliens couldn’t evolve intelligence or even civilization, though I doubt they’d achieve buildings as we know them, because short of building on the solid-diamond floes, there’d be nothing solid, and the caps probably wouldn’t be all that stable. I can see floating structures tethered together, however, provided the aliens had something to build with. Perhaps after millennia of them using these structures, they’d adapt to them, becoming more amphibious than aquatic or losing the flippers and gaining something more like hands. Or perhaps not.

I have no idea what first contact will look like when we make it. The realistic version, of us finding microbes on Mars or one of Jupiter’s moons, lacks a certain something, though I’ll be pleased when it happens. But wouldn’t it be great if we ran into alien blue jays or giant platypuses or sentient hammerhead sharks?

An earlier, rougher version of this essay was posted on my blog, Specnology.

The color of alien pants

On June 4, Peggy Kolm posted her article Red hills of distant planets. Prior to that date, one title proposed for the article was “The color of alien plants”. During a discussion about the article, the proposed title was misheard as “The color of alien pants“. And the idea for this article was born.

Really, what color would alien pants be? And for that matter, would they wear them at all? This isn’t to suggest that all aliens are exhibitionists: maybe they just don’t need clothing.

Human use of clothing most likely dates back between 100,000 and 500,000 years. Its main purpose (initially) was protection against environmental threats; as humans evolved and lost natural physical protections like body hair, we needed extra help surviving harsh weather and difficult terrain. Clothing has evolved along with us, growing more sophisticated as we have: sewing needles date back as far as 30,000 years; flax fibers are known to have existed 30,000 years ago; and there’s strong evidence that humans have been weaving for a good 10,000 years or more.
