Archive

Posts Tagged ‘biology’

Everybody Knows That The Dice Are Loaded, Everybody Rolls With Their Fingers Crossed

Every parent I know – particularly those with daughters – laments the near-impossibility of finding gender-neutral toys, clothing, and even toiletries for their young children. From bibs to booties, children’s items from birth onward are awash in a sea of pink and blue. Lucky the baby shower attendee, uncertain of the gender of an impending infant, who can find a green or yellow set of onesies as a present!

At the same time, I can’t count the number of times someone has told me, quite earnestly, how they discovered that boys “naturally” prefer blue and girls pink. Everyone knows this is true. No matter how hard parents try to keep their children clothed and entertained with carefully-selected gender-neutral colors and toys, girls just gravitate to princesslike frills, while boys invent the idea of guns ex nihilo and make them with their fingers or with sticks if they’re not provided with toy firearms by their long-suffering progenitors. There must be some genetic component to gunpowder/frill preferences1.

This is nonsense, of course, and obviously so; but there’s a pernicious strain of thought that insists that all of human behavior must have some underlying evolutionary explanation, and it’s trotted out with particular regularity to explain supposed gender (and other) differences or stereotypes as biologically “hard-wired”. These just-so stories about gender and human evolution pop up with depressing regularity, ignoring cultural and temporal counterexamples in their rush to explain matters as minor as current fashion trends as evolutionarily deterministic.

A phrenology chart from 1883, showing the areas of the brain and corresponding mental faculties as believed to exist by phrenologists of the time.

In the 19th century, everybody knew that your intellectual and personal predispositions could be read using the measurements of your head. Image obtained via Wikimedia Commons

For example, a 2007 study purported to offer proof of, and an evolutionary explanation for, gender-based preferences in color – to wit, that boys prefer blue and girls prefer pink. Despite the fact that the major finding of the study was that both genders tend to prefer blue, the researchers explained that women evolved to prefer reds and pinks because they needed to find ripe berries and fruits, or maybe because they needed to be able to tell when their children had fevers2.

One problem with this idea is that currently-existing subsistence, foraging, or hunter-gatherer societies don’t all seem to operate on this sort of division of labor. The Aka in central Africa and the Agta in the Philippines are just two examples of such societies: men and women both participate in hunting, foraging, and caring for children. If these sorts of divisions of labor were so common and long-standing as to have become literally established in our genes, one would expect those differences to be universal, particularly among people living at subsistence levels, who can’t afford to allow egalitarian preferences to get in the way of their survival.

Of course, a much more glaring objection to the idea that “pink for boys, blue for girls” is the biological way of things is the fact that, less than a hundred years ago, right here in the United States from which I am writing, it was the other way around. In 1918, Earnshaw’s Infants’ Department, a trade publication, said that “The generally accepted rule is pink for the boys, and blue for the girls. The reason is that pink, being a more decided and stronger color, is more suitable for the boy, while blue, which is more delicate and dainty, is prettier for the girl.” Pink was considered a shade of red at the time, fashion-wise, making it an appropriately manly color for male babies. Were parents in the interwar period traumatizing their children by dressing them in clothes that contradicted their evolved genetic preferences? Or do fashions simply change, and with them our ideas of what’s appropriate for boys and girls?

A photograph of US President Franklin Delano Roosevelt, age 2. He is wearing a frilled dress, Mary Jane sandals, and holding a feather-trimmed hat - an outfit considered gender-neutral for young children at the time.

This is FDR at age 2. No one has noted the Roosevelts to be a family of multigenerational cross-dressers, so take me at my word when I say this was normal clothing for young boys at the time.
Image obtained via Smithsonian Magazine

More recently, researchers at the University of Portsmouth published a paper reporting that wearing high heels makes women appear more feminine and increases their attractiveness – a result they established by asking participants to view and rate videos of women walking in either high heels or flat shoes. The researchers don’t appear to have considered it necessary to test their hypothesis using videos of men in a variety of shoes3.

Naturally, articles about the study include plenty of quotes about the evolutionary and biological mechanisms behind this result4. But as with pink-and-blue, these ideas just aren’t borne out by history.  In the West, heels were originally a fashion for men. (In many non-Western societies heels have gone in and out of fashion for at least a few thousand years as an accoutrement of the upper classes of both genders.) They were a sign of status – a way to show that you were wealthy enough that you didn’t have to work for your living –  and a way of projecting power by making the wearer taller. In fact, women in Europe began wearing heels in the 17th century as a way of masculinizing their outfits, not feminizing them.

Studies like these, and the way that they reinforce stereotypes and cultural beliefs about the groups of people studied, have broader implications for society and its attitudes, but it’s also useful to think about them from a fictional and worldbuilding standpoint: the things we choose to study, and the assumptions we bring with us, often say more about us than about the reality of what we’re studying — particularly when the topic we’re studying is ourselves. Our self-knowledge is neither perfect nor complete. What are your hypothetical future-or-alien society’s blind spots? What assumptions do they bring with them when approaching a problem, and who inside or outside of that society is challenging them? What would they say about themselves that “everybody knows” that might not be true?

FOOTNOTES
1. Our ancestors, hunting and gathering on the savannah, evolved that way because the men were always off on big game safaris while the women stayed closer to home, searching out Disney princesses in the bushes and shrubs to complete their tribe’s collection. Frilly dresses helped them to disguise themselves as dangerous-yet-lacy wild beasts to scare off predators while the men weren’t there to protect them.
2. Primitive humans either lacked hands or had not yet developed advanced hand-on-forehead fever detection technology.
3. Presumably because everyone knows that heels are for girls and that our reactions to people wearing them are never influenced by our expectations about what a person with a “high-heel-wearing” gait might be like.
4. On the savannah, women often wore stiletto heels to help them avoid or stab poisonous snakes while the men were out Morris dancing.

Room Needed on the Ark

Imagine that in the not-so-distant future an asteroid is on a direct course to hit the Earth. It’s large enough to destroy most life as we know it. NASA, the European Space Agency and China’s National Space Administration are scrambling to launch teams that will attempt to deflect the asteroid, but there is no guarantee that they will be successful.

Meanwhile, a team of scrappy and resourceful aerospace engineers and biologists puts into motion a plan meant to rescue at least a few species – including humans – from extinction. A spacecraft that will carry genetic material, along with live plants and animals, is readied for launch.

The hope is that after escaping the cataclysmic effects of the asteroid strike, the space ark would travel long enough for the Earth’s dust to settle (literally) so that the ship could return and restore life on our planet. Or perhaps the ship would continue on to a distant solar system, and the life it carries would be used to start a new settlement on a habitable planet.

This would obviously be a technically complex operation that would require substantial advance planning. One of the big tasks for the biologists on the team would be to decide how the genetic material and live travelers on the space ark would be selected and collected.

An obvious source would be gene banks that collect and store samples of a wide range of genetic material. Such repositories exist today. The Millennium Seed Bank Partnership, for example, is an international project meant to save seeds from wild plants around the world. There are a number of other more agriculture-focused gene banks around the world that preserve seeds from a variety of crops.

Animal genetic material is a bit more difficult to archive than plant seeds. Projects like the US Department of Agriculture’s National Animal Germplasm Program focus primarily on collecting and storing semen and eggs, not embryos.

There is also a current push to sequence the genomes of as many different species as possible. Perhaps in the future we will have the technology to start from a raw DNA sequence and create a living, breathing animal. There have been recent proposals to use DNA sequences along with reproductive cloning technology to restore wild animal populations on the verge of extinction.

But whether an “archived” animal is grown from germ plasm or from a synthesized DNA sequence, there still must be at least one female for the fetuses to grow in. Not a simple proposition.

But it would not be enough for our space ark to carry a male and female of each species. There must be a minimum number of genetically distinct individuals to allow a population to survive and thrive.  Conservation biologists estimate that such a “minimum viable population” would require anywhere from a hundred to several thousand members to survive at least a century. The lower estimates usually assume that there would be minimal environmental changes and human intervention to keep the population going.
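To make the genetics concrete, here is a minimal sketch (my own illustrative numbers, not figures from the conservation papers listed below) of how quickly a small founding population loses genetic diversity to random drift, using the standard expectation that heterozygosity shrinks by a factor of (1 - 1/(2N)) each generation:

```python
# Minimal sketch: expected loss of genetic diversity ("heterozygosity") in an
# idealized population of N breeding individuals, following the standard
# drift formula H_t = H_0 * (1 - 1/(2N))^t.
# The population sizes and the 25-generation horizon (roughly a century for
# humans) are illustrative assumptions, not values from the cited studies.

def heterozygosity_retained(n_individuals: int, generations: int) -> float:
    """Fraction of the founders' heterozygosity expected to remain."""
    return (1.0 - 1.0 / (2 * n_individuals)) ** generations

for n in (10, 50, 100, 500, 5000):
    retained = heterozygosity_retained(n, generations=25)
    print(f"N = {n:5d}: ~{retained:.0%} of genetic diversity retained")
```

A two-by-two ark loses most of its diversity within a few generations, which is part of why the published estimates run from hundreds into the thousands.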

Even with human intervention a lack of genetic diversity in a population puts it at serious risk for being completely destroyed by disease or unexpected environmental changes. That’s already a problem today. Disease outbreaks have put agricultural “monocultures” of some crops (like the Cavendish banana) at risk of extinction.

If samples from many individuals of a species are required for genetic diversity, our hypothetical space ark might not have enough space to carry every known species. So how would a biologist decide which critters are most important to save? That turns out to be a complicated question.

Restoring – or creating – a stable ecosystem requires a wide variety of species, from microbes to large vertebrates and algae to trees. The exact needs would depend on the local climate, soil and atmospheric conditions, among other factors. So far, we humans haven’t been very successful in creating an ecosystem from scratch. And the less that’s known about the environment where the ecosystem is going to be established, the longer the list of potentially necessary species.

So for the space ark scenario to work, it would not only need to carry a variety of species, but a variety of individuals in each species. And that is, of course, in addition to the humans – not just an Adam and Eve, but a large mixed group of people with enough genetic variation to start a healthy human colony. Throw in the complex social and political considerations in selecting who gets rescued and the population would probably have to number in the thousands.

Our hypothetical space ark would have to be huge to carry them all!

The space ark scenario is admittedly pretty implausible, at least with present-day technology. Even so, I think it’s worth seriously considering how it might be done. That’s not just because catastrophe is always a possibility, but because I’d like to think that some day self-sufficient extraterrestrial colonies will be a reality. We need to start thinking about how we might do that now so that the genetic material can be saved and reproductive technologies can be developed before they become a necessity.

But there are many questions that need to be considered:

If we are going to collect and archive seeds, animal germ plasm, and genomic DNA sequences, should the focus be on agricultural species? Or should we cast our species net as far and wide as possible?

Should we seriously consider setting up a gene bank on the Moon, just in case something terrible happens to the Earth? Or would it be better to have our archives closer at hand so that they can be more easily maintained and added to? How much redundancy should there be between different seed and germ plasm repositories?

Or should we focus more of our resources on developing synthetic biology techniques, in the hope that they will eventually become advanced enough so that collections of physical specimens will become unnecessary?

And if Earthly life is destroyed, would it be worth trying to restore Earth’s ecosystems or better to start over elsewhere among the stars?

What do you all think?

Technical Reading 

Blackburn HD “Genebank development for the conservation of livestock genetic resources in the United States of America” Livestock Science 120:196-203 (2009) (pdf)

Holt WV et al “Wildlife conservation and reproductive cloning” Reproduction 127:317-324 (2004) (text)

Traill LW et al “Minimum viable population size: a meta-analysis of 30 years of published estimates” Biological Conservation 139:159-166 (2007) (pdf)

Shaffer ML “Minimum Population Sizes for Species Conservation” BioScience 31(2):131-134 (1981) (pdf)

Zhu et al “Genetic diversity and disease control in rice” Nature 406:718-722 (2000) doi:10.1038/35021046 (text)

 

The Future of Green Energy?

The luxuries of our modern life are heavily dependent on having continuing access to a source of electricity. But power generation often requires the consumption of limited resources like oil or coal, and generates high levels of pollution. Even “clean” energy sources like solar or hydroelectric power can significantly harm the environment.

Imagine a clean and green source of power that not only doesn’t harm the environment, but helps clean the air. Trees, for example, help reduce atmospheric carbon dioxide levels, and provide shade that makes use of electricity-hogging air conditioners less necessary. And trees and other plants, it turns out, can generate an electrical current that can be tapped.

The xylem tissue in vascular plants like trees transports water, ions and mineral nutrients as sap from the roots to the rest of the plant. There is a voltage difference between the xylem and the surrounding soil, which means a plant can drive a small electrical current through an external circuit.

Recently a team of Japanese scientists demonstrated that a battery could be created from 10 ordinary potted house plants connected in a circuit. They found their “green battery” could generate 3 volts and 3 microAmps of current. So far it has apparently only been used to power a blinking light.
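For a sense of scale, here is a quick back-of-the-envelope calculation (my own arithmetic based on the figures quoted above, not a number from the paper) of what that output amounts to:

```python
# Rough power estimate for the ten-plant "green battery", using the reported
# figures of 3 volts and 3 microamps. The 20 mW comparison is an assumed
# ballpark for a typical indicator LED, not a value from the study.

voltage_v = 3.0
current_a = 3e-6

power_w = voltage_v * current_a
print(f"Plant battery output: ~{power_w * 1e6:.0f} microwatts")
print(f"Fraction of a 20 mW indicator LED's draw: {power_w / 0.020:.3%}")
```

A few microwatts is why the demonstration stops at a blinking light: continuous loads, even small ones, are out of reach without energy storage or much lower-power electronics.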

Another research group, led by Brian Otis and Babak Parviz at the University of Washington, has shown they can run a circuit entirely powered by bigleaf maple trees. Their key to success seems to be the use of “nanocircuits”. These custom integrated circuits have lower power requirements than standard chips. Such low-power circuits would have broad applications in wireless devices like smartphones and even biosensor contact lenses.

Current applications for tree-powered devices seem limited to monitoring of the environment and wildlife in remote areas where battery-powered devices would be impractical. Trees and other plants simply don’t generate enough power to run our appliances or smartphones.

But I think it’s possible that devices in the future using low-power chips might be able to run on plant power. And perhaps we could engineer trees to produce more tappable electricity. Perhaps we’ll end up living in real-life tree houses.

What do you think the future could bring?

More reading:

Ferris Jabr “The Shocking Truth: Trees are Electric” ScienceLine (2010)

Love CJ, Zhang S, Mershin A (2008) “Source of Sustained Voltage Difference between the Xylem of a Potted Ficus benjamina Tree and Its Soil.” PLoS ONE 3(8): e2963. doi:10.1371/journal.pone.0002963 (free article)

Yamaguchi, T. and Hashimoto, S. (2012), “A green battery by pot-plant power.” IEEJ Trans Elec Electron Eng, 7: 441–442. doi: 10.1002/tee.21754 (subscription required)

Himes C. et al (2010) “Ultralow voltage nanoelectronics powered directly, and solely, from a tree” IEEE Transactions on Nanotechnology 9(1): 2-5 doi:10.1109/TNANO.2009.2032293 (free pdf)

Image: Violation by hapal, on Flickr, licensed under Creative Commons.

Realism in combat: perceptual distortions

Writing combat sequences and traumatic events is always a challenge. There are plenty of questions and answers out there about the mechanics of sword fighting, bare-handed combat, or guns. Most of us can extrapolate our more ordinary experiences with adrenaline to that sort of situation, to get those flavorful extra details: cold sweat, pounding pulse, hands shaking. But it turns out there are even more options than those.

Perceptual distortions are common in combat situations. The following can be used for dramatic effect or to set up conflicts because of differing accounts of what happened. According to the numbers reported, it’s not unusual to experience more than one of the following distortions — in fact, it would be more unusual not to experience any.

Listed from most common to least common, as reported by Artwohl & Christensen in 1997.

Above 50%:

  • Diminished sound: Not to be confused with being deafened by the noise of gunshots or whatever else is going on. This is sound being actively screened out by the brain. It may be a case of sounds being lowered in volume, it might be a complete blockage of all noise, or it might be selective editing of certain noises (such as gunshots).
  • Tunnel vision: The brain can actively screen out visual information, too, so as to narrow one’s focus to the most important (threatening) thing on hand.
  • Automatic pilot: This is why soldiers and police officers drill certain sequences of actions into becoming reflexes — because when one’s conscious brain shuts down under the tidal wave of adrenaline, that’s what still works. If your character hasn’t trained his automatic pilot to fight…?
  • Heightened visual clarity: This is why some combat pilots can describe, 50 years later, the look on the face of the enemy pilot they shot down. Adrenaline can burn images onto the brain.
  • Slow motion time: In addition to being a cool movie effect, this can actually seem to happen. Some swear that they saw the bullets zipping by, and who’s to say they didn’t?
  • Memory loss: In addition to the sights and sounds that the brain might edit out, the entire memory can be simply lost. Or is it only misplaced and waiting to burble up in a nightmare…

Below 50%:

  • Dissociation: Some get the sensation of watching themselves from a distance, in these situations.
  • Intrusive, distracting thoughts: Perhaps it’s thoughts of loved ones, one’s god/goddess, or “did I leave the oven on?”
  • Memory distortions: Not lost memories, but incorrect ones. This is part of why eyewitnesses are not as reliable as we wish they were.
  • Accelerated time: Blink and you missed it. Or was your brain editing stuff out?
  • Intensified sounds: Terror can crank everything up to eleven. This can make the situation even more overwhelming, and maybe accounts for the character losing nerve and running.
  • Temporary paralysis: This was relatively rare, but terrifying. The more quickly the subject realizes the paralysis isn’t real, the better his chances of survival.

Metamorphosis, Transformation and Evolution

In the huge, crisp cocoon, extraordinary processes began.
The caterpillar’s swathed flesh began to break down. Legs and eyes and bristles and body-segments lost their integrity. The tubular body became fluid.
The thing drew on the stored energy it had drawn from the dreamshit and powered its transformation. It self-organized. Its mutating form bubbled and welled up into strange dimensional rifts oozing like oily sludge over the brim of the world into other planes and back again. It folded in on itself, shaping itself out of the protean sludge of its own base matter.
It was unstable.
It was alive, and then there was a time between forms when it was neither alive nor dead, but saturated with power.
And then it was alive again. But different.
~ Perdido Street Station, China Miéville

The metamorphosis of caterpillars into butterflies (either beautiful or terrifying) is an amazing process.

The larva encases itself in a chrysalis or cocoon and enzymes begin to break down its tissues. Eventually all that is left of the original larva are clusters of cells known as imaginal discs.  The digested tissue from the remainder of the caterpillar supplies nutrients to the imaginal discs which rapidly grow and differentiate into the wings, antennae, legs and other parts of the adult butterfly.  The adult emerges from the chrysalis fully formed.

Amazingly, a recent study has shown that behavior learned as a larva can be retained in the adult, suggesting that the neurons involved in memory also survive metamorphosis and are integrated into the adult nervous system.

There are a number of hypotheses to explain how such a complicated system might have evolved. But the oddest hypothesis comes from zoologist Donald Williamson, who suggests that the larval caterpillar and adult butterfly evolved from two completely different organisms, whose genomes somehow fused together. He proposes that the transformation of a caterpillar into a butterfly is more a case of one creature turning into another than of a juvenile maturing into an adult.

Williamson’s idea has been pretty thoroughly debunked in light of what’s known about butterfly and moth biology and evolution. It’s especially hard to explain in light of the experiments showing the persistence of memory through the process. But I think it’s a great science fictional idea.

In Orson Scott Card’s Speaker for the Dead the alien Pequeninos (or piggies) go through metamorphosis from animal to plant, which never seemed very biologically plausible to me.

So are there good science fiction examples of hybrid lifeforms that shift from one component organism to the other during their lifetimes? What do you guys think?

Related reading:

Top image: Manduca sexta (tobacco horn worm) larva devouring a tomato plant in preparation for metamorphosis. Photo by me.

Bottom image: Adult butterfly, species unknown. Photo by me.

“Arsenic” Life, or: There Is TOO a Dragon in My Garage!

Note: This article was originally posted at Starship Reckless.

GFAJ-1 is an arsenate-resistant, phosphate-dependent organism — title of the paper by Erb et al, Science, July 2012

Everyone will recall the hype and theatrical gyrations which accompanied NASA’s announcement in December 2010 that scientists funded by NASA astrobiology grants had “discovered alien life” – later modified to “alternative terrestrial biochemistry” which somehow seemed tailor-made to prove the hypothesis of honorary co-author Paul Davies about life originating from a “shadow biosphere”.

As I discussed in The Agency that Cried “Awesome!”, the major problem was not the claim per se but the manner in which it was presented by Science and NASA and the behavior of its originators. It was an astonishing case of serial failure at every single level of the process: the primary researcher, the senior supervisor, the reviewers, the journal, the agency. The putative and since disproved FTL neutrinos stand as an interesting contrast: in that case, the OPERA team announced it to the community as a puzzle, and asked everyone who was willing and able to pick their results apart and find whatever error might be lurking in their methods of observation or analysis.

Those of us who are familiar with bacteria and molecular/cellular biology techniques knew instantly upon reading the original “arsenic life” paper that it was so shoddy that it should never have been published, let alone in a top-ranking journal like Science: controls were lacking or sloppy, experiments crucial for buttressing the paper’s conclusions were missing, while other results contradicted the conclusions stated by the authors. It was plain that what the group had discovered and cultivated were extremophilic archaea that were able to tolerate high arsenic concentrations but still needed phosphorus to grow and divide.

The paper’s authors declined to respond to any but “peer-reviewed” rebuttals. A first round of eight such rebuttals, covering the multiple deficiencies of the work, accompanied its appearance in the print version of Science (a very unusual step for a journal). Still not good enough for the original group: now only replication of the entire work would do. Of course, nobody wants to spend time and precious funds replicating what they consider worthless. Nevertheless, two groups finally got exasperated enough to do exactly that, except they also performed the crucial experiments missing in the original paper: for example, spectrometry to discover if arsenic is covalently bound to any of the bacterium’s biomolecules and rigorous quantification of the amount of phosphorus present in the feeding media. The salient results from both studies, briefly:

– The bacteria do not grow if phosphorus is rigorously excluded;
– There is no covalently bound arsenic in their DNA;
– There is a tiny amount of arsenic in their sugars, but this happens abiotically.

The totality of the results suggests that GFAJ-1 bacteria have found a way to sequester toxic arsenic (already indicated by their appearance) and to preferentially ingest and utilize the scant available phosphorus. I suspect that future work on them will show that they have specialized repair enzymes and ion pumps. This makes the strain as interesting as other exotic extremophiles – no less, but certainly no more.

What has been the response of the people directly involved? Here’s a sample:

Felisa Wolfe-Simon, first author of the “arsenic-life” paper: “There is nothing in the data of these new papers that contradicts our published data.”

Ronald Oremland, Felisa Wolfe-Simon’s supervisor for the GFAJ-1 work: “… at this point I would say it [the door of “arsenic based” life] is still just a tad ajar, with points worthy of further study before either slamming it shut or opening it further and allowing more knowledge to pass through.”

John Tainer, Felisa Wolfe-Simon’s current supervisor: “There are many reasons not to find things — I don’t find my keys some mornings. That doesn’t mean they don’t exist.”

Michael New, astrobiologist, NASA headquarters: “Though these new papers challenge some of the conclusions of the original paper, neither paper invalidates the 2010 observations of a remarkable micro-organism.”

At least Science made a cautious stab at reality in its editorial, although it should have spared everyone — the original researchers included — by retracting the paper and marking it as retracted for future reference. The responses are so contrary to fact and correct scientific practice (though familiar to politician-watchers) that I am forced to conclude that perhaps the OPERA neutrino results were true after all, and I live in a universe in which it is possible to change the past via time travel.

Science is an asymptotic approach to truth; but to reach that truth, we must let go of hypotheses in which we may have become emotionally vested. That is probably the hardest internal obstacle to doing good science. The attachment to a hypothesis, coupled with the relentless pressure to be first, original, paradigm-shifting can lead to all kinds of dangerous practices – from cutting corners and omitting results that “don’t fit” to outright fraud. This is particularly dangerous when it happens to senior scientists with clout and reputations, who can flatten rivals and who often have direct access to pop media. The result is shoddy science and a disproportionate decrease of scientists’ credibility with the lay public.

The two latest papers have done far more than “challenge” the original findings. Sagan may have said that “Absence of evidence is not evidence of absence,” but he also explained how persistent lack of evidence after attempts from all angles must eventually lead to the acceptance that there is no dragon in that garage, no unicorn in that secret glade, no extant alternative terrestrial biochemistry, only infinite variations at its various scales. It’s time to put “arsenic-based life” in the same attic box that holds ether, Aristotle’s homunculi, cold fusion, FTL neutrinos, tumors dissolved by prayer. The case is obviously still open for alternative biochemistry beyond our planet and for alternative early forms on earth that went extinct without leaving traces.

We scientists have a ton of real work to do without wasting our pitifully small and constantly dwindling resources and without muddying the waters with refuse. Being human, we cannot help but occasionally fall in love with our hypotheses. But we have to take that bitter reality medicine and keep on exploring; the universe doesn’t care what we like but still has wonders waiting to be discovered. I hope that Felisa Wolfe-Simon remains one of the astrogators, as long as she realizes that following a star is not the same as following a will-o’-the-wisp — and that knowingly and willfully following the latter endangers the starship and its crew.

Relevant links:

The Agency that Cried “Awesome!”

The earlier rebuttals in Science

The Erb et al paper (Julia Vorholt, senior author)

The Reaves et al paper (Rosemary Redfield, senior author)

Images: 2nd, Denial by Bill Watterson; 3rd, The Fool (Rider-Waite tarot deck, by Pamela Colman Smith)

A Recipe for Sentience: The Energetics of Intelligence

“No man can be wise on an empty stomach.”

- Mary Anne Evans, under the pseudonym George Eliot

 

We humans have been suffering from a bit of a self-image problem for the last half century.

First we were Man the Tool-Maker, with our ability to reshape natural objects to serve a purpose acting to  separate us from the brute beasts.  This image was rudely shattered by Jane Goodall’s discovery in the 1960s that chimpanzees also craft and use tools, such as stripping leaves from a twig to fish termites out of their nest to eat, or using the spine of an oil palm frond as a pestle to pulverize the nutritious tree pulp.

Then we were Man the Hunter.  We’d lost our tool-making uniqueness but we still had our ability to kill, dismember, and eat much larger animals with even simple tools, and it was thought that this ability unlocked enough energy in our diet to fuel the growth of larger body size and larger brains1.  This idea rather famously bled into popular culture and science fiction of the time, such as the opening to the movie 2001: A Space Odyssey.  However, we would later find out that although it is not a large component of the diet, chimpanzees eat enough meat to act as significant predators on other primates in their forest homes.  We would also find out that the bone piles we had once attributed to our ancestors belonged to ancient savannah predators, and that the whole reason hominid bones showed up in the assemblage at all is because we were occasionally lunch.

So meat eating by itself doesn’t seem to make us as distinct from our closest living relatives as we had previously thought, and the argument of what makes us special has since moved on to language.  That does leave a standing question, though: if it wasn’t meat-eating that allowed us to get bigger and more intelligent, what was it?

While there is evidence in the fossil record that eating raw meat allowed humans to gain more size and intelligence, it is unlikely both that we were the hunters and that this behavioral change was enough to unlock a significant jump in brain size.  Instead, there is another hypothesis and human identity that has been gaining more traction as of late: the concept of Man the Cooking Animal, the only animal on Earth that can no longer survive on a diet of raw food because of the energy demands of its enormous brain2.

Napoleon is famously said to have declared that an army marches on its stomach (at least, after what may be a loose translation).  That is, the power of an army is limited by the amount of food that a society can divert to it.  What we have come to realize more recently is that this same limitation exists inside the body, be it human, animal, or speculative alien species.  No matter what the diet, a creature will only have a fixed amount of energy available to divert to activities such as maintaining a warm-blooded body temperature (homeothermy), digestion, reproduction, and the growth and maintenance of tissues.  We can track some of these changes in the human line in the fossil record, but others must at best be more speculative due to the difficulty of preserving evidence of behavioral changes (which of course, do not fossilize) as well as limited research on modern examples.  We’ll start by looking at the evolutionary pathway of humans to see what information is currently available.

 

 The Woodland Ape and the Handy Man

 

Size comparison of Australopithecus afarensis and Homo sapiens (by Carl Buell)

Some of the oldest human ancestors that we can unequivocally identify as part of our line lie in the genus Australopithecus.  These have been identified by some authors as woodland apes, to distinguish these more dryland inhabitants from the forest apes that survive today in Africa’s jungles (chimpanzees, bonobos, and gorillas).  They are much smaller than a modern human, only as tall as a child, but they have already evolved to walk upright.  They still show adaptations for climbing that were lost in later species, suggesting they probably escaped into the trees at night to avoid ground predators, as modern chimps do.  Their brains were not much larger than a modern chimpanzee’s, and their teeth are very heavy, even pig-like, as an adaptation to a tough diet of fibrous plant material – probably roots, tubers, and corms, perhaps dug from plants growing at the water’s edge2,3.

The hominids thought to have first started eating meat are Homo habilis, the “handy man”, and the distinctions between them and the older Australopithecus group from which they descended are not very large.  The two are close enough that it has been suggested Homo habilis might be more properly renamed Australopithecus habilis, while the interspecies variation suggests to some researchers that what we now call habilis may represent more than one species4.  Whatever its proper taxonomic designation, H. habilis shows a modest increase in brain size and evidence that it was using simple stone tools to butcher large mammals, probably those left behind by the many carnivorous mammals that lived on the savannahs and woodlands alongside it.

The transition between H. habilis and H. erectus is far more distinctive, with a reduction in tooth size, jaw size, and gut size, and an increase in brain volume.  They are also believed to have been larger, but the small number of available hominid fossils makes this difficult to verify.  H. erectus is also the first human to have been found outside of Africa.  While the habilis-erectus split has been attributed to the eating of significant amounts of meat in the Man-the-Hunter scenario (recall that habilis, despite its tool-using ability for deconstructing large animals, does not appear to have hunted them), the anthropologist Richard Wrangham has suggested that the turnover instead indicates the first place at which humans began to cook2,3.  Because the oldest solid evidence of cooking is far younger than the oldest known fossils of erectus, what follows is largely based on linking scraps of evidence from modern humans and ancient fossils using what is known as the Expensive-Tissue Hypothesis.

 

 Brains versus Guts: The Expensive-Tissue Hypothesis

The Expensive-Tissue Hypothesis was first proposed in 1995 by Leslie Aiello and Peter Wheeler5, and it goes something like this.  Large brains evolve in creatures that live in groups because intelligence is important to creating and maintaining the social groups.  This is known as the social brain hypothesis, and it helps to explain why animals that live socially have larger brains than their more solitary relatives.  However, not all social primates, or even social animals, have particularly large brains.  Horses, for example, are social animals not known for their excessively large brain capacity, and much the same can be said for lemurs.  Meanwhile, apes have larger brains than most monkeys.  This can’t be accounted for purely by the social brain hypothesis, since by itself it would suggest that all social primates and perhaps all social animals should have very big brains, rather than the variation we see between species and groups.  What does account for the difference is the size of the gut and, by extension, the quality of the diet.

Both brains and guts fit the bill for expensive body tissues.  In humans, the brain uses about 20% of the energy we expend while resting (the basal metabolic rate, or BMR) to feed an organ that only makes up 2.5% of our body weight2.  This number goes down in species with smaller brains, but it is still disproportionately high in social, big-brained animals.  Aiello and Wheeler note that one way to get around this lockstep rule is to increase the metabolic requirements of the species5 (i.e., throw more calories at the problem), but humans don’t do this, and neither do other great apes.  Our metabolic rates are exactly what one would expect for primates of our size.  The only other route is to decrease the energy flow to other tissues, and among the social primates only the gut tissue shows substantial variation in its proportion of body weight.  In fact, the correlation between smaller guts and larger brains lined up quite well in the data then available for monkeys, gibbons, and humans5.  Monkeys and other animals that feed on low-quality diets containing significant amount of indigestible fibers or dangerous plant toxins have very large guts to handle the problem and must expend a significant amount of their BMR on digestion, and have less extra energy to shunt to operate a large brain.  Fruit-eating primates such as chimpanzees and spider monkeys have smaller guts to handle their more easily-digested food, and so have larger brains.  Humans spend the least amount of time eating of any living primate, with equally short digestion times as food speeds through a relatively small gut.  And ours, of course, are the largest brains of all2.
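To put the brain’s expense in concrete terms, here is a rough calculation using the 20%-of-BMR and 2.5%-of-body-weight figures from the text; the body mass and resting metabolic rate below are assumed round numbers for illustration, not measured values:

```python
# Rough illustration of the Expensive-Tissue point: the brain's share of the
# energy budget versus its share of body mass. The 20% and 2.5% figures come
# from the text above; body mass and BMR are assumed round numbers.

body_mass_kg = 60.0           # assumed adult body mass
bmr_kcal_per_day = 1500.0     # assumed resting metabolic rate

brain_mass_kg = body_mass_kg * 0.025       # ~2.5% of body weight
brain_kcal = bmr_kcal_per_day * 0.20       # ~20% of resting energy use
brain_watts = brain_kcal * 4184 / 86400    # kcal/day -> watts

print(f"Brain mass: ~{brain_mass_kg:.1f} kg")
print(f"Brain energy use: ~{brain_kcal:.0f} kcal/day (~{brain_watts:.0f} W)")
print("Roughly eight times the brain's per-kilogram 'fair share' of the budget.")
```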

These tradeoffs are not hard-linked to intestinal or brain size, and have been demonstrated in other species.  For example, there is a South American fish species with a tiny gut that uses most of its energy intake to power a surprisingly large brain, while birds with smaller guts often use the energy savings not to build larger brains, but larger, stronger wing muscles2.  Similarly, muscle mass could be shed instead of gut mass to grow a larger brain or to cut overall energy costs.  The latter strategy is the one taken up by tree-dwelling sloths to survive on a very poor diet of tough, phytotoxin-rich leaves, and although it makes them move like rusty wind-up toys it also allows them to live on lower-quality food than most leaf-eating mammals.

Modern humans have, to a degree, taken this approach as well.  When compared to one of our last surviving relatives, H. neanderthalensis, humans have a skeletal structure that paleontologists describe as “gracile”: light bones for our body size, anchoring smaller muscles than our shorter, heavier relatives.  Lower muscle and bone mass in H. sapiens gives us an average energy cost on the order of 1720 calories a day for males and 1400 calories a day for females in modern cold-adapted populations, which are thought to have similar metabolic adaptations for cold weather as the extinct Neanderthals.  By contrast, H. neanderthalensis has been estimated to need 4000-7000 calories a day for males and 3000-5000 calories for females, with the higher costs reflecting the colder winter months6.

 

Cooked versus Raw

 

Tribe of Homo erectus cooking with fire (from sciencephoto.com)

At the point where human brain size first increases dramatically (H. erectus, as you might recall), both guts and teeth reduce significantly while the brain increases.  The expensive tissue hypothesis explains the tradeoff between guts and brains, but cooking provides a possible explanation for how both the teeth and the guts could reduce so significantly while still feeding a big brain.

Data on the energetics of cooked food are currently limited, but the experiments that have been performed so far indicate that the softer and more processed the food, the more net calories are extracted, since fewer calories need to be spent on digestion.  A Japanese experiment showed that rats gained more weight on laboratory blocks that had been puffed up like a breakfast cereal than rats fed normal blocks, even though the total calories in the food were the same and the rats spent the same amount of energy on exercise2.  Similarly, experiments with pythons show that they expend about 12% more energy breaking down whole meat than either meat that has been cooked or meat that has been finely ground.  The two treatments reduce energy cost independently of each other, meaning that snakes fed ground, cooked meat used almost 24% less energy than pythons fed whole raw meat or rats2.

There is even less data on how humans utilize cooked food versus raw food.  Because it only recently occurred to us that we might not be able to eat raw food diets like other animals, only a few studies exist.  So far the most extensive is the Giessen Raw Food study, which used questionnaires to collect data from 513 raw foodists in Germany who eat anywhere from a 75% to 100% raw food diet.  The data are startling.  Modern humans appear to do extremely poorly on diets that our close relatives, the forest apes, would get sleek and fat on.  Body weights fall dramatically when we eat a significant amount of raw food, to the point where almost a third of those eating nothing but raw had body weights suggesting chronic energy deficiency.  About half of the women on total raw food diets had so little energy to spare that they had completely ceased to menstruate, and 10% had such irregular cycles that they were likely to be completely unable to conceive at their current energy levels2.  Mind you, these are modern first-world people with the advantage of high-tech processing equipment to reduce the energy cost of eating whole foods, far less energy expenditure required to gather that food, and a cornucopia of modern domestic plants that have been selectively bred to produce larger fruits and vegetables with lower fiber and toxin contents than their wild counterparts.  The outcome looks more dismal for a theoretical raw-food-eating human ancestor living before the dawn of civilization and supermarkets.

 

Fantastic Implications

What this all ultimately suggests is that there are tradeoffs in the bodies of intelligent creatures that we may not have given much consideration: namely, that to build a bigger brain you either need a much higher level of caloric intake and burn (a high BMR), or the size and energy costs of something else in the body have to give.  Certain organs do not appear to have much wiggle room for size reduction, as Aiello and Wheeler discovered; hearts for warm-blooded organisms need to be a certain size to provide enough blood throughout the body, and similarly lungs must be a particular size to provide enough surface area for oxygen to diffuse into the blood.  However, gut size can fluctuate dramatically depending on the requirements of the diet, and musculature can also reduce to cut energy costs.

Humans seem to have done an end-run around some of the energy constraints of digestion by letting the cultural behaviors of cooking and processing do the work for them, freeing up energy for increased brain size following social brain hypothesis patterns.  This is pretty classic human adaptive behavior, the same thing that lets us live in environments ranging from arctic to deep desert, and should therefore not come as a great surprise.  It does, however, give us something to think about when building intelligent races from whole cloth: what energy constraints would they run up against, and assuming they didn’t take the human path of supplanting biological evolution with culture, how would they then get around them?

You're going to need to cook that first. (From http://final-girl.tumblr.com/)

Fantasy monsters and evil humanoids in stories tend to be described as larger and stronger than humans (sometimes quite significantly so) and as raw meat eaters, particularly of humanoid meat.  There’s a good psychological reason for doing so – both of these characteristics tap into ancient fears, one of the time period not so long ago when humans could end up as prey for large mammalian predators, and the other a deep-seated terror of cannibalism without a heavy dose of ritualism to keep it in check.  However, both the Neanderthal example and the Expensive Tissue Hypothesis suggest that such a species would be very difficult to produce; there’s a very good reason why large mammalian predators, whatever their intelligence level, are rare.  It wouldn’t be a large shift, however, to take a monstrous race and model them after a hybrid of Neanderthal and grizzly bear, making them omnivores that can supplement their favored meat diet with plant foods and use cooking to reduce the energy costs of digestion.  Or perhaps their high caloric needs and obligate carnivory could become a plot point, driving them to be highly expansionistic simply in order to keep their people fed, and to view anything not of their own race as a potential meal.

On the science fiction front, it presents limitations that should be kept in mind for any sapient alien.  To build a large brain, either body mass has to give somewhere (muscle, bone, guts) or the caloric intake needs to increase to keep pace with the higher energy costs.  Perhaps an alien race more intelligent than humans would be able to do so by becoming even more gracile, with fragile bones and muscles that may work on a slightly smaller, lower-gravity planet.  Or perhaps they reduce their energy needs by being an aquatic race, since animals that swim generally use a lower energy budget for locomotion than animals that fly or run7.

From such a core idea, whole worlds can be spun: low-gravity planets that demand less energy for terrestrial locomotion; great undersea empires in either a fantastic or an alien setting, where water buoys the body and reduces energy costs enough for sapience; or creatures driven by hunger and a decidedly human propensity for expansion that spread, locust-like, across continents, much as we did long ago when we first left our African cradle.

Food for thought, indeed.

 

References

1.  Stanford, C.B.,  2001.  The Hunting Apes: Meat Eating and the Origins of Human Behavior.   Princeton, NJ: Princeton University Press.

2. Wrangham, R., 2009.  Catching Fire: How Cooking Made us Human.  New York, NY: Basic Books.

3. —-, 2001.  “Out of the Pan, into the fire:  from ape to human. ”  Tree of Origin: What Primate Behavior Can Tell us About Human Social Evolution.  Ed.  F.B.M. de Waal.   Cambridge, MA:  Harvard University Press.   119-143.

4. Miller, J.A., 1991.  “Does brain size variability provide evidence of multiple species in Homo habilis?”  American Journal of Physical Anthropology 84(4): 385-398.

5. Aiello, L.C. and P. Wheeler, 1995.  “The Expensive-Tissue Hypothesis: The Brain and the Digestive System in Human and Primate Evolution.”  Current Anthropology 36(2): 199-221.

6. Snodgrass, J.J., and W.R. Leonard, 2009.  “Neanderthal Energetics Revisited: Insights into Population Dynamics and Life History Evolution.”  PaleoAnthropology 2009: 220-237.

7. Schmidt-Nielsen, K., 1972.  “Locomotion: Energy cost of swimming, flying, and running.”  Science 177: 222-228.

 

That Shy, Elusive Rape Particle

Note: This article originally appeared on Starship Reckless

[Re-posted modified EvoPsycho Bingo Card]

One of the unlovely things that has been happening in Anglophone SF/F (in line with resurgent religious fundamentalism and erosion of democratic structures in the First World, as well as economic insecurity that always prompts “back to the kitchen” social politics) is the resurrection of unapologetic – nay, triumphant – misogyny beyond the already low bar in the genre. The churners of both grittygrotty “epic” fantasy and post/cyberpunk dystopias are trying to pass rape-rife pornkitsch as daring works that swim against the tide of rampant feminism and its shrill demands.

When people explain why such works are problematic, their authors first employ the standard “Me Tarzan You Ape” dodges: mothers/wives get trotted out to vouch for their progressiveness, hysteria and censorship get mentioned. Then they get really serious: as artists of vision and integrity, they cannot but depict women solely as toilet receptacles because 1) that has been the “historical reality” across cultures and eras and 2) men have rape genes and/or rape brain modules that arose from natural selection to ensure that dominant males spread their mighty seed as widely as possible. Are we cognitively impaired functionally illiterate feminazis daring to deny (ominous pause) SCIENCE?!

Now, it’s one thing to like cocoa puffs. It’s another to insist they are either nutritional powerhouses or haute cuisine. If the hacks who write this stuff were to say “Yeah, I write wet fantasies for guys who live in their parents’ basement. I get off doing it, it pays the bills and it has given me a fan base that can drool along with me,” I’d have nothing to say against it, except to advise people above the emotional age of seven not to buy the bilge. However, when they try to argue that their stained wads are deeply philosophical, subversive literature validated by scientific “evidence”, it’s time to point out that they’re talking through their lower digestive opening. Others have done the cleaning service for the argument-from-history. Here I will deal with the argument-from-science.

It’s funny how often “science” gets brandished as a goad or magic wand to maintain the status quo – or bolster sloppy thinking and confirmation biases. When women were barred from higher education, “science” was invoked to declare that their small brains would overheat and intellectual stress would shrivel their truly useful organs, their wombs. In our times, pop evopsychos (many of them failed SF authors turned “futurists”) intone that “recent studies prove” that the natural and/or ideal human social configuration is a hybrid of a baboon troop and fifties US suburbia. However, if we followed “natural” paradigms we would not recognize paternity, have multiple sex partners, practice extensive abortion and infanticide and have powerful female alliances that determine the status of our offspring.

I must acquaint Tarzanists with the no-longer-news that there are no rape genes, rape hormones or rape brain modules. Anyone who says this has been “scientifically proved” has obviously got his science from FOX News or knuckledraggers like Kanazawa (who is an economist, by the way, and would not recognize real biological evidence if it bit him on the gonads). Here’s a variation of the 1986 Seville Statement that sums up what I will briefly outline further on. It goes without saying that most of what follows is shorthand and also not GenSci 101.

It is scientifically incorrect to say that:
1. we have inherited a tendency to rape from our animal ancestors;
2. rape is genetically programmed into our nature;
3. in the course of our evolution there has been a positive selection for rape;
4. human brains are wired for rape;
5. rape is caused by instinct.

Let’s get rid of the tired gene chestnut first. As I’ve discussed elsewhere at length, genes do not determine brain wiring or complex behavior (as always in biology, there are a few exceptions: most are major decisions in embryo/neurogenesis with very large outcomes). Experiments that purported to find direct links between genes and higher behavior were invariably done in mice (animals that differ decisively from humans) and the sweeping conclusions of such studies have always had to be ratcheted down or discarded altogether, although in lower-ranking journals than the original effusions.

Then we have hormones and the “male/female brain dichotomy” pushed by neo-Freudians like Baron-Cohen. They even posit a neat-o split whereby too much “masculinizing” during brain genesis leads to autism, too much “feminizing” to schizophrenia. Following eons-old dichotomies, people who theorize thusly shoehorn the two into the left and right brain compartments respectively, assigning a gender to each: females “empathize”, males “systematize” – until it comes to those intuitive leaps that make for paradigm-changing scientists or other geniuses, whereby these oh-so-radical theorists neatly reverse the tables and both creativity and schizophrenia get shifted to the masculine side of the equation.

Now although hormones play critical roles in all our functions, it so happens that the cholesterol-based ones that become estrogen, testosterone, etc are two among several hundred that affect us. What is most important is not the absolute amount of a hormone, but its ratios to others and to body weight, as well as the sensitivity of receptors to it. People generally do not behave aberrantly if they don’t have the “right” amount of a sex hormone (which varies significantly from person to person), but if there is a sudden large change to their homeostasis – whether this is crash menopause from ovariectomy, post-partum depression or heavy doses of anabolic steroids for body building.

Furthermore, as is the case with gene-behavior correlation, much work on hormones has been done in mice. When similar work is done with primates (such as testosterone or estrogen injections at various points during fetal or postnatal development), the hormones have essentially no effect on behavior. Conversely, very young human babies lack gender-specific responses before their parents start to socialize them. As well, primates show widely different “cultures” within each species in terms of gender behavior, including care of infants by high-status males. It looks increasingly like “sex” hormones do not wire rigid femininity or masculinity, and they most certainly don’t wire propensity to rape; instead, they seem to prime individuals to adopt the habits of their surrounding culture – a far more adaptive configuration than the popsci model of “women from Venus, men from Mars.”

So on to brain modularity, today’s phrenology. While it is true that there are some localized brain functions (the processing of language being a prominent example), most brain functions are diffuse, the higher executive ones particularly so – and each brain is wired slightly differently, dependent on the myriad details of its context across time and place. Last but not least, our brains are plastic (otherwise we would not form new memories, nor be able to acquire new functions), though the windows of flexibility differ across scales and in space and time.

The concept of brain modularity comes partly from the enormously overused and almost entirely incorrect equivalence of the human brain to a computer. Another problem lies in the definition of a module, which varies widely and as a result is prone to abuse by people who get their knowledge of science from new-age libertarian tracts. There is essentially zero evidence of the “strong” version of brain modules, and modular organization at the level of genes, cells or organ compartments does not guarantee a modular behavioral outcome. But even if we take it at face value, it is clear that rape does not adhere to the criteria of either the “weak” (Fodor) or “strong” version (Carruthers) for such an entity: it does not fulfill the requirements of domain specificity, fast processing, fixed neural architecture, mandatoriness or central inaccessibility.

In the behavioral domain, rape is not an adaptive feature: most of it is non-reproductive, visited upon pre-pubescent girls, post-menopausal women and other men. Moreover, rape does not belong to the instinctive “can’t help myself” reflexes grouped under the Four Fs. Rape does not occur spontaneously: it is usually planned with meticulous preparation and it requires concentration and focus to initiate and complete. So rape has nothing to do with reproductive maxima for “alpha males” (who don’t exist biologically in humans) – but it may have to do with the revenge of aggrieved men who consider access to women an automatic right.

What is undeniable is that humans are extremely social and bend themselves to fit contextual norms. This ties to Arendt's banality of evil and Niemöller's trenchant observations about solidarity – and to the outcomes of Milgram's and Zimbardo's notorious experiments, which have been mirrored repeatedly in real history, the events in the Abu Ghraib prison prominent among them. So if rape is tolerated or used as a method of enforcing compliance, it is no surprise that it is a prominent weapon in the arsenal of keeping women "in their place," and no surprise that its apologists aspire to give it the status of indisputably hardwired instinct.

Given the steep power asymmetry between the genders ever since the dominance of agriculture led to women losing mobility, gathering skills and control over pregnancies, it is not hard to see rape as the cultural artifact that it is. It’s not a sexual response; it’s a blunt assertion of rank in contexts where dominance is a major metric: traditional patriarchal families, whether monogamous or polygynous; religions and cults (most of which are extended patriarchal families); armies and prisons; tribal vendettas and initiations.

So if gratuitous depictions of graphic rape excite a writer, that is their prerogative. If they get paid for it, bully for them. But it doesn't make their work "edgy" literature; it remains cheap titillation that attempts to cloak arrant failures of talent, imagination and just plain scholarship. Insofar as such work rests on combined sex-and-violence porn as its foundation, it should be classified accordingly. Mythologies, including core religious texts, show rape in all its variations: there is nothing novel or subversive about contemporary exudations. In my opinion, nobody needs to write yet another hack work that "interrogates" misogyny by positing rape and inherent, immutable female inferiority as natural givens – particularly not white Anglo men who lead comfortable lives and lack any first-hand knowledge that might justify such a narrative. The fact that people with such views are over-represented in SF/F is toxic for the genre.

Further reading:

A brief overview of the modularity of the brain/mind
Athena Andreadis (2010). The Tempting Illusion of Genetic Virtue. Politics Life Sci. 29:76-80
Sarah Blaffer Hrdy, Mothers and Others: The Evolutionary Origins of Mutual Understanding
Anne Fausto-Sterling, Sex/Gender: Biology in a Social World
Cordelia Fine, Delusions of Gender
Alison Jolly, Lucy’s Legacy: Sex and Intelligence in Human Evolution
Rebecca Jordan-Young, Brain Storm: The Flaws in the Science of Sex Differences
Kevin Laland and Gillian Brown, Sense and Nonsense: Evolutionary Perspectives on Human Behaviour
Edouard Machery and Kara Cohen (2012). An Evidence-Based Study of the Evolutionary Behavioral Sciences. Brit J Philos Sci 63: 177-226

Empty your memory trash can? (This action cannot be undone)

PKMzeta is shaping up to be a single, targetable protein in the brain responsible for reconsolidating memories. Discover ran a three-part article on it, and there was a recent article in Wired, too – the original scientific papers are behind subscription walls, unfortunately.

In brief, reconsolidation is a maintenance process for long-term memories. We think our memories are firm and unchanging, but plenty of studies have proven that they aren’t. They shift a little each time we remember them, each time we reconsolidate them, and over time those shifts add up. (And they’re often inaccurate to begin with, but that’s another issue.)

PKMzeta is a protein that hangs out in the synapses between neurons and maintains a particular type of ion channel so that the neuron is able to receive signals from its neighbors. Without PKMzeta, the number of those ion channels drops and the neuron becomes less sensitive to nearby activity.

Block PKMzeta while a memory is undergoing reconsolidation and the memory will fade. We already have one drug (propranolol) that does this, and there are sure to be more.
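To make the idea concrete, here is a minimal toy sketch in Python – a metaphor for the mechanism described above with entirely made-up numbers, not a biophysical model. The hypothetical recall function and its pkmzeta_blocked flag are mine, not anything from the papers: a memory trace is maintained (but drifts a little) each time it is recalled and reconsolidated, and it fades if "PKMzeta" is blocked during that window.

import random

# Toy illustration only: a single number stands in for a memory trace.
def recall(strength, pkmzeta_blocked, drift=0.05, fade=0.4):
    """Return the trace strength after one recall/reconsolidation cycle."""
    if pkmzeta_blocked:
        # No PKMzeta during the reconsolidation window: the supporting ion
        # channels are not maintained, so the trace weakens (made-up rate).
        return strength * (1.0 - fade)
    # Ordinary reconsolidation: the trace is preserved but shifts slightly,
    # which is one reason memories drift over repeated recollection.
    return max(0.0, min(1.0, strength + random.uniform(-drift, drift)))

strength = 1.0
for _ in range(5):
    strength = recall(strength, pkmzeta_blocked=False)
print(f"after ordinary recalls: {strength:.2f}")

for _ in range(5):
    strength = recall(strength, pkmzeta_blocked=True)
print(f"after recalls under a PKMzeta blocker: {strength:.2f}")

Run it and the first figure stays near 1.0 while the second drops sharply – the cartoon version of "block reconsolidation, lose the memory."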

There are tons of questions still to be answered, of course. And there are tons of possible uses and abuses of such a thing. This is such a gold mine of science fiction possibilities that I’m sure I don’t have to list them. But I would like to bring up one.

“This isn’t Eternal Sunshine of the Spotless Mind-style mindwiping,” the article in Wired says. That may be true, but it also does not address an excellent question that movie poses (if you haven’t seen it, I recommend it). The question being: you can remove the memories associated with a bad relationship with a person, but what about the underlying attraction that drew you to that person in the first place? One of the implications I took from that movie was that the two of them were stuck in a cycle of attraction, falling apart, and voluntary mind-wipes.

For “person,” above, substitute anything you like. Kittens. Drugs. Street racing. World domination… like I said, a gold mine of possibilities here.

YouTube Is The New Substitute Teacher

School, like most of everyday life, is at times boring and occasionally a waste of time. We can place blame for that squarely upon the education system and teachers, or share it with parents if we’d like to keep diplomacy in the PTA. But although it’s true that the adults who shape and deliver education as we know it are largely responsible for what we learn and how well we learn it while we are children, we have nobody but ourselves to blame for allowing ignorance to persist after we grow up.

No matter how dreadful your educational experience was as a child, if you reached adulthood literate enough to use the internet, then you should find it both convenient and entertaining to develop a passing acquaintance with basic science concepts. The idea that learning should be fun and easy is so compelling that YouTube is positively swarming with video bloggers enthusiastically sharing knowledge.

Because I am a science enthusiast and a lifetime devotee of independent study, I’ve compiled a video playlist of some of my recent favorites in that genre. To eliminate some common misconceptions, the playlist opens with the definition of science. From there, it builds from some interesting basics about water and carbon, covers some of the science frequently botched by Hollywood and in other fiction, and demonstrates that girls plus math equals win. Then follows a musical interlude, but it’s all science, so it’s all good. The last few are a sampler of videos posted by universities and science publishers for viewers who prefer productions with bigger budgets.

Now all you have to do is watch and learn.