Archive for February, 2013

Take Two Skulls And Call Me In The Morning

There is a common and ancient opinion that certain prophetic women who are popularly called ‘screech-owls’ suck the blood of infants as a means, insofar as they can, of growing young again. Why shouldn’t our old people, namely those who have no [other] recourse, likewise suck the blood of a youth? — a youth, I say, who is willing, healthy, happy and temperate, whose blood is of the best but perhaps too abundant. They will suck, therefore, like leeches, an ounce or two from a scarcely-opened vein of the left arm; they will immediately take an equal amount of sugar and wine; they will do this when hungry and thirsty and when the moon is waxing. If they have difficulty digesting raw blood, let it first be cooked together with sugar; or let it be mixed with sugar and moderately distilled over hot water and then drunk.

Marsilio Ficino, De Vita II (1489), 11: 196-199. Translated by Sergius Kodera

If your world-building has a medieval flavor and you’re looking to add some period-authentic medicine, you need look no further than cannibalism. For more than 200 years, cannibalism was a routine part of medicine. Walk into the shop of any apothecary (the equivalent of today’s pharmacist), and you would find, among other things, the skull of a man killed by violent death, human blood (which could include menstrual blood), human urine (separated by sex, and if the urine came from a woman, by whether she was a virgin or not), human fat, and mummia.

Skulls

Perhaps the simplest form of this type of medicine was the skull and the moss of the skull. But not just any skull would do. It was widely believed that the skulls used should come from those who had suffered violent death. There were disagreements about which type of violent death was best. The German professor Rudolf Goclenius (fl. c.1618) held that skulls should come from those who had been hanged. Flemish chemist, physiologist, and physician Jan (or Jean) Baptist van Helmont disagreed, claiming that a body broken on the wheel would do just as well. He also explained that the skull was the most efficacious of all the human bones because, after death, “…all the brain is consumed and dissolved in the skull… by the continual… imbibing of [this] precious liquor” of dissolved brains “the skull acquires such virtues.”

One of the most important sources of skulls in England was Ireland. Sir Humphrey Gilbert slaughtered thousands of Irish men, women, and children during the late 1560s, severing the heads of those he captured and placing them in long rows, like a wall, leading to his tent. The skulls rotted and moss grew on them, and he began exporting them to England, where they ended up being used as medicine by the English aristocracy. So much money was made by exporting the skulls that the English introduced an import tax of one shilling for each one. As late as 1778, the skulls were still liable for duty and were also listed amongst goods which were imported into England before being exported elsewhere.

One of the earliest descriptions of using human skull comes from John French’s 1651 book The Art of Distillation. One of the methods described in the book for turning human skull into spirit involved breaking the skull into small pieces and placing them in a glass retort. Heating them in a “strong fire” would eventually yield “a yellowish spirit, a red oil, and a volatile salt.” The salt and spirit were then further distilled for an additional two to three months. This spirit of the skull was said to be good for falling-sickness, gout, and dropsy, and as a general panacea for all illnesses.

A different recipe for turning human skull into spirit was developed by Jonathan Goddard, Professor of Physic at London’s Gresham College, and was purchased by King Charles II for £6,000 (an enormous sum of money), becoming known as “the King’s Drops.” This concoction was used against epilepsy, convulsions, and diseases of the head, and often as an emergency treatment for the dying. Charles even manufactured and sold the drops himself. Unfortunately, they didn’t do him much good: he died on February 6, 1685, after being treated with high doses of the distillation, having fallen ill four days earlier. The drops failed again in December of 1694, when Queen Mary II died despite having taken some of them.

The moss of the skull, called usnea, was also important. Francis Bacon (d.1626), the father of scientific inquiry, probably started the trend of consuming fresher skulls with moss still growing on them. Chemist and physicist Robert Boyle (d.1691) then found another use. One summer Boyle was badly afflicted by nosebleeds. During a violent bleed, Boyle decided to use “some true moss of a dead man’s skull” which had been sent from Ireland. The usual method was to insert the moss, often powdered, directly into one’s nostrils. But Boyle said he found that he was able to completely halt the bleeding merely by holding the moss in his hand, thus confirming that the moss could work at a distance.

Mummia

Mummia, or mummy, was a powder made from ground mummies. There were broadly four types of mummy – the mineral pitch (also known as “natural mummy”, “transmarine mummy”, or bitumen), matter derived from embalmed Egyptian corpses (“true mummy” or “mumia sincere”), the relatively recent bodies of travelers “drowned” in sandstorms in the Arabian desert (“Arabian mummy”), and flesh taken from fresh corpses, preferably those of felons who had died no more than three days prior to the flesh being collected, then treated and dried.

Mummy was thought to cure everything from headaches to stomach ulcers. For example, in 1747, successful London physician Robert James, in his book Pharmacopeia Universalis: or A New Universal English Dispensatory, wrote

Mummy resolves coagulated Blood, and is said to be effectual in purging the Head, against pungent Pains of the Spleen, a Cough, Inflation of the body, Obstruction of the Menses and other uterine Affections: Outwardly it is of Service for consolidating Wounds. The Skin is recommended in difficult Labours, and hysteric Affections, and for a Withering and Contraction of the Joints. The Fat strengthens, discusses, eases pains, cures Contractions, mollifies the Hardness of Cicatrices, and fills up the pits left by the Measles. The Bones dried, discuss, astringe, stop all Sorts of Fluxes, and are therefore useful in a Catarrh, Flux of the Menses, Dysentery, and Lientery, and mitigate Pains in the Joints. The Marrow is highly commended for Contractions of the Limbs. The Cranium is found by Experience to be good for Diseases of the Head, and particularly for the Epilepsy; for which Reason, it is an Ingredient in several anti-epileptic Compositions. The Os triquerum, or triangular Bone of the Temple, is commended as a specific Remedy for the Epilepsy. The Heart also cures the same Distemper.

But the use of mummy as a medicine goes back much further. Thomas Willis, a 17th-century pioneer of brain science, brewed a drink for apoplexy, or bleeding, that mingled powdered human skull and chocolate.

In 1575, John Banister, Queen Elizabeth’s surgeon, described a mummy plaster for a tumorous ulcer and a drink made of mummy and water of rhubarb for ulcers of the breast. In 1562, physician William Bullein published Bullein’s Bulwark of Defence Against all Sickness, which recommended mummy mixed with wild fennel, juice of black poppy, gentian, honey, and wild yellow carrots to make “Therica Galeni”, a treatment for “the falling sickness… and convulsions”, headaches (including migraines), stomach pains, the “spitting of blood”, and “yellow jaundice”.

Earlier, anatomist and medical writer Berengario da Carpi (d.1530) made frequent use of mummy in medical plasters, using a secret family recipe going back decades. His family ensured they had sufficient amounts of mummy by keeping mummified heads in their house.

Blood

It is said that in July of 1492, the physician to dying Pope Innocent VIII bribed three healthy youths to help him save the pope. The youths were then bled, and the pope drank their blood, still fresh and hot. But the blood did not save the pope, and all three youths died of the bloodletting.

The belief that blood could cure disease goes back at least to Roman times. Between the first and the sixth century, one theological author and several medical authors reported on the consumption of gladiators’ blood or liver as a cure for epilepsy. The origins of this belief are thought to lie in Etruscan funeral rites. After the prohibition of gladiatorial combat in about 400 AD, an executed individual (particularly if he had been beheaded) became the “legitimate” successor to the gladiator. Pliny the Elder (AD 23-79), one of the great natural historians of the Roman Empire, described the mad rush of spectators into arenas to drink the blood of fallen gladiators:

Epileptic patients are in the habit of drinking the blood even of gladiators, draughts teeming with life, as it were; a thing that, when we see it done by the wild beasts even, upon the same arena, inspires us with horror at the spectacle! And yet these persons, forsooth, consider it a most effectual cure for their disease, to quaff the warm, breathing, blood from man himself, and, as they apply their mouth to the wound, to draw forth his very life; and this, though it is regarded as an act of impiety to apply the human lips to the wound even of a wild beast! Others there are, again, who make the marrow of the leg-bones, and the brains of infants, the objects of their research!

Plin. Nat. 28.2

In the 16th and 17th centuries, various distillations of blood were used to treat consumption, pleurisy, apoplexy, gout, and epilepsy, as well as serving as a general tonic for the sick. Moyse (or Moise) Charas, an apothecary in France during the reign of Louis XIV who compiled compendiums of medication formulas, specified that the blood should come from “healthy young men”. Robert Boyle also had a lot to say about medicine, and was very interested in distillations of human blood. In 1663 he published Some Considerations touching the Usefulness of Experimental Natural Philosophy, in which he advises the reader to

take of the blood of a healthy young man as much as you please, and whilst it is yet warm, add to it twice its weight of good spirit of wine, and incorporating them well together, shut them carefully up in a convenient glass vessel.

Poor people couldn’t afford physicians, and turned to other options for acquiring blood. English traveler Edward Browne reports that, while touring Vienna, he had the good fortune to be present at a number of executions. After one execution, he reports that “while the body was in the chair” he saw “a man run speedily with a pot in his hand, and filling it with the blood, yet spurting out of his neck, he presently drank it off, and ran away… this he did as a remedy against the falling-sickness.” In Germanic countries, the executioner was considered a healer: a social leper, but one with almost magical powers.

Fat

Human fat was mentioned in European pharmacopoeias as early as the 16th century. It was used to treat ailments on the outside of the body. German doctors, for instance, prescribed bandages soaked in it for wounds, and rubbing fat into the skin was considered a remedy for gout and rheumatism. But it could be used for other diseases as well. Human fat was frequently cited as a powerful treatment for rabies. Robert James, who we met earlier, published a book in 1741 on rabies. In it, he discusses the work of French surgeon J. P. Desault, including the remedy the surgeon had “…tried with constant success, and which I propose to prevent and cure the hydrophobia… the ointment made of one third part of mercury revived from cinnabar, one third part of human fat, and as much of hog’s lard.”

In Scotland, human fat was being sold and used as early as the beginning of the 17th century. An apothecary in Aberdeen, Scotland advertised, as part of his available medical ingredients, “…human fat at 12s Scots per ounce”. The source of the fat was most likely executed criminals, the most common supply available at the time. But sometimes human fat came from much darker actions.

In July 1601, the Spanish began the siege of Ostend, one of the bloodiest battles of the Dutch revolt against the Spanish and one of the longest sieges in history. An account of the battle tells of how, on October 17, 1601, the Spanish ran into a trap during an attack. All the attackers were killed, and afterwards “…the surgeons of the town went thither… and brought away sacks full of man’s grease which they had drawn out of the bodies.” It’s likely that the fat was then used to treat wounds from the battle.


Cannibalism as medicine may shock our sensibilities today, but it can be a useful starting point for developing medicinal practices in your world-building.

References

Bostock, John, 1855. The Natural History of Pliny the Elder. London, England: Taylor and Francis

Moog, F. P., and Karenberg, A., 2003. “Between Horror and Hope: Gladiator’s Blood as a Cure for Epileptics in Ancient Medicine.” Journal of the History of the Neurosciences 12(2): 137-143.

Noble, Louise, 2011. Medicinal Cannibalism in Early Modern English Literature and Culture. Basingstoke, Hampshire, England: Palgrave Macmillan

Sugg, Richard, 2011. Mummies, Cannibals and Vampires: The History of Corpse Medicine from the Renaissance to the Victorians. Abingdon, Oxford, England: Routledge

Everybody Knows That The Dice Are Loaded, Everybody Rolls With Their Fingers Crossed

Every parent I know – particularly those with daughters – laments the near-impossibility of finding gender-neutral toys, clothing, and even toiletries for their young children. From bibs to booties, children’s items from birth onward are awash in a sea of pink and blue. Lucky the baby shower attendee, uncertain of the gender of an impending infant, who can find a green or yellow set of onesies as a present!

At the same time, I can’t count the number of times someone has told me, quite earnestly, how they discovered that boys “naturally” prefer blue and girls pink. Everyone knows this is true. No matter how hard parents try to keep their children clothed and entertained with carefully-selected gender-neutral colors and toys, girls just gravitate to princesslike frills, while boys invent the idea of guns ex nihilo and make them with their fingers or with sticks if they’re not provided with toy firearms by their long-suffering progenitors. There must be some genetic component to gunpowder/frill preferences¹.

This is nonsense, of course, and obviously so; but there’s a pernicious strain of thought that insists that all of human behavior must have some underlying evolutionary explanation, and it’s trotted out with particular regularity to explain supposed gender (and other) differences or stereotypes as biologically “hard-wired”. These just-so stories about gender and human evolution pop up with depressing regularity, ignoring cultural and temporal counterexamples in their rush to explain matters as minor as current fashion trends as evolutionarily deterministic.

A phrenology chart from 1883, showing the areas of the brain and corresponding mental faculties as believed to exist by phrenologists of the time.

In the 19th century, everybody knew that your intellectual and personal predispositions could be read using the measurements of your head.
Image obtained via Wikimedia Commons

For example, a 2007 study purported to offer proof of, and an evolutionary explanation for, gender-based preferences in color – to wit, that boys prefer blue and girls prefer pink. Despite the fact that the major finding of the study was that both genders tend to prefer blue, the researchers explained that women evolved to prefer reds and pinks because they needed to find ripe berries and fruits, or maybe because they needed to be able to tell when their children had fevers².

One problem with this idea is that currently-existing subsistence, foraging, or hunter-gatherer societies don’t all seem to operate on this sort of division of labor. The Aka in central Africa and the Agta in the Philippines are just two examples of such societies: men and women both participate in hunting, foraging, and caring for children. If these sorts of divisions of labor were so common and long-standing as to have become literally established in our genes, one would expect those differences to be universal, particularly among people living at subsistence levels, who can’t afford to allow egalitarian preferences to get in the way of their survival.

Of course, a much more glaring objection to the idea that “blue for boys, pink for girls” is the biological way of things is the fact that, less than a hundred years ago, right here in the United States from which I am writing, it was the other way around. In 1918, Earnshaw’s Infants’ Department, a trade publication, advised that “The generally accepted rule is pink for the boys, and blue for the girls. The reason is that pink, being a more decided and stronger color, is more suitable for the boy, while blue, which is more delicate and dainty, is prettier for the girl.” Pink was considered a shade of red at the time, fashion-wise, making it an appropriately manly color for male babies. Were parents in the interwar period traumatizing their children by dressing them in clothes that contradicted their evolved genetic preferences? Or do fashions simply change, and with them our ideas of what’s appropriate for boys and girls?

A photograph of US President Franklin Delano Roosevelt, age 2. He is wearing a frilled dress, Mary Jane sandals, and holding a feather-trimmed hat - an outfit considered gender-neutral for young children at the time.

This is FDR at age 2. No one has noted the Roosevelts to be a family of multigenerational cross-dressers, so take me at my word when I say this was normal clothing for young boys at the time.
Image obtained via Smithsonian Magazine

More recently, researchers at the University of Portsmouth published a paper reporting that wearing high heels makes women appear more feminine and increases their attractiveness – a result they established by asking participants to view and rate videos of women walking in either high heels or flat shoes. The researchers don’t appear to have considered it necessary to test their hypothesis using videos of men in a variety of shoes³.

Naturally, articles about the study include plenty of quotes about the evolutionary and biological mechanisms behind this result⁴. But as with pink-and-blue, these ideas just aren’t borne out by history. In the West, heels were originally a fashion for men. (In many non-Western societies heels have gone in and out of fashion for at least a few thousand years as an accoutrement of the upper classes of both genders.) They were a sign of status – a way to show that you were wealthy enough that you didn’t have to work for your living – and a way of projecting power by making the wearer taller. In fact, women in Europe began wearing heels in the 17th century as a way of masculinizing their outfits, not feminizing them.

Studies like these, and the way that they reinforce stereotypes and cultural beliefs about the groups of people studied, have broader implications for society and its attitudes, but it’s also useful to think about them from a fictional and worldbuilding standpoint: the things we choose to study, and the assumptions we bring with us, often say more about us than about the reality of what we’re studying — particularly when the topic we’re studying is ourselves. Our self-knowledge is neither perfect nor complete. What are your hypothetical future-or-alien society’s blind spots? What assumptions do they bring with them when approaching a problem, and who inside or outside of that society is challenging them? What would they say about themselves that “everybody knows” that might not be true?

FOOTNOTES
1. Our ancestors, hunting and gathering on the savannah, evolved that way because the men were always off on big game safaris while the women stayed closer to home, searching out Disney princesses in the bushes and shrubs to complete their tribe’s collection. Frilly dresses helped them to disguise themselves as dangerous-yet-lacy wild beasts to scare off predators while the men weren’t there to protect them.
2. Primitive humans either lacked hands or had not yet developed advanced hand-on-forehead fever detection technology.
3. Presumably because everyone knows that heels are for girls and that our reactions to people wearing them are never influenced by our expectations about what a person with a “high-heel-wearing” gait might be like.
4. On the savannah, women often wore stiletto heels to help them avoid or stab poisonous snakes while the men were out Morris dancing.

Science LIVE

I’m going to try something different today: real-time science, and you can follow along at home.

I’ve written here before about my love of satellite imagery as a tool for understanding a planet. Landsat 5 is the most successful Earth observation satellite ever: it was launched in 1984 with an intended three-year operating life. The satellite was finally retired in November 2011, nearly 28 years after it began sending back images.

Landsat 7 is still working, but it has suffered from sensor problems since 2003, so the loss of Landsat 5 has been a huge blow to people like me who use satellite imagery for scientific research.

But not to worry: the Landsat Data Continuity Mission (LDCM), soon to be known as Landsat 8, is going up today!

And here’s the real-time science part: I will be checking in throughout the day to post updates and images, as well as the link to the live video feed once it’s available. The launch is scheduled for today, Monday 11 February 2013, 10:02 a.m. Pacific Standard Time.

If you were here, you could have a rocketship cookie… I’m so excited about this! Even if it will take 100 days before imagery is available to the public.

Update 1

An hour and a half until launch, and everything looks good. Here’s the Atlas V that will be taking the LDCM into orbit.

Here’s where you can watch live. Or here, which looks nicer on my computer but doesn’t have running text commentary.

Update 2

And it’s off! Launch went beautifully, and the module is in orbit. Next burn in 50 minutes or so.

Update 3

Here’s a nice BBC Science article on the Landsat launch, with some lovely images from Landsats 5 and 7 that illustrate well why we need this kind of long-term satellite monitoring.

You can also get official updates on LDCM from NASA.

Update 4, Thursday

One more update. This is a gorgeous NASA video of the booster separating, taken from the spacecraft, and there are more videos of the launch.

Intelligent Science Fiction

Warning: This post contains spoilers for the books Who Goes There?, I Am Legend, and The Hunger Games.

Have you ever wondered what it is about science fiction that gives this particular genre such a broad appeal?

Looking at Hollywood movies, it’s tempting to think it is the visual sensation of blockbuster special effects, but nothing could be further from the truth. If anything, the reliance of movies on mind-bending special effects has diluted rather than enhanced great science fiction stories.

Science fiction has a strong appeal because it is intelligent: it stimulates our thinking. And, oftentimes, this quality is lost when books morph into movies.

In Who Goes There? John Campbell introduces us to a creature Hollywood immortalized as The Thing.

Although The Thing is a vivid and faithful rendition of Campbell’s novella, it misses a significant amount of the reasoning the scientists go through as both they and the readers struggle to comprehend a hostile alien encounter. And that is where the brilliance of the story lies: in the exploratory, inquisitive, reasoning nature of man.

The essence of the storyline in Who Goes There? is this: how can reason triumph over a mindless, instinctive monster, one that can perfectly mimic its target? Don’t get me wrong, I love the movie, but the way the scientists drive their minds to understand the nature of this alien beast in the novella is brilliant, and it is lost in the screen adaptation.

In the novella, the trapped scientists consider the biological nature of the alien, they think about how the infection spreads at a cellular level, realizing that the infected cow at the Antarctic station would have laced their milk with parasitic spores, dooming them all. They discuss why the alien won’t engage in open combat with them, realizing it has evolved a unique strategy to avoid such confrontations, and they come to the chilling realization that it would sweep unopposed throughout the world if even the smallest biological trace remains. When it comes to The Thing, just a few cells is all that’s needed to overrun Earth’s entire biosphere. As a reader, you feel like an unnamed member of the ice station, traveling with them on this voyage of the damned.

In the same way, I Am Legend takes an absurd, mythological notion and asks: what would happen today if the legend of vampires were true? How could vampires exist in a modern world?

The protagonist of the novella, Neville, talks us through the logic of why vampires fear the cross. Surprisingly, it’s not because of any inherent supernatural power in that particular shape; it turns out that the shape is a catalyst for thought, a vivid reminder of what the vampire has become, and so causes a physiological revulsion. Neville even conducts experiments with vampires of Jewish origin, noting they suffer the same aversion to the Star of David as former Christians do to the cross. He hypothesises that a Muslim vampire would find the crescent shape equally repugnant, but would not be worried by a cross.

In the same way, mirrors allow vampires to see themselves for what they really are, and they are repulsed by the realization that they are monsters.

Garlic, rather than an old wives’ fable, becomes a biological agent that causes anaphylactic shock within the vampire.

Sunlight, it seems, breaks down the vampiric bacteria, just as UV is known to destroy other types of bacteria.

And in the course of the story, the question is raised: why do stakes kill vampires and not bullets? Neville, our rational hero, applying science over superstition, learns that the hemorrhaging caused by a stake cannot be contained as easily as the smaller holes caused by a bullet. And the reader finds themselves inhabiting a world where the absurd has suddenly become plausible and rational, at least in a fictitious sense in which disbelief can be suspended for the enjoyment of the adventure.

The Hunger Games is another recent example of intelligent science fiction.

The movie is breathtaking, but action and adventure win out over the awe of reason. In the movie, we see Katniss attack the supplies of the upper-crust contestants from the wealthier districts, but without the audience really understanding why. In the book, we get a sense of the hunger and desperation Katniss suffers in the wilderness (after all, it is called the Hunger Games). And so, rather than a mindless attack on the stores of wealthy tributes, we see Katniss attack these stores to level the playing field, to square up the fight and ensure that the rich kids also have to scavenge and forage for basic necessities. In this way, they can no longer ruthlessly hunt down the other tributes with such ease.

And so the book allows us to explore this fictional world with Katniss, and to understand her means and motives in a way that is glossed over in the movie.

As a science fiction author, I appreciate what these authors have done: they’ve started with a simple premise and explored the possibilities latent therein, seeking to build fictional worlds for our enjoyment.

It is said that the plot is the character in action. When it comes to science fiction, the plot is the character interacting with science in a way that influences both their actions and the actions of their opponents. I’m a little biased, of course, but I love the way science fiction makes us think about the challenges facing a protagonist.

Peter Cawdron is the author of the highly acclaimed dystopian novel, Monsters