Archive for April, 2012

Look Up!

[Ed.: Today’s post is by MJ Locke, but due to minor technical difficulties it appears under my name.]

Foreword: For several months during 2007, I collected data for a series of graphics-focused posts on space exploration. I wondered how far we humans have penetrated into space, in the years since our first vehicles rose above the layers of our world’s atmosphere.

Next Saturday, May 5, 2012, we will reach the fifty-first anniversary of the first U.S. launch of an astronaut into space. This is a revisit to a series of posts I put up then. I’ve updated the dates, but all of my analytical data is five years old.

Mercury Redstone 3

On May 5, 1961, 37-year-old Alan Shepard climbed into a tiny capsule atop a liquid-fueled rocket. He rode it up from Cape Canaveral, Florida to an altitude of 116 miles: about forty miles above the upper reaches of the atmosphere. He experienced six gees (six times Earth’s gravitational pull) during liftoff, stayed aloft about 15 and a half minutes, and then splashed down in the waters of the Gulf Stream.



I was very young then, a preschooler, but even so I remember my excitement, and also fear, as I watched the news footage. I recall watching the wind from the helicopter’s blades stirring up the waves that splashed against the capsule as it righted itself.

I can only imagine what it must have felt like, soaring up so high. Not to mention how it felt, coming down.

I remember seeking a glimpse of his face through the little portal, and the thrill I felt when the divers helped him emerge and climb into the sling.

President Kennedy was there, for that first launch.


Since that time, the US has launched over 170 piloted missions, and many, many robotic missions. Our astronauts have spent months at a time in the International Space Station, working in cooperation with people from a variety of other nations to do scientific and engineering research.

We have fifty-one years of human-piloted space exploration under our belt*. Alan Shepard and Mercury-Redstone 3 set the stage for everything that came after that.

Human Space Density, in Hours

What does that really mean, though? How far have we travelled in space, to date? How long have we lived there?

Here is a graph showing how many hours humans (only US astronauts, so far; see note below) have spent above the level of the atmosphere.

I’m counting the upper edge of the atmosphere as about 76 miles up, though you will find many different estimates–and in fact, it changes over time, with fluctuations in the solar wind and other factors, including global warming impacts. But 76 miles is a good average number for our purposes.

So how much time are we talking about, really? For comparison, the average American work-year is about 2,000 hours. A year has about 8,760 hours, all told.

As you can see in the chart above, after a promising start with Mercury, Gemini, and Apollo, the US manned space program languished after Skylab drew to a close. It wasn’t until 1981 that the space shuttle program re-energized space exploration. The hours really started racking up once the International Space Station was completed. You can also see the effect of the Challenger disaster (1986). The Columbia re-entry breakup (2003) is not as easy to see, but it is the cause of the dip in 2003-2004; in fact, it slowed the pace of NASA shuttle missions through the remainder of the program’s run.

If you were to add up all the hours every NASA astronaut has spent in space since our first manned mission, you’d get almost 31 years. As of 2007, humans had spent nearly half a lifetime’s worth of time outside Earth’s atmosphere. (A good deal more than that, in fact, if you include other nations’ efforts; I couldn’t find the data for them.)

Granted, that’s a pittance, compared to how many people live beneath the atmosphere. (In fact, it surprised me. I thought it would be more.) But it’s a start.

As you can see from the chart above, the US has had seven major piloted space programs since we launched Alan Shepard into space.

Human Space Density, in Miles

You can think in terms of how many miles we have traveled overall, or in terms of how far away we have gotten from the Earth before we turned around and came back. At first glance, they might seem to be the same thing, but this is definitely not the case. An astronaut might travel many millions of miles in low Earth orbit, but never get any farther away than a handful of miles above the upper reaches of Earth’s atmosphere. Or an astronaut might take a trip to the moon and back, with very little in the way of orbiting either body, in which case their distance traveled and maximum “altitude,” or distance from the Earth, would be very nearly the same.

Here is a chart that provides information on both kinds of travel.


The maroon tells you how many miles our astronauts traveled in all, by year, as if they had been traveling in a straight line away from Earth. The blue tells you how many miles away from the Earth’s surface they actually reached during their missions. In both cases, I used the annual miles traveled by US astronauts.

As you can see, the moon missions (that blue bump in the ’60s and early ’70s) stand out from the rest. The Apollo craft went much farther away from the Earth than any other space flights, before or since. For non-lunar missions, the average altitude was 179 miles, less than the distance from Houston to Dallas.

The maroon shows that 2001 was a banner year for space travel, when US space missions traveled a total of 233 million miles. That’s all the way to the sun and back, with enough left over to go to Mars. But our astronauts racked up all of those miles in low Earth orbit, never getting any farther from the Earth than about 250 miles.

The average distance missions travel, from the days of Mercury to the present day, is almost 10 million miles. For comparison, if you drive 10,000 miles per year on average, it would take you a thousand years to travel that far.

As you might guess, the International Space Station dragged the curve up all by itself, because astronauts spend months at a time on the ISS. The typical ISS mission lasts six months. An international team usually consists of three astronauts, who spend that half a year up there conducting experiments and maintaining the station. They’ve just added a new module to the ISS. The ISS has 15,000 cubic feet of living space. That’s about equivalent to a 2,100 square-foot home, down here.

By the way, some of my readers will note something odd about the above graph. The distances seem off. The 100-mile marker on the chart is the same distance from the 10-mile marker as the 10-mile marker is from zero. The thousand-mile marker is no farther from the 100-mile marker than the 100- is from the 10-. What gives?

It’s a logarithmic scale. A log scale scrunches the data together, to allow you to compare data that spans a very large range. In this case, I wanted to get the low-Earth-orbit data onto the same graph as the millions-of-miles traveled data. It’s useful to be able to look at them together, but it can be misleading. Here is a chart showing the actual distances, without the log scale.
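The even spacing of those decade markers is just a property of logarithms, nothing to do with the mission data itself; a quick Python sketch shows why:

```python
import math

# On a log scale, equal *ratios* map to equal axis distances:
# 10 -> 100 and 100 -> 1000 are each one "decade" apart.
markers = [10, 100, 1000, 10_000]
positions = [math.log10(m) for m in markers]
gaps = [round(b - a, 6) for a, b in zip(positions, positions[1:])]
print(gaps)  # every decade occupies the same length of axis
```

Each factor of ten covers the same stretch of axis, which is exactly how low-Earth-orbit altitudes and millions-of-miles totals can share one readable chart.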

The image above shows you about 250,000 miles’ distance, to scale (I couldn’t even begin to fit Mars and the sun on there, and still show you anything meaningful with regard to the NASA missions. The old space-is-really-big effect). Notice how most space missions barely leave the atmosphere, and notice how far it is, even just to our own moon.

The End of the US Space Era? Or a Pause?

Right now, our space exploration efforts seem becalmed. The fifty years between Shepard’s launch and the final voyage of the space shuttle Atlantis may have been our high-water mark, with regard to space travel. I’d be very sad if that were the case. I prefer to be optimistic, however. NASA’s rover, Curiosity, is nearing Mars. A variety of visionaries and entrepreneurs are seeking ways to commercialize space travel, everything from asteroid mining to space tourism, telecommunications, and spaceports in the New Mexican desert. New exoplanets are being discovered by the day now. Perhaps our robotic probes and astronomical surveys will reveal clues of life beyond our world, which might inspire us once again to reach upward and seek to escape the confines of Earth’s gravity. I hope so.

We have barely passed beyond the membrane of our atmosphere. Is there life on other worlds? What wonders lie in store out there? I hope that we will continue to find in us the spirit of our ancestors, and to continue to reach beyond our atmosphere, to explore and even someday perhaps settle on other worlds.

Notes: Let me haul out the usual caveats. I pulled the graphical data together primarily from NASA’s mission data pages, with Wikipedia as a secondary source (in particular for the International Space Station). About five percent of the data (in particular, maximum altitude and distance traveled) was not readily available online, in which case I SWAG’d^ it, based on data from other missions. In other words, there is slop in the data. Don’t use it for your doctoral thesis, or to calculate whether you have enough oxygen to survive till the rescue team arrives. Also, I only have information on US astronauts.

* Van Allen belt, that is.

^ Scientific Wild Assed Guess. It’s tethered to real numbers to some degree, but it definitely floats around in the ether to some degree, too.

Can science be anti-fiction?

I can’t find it online, but I read an introduction to A Rose for Ecclesiastes in which Roger Zelazny was quoted as saying that he knew he had to hurry up and write the last of his Mars stories, because new developments in science would soon make them impossible.

(Or possibly, he hesitated to publish that story because he already knew that science had outpaced him. Either way, it’s a fabulous story and you must read it.)

Rose was published in 1963, and Mariner 4 sent back the first close-up photos of the Martian surface in 1965.

Mariner 4 craters

Nope, no beautiful Martian dancers living there.

By now we know the surface of Mars better than we know the surface of Earth (those pesky oceans, you know). But Zelazny’s fears aside, that hasn’t stopped Martians from appearing regularly in popular culture. (Yes, I enjoyed John Carter. Did you?)

The portrayal of Mars in more science-minded science fiction, though, has changed greatly as new information became available about the planet. Where Edgar Rice Burroughs and Roger Zelazny couldn’t have told their stories after 1965, Kim Stanley Robinson and Ben Bova couldn’t have written theirs earlier.

This leads me to two questions for you all: first, how much does it matter? Does science fiction have a place for both the most accurate possible science and for things we know aren’t true but love anyway? Is the answer different if the story used the best science of the time it was written, but knowledge has moved past that?

What kinds of stories are likely to become obsolete in the very near future? If you are a writer, are there ideas you love that you will never get to write because they are already past, or will you use them anyway? If a reader (and the two categories are by no means exclusive), are there topics you hate to see in SF because you know they’re already obsolete?

That’s not what I meant

This is a true story, and it’s based on the research of Dr. Scott Nixon at the University of Rhode Island. I spent last week at a conference in Newport, and was entirely fascinated by his plenary talk. Besides being a neat juxtaposition of history and technology, it has some interesting implications for worldbuilding in science fiction.

Narragansett Bay within Rhode Island

First, let me orient you. This is Rhode Island, and Narragansett Bay is outlined in red. Providence, the largest city in Rhode Island, is at the north end of the bay, about where it touches the red box. Rhode Island itself is 48 miles (77 km) long and 37 miles (60 km) wide.

The Narragansetts and the Wampanoag tribes lived along the bay when Giovanni da Verrazzano found it in 1524, and the first European settlement was established in the 1630s. It’s really the Europeans we’re concerned with here.

Providence was founded in 1636 by religious dissenters. After the American Revolution it had 7,614 people. The economy depended mostly on the bay for fishing, with a bit of agriculture.

The Industrial Revolution made it to the new United States when textile machinery was built in Rhode Island in 1787, following English plans. Industrialization took off, and by 1831 the population of Providence had reached 17,000.

The city is right on the water, at the head of Narragansett Bay, so anything it does affects the water quality of the entire bay. But even as Providence became a thriving industrial city and its population increased enormously, its impact on water quality was surprisingly low. In 1865, when the population of Providence was 54,595, eelgrass beds were mapped all along the Providence River.

eelgrass - Zostera

So what? Well, eelgrass (Zostera marina) is very sensitive to nitrogen levels in the water. All those people in Providence weren’t affecting the water quality much at all, or the eelgrass would be gone.

That’s a lot of people; how were they having such a small impact on the bay? Well, this is the age of outhouses. Most human waste was solid, or only small quantities of liquid. When you have to haul water from the town well, you don’t use very much of it. Most waste stayed where it was put, only leaching out slowly over time.

I’m certainly not claiming that outhouses are a good way to manage a city’s worth of human waste: Providence had at least two major cholera epidemics in the mid-nineteenth century. But that pollution wasn’t making it into the bay. Much of the human and animal solid waste was being hauled into the country and used as fertilizer.

The prospect of a public water supply was an exciting one, and after a couple decades of planning, the water was turned on in 1871. Public health and fire safety, not to mention simple convenience, were strong motivations.

People started using water at much, much higher rates: flush toilets! no more hauling buckets! (From 7-11 liters per person per day to 190-380.) The city planners expected that the existing street gutter system would be adequate to deal with the increased volume. They were wrong.
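To get a feel for the scale of that jump, here’s a rough back-of-the-envelope calculation, using the midpoints of the per-person figures above and Providence’s 1865 census count (54,595) as a stand-in for the 1871 population (an assumption on my part):

```python
# Rough scale of the change when piped water arrived; the population
# figure is the 1865 census count, used here as an approximation.
population = 54_595
before_lpd = (7 + 11) / 2      # liters/person/day, bucket-and-well era
after_lpd = (190 + 380) / 2    # liters/person/day, piped water

print(f"per-person increase: {after_lpd / before_lpd:.0f}x")
print(f"city total: {population * after_lpd / 1e6:.1f} million liters/day")
```

Call it a thirtyfold increase per person, and some fifteen million liters a day for the city: no wonder the street gutters couldn’t cope.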

It didn’t take long at all for the cesspools and privy vaults to overflow and seep into the streets. Planning for a sewer system began almost immediately, but it didn’t begin service until 1878.

Providence wasn’t alone in this: many cities installed public waterworks in the nineteenth century, and none began planning for sewers until after the water was running.

The sewer system carried waste directly into the rivers. Where before the nutrients were being taken to inland farms, now they were swept right into the bay. The first Providence sewage treatment plant didn’t begin operation until 1901, and by then there were 175,597 people in Providence.

The eelgrass was long gone.

And it wasn’t just the people. Providence relied on horses for transport and hauling. The number of horses in the city peaked around 1900, and then fell off sharply when the automobile was introduced. During that peak, though, an estimated 90 g of horse manure per square meter coated the city streets.

Providence has gotten much better at managing its wastes over the past century, of course, although there’s still room for improvement.

I came away from this lecture with two thoughts about worldbuilding for fantasy and science fiction.

First, even though we often set stories in horse-dependent worlds with primitive technologies, we don’t usually think about what comes in and what goes out. Scientists call this mass balance. Horses need to eat a lot, and they excrete a lot. So do people. How is this handled in fiction? (Usually by ignoring it!) Where do things come from, and where do they go? Thinking about this can help to create a world that feels real. Energy, too: where does it come from?
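Here’s what the start of such a mass balance might look like for a horse-powered city; every number below is a round, illustrative assumption for worldbuilding, not a measurement:

```python
# Illustrative daily mass balance for a fantasy city's horses.
# All figures are round order-of-magnitude assumptions.
horses = 10_000
hay_kg_per_horse_day = 10      # dry feed in
manure_kg_per_horse_day = 20   # manure out (heavier than the feed,
                               # because manure is mostly water)

hay_tonnes = horses * hay_kg_per_horse_day / 1000
manure_tonnes = horses * manure_kg_per_horse_day / 1000
print(f"{hay_tonnes:.0f} t of hay in, {manure_tonnes:.0f} t of manure out, every day")
```

Hundreds of tonnes of material moving through the city gates every single day, before you even count the people. Someone in your story is hauling all that.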

And then there’s the impact of new technologies. It seems so obvious in retrospect, but nobody considered how water use would increase when it became easy to use it. The city had to struggle to catch up, and the bay will never be the same. That kind of threshold event can make for a great story.

What are the human and environmental consequences of the next great thing?

Empty your memory trash can? (This action cannot be undone)

PKMzeta is shaping up to be a single, targetable protein in the brain responsible for reconsolidating memories. Discover ran a three-part article on it and there was a recent article in Wired, too — the original scientific papers are behind subscription walls, unfortunately.

In brief, reconsolidation is a maintenance process for long-term memories. We think our memories are firm and unchanging, but plenty of studies have proven that they aren’t. They shift a little each time we remember them, each time we reconsolidate them, and over time those shifts add up. (And they’re often inaccurate to begin with, but that’s another issue.)

PKMzeta is a protein that hangs out in the synapses between neurons and maintains a particular ion channel so that the neuron is able to receive signals from its neighbors. Without PKMzeta, the number of those particular ion channels drops and the neuron becomes less sensitive to nearby activity.

Block PKMzeta while a memory is undergoing reconsolidation and the memory will fade. We already have one drug (propranolol) that does this, and there are sure to be more.

There are tons of questions still to be answered, of course. And there are tons of possible uses and abuses of such a thing. This is such a gold mine of science fiction possibilities that I’m sure I don’t have to list them. But I would like to bring up one.

“This isn’t Eternal Sunshine of the Spotless Mind-style mindwiping,” the article in Wired says. That may be true, but it also does not address an excellent question that movie poses (if you haven’t seen it, I recommend it). The question being: you can remove the memories associated with a bad relationship, but what about the underlying attraction that drew you to that person in the first place? One of the implications I took from that movie was that the two of them were stuck in a cycle of attraction, falling apart, and voluntary mind-wipes.

For “person,” above, substitute anything you like. Kittens. Drugs. Street racing. World domination… like I said, a gold mine of possibilities here.

Interplanetary Communications

There have been numerous means of sending a message from point A to point B over the span of human existence, but only within the past couple of centuries has it become possible to ask someone at point B what the weather is like without sending someone to physically deliver your missive. Naturally, people have started to take the ability to receive an instantaneous response for granted, and most science-fiction (and a few fantasy) authors have incorporated it into their works, some even including a form of “interplanetary internet.” Sometimes, though, they don’t think things through, making mistakes such as interstellar wi-fi. To prevent such errors, why don’t we take a quick look at how communications might work across interplanetary and interstellar distances?

Electromagnetic Radiation

First off, there’s the single most common medium of transmission since the mid-20th century: radio waves. Transmitters translate text, speech, or other forms of data into discrete or continuous pulses of electromagnetic radiation (that is, light) with wavelengths ranging from 1 millimeter to 100 kilometers and frequencies from 300 GHz down to 3 kHz, and a receiver detects and re-translates the information sent. Their low frequencies and long wavelengths mean that radio waves carry very little energy compared to other forms of EM radiation (and most definitely cannot cause cancer) but can potentially carry information for light-years before losing coherence. However, radio waves are limited to the speed of light, so any attempt at calling someone farther out than a light-minute or two (for reference, the sun is about eight light-minutes from Earth) is going to experience a considerable amount of lag as the time it takes the waves to travel to their destination becomes noticeable. In addition, radio signals become incoherent with distance, depending on the frequency; the absolute limit is one or two light-years.
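Since the lag is just distance divided by the speed of light, it’s easy to tabulate; a small Python sketch, using rounded average (or, for Mars, closest-approach) distances:

```python
# One-way signal lag at the speed of light for a few familiar distances.
C_KM_S = 299_792.458  # speed of light, km/s

distances_km = {
    "Moon": 384_400,
    "Mars (closest approach)": 54_600_000,
    "Sun": 149_600_000,
}
for body, km in distances_km.items():
    print(f"{body}: {km / C_KM_S:.0f} s one way")
```

A second or so to the moon is a tolerable pause in conversation; three minutes each way to Mars at its closest is not, and that figure stretches to over twenty minutes when the planets are on opposite sides of the sun.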

Another common means of communication is concentrated pulses of visible light, usually sent along glass fiber-optic cables, which shield the signals from interference by the atmosphere. This method allows far higher data quality than radio, but atmospheric gases or particles can block the signals easily, as can physical objects that radio waves would pass through. In the vacuum of outer space, however, there is considerably less matter of any kind to block an optical signal, especially if it is transmitted as a laser capable of maintaining integrity over great distances. Lasers are also less susceptible to jamming or disruption by solar flares. But there has to be a clear line of sight between transmitter and receiver, and even lasers spread out and become incoherent over interstellar distances.

The Internet

As for how the internet might cope with space travel: e-mail and social networks would still be possible, and would probably be the primary form of communication between planets, but instant messaging would no longer be “instant,” and if you think AOL back in the 1990s took a long time to load webpages, you wouldn’t have the patience to try surfing the internet from Mars. In all likelihood, deep-space colonies would form their own separate internets, with unique web sites inaccessible on Earth or in any other distant region. Websites deemed “important” enough might set up localized servers that would receive updates from one another at specified intervals, but you’d have to wait several hours, and most likely need a massive transmitter, to look up any other sites based outside your local region of space.


Neutrinos

Neutrinos, those nearly massless particles that pass right through most normal matter without interacting, gained some publicity a few months ago when readings by CERN supposedly indicated that they travel slightly faster than the speed of light. Those readings were determined to be an equipment failure (a disconnected wire), but another group of researchers managed to do something not quite as amazing with neutrinos, yet still significant: they used neutrinos to send a one-word message through 240 meters of solid rock. Granted, the transmission speed was very slow, only 1 bit/second, and it took a particle accelerator to send the message, but the neutrinos experienced negligible interference from materials that would block radio or optical signals completely. They could be very useful for communicating with people deep underground or underwater, or even on the far side of a planet or star. Neutrino transmissions would need to be focused into tight, laser-like beams to compensate for the low transmission rate, but the advantages of a medium that is nearly impossible to block are considerable. Of course, if someone managed to place a neutrino detector between the sender and the receiver, they could read the message without anyone knowing.
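At 1 bit/second, even a single word crawls. Assuming a plain 8-bits-per-character encoding (the experiment’s actual coding scheme may well have differed), a quick check:

```python
# Time to send one word at the demonstrated neutrino data rate,
# assuming 8 bits per character (an assumption, not the experiment's
# actual encoding).
message = "neutrino"       # the word reportedly sent in the demonstration
rate_bps = 1               # demonstrated rate, bits/second

bits = len(message) * 8
print(f"{bits} bits -> {bits / rate_bps:.0f} seconds, about {bits / 60:.1f} minutes")
```

Over a minute for one word, with a particle accelerator on the sending end: clearly a proof of concept rather than a telephone.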

Quantum Entanglement

One of the science “buzzwords” of the century is “quantum mechanics,” the field covering the behaviors of subatomic particles. One thing that science-fiction authors have extrapolated from its various “weird” properties is the use of “entanglement” to send messages instantaneously over any distance. The idea is that when two particles are entangled at the quantum level, they can be separated and whatever happens to one particle happens to the other instantaneously. Somewhere along the line, someone decided that this could allow communication faster than the speed of light. In addition to arriving instantaneously, an entanglement-based communique would be impossible to intercept, since it would effectively be teleported to the receiver. The harsh reality is that the act of observing an entangled particle breaks the connection with its paired particle, and attempting to send data with entangled particles would by necessity require observing them.

However, quantum entanglement can be used to encrypt messages sent by conventional means (currently only dedicated fiber-optic cables) such that only those who possess one of two “keys” can interpret the data. By encoding a transmission in the quantum states of particles, one ensures that the very act of intercepting it would corrupt the data and alert the holders of the keys as to how much of the message was intercepted. And it actually has been done: some governments and companies that consider security worth the expense use quantum cryptography for their most secure data transmissions; the Swiss canton of Geneva, for example, used it to send national election ballot results to the capital in 2007. There have also been experiments with sending quantum-encrypted messages over radio, and it seems likely that the technology will become more prevalent over the next few decades. Though, of course, it only works between two specialized devices that have to be physically transported to their working locations.

The Utterly Fantastic

Of course, even quantum-encrypted FTL neutrinos would take years to travel from one solar system to another, so many authors have turned to the farther fringes of science in order to maintain “instantaneous communication.” For example, tachyons: highly hypothetical particles that travel faster than light, and which most scientists don’t believe exist. Or, if their universe allows physical travel through some sort of “hyperspace,” they might send radio transmissions through that same dimension where the normal laws of physics don’t apply. Heck, you might even use mentally “bonded” telepaths; it worked for Heinlein.