Wizards, Aliens, and Starships: Physics and Math in Fantasy and Science Fiction


by Adler, Charles L.


  Coming back to manned space travel: there are a lot of reasons why we might want manned spaceflight and colonies on other worlds. However, many of the reasons given in science fiction stories are economic at heart. The claim is that certain industries can be run more profitably in space, or that we can mine materials on the Moon or grow crops in L5 colonies or on other planets. I want to take a serious look at the economics of manned space travel and the establishment of colonies on other planets, in addition to the physics and chemistry.

  Another reason given for manned space travel is military. This idea was very prevalent in the 1960s through the late 1980s when the U.S. government’s Strategic Defense Initiative, more commonly known as the “Star Wars” program, was being presented as a serious space defense system. The nickname comes, of course, from the wildly popular 1977 movie Star Wars (now known as “Episode IV”), with its depictions of space battles looking like World War II–era dogfights. George Lucas was refighting the Battle of Midway in space, but real space battles will look nothing like this.

  The final reason given is usually presented as more noble: the survival of the human race itself. This has been expressed in many different ways in the literature. Sometimes the remnants of humanity are fleeing a disaster that overtook Earth, such as nuclear war or an ecological disaster. The last story in Ray Bradbury’s work The Martian Chronicles is of this sort, as is Larry Niven and Brenda Cooper’s novel Building Harlequin’s Moon [40][185]. Sometimes the catastrophe is more cosmological, as in Olaf Stapledon’s Last and First Men, in which the human race moves from Earth to Venus because the Moon falls from the skies [225]. Sometimes it is more a generalized expression of the need to find a new home to safeguard the survival of humanity, in case something happens to Earth. Many stories are constructed around this sentiment. There are a number of ways in which humanity might become extinct on Earth, so it is worth exploring these ideas as well.

  Two themes present themselves over and over when we examine space travel: energy and cost. The two are linked, though not as closely linked as they might at first appear. It takes a lot of energy to launch a rocket or other space vehicle from Earth to another planet, or even into orbit around Earth. It takes a lot of money to do this, too. The theme of this chapter can be summed up in the following idea: energetically, it’s a lot cheaper to do a computation than to move something around, and this is why we don’t yet have manned moon bases.

  5.2 THE REALITY OF SPACE TRAVEL

  In his nonfiction work The Promise of Space, Arthur C. Clarke writes that to the pioneers of space travel, Tsiolkovsky, Goddard, and Oberth, “Today’s controversies between the protagonists of manned and unmanned spacecraft would have seemed … as pointless as the theological disputes of the Middle Ages” [55]. He was implying that unmanned and manned spaceflight would go hand-in-hand, with unmanned probes helping in the planning of larger, manned expeditions. Clarke points out that Goddard’s notebooks, and most of Tsiolkovsky’s and Oberth’s writings, make it clear that “men would be the most important payloads which rockets carried into space.” However, a lot has happened since 1968, when the book was first published, or even 1985, the date of publication of my dog-eared second edition.

  What has really happened is that unmanned spaceflight, except in the popular imagination, has come to dominate the field, at least in its scientific impact. Although the Space Shuttle took up the lion’s share of the NASA budget until the program’s demise in 2011, its importance to science was minimal compared to that of unmanned projects such as the Mars rovers, the Galileo mission to Jupiter, the Huygens probe to Saturn’s moon Titan, or almost any other unmanned mission one could name. Almost the only thing of any scientific importance the shuttle did was to launch the Hubble Space Telescope and later send a mission to repair it; important though these were, the price tag of the two launches, roughly half a billion dollars each, could have funded the building and unmanned launch of the telescope ten times over.

  The contrast between perception and reality is interesting from both social and scientific perspectives. I will leave the social perspective to other authors. Much of the first several chapters of this section, however, will be devoted to a close examination of manned spaceflight and the fundamental reasons that unmanned probes have done so much more than manned ones in the past five decades. We’ll consider this point carefully: it highlights an interesting contrast rooted in fundamental physics, namely, that spaceflight takes a lot of energy, whereas computing can be done essentially for free. In 1968, when Clarke wrote his book, computers were big mainframe devices programmed using punch cards. No science fiction writer anticipated the million-fold shrinkage in the size of computational circuits and the increase in computational speed between then and now, and we are still far from the ultimate limits of computation. The range of scientific instrumentation we can place on satellites has also increased exponentially since then, with the advent of CCDs, x-ray and gamma-ray telescopes, and similar instruments. Computers can be made tiny and require little power, and they neither breathe nor eat. People, by contrast, are large and require elaborate protection in space to keep them alive. Also, computers don’t have to be retrieved from space when they have finished their mission. Let’s not start with spacecraft, though; there are more prosaic examples to begin with. Our current transportation systems, in the form of cars, are energetically expensive when compared with computers.

  5.3 THE ENERGETICS OF COMPUTATION

  There’s a saying that goes something like this: if cars had improved as much as computers have in the last four decades, we’d have a car that could fit in your pocket, seat 100 people, and go around the world at a thousand miles per hour on a drop of gasoline. Some of these specs are obviously contradictory, but what the hell.

  Basic physics tells us why computers got so much better while cars didn’t. As Richard Feynman said, there is plenty of room at the bottom. The essential operation a computer does is flip the state of a bit. A bit is a representation of a number, either 0 or 1, for use in base two arithmetic. Readers interested in details of this should read Paul Nahin’s book, The Logician and the Engineer [173]. The archetypal computer has a memory in which bits are arranged in some way; when calculating some function, the computer can either choose to keep a given bit the same or erase it and rewrite it.

  The physicist Rolf Landauer showed that the minimum energy required to erase a bit is kT ln 2, where T is the temperature (in kelvins) and k is Boltzmann’s constant (1.38 × 10⁻²³ J/K) [143]. At room temperature that’s only about 10⁻²¹ J per bit flip. Some physicists have even speculated about computing that takes no energy, but for now let’s stick to this as a minimum.1 The laws of physics also tell you that the minimum time to flip a bit from 0 to 1 or vice versa is given by

  t = h/(4E),

  where E is the energy used to flip the bit and h is Planck’s constant (6.626 × 10⁻³⁴ J·s). In principle, if we are down to the minimum energy, the maximum speed is the inverse of this: about 10¹³ bit flips per second, or some four orders of magnitude faster than today’s processors. Of course, limitations on silicon have already started to slow the pace (or even stall it), but this is the fundamental limit. The power use of this ultimate computer can be found by multiplying the energy by the frequency:

  P = E × (4E/h) = 4E²/h ≈ 5 × 10⁻⁸ W.

  Again, we may not be able to get near this limit, but it does show why computers got so good so fast: there’s a very long way to go before we approach the fundamental limits. What about automobiles and their fundamental limits?
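The limits quoted above can be checked with a quick back-of-the-envelope calculation. This is a rough sketch, assuming room temperature (300 K) and the constants given in the text:

```python
import math

k = 1.38e-23   # Boltzmann's constant, J/K
h = 6.626e-34  # Planck's constant, J*s
T = 300.0      # room temperature, K

E = k * T * math.log(2)  # Landauer limit: minimum energy to erase one bit
t_min = h / (4 * E)      # minimum time to flip a bit at that energy
f_max = 1 / t_min        # maximum bit-flip rate
P = E * f_max            # power of a computer flipping one bit at this rate

print(f"Landauer energy: {E:.2e} J")    # ~3e-21 J per bit
print(f"Max flip rate:   {f_max:.2e}")  # ~2e13 flips per second
print(f"Power:           {P:.2e} W")    # ~5e-8 W
```

The numbers reproduce the text’s estimates: roughly 10⁻²¹ J per flip, about 10¹³ flips per second, and a power draw of tens of nanowatts.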

  5.4 THE ENERGETICS OF THE REGULAR AND THE FLYING CAR

  Let’s start with a seemingly dumb question: Why do cars need engines? This does seem pretty stupid; they need engines to move. But it’s not quite so stupid when one thinks about Newton’s first law: an object in motion remains in motion. Maybe a car needs an engine to set it in motion, but once in motion, why shouldn’t it continue to go down the road forever without running the engine?

  The answer to this is why Aristotelian physics retains such a hold over the mind: if there were no other forces acting on the car, it would move in a straight line forever. But there are forces, called dissipative forces, that act to slow the automobile down. Two of them are the friction of the tires against the roadway and the drag of the air as the car moves through it. However, they don’t account for most of the energy losses, which occur in the engine itself. A large fraction of the energy released by the engine is damped out by the motion of the pistons against atmospheric pressure and by losses in the drivetrain between the engine and the car’s wheels. Beyond this, however, are the ultimate limits placed on the engine by the laws of physics. The earliest engines were built before the concept of energy was fully understood; Joule’s experiments on the conversion of work into heat were published in 1844, fully eighty years after Watt’s first commercially successful steam engine was marketed. Joule’s work, and that of others, led to the first law of thermodynamics:

  The total energy of a closed system may be transformed from one kind to another, but cannot be created or destroyed.

  This is the principle of the conservation of energy. There are a number of different types of energy; however, we are concerned with the conversion of chemical energy to work. The combustion of 1 kg of gasoline liberates 45 million J of energy; in principle, this could lift a car with mass 1,000 kg a distance of 4,500 m in the air, or roughly three miles. This is why life has become so easy since the Industrial Revolution: the amount of work an engine can do dwarfs the amount of work any individual can perform by a factor of several hundred.
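The lift-height figure is just E = mgh solved for h. A minimal check, using the round numbers from the text (g ≈ 10 m/s²):

```python
E_gasoline = 45e6  # J released by burning 1 kg of gasoline
m_car = 1000.0     # kg, mass of the car
g = 10.0           # m/s^2, rounded value used in the text

# E = m * g * h, solved for the height h
height = E_gasoline / (m_car * g)
print(height)            # 4500.0 m
print(height / 1609.34)  # ~2.8 miles, i.e., "roughly three miles"
```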

  However, energy conservation isn’t the only issue. Not all of the energy obtained by burning fuel can go into useful work; some of it must be exhausted into the environment as useless heat, randomized energy, from which it cannot be recovered. For example, there’s a lot of energy stored in a single ice cube. If the cube has a mass of 36 grams, there are about 12 trillion trillion atoms in it (12 × 10²⁴), and each of those atoms has about 3kT of randomized thermal energy. (The 3kT comes from a more complete analysis of the total energy of the system.) The total thermal energy of the ice cube works out to about

  E ≈ (12 × 10²⁴) × 3kT ≈ 1.5 × 10⁵ J.

  It’s not too shabby; it works out to about 5 million J/kg. The thermal energy in 1 kg of ice is about the same as the amount of work one person can do in a day. So, let’s solve the world’s energy problems: let’s take some ice, or anything else for that matter, extract its thermal energy, and use that to run our machines. Cool!
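A sketch of this estimate, taking the atom count from the text and T at the melting point of ice; the result lands in the few-million-J/kg ballpark the text quotes:

```python
k = 1.38e-23  # Boltzmann's constant, J/K
T = 273.0     # melting point of ice, K
N = 12e24     # atoms in a 36 g ice cube, per the text
m = 0.036     # kg, mass of the cube

E_thermal = N * 3 * k * T  # ~3kT of thermal energy per atom
per_kg = E_thermal / m
print(f"{E_thermal:.1e} J total")  # ~1.4e5 J
print(f"{per_kg:.1e} J/kg")        # ~4e6 J/kg
```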

  Well, that in a word sums up the problem: cool. Ice is cooler than a car engine. We talked about this in chapter 2 (“Harry Potter and the Great Conservation Laws”) in a different context, but it’s worth repeating: if we put an ice cube on top of our hot car engine, the energy from the ice cube won’t flow into the engine—quite the reverse. We find that energy, in the form of heat, flows from the engine into the ice cube, warming and melting the cube while it cools off the engine. This is the second law of thermodynamics:

  Heat flows spontaneously from higher temperatures to lower temperatures, never the reverse.

  Another formulation of it states:

  In order to make heat flow from an object at a higher temperature to one at a lower temperature, you have to do work.

  If energy goes from the ice cube into the engine, it’ll further cool the ice cube and further warm the hot engine, which is something that never happens on its own. If you want it to happen, you have to spend more energy to do it. In other words, you can’t get something for nothing.

  The second law puts stringent limits on how well any engine can work. An engine cycle is a process whereby heat generated by some chemical process is extracted from a “reservoir” at high temperature and moved to another at a lower temperature. In the process, some of the heat is used to do something useful; that is, it is transformed into work. For example, in the Otto cycle of a car engine, heat is generated by the combustion of gasoline and air; the reaction heats the gas in the cylinder to a high temperature, driving the piston forward via the expansion of the hot gases. When the piston returns, it pushes the combustion products out into the air at a lower temperature; the combustion products still carry a lot of heat that couldn’t be used in the work the piston did. The efficiency of the engine, defined as the ratio of the useful work to the total energy used, has a strict upper bound:

  e ≤ 1 − TC/TH,

  where TH is the highest absolute temperature reached during the cycle and TC is the lowest. Generally, the lower temperature is room temperature (around 300 K); for a typical car engine, the upper bound might be around 500 K, meaning that the upper bound on the efficiency is 1 − (300/500) = 0.4, or 40%. This is the theoretical limit; the actual efficiencies tend to be lower, around 20%. To be blunt: computers are ten trillion times less energy efficient than they could be, which is why they keep improving by leaps and bounds, whereas cars are within a factor of two of being as efficient as they could be, which is why improving cars is difficult.
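The Carnot bound is a one-line calculation. A minimal sketch, using the temperatures from the text:

```python
def carnot_limit(t_hot, t_cold):
    """Upper bound on engine efficiency from the second law: 1 - TC/TH.
    Temperatures must be absolute (kelvins)."""
    return 1 - t_cold / t_hot

# Car-engine numbers from the text: 500 K peak, 300 K exhaust
print(carnot_limit(500, 300))  # 0.4, i.e., 40% at best
```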

  This is an important point. The first steam engines were less than 2% efficient, but their efficiency rose rapidly, essentially along an exponentially increasing curve, until they began to approach their maximum possible efficiency. Nowadays, improving the internal combustion engine substantially would take a lot of work. The current emphasis seems to be on hybrid gasoline-electric designs, but there are also plans for new materials that would allow higher maximum temperatures.

  How about adding flight to the car? After all, science fiction is filled with portrayals of flying cars, from Hugo Gernsback on down. One gets into an interesting semantic problem at this point: many small airplanes aren’t much larger than large cars. If we add wings to a car, does that make it a flying car or a small airplane? Or would a hovercraft count as a flying car, even though it can’t get more than a few feet off the ground? In any event, let’s say we have some James Bond–type car with fold-out wings: how good can we make it? Well, the airflow over the wings is what keeps the car up, but it also creates drag. We can estimate the power needed to overcome the air drag by assuming that the drag force is some fraction of the lift force: say, 20% of it. The lift force is equal to the weight of the car; if we assume a small car of about 1,000 kg, its weight is about 10,000 N, so there will be about a 2,000 N drag force on the car. If the car is moving at 60 mph (which is probably too low a speed to keep it in the air), the engine power needed to overcome the drag can be estimated from the formula

  P = Fv,

  where P is the power expenditure, F is the drag force, and v is the car’s speed—about 30 m/s in this case. Thus the power required to overcome the drag force is around 60,000 W. However, since the car engine is only about 20% efficient in turning heat into work, it will have to operate at some 300,000 W to keep the car in the air. This is pretty hefty: it’s a 400 hp engine. This is about three times the horsepower of a 2010 Toyota Corolla, which has a higher mass (and would therefore require a larger engine if it flew). So, a flying car is not impossible, but it would be expensive to maintain. And also dangerous. If you run out of fuel in a car, you pull over to the side of the road. Running out of fuel in a flying car is not something good to imagine. In addition, learning to fly a plane is much harder than learning to drive, crashes from the air are a lot scarier than crashes on the ground, and highways don’t work well as landing strips if there are cars on them. I’m sure you can come up with other problems as well.
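The whole flying-car estimate chains together in a few lines. A sketch under the text’s assumptions (drag equal to 20% of lift, a 20%-efficient engine, g ≈ 10 m/s²):

```python
m = 1000.0           # kg, small car
g = 10.0             # m/s^2
v = 30.0             # m/s, about 60 mph
drag_fraction = 0.2  # drag assumed to be 20% of lift
efficiency = 0.2     # typical internal-combustion efficiency

lift = m * g                      # 10,000 N, equal to the car's weight
drag = drag_fraction * lift       # 2,000 N of drag
p_useful = drag * v               # P = Fv: 60,000 W to overcome drag
p_engine = p_useful / efficiency  # 300,000 W of heat input needed
print(p_engine / 746)             # ~402 horsepower
```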

  Of course, the flying taxis in most science fiction stories aren’t usually portrayed as small airplanes; they’re usually much more impressive, and hence much more expensive to fly. In Robert Heinlein’s The Number of the Beast, the protagonist has a “flying” car that can be launched into a suborbital ballistic trajectory [120]. Heinlein once predicted (in 1979) that in the future (i.e., in 2010 or thereabouts), people would be able to routinely travel at thousands of miles per hour; this hasn’t happened yet, alas [121]. Such travel is very akin to spaceflight, however, so we will look at it in detail.

  5.5 SUBORBITAL FLIGHTS

  In the next chapter I discuss vacations in space, but for the rest of this one I’ll talk about Heinlein’s suborbital “flying car,” which is just a ballistic missile in disguise. A manned, steerable ballistic missile, to be sure, but in principle no different from the nuclear-armed missiles the United States and the Soviet Union used to point at each other. How does a ballistic missile work?

  We can understand the ballistic missile (in a very rough sense) from the range equation, something all freshman physics majors are taught in the first two weeks of their first physics course. Simply put, if you throw a ball into the air at a certain speed, making a certain angle with respect to the ground, the range equation will tell you how far it travels.

  There are a few caveats here:

  • The equation doesn’t take air resistance into consideration.

  • The equation assumes that the projectile is being launched over flat ground.

  These two caveats are going to restrict the validity of what I am saying here: ballistic missiles travel over large distances, so we may not be able to ignore the curvature of the Earth when dealing with them. In addition, at least the launch and landing phases of the missile’s flight take place within Earth’s atmosphere, so we can’t ignore air resistance in a completely realistic treatment of the problem. But the basic idea is simple: you toss the “flying car” into the air, it follows a roughly parabolic trajectory because of gravity, and it comes back down.

  Here’s the equation:

  R = (v²/g) sin 2θ,

  where v is the launch speed, g is the acceleration of gravity (about 10 m/s² on Earth), and θ is the angle above the horizontal at which the projectile is launched. One can also work out the flight time:

  t = (2v/g) sin θ.
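Both formulas are easy to play with numerically. A sketch with hypothetical numbers (a 100 m/s launch at 45 degrees, the angle that maximizes range for a fixed speed):

```python
import math

g = 10.0  # m/s^2, rounded as in the text

def ballistic_range(v, theta_deg):
    """Range of a projectile over flat ground, ignoring air resistance."""
    theta = math.radians(theta_deg)
    return v**2 * math.sin(2 * theta) / g

def flight_time(v, theta_deg):
    """Time aloft for the same projectile."""
    theta = math.radians(theta_deg)
    return 2 * v * math.sin(theta) / g

print(ballistic_range(100, 45))  # 1000 m
print(flight_time(100, 45))      # ~14.1 s
```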

 
