Why Do Pirates Love Parrots?


by David Feldman


  Pet foods (unlike most human foods) provide the sole diet of most pets, so the product must be complete and balanced nutritionally. In addition, pet foods must be appetizing and appeal to a pet’s sense of smell and taste. This quality is known as palatability, and it is a source of competition among pet food manufacturers. Scent and flavor must appeal to the dog and may differ from what would appeal to us.

  Ironically, dogs, who will eat just about anything lying on the street, also have sensitive stomachs. Lucille Kubichek, of the Chihuahua Club of America, notes that efforts to find a scent that humans would like could lead to health issues:

  Dog foods carry the odors of the ingredients of which they are composed. I doubt food odors could be neutralized without adding one or more chemicals, which probably would be harmful to the animal.

  By American law, dog food need not be fit for human consumption, and ingredient labels can be difficult to decipher. For example, dogs love lamb, and many kibbles include “lamb meal.” What the heck is lamb meal? It consists of dehydrated carcass, including muscle, bone, and internal organs. For humans who are skittish about finding a hair in their soup, it might be more than a little off-putting to find that lamb meal is often infested with wool and with starch from the inside of the lamb’s stomach.

  Fat is often the second ingredient listed on pet food nutrition labels, and this is often responsible for that awful dog food smell. In her book, Food Pets Die For, Ann N. Martin lambastes its quality:

  Fats give off a pungent odor that entices your pet to eat the garbage. These fats are sourced from restaurant grease. This oil is rancid and unfit for human consumption. One of the main sources of fat comes from the rendering plant. This is obtained from the tissues of mammals and/or poultry in the commercial process of rendering or extracting.

  While the pet food industry and its critics, such as Ann N. Martin, wrangle about whether commercial pet food is dangerous to pets’ health, most dogs seem to be quite content to clean their plates quickly. That makes us happy. The faster they eat the dog food, the sooner the smell goes away.

  Submitted by Dotty Bailey of Decatur, Georgia.

  In Track Events with Staggered Starts, Why Do the Outside Runners Cut to the Inside Immediately Rather Than a More Gradual, Straight Line?

  In middle-distance track events, such as the 800-meter run, the athletes start the race in lanes. Because the runners in the outside lanes must travel a greater distance than those on the inside, the starting lines are staggered, with those closest to the inside farthest back at the start. At the “break point,” usually right after a full turn and at the beginning of the straightaway, runners in the other lanes have the opportunity to break to the inside to save running distance.

  But sharp-eyed reader Dov Rabinowitz wrote us:

  The outside runners always seem to start moving toward the inside at the beginning of a long straightaway, and by the time they are about one-quarter to one-half of the way down the straightaway, they have moved completely to the inside of the track, so that the runners are nearly single file.

  Rabinowitz theorizes that runners waste extra steps by breaking to the inside prematurely, and that the sharper turn costs some of the runner’s forward momentum.

  We posed Dov’s Imponderable to four full-time running coaches and even more runners. All of the coaches agreed with Greg McMillan, a coach, runner, and exercise scientist from Austin, Texas, who preaches simple geometry:

  The question is a good one and the situation drives many coaches crazy. Every track athlete is taught that once you are allowed to “break for the rail,” the athlete should run at a gradual diagonal to the inside lane. Since the point where the athlete may break from running in lanes is usually after the first curve (as in the 800-meter race), the best thing to do is run in a straight line toward the rail at the far end of the back stretch (around the 200-meter mark on most tracks). This will create the shortest distance around the track.
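  McMillan’s “simple geometry” is easy to verify. Below is a minimal sketch with made-up numbers (a runner about three lane widths, roughly 3.66 meters, from the rail at the start of a 100-meter straightaway), comparing a sharp early cut to the rail against the gradual diagonal the coaches teach; the figures and the function name are ours, purely for illustration.

```python
import math

def path_length(cut_over_m, offset_m, straight_m):
    """Distance run down the straight if the athlete angles toward the rail
    over the first `cut_over_m` meters, then hugs the rail the rest of the way."""
    diagonal = math.hypot(cut_over_m, offset_m)    # angled segment to the rail
    return diagonal + (straight_m - cut_over_m)    # plus the remainder along the rail

offset, straight = 3.66, 100.0   # hypothetical: ~3 lanes out, 100 m straightaway

for cut_over in (5, 25, 50, 100):    # 100 m = the gradual, full-length diagonal
    print(f"cut to the rail over {cut_over:>3} m -> "
          f"{path_length(cut_over, offset, straight):8.3f} m")
```

  Under these assumed numbers, darting over in the first 5 meters costs only about a meter compared with drifting across the full straightaway, which may help explain why pack psychology and tactics so often win out over the geometry.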

  Obviously, runners want to win, and running extra meters is a clear hindrance to that goal. So we asked the coaches why runners cut over early anyway. McMillan thinks that the fly in the ointment is often psychological:

  [The straight-line approach] sounds easy and every athlete will agree with it. But in the real world, this guideline goes out the window when the race starts. Most athletes will agree that it’s the “safety in the pack mentality.” The runner wants to get near the competition and feels vulnerable out on the open track. So while it doesn’t make logical sense to make a drastic cut toward the rail, the emotions of the athlete often cause this to happen. Some runners are better at controlling this urge than others.

  Coach Roy Benson, president of Running, Ltd., based in Atlanta, Georgia, has coached professionally for more than forty years, and says that most runners have strategy rather than mathematics on their minds:

  Those in the outer lanes are usually trying to cut off runners on their left and take the lead as soon as possible.

  But you can run into traffic problems if you stay on the inside, too. Dr. Gordon Edwards, a coach and runner from Charlotte, North Carolina, wrote about the tactical dangers of breaking to the inside too soon:

  It would be impossible and hazardous to cut immediately to the first lane as you might impede other runners or bump into them. If you watch distance races, many runners run in the second or third lanes at times, even on the curves, for strategic reasons. Yes, they will run farther doing this, but sometimes it is a necessary tactic.

  One of the reasons why runners might not break to the extreme inside is so that they can draft behind the lead runner, just as NASCAR drivers “leech” on the lead car. Rather than risk clipping the heels of the lead runner, a trailing runner may find it safer to sit off to the side. McMillan says that drafting is especially effective on windy days, and the benefits of running in the slipstream of another runner can outweigh the extra few meters the racer on the outside must complete.

  We couldn’t entice any of the runners to admit that they scooted to the inside prematurely for psychological reasons. All of them understood the merits of the gradual drift to the inside, and several chided other runners for darting to the inside prematurely:

  Every coach I ever had hammered home the distance-saving value of taking a tangent rather than cutting in; there’s plenty of time…to make a cut, if the traffic pattern dictates it, but the farther away from the break point you are, the better, particularly because so many runners have a sheep-like mentality and break all at once; let them stumble all over each other fighting for the rail.

  But the runners were afraid of traffic problems. One used a roadway analogy:

  [This problem is] not dissimilar from merging onto a highway…Although the straight diagonal line is slightly shorter, a quick analysis of the runners you are merging with might make one decide to cut in a bit quicker to avoid a potential bump, or to wait a bit, then cut in, also to avoid someone. In theory, if everyone is exactly even when they all go to cut in, and they all take the straight diagonal line to the pole, well, then that’s one big jam up!

  Some runners, drafting be damned, prefer racing from the front. If the runner feels he is in the lead, but is on the outside, he might want to cut over “prematurely” to get to the inside immediately and force competitors to try to pass him on the outside. But runners realize that it is possible to be “boxed-in,” too—stuck in the inside lane, in a pack, with runners in front and outside of them, preventing acceleration.

  Many a horse race has been lost because the jockey couldn’t keep his mount from veering wide. Peter Sherry, a coach and former medalist at the World University Games, disagrees somewhat with the premise of the Imponderable. He thinks that the incidence of “elite” runners cutting to the left prematurely is lower than we’re implying, and that it’s more typical of high school races or races with less experienced athletes. He argues that if an elite runner darts to the inside quickly, there’s usually a good reason, namely that

  a runner wants to make sure he gets a position on the rail before the race gets to the first turn. If you get caught on the outside of the pack during the turn, you will be running farther than someone on the inside rail.

  Submitted by Dov Rabinowitz of Jerusalem, Israel.

  For more information about the configuration of running tracks in general and staggered starts in particular, go to http://www.trackinfo.org/marks.html.

  Are Brussels Sprouts Really from Brussels?

  Oh ye of little faith! You might not be able to find Russian dressing in Moscow or French dressing in Paris, but Belgium is happy to stake its claim on its vegetable discoveries—endive and Brussels sprouts and, for that matter, “French” fries.

  Although there have been scattered reports about Brussels sprouts first being grown in Italy during Roman times, the first confirmed sighting of cultivation is near Brussels in the late sixteenth century. We don’t know whether the rumor is true that there was a sudden upsurge of little Belgian children running away from home during that era.

  We do know that by the end of the nineteenth century, Brussels sprouts had been introduced across the European continent, and had made the trek across the Atlantic to the United States. Today, most Brussels sprouts served in North America come from California.

  Both the French (choux de Bruxelles) and the Italians (cavolini di Bruxelles) give credit to Brussels for the vegetable, and since Belgians are so modest about their contributions to the deliciousness of chocolate, mussels, and beer, who are we to argue?

  Submitted by Bill Thayer of Owings Mills, Maryland. Thanks also to Mary Knatterud of St. Paul, Minnesota.

  How Did They Mark Years Before the Birth of Christ? And How Did They Mark Years in Non-Christian Civilizations?

  Reader Tim Goral writes:

  We live in 2004 A.D. [well, we did a few years ago], or anno domini (the year of our Lord); the time period of human history that supposedly began with the birth of Jesus Christ. The historical references before that are all “B.C.” (or sometimes “B.C.E.”), so my question is: How did the people that lived, say 2500 years ago, mark the years? I understand that we count backward, as it were, from 2 B.C. to 150 B.C. to 400 B.C., etc., but the people that lived in that time couldn’t have used that same method. They couldn’t have known they were doing a countdown to one. What did they do? For example, how did Aristotle keep track of years?

  In his essay, “Countdown to the Beginning of Time-Keeping,” Colgate University professor Robert Garland summarized this question succinctly: “Every ancient society had its own idiosyncratic system for reckoning the years.” We’ll put it equally succinctly and less compassionately: “What a mess!”

  Since we couldn’t contact ancient time-keepers (a good past-life channeler is hard to find), this is a rare Imponderable for which we were forced to rely on books. We can’t possibly cover all the schemes to mark time that were used, so if you desire in-depth discussions of the issue, we’ll mention some of our favorite sources at the end of this chapter.

  Our calendar is a gift from the Romans, but because early reckonings were based on incorrect assessments of the lunar cycles, our system has changed many times. “A.D.” is short for Anno Domini Nostri Iesu Christi (“in the year of our Lord Jesus Christ”); the years before that are designated “B.C.” (before Christ). Obviously, the notion of fixing a calendar around Jesus did not occur immediately on his birth. Religious scholars wrestled with how to fix the calendar for many centuries.

  In the early third century A.D., Palestinian Christian historian Sextus Julius Africanus attempted to fix the date of creation (he put it at what we would call 4499 B.C., but he had not yet thought of the B.C./A.D. calendar designation). In the sixth century, Pope John I asked a Scythian monk, Dionysius Exiguus, to fix the dates of Easter, which had been celebrated on varying dates. Exiguus, working from erroneous assumptions and making errors in calculation, was the person who not only set up our B.C./A.D. system, but helped cement December 25 as Christmas Day (for a brief examination of all of the monk’s mistakes, see http://www.westarinstitute.org/Periodicals/4R_Articles/Dionysius/dionysius.html). Two centuries later, Bede, an English monk later known as Saint Bede the Venerable, popularized Exiguus’s notions. Christians were attempting to codify the dates of the major religious holidays, partly to compete with Roman and Greek gods and the Jewish holidays, but also to make the case for a historical Jesus.

  Although the world’s dating schemes are all over the map (pun intended), most can be attributed to one of three strategies:

  1. Historical Dating. Christian calendars were derived from the calendars created by the Roman Empire. The early Romans counted the years from the supposed founding of Rome (ab urbe condita), which they calculated as what we would call 753 B.C. (converting such counts to B.C./A.D. dates is sketched in the example after this list). The ancient Greeks attempted to establish a common dating system in the third century B.C., by assigning dates based on the sequence of the Olympiads, which some Greek historians dated back as far as 776 B.C.

  2. Regnal Dating. If you were the monarch, you had artistic control over the calendar in most parts of the world. In the ancient Babylonian, Roman, and Egyptian empires, for example, the first year of a king’s rule was called year one. When a new emperor took the throne—bang!—up popped a new year one. Although Chinese historians kept impeccable records of the reign of emperors, dating back to what we would call the eighth century B.C., they similarly reset to year one at the beginning of each new reign. In ancient times, the Japanese sometimes used the same regnal scheme, but other times dated back to the reign of the first emperor, Jimmu, in 660 B.C.

  3. Religious Dating. Not surprisingly, Christians were not the only religious group to base their numbering systems on pivotal religious events. Muslims used hegira, when Mohammed fled from Mecca to Medina in 622 A.D. to escape religious persecution, to mark the starting point of their calendar. In Cambodia and Thailand, years were numbered from the date of Buddha’s death. Hindus start their calendar from the birth of Brahma.
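  Because the B.C./A.D. scale skips a year zero (1 B.C. is followed immediately by A.D. 1), and because systems like ab urbe condita or regnal years each start from their own year one, modern historians in effect translate every ancient date onto a single number line before comparing them. Here is a minimal sketch of that bookkeeping in Python, using the standard astronomical convention (1 B.C. = year 0) and the traditional equation AUC 1 = 753 B.C.; the function names are ours, for illustration only.

```python
def to_number_line(year, era):
    """Map a year label onto astronomical year numbering:
    A.D. 1 -> 1, 1 B.C. -> 0, 2 B.C. -> -1, and so on (there is no year zero)."""
    era = era.upper().replace(".", "")
    if era in ("AD", "CE"):
        return year
    if era in ("BC", "BCE"):
        return 1 - year
    raise ValueError(f"unknown era: {era}")

def label(n):
    """Turn an astronomical year number back into a B.C./A.D. label."""
    return f"A.D. {n}" if n >= 1 else f"{1 - n} B.C."

def auc_to_number_line(auc_year):
    """Rome's ab urbe condita count, assuming AUC 1 = 753 B.C."""
    return auc_year - 753

# The founding of Rome, and the span from Caesar's assassination (44 B.C.) to 2006:
print(label(auc_to_number_line(1)))                               # 753 B.C.
print(to_number_line(2006, "A.D.") - to_number_line(44, "B.C."))  # 2049 years
```

  The same trick handles Olympiads or regnal years: anchor one year of the ancient count to a B.C./A.D. date, and the rest of the sequence falls onto the number line.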

  Looking over the various numbering schemes, you can’t help but notice how parochial most calendar making was in the ancient world. Even scholars who were trying to determine dates based on astronomical events often ended up having to bow to political or religious pressure. And modern society is not immune to these outside forces—some traditionalist Japanese activists are trying to reintroduce a dating system based on the emperors’ reigns.

  Submitted by Tim Goral of Danbury, Connecticut.

  For more information about this subject, one of the best online sources can be found at http://webexhibits.org/calendars/index.html. Some of the books we consulted include Anno Domini: The Origins of the Christian Era by Georges Declercq; Countdown to the Beginning of Time-Keeping by Robert Garland; and, maybe best of all, the Encyclopaedia Britannica section on “calendar.”

  Why Are There More Windows Than Rows on Commercial Airliners? Why Aren’t the Windows Aligned with the Rows of Seats?

  When we posed this Imponderable to reader Ken Giesbers, a Boeing employee, he wrote:

  I understand this one on a personal level. On one of the few occasions that I flew as a child, I had the good fortune of getting a window seat, but the misfortune of sitting in one that lined up with solid fuselage between two windows. Other passengers could look directly out their own window, but not me. I would crane my neck to see ahead or behind, but the view was less than satisfying. Why would designers arrange the windows in this way?

  From the airplane maker’s point of view, the goal is clearly, as a Boeing representative wrote to Imponderables, to

  provide as many windows as they reasonably can, without compromising the integrity of a cabin that must safely withstand thousands of cycles of pressurization and depressurization.

  But the agenda of the airlines is a little different. Give passengers higher “seat pitch” (the distance between rows of seats) and you have more contented passengers. Reduce the seat pitch and you increase revenue if you can sell more seats on that flight. In 2000, American Airlines actually reduced the number of rows in its aircraft, increasing legroom and boasting of “More Room Throughout Coach” in its ads. When the airline started losing money, it panicked and cut out the legroom by putting in more rows.

  In 2006, with soaring fuel prices squeezing the airlines’ costs, there isn’t much incentive to offer more legroom. Their loads (percentage of available seats sold) are high; if they reduce the number of seats available on a flight, there is a good chance they have lost potential revenue on that flight. And passengers who are concerned with legroom might spring for pricier (and, for the airline, more lucrative) seats in business or first class. On the other hand, the more you squish your passengers, the more likely you are to lose your customer to another airline that offers higher seat pitch (the downsizing of American Airlines’ legroom in coach lost them one Imponderables author to roomier JetBlue). Web sites such as SeatGuru.com pinpoint the exact pitch dimensions of each aircraft on many of the biggest airlines, heating up the “pitch war.”
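  The arithmetic behind both the pitch war and reader Giesbers’s windowless window seat is easy to sketch. The numbers below are entirely hypothetical (a 1,200-inch coach cabin, windows every 20 inches, pitches from 31 to 34 inches); none of them come from Boeing or any airline, but they illustrate how trimming pitch buys extra rows and why a window interval chosen by the manufacturer can drift out of line with any given airline’s seating.

```python
# Hypothetical figures -- none of these come from Boeing or an airline.
CABIN_LENGTH = 1200      # inches of coach cabin
WINDOW_SPACING = 20      # inches between window centers, fixed when the plane is built

def rows_at_pitch(pitch):
    """Rows of seats that fit at a given seat pitch (inches from row to row)."""
    return CABIN_LENGTH // pitch

windows = CABIN_LENGTH // WINDOW_SPACING

for pitch in (34, 32, 31):
    rows = rows_at_pitch(pitch)
    # How far each of the first few rows sits from the nearest window "grid line":
    offsets = [row * pitch % WINDOW_SPACING for row in range(1, 5)]
    print(f"{pitch}-inch pitch: {rows} rows vs. {windows} windows; "
          f"early-row offsets from the window grid: {offsets} inches")
```

  With these made-up numbers, cutting pitch from 34 to 31 inches squeezes in three more rows of revenue, the windows always outnumber the rows, and the rows drift across the window grid, so somebody always ends up staring at solid fuselage.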

 
