A Brush with Maths
Some mathematicians pay attention to hairdressers more than other mathematicians do. Two modern scholars focused their attention very differently when they wrote about history’s most famous numerico-tonsorial collaboration.
In 1784, mathematicians joined forces with hairdressers on a scale probably never attempted before or since. A century and a half later, Raymond Clare Archibald looked back at that joint effort in wonder. Archibald’s ‘Tables of Trigonometric Functions in Non-Sexagesimal Arguments’ spanned twelve full pages in the April 1943 issue of the about-as-lively-as-you-might-expect journal Mathematical Tables and Other Aids to Computation.
Archibald, a professor of mathematics at Brown University in Providence, Rhode Island, exemplified terseness. He often abbreviated himself as R. C. Archibald. His monograph identifies him simply as ‘RCA’.
Though some mathematicians are bald, he was not. A former student wrote that Archibald ‘was striking in appearance, his hair wavy and beginning to grey, worn a little longer than was generally the custom’.
Archibald sketched the basics of the hairdresser story.
The French government wanted new, improved ‘tables of the sines, tangents, etc., and their logarithms’. The fellow in charge, the not-so-terse Gaspard Clair François Marie Riche de Prony, whose portraits credit his head with an abundant, curvilinear garden of hair, assembled a team. De Prony got three or four mathematicians to do the heavy mental lifting, seven or eight people to perform the tedious calculations, and – here the story took its little twist – ‘70 or 80’ people to check the work.
These checkers, Archibald said, were ‘endowed with no great mathematical abilities. In fact they were mainly recruited from among hairdressers whom the abandonment of the wig and powdered hair in men’s fashions, had deprived of a livelihood.’
Archibald devoted only one paragraph to those hairdressers, otherwise stubbornly persisting in an almost obsessive description of the sines, cosines, and other, sometimes tangential, niceties of the story. The project produced ‘17 large folio volumes’, he lets us know, of which ‘8 volumes were devoted to logarithms of numbers to 200,000’.
In contrast, Ivor Grattan-Guinness practically babbled about the ex-coiffeurs. An emeritus professor of history of mathematics and logic at Middlesex Polytechnic, Grattan-Guinness sports healthy hanks of white hair in the photos of him that I have seen. His monograph called ‘Work for the Hairdressers: The Production of de Prony’s Logarithmic and Trigonometric Tables’ appeared in 1990 in the Annals of the History of Computing. He wrote: ‘Many of these workers were unemployed hairdressers: one of the most hated symbols of the ancien regime was the hairstyles of the aristocracy, and the obligatory reduction of coiffure “as the geometers say, to its most simplest expression” left the hairdressing trade in a severe state of recession. Thus these artists were converted into elementary arithmeticians.’
Everything was carefully organized, Grattan-Guinness explained, ‘to avoid multiplication and division and to reduce the calculations to sums and (especially) differences, which the hairdressers could fairly be expected to handle’.
The hairdressers finished their work in less than three years. Historians have (so far as I’m aware) ignored whatever they did after that.
Archibald, Raymond Clare (1943). ‘Tables of Trigonometric Functions in Non-Sexagesimal Arguments.’ Mathematical Tables and Other Aids to Computation 1 (2): 33–44.
Grattan-Guinness, Ivor (1990). ‘Work for the Hairdressers: The Production of de Prony’s Logarithmic and Trigonometric Tables.’ Annals of the History of Computing 12: 177–85.
Leftovers from Ham Sandwich Theories
The Ham Sandwich Theorem has been a treat and a spur to mathematicians for more than half a century. There was a bit of a kerfuffle about who invented it, but that question did get settled.
The Ham Sandwich Theorem cropped up in a branch of mathematics called algebraic topology.
The theorem describes a particular truth about certain shapes. Most published papers on the topic make a hash of explaining it to anyone who is not an algebraic topologist. But the authors of a 2001 paper called ‘Leftovers from the Ham Sandwich Theorem’ wrapped up an important little leftover – they put the idea into clear language.
The Ham Sandwich Theorem, they wrote, ‘rescues the careless sandwich maker by guaranteeing that it is always possible to slice the sandwich with one cut so that the ham and both slices of bread are each divided into equal halves, no matter how haphazardly the ingredients are arranged’.
For a while, most ham sandwich theorizing dealt with simple cases. A paper entitled ‘Computing a Ham-Sandwich Cut in Two Dimensions’, published in 1986 in the Journal of Symbolic Computation, is typical. It considered only ham sandwiches that had been flattened flatter than even the chintziest cook would dare to devise. Mathematicians often do things this way, first considering the extreme cases, digesting those thoroughly, and only then moving on to more substantial versions. Indeed, the ‘Computing a Ham-Sandwich Cut in Two Dimensions’ paper itself contains a section called ‘Getting Rid of Degenerate Cases’.
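For the flattened, two-dimensional case, the theorem is easy to poke at with a computer. Here is a minimal brute-force sketch – a demonstration of the statement only, not the efficient method of the 1986 paper – in which two small point sets stand in for the ham and the bread, and a search over directions looks for one straight line that splits each set roughly in half.

```python
# A brute-force illustration of the two-dimensional ham-sandwich cut:
# two finite point sets stand in for the ham and the bread, and we search
# over line directions for one straight line that splits each set roughly
# in half. This is only a demonstration of the statement, not the efficient
# algorithm of the 1986 paper.
import math
import random

def sides(points, normal, offset):
    """Count points strictly on each side of the line {p : p . normal = offset}."""
    below = sum(1 for (x, y) in points if x * normal[0] + y * normal[1] < offset)
    above = sum(1 for (x, y) in points if x * normal[0] + y * normal[1] > offset)
    return below, above

def ham_sandwich_cut(ham, bread, steps=3600):
    """Return (angle, offset, imbalance) of the most balanced cut found."""
    best = None
    for i in range(steps):
        angle = math.pi * i / steps
        normal = (math.cos(angle), math.sin(angle))
        # Pick the offset that bisects the ham: a median projection onto the normal.
        projections = sorted(x * normal[0] + y * normal[1] for (x, y) in ham)
        offset = projections[len(projections) // 2]
        imbalance = sum(abs(a - b) for a, b in (sides(ham, normal, offset),
                                                sides(bread, normal, offset)))
        if best is None or imbalance < best[2]:
            best = (angle, offset, imbalance)
    return best

random.seed(1)
ham = [(random.random(), random.random()) for _ in range(7)]
bread = [(random.random() + 2.0, random.random()) for _ in range(9)]
angle, offset, imbalance = ham_sandwich_cut(ham, bread)
print(f"cut direction {angle:.3f} rad, offset {offset:.3f}, leftover imbalance {imbalance}")
```

With an odd number of ‘ham’ points, a line through a median ham point always halves the ham; rotating that line through 180 degrees is what forces a direction in which the bread is halved too, which is the intermediate-value flavour of the theorem’s usual proof.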
People did solve the mystery of slicing a thick ham sandwich. And, inevitably, they developed a hunger for more substantial problems.
In 1990, Yugoslavian theorists were writing in the Bulletin of the London Mathematical Society about ‘An Extension of the Ham Sandwich Theorem’. Two years later a theorist at Yaroslavl State University in Russia published a paper called ‘A Generalization of the Ham Sandwich Theorem’. That same year, a team of hungry American, Czech, and German mathematicians assembled a master collection of recipes for slicing ham sandwiches. Mathematicians almost never use the word ‘recipe’, so they called their paper ‘Algorithms for Ham-Sandwich Cuts’. You’ll find it in the December 1994 issue of the journal Discrete and Computational Geometry.
Research then moved on to exotic, distantly related questions, exemplified by a 1998 monograph called ‘Green Eggs and Ham’.
Figure: ‘The knife cut which divides the ham into equal area portions’ as depicted in ‘Green Eggs and Ham’
And who started this? A 2004 paper called ‘The Early History of the Ham Sandwich Theorem’ took care of a lingering leftover: it identified the inventor. Mathematico-historians W. A. Beyer and Andrew Zardecki, of Los Alamos National Laboratory in New Mexico, say that it was a Jewish theorist who introduced the ham sandwich into mathematical theory. Beyer and Zardecki trace the theorem back to a 1945 paper by the Polish mathematician Hugo Steinhaus that ‘represents work Steinhaus did in Poland on the ham sandwich problem in World War II while hiding out with a Polish farm family’.
Byrnes, Graham, Grant Cairns, and Barry Jessup (2001). ‘Leftovers from the Ham Sandwich Theorem.’ American Mathematical Monthly 108 (3): 246–49.
Beyer, W. A., and Andrew Zardecki (2004). ‘The Early History of the Ham Sandwich Theorem.’ American Mathematical Monthly 111 (1): 58–61.
Edelsbrunner, H., and R. Waupotitsch (1986). ‘Computing a Ham-Sandwich Cut in Two Dimensions.’ Journal of Symbolic Computation 2 (2): 171–78.
Zivaljevic, Rade T., and Sinisa T. Vrecica (1990). ‘An Extension of the Ham Sandwich Theorem.’ Bulletin of the London Mathematical Society 22 (2): 183–86.
Dolnikov, V. L. (1992). ‘A Generalization of the Ham Sandwich Theorem.’ Matematicheskie Zametki 52 (2): 27–37.
Lo, Chi-Yuan, J. Matoušek, and W. Steiger (1994). ‘Algorithms for Ham-Sandwich Cuts.’ Discrete and Computational Geometry 11 (4): 433–52.
Kaiser, M. J., and S. Hossaien Cheraghi (1998). ‘Green Eggs and Ham.’ Mathematical and Computer Modelling 28 (1): 91–99.
Abbott, Timothy G., Michael A. Burr, Timothy M. Chan, Erik D. Demaine, Martin L. Demaine, John Hugg, Daniel Kane, Stefan Langerman, Jelani Nelson, Eynat Rafalin, Kathryn Seyboth, and Vincent Yeung (2009). ‘Dynamic Ham-Sandwich Cuts in the Plane.’ Computational Geometry 42 (5): 419–28.
Steiger, William, and Jihui Zhao (2009). ‘Generalized Ham-Sandwich Cuts.’ Discrete and Computational Geometry 44 (3): 535–45.
May We Recommend
Greek Rural Postmen and Their Cancellation Numbers
edited by Derek Willan (publication of the Hellenic Philatelic Society of Great Britain, 1994)
The Perfect Second Cup of Coffee
Yes, there is a best way – mathematically – to pour your second cup of coffee, says a study called ‘Recursive Binary Sequences of Differences’.
But no one realized it until the year 2001, when Robert M. Richman published his simple recipe in the journal Complex Systems. During the subsequent passage of nine years and billions of cups of coffee, the secret has been available to all.
‘The problem is that the coffee that initially comes through the filter is much stronger than that which comes out last, so the coffee at the bottom of the pot is stronger than that at the top’, says Richman. ‘Swirling the pot does not homogenize the coffee, but using the proper pouring pattern does.’
Here’s all you have to do. Prepare coffee – two cups’ worth – in a carafe. Now get two mugs, call them A and B. Then: ‘If one has the patience to make four pours of equal volume, the possible pouring sequences are AABB, ABBA, and ABAB.’
Choose ABBA.
That’s it. You now have two nearly-identical-tasting cups of coffee.
Richman tells you what to do if you’re pernickety: ‘If one wishes to further reduce the difference and has more patience, one can make eight pours of equal volume, four in each cup. The number of possible sequences is now 35.’ The optimal sequence, he calculates, is ABBABAAB.
And if you are more finicky than that, Richman neglects you not. ‘With even more patience, one may make 16 pours, eight into each cup. There are now 6435 possible pouring sequences.’ ABBABAABBAABABBA is the way to go.
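The arithmetic behind these sequences is easy to check for yourself. The sketch below uses a stand-in model of my own, not Richman’s Walsh-function machinery: suppose the strength of what comes out of the carafe drops steadily, and slightly nonlinearly, from one pour to the next, then see how evenly each sequence shares the strength between cups A and B.

```python
# A quick numerical check of the pouring sequences, using a stand-in model
# rather than Richman's Walsh-function analysis: suppose the strength of
# what comes out of the carafe drops steadily, and slightly nonlinearly,
# from one pour to the next, and see how evenly each sequence shares
# strength between cups A and B.
from itertools import permutations

def cup_difference(sequence, strengths):
    """Absolute difference in total strength between cup A and cup B."""
    a = sum(s for pour, s in zip(sequence, strengths) if pour == "A")
    b = sum(s for pour, s in zip(sequence, strengths) if pour == "B")
    return abs(a - b)

# Invented strengths for pours 1..4: decreasing, with a slight curve.
four_pours = [1.00, 0.70, 0.48, 0.30]
for seq in sorted({"".join(p) for p in permutations("AABB")}):
    print(seq, round(cup_difference(seq, four_pours), 3))
# ABBA (and its mirror BAAB) wins: a straight-line trend in strength cancels
# exactly, leaving only the small curvature as a residue.

# The same check for eight pours picks out ABBABAAB over simple alternation.
eight_pours = [1.00, 0.82, 0.66, 0.53, 0.42, 0.33, 0.26, 0.20]
for seq in ("AABBAABB", "ABABABAB", "ABBABAAB"):
    print(seq, round(cup_difference(seq, eight_pours), 3))
```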
This same blending problem crops up elsewhere in modern life: in distributing pigments evenly when mixing paint, and even in choosing sides for a basketball game. ‘Consider the fairest way for “captain A” and “captain B” to choose sides,’ Richman instructs. The traditional method – alternating the choices – leads to unequally strong teams. Instead use the coffee recipe, which is ‘likely to result in the most equitable distribution of talent’. Insist that ‘captain A has the first, fourth, sixth, and seventh choices, while captain B has the second, third, fifth, and eighth choices’.
The mathematics in this study looks at coffee production as a collection of ‘Walsh functions’. These are trains of on/off pulses that add together in enlightening ways.
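Those pulse trains are easy to generate. A minimal sketch, using the Sylvester–Hadamard construction (one standard ordering of the Walsh functions, chosen here only for brevity):

```python
# Walsh functions in their Hadamard ordering: each row of the matrix is one
# 'train of on/off pulses', taking only the values +1 and -1.
import numpy as np

def hadamard(n):
    """Sylvester construction of a 2**n x 2**n matrix with +1/-1 entries."""
    h = np.array([[1]])
    for _ in range(n):
        h = np.block([[h, h], [h, -h]])
    return h

for row in hadamard(3):  # the eight Walsh functions of length eight
    print("".join("+" if value > 0 else "-" for value in row))
```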
The monograph ends modestly, or perhaps realistically, with a wistful thought: ‘As is typically the case with fundamental contributions, scientifically significant applications of this work may not appear for some time.’
Richman recently retired as a chemistry professor at Mount St. Mary’s University in Emmitsburg, Maryland. He now has more time to devote to this mixing business, with pleasure. ‘It took me over ten years to develop the mathematics to solve this problem, which is well outside of my primary area of expertise. I’m trying to find a classical number theorist who is willing to collaborate on the sequel: I think I can definitively establish the best way to pour three cups of coffee.’
Richman, Robert M. (2001). ‘Recursive Binary Sequences of Differences.’ Complex Systems 13: 381–92.
The Full Weight of Science
A pound of lead feels heavier than a pound of feathers – a thing long suspected, but not carefully tested until 2007, when Jeffrey B. Wagman, Corinne Zimmerman, and Christopher Sorric ran an experiment involving lead, feathers, plastic bags, cardboard boxes, a chair, blackened goggles, and twenty-three volunteers from the city of Normal, Illinois.
The scientists are based at Illinois State University, which is located in that unassumingly named metropolis. In a study published in the journal Perception, they explain why they took the trouble. ‘“Which weighs more – a pound of lead or a pound of feathers?” The seemingly naïve answer to this familiar riddle is the pound of lead whereas the correct answer is that they weigh the same amount.’ But, they wrote, this ‘naïve answer may not be so naïve after all. For over 100 years, psychologists have known that two objects of equal mass can feel unequally heavy depending on the mass distribution of those objects.’
Wagman, Zimmerman, and Sorric poured some lead shot into a plastic bag, then sealed and taped the bag inside the bottom of a cardboard box. For clarity, let’s call this the box-with-lead-in-its-bottom. Then they stuffed a pound of goose down feathers into a large plastic bag. Feathers and bags being what they are, this fluffed, baggy entity entirely filled a box that looked just like the box-with-lead-in-its-bottom. Let’s call this snugly packed second box the box-with-feathers-spread-throughout-its-innards.
Then came the test. One by one the volunteers sat in the chair, donned the blackened goggles, then ‘placed the palm of their preferred hand up with their fingers relaxed. On a given trial, each box was placed on the participant’s palm in succession. The participant hefted each box and reported which box felt heavier.’
Slightly more often than not, the volunteers said that the box-with-lead-in-its-bottom was heavier than the box-with-feathers-spread-throughout-its-innards.
After weighing and judging all the data, the scientists educatedly hazarded a guess as to why one box seemed heavier. Probably, they said, it’s because ‘the mass of the feathers was distributed more or less symmetrically in the box (i.e., the feathers filled the box), but the mass of the lead was distributed asymmetrically along the vertical axis (i.e., the box was “bottom-heavy”). Therefore the box containing lead was more difficult to control, and it felt heavier.’
The scientists did not test how volunteers would respond if the lead were fixed precisely in the middle, rather than stuck to the bottom, of the box. This they left for future scientists to contemplate.
Wagman, Jeffrey B., Corinne Zimmerman, and Christopher Sorric (2007). ‘“Which Feels Heavier – A Pound of Lead or a Pound of Feathers?” A Potential Perceptual Basis of a Cognitive Riddle.’ Perception 36: 1709–11.
May We Recommend
‘Do Dogs Know Calculus?’
by Timothy J. Pennings (published in College Mathematics Journal, 2003) and
‘Dogs Don’t Need Calculus’
by Michael Bolt and Daniel C. Isaksen (published in College Mathematics Journal, 2010)
The Face Value of Numbers
A smiley-face is very expressive, statistically. By tweaking the eyes, mouth, and other bits, you can literally put a meaningful face on any jumble of numbers. Herman Chernoff pointed this out in 1973 in the Journal of the American Statistical Association, in an article entitled ‘The Use of Faces to Represent Points in K-Dimensional Space Graphically’.
Subsequently, folks took to calling these things Chernoff faces. Chernoff faces can make statistical analysis into a recognizably human activity.
Most people, when shown some statistics, sigh and get boggled. But Herman Chernoff realized that almost everyone is good at reading faces. So he devised recipes to convert any set of statistics into an equivalent bunch of smiley-face drawings.
Each data point, he wrote, ‘is represented by a cartoon of a face whose features, such as length of nose and curvature of mouth, correspond to components of the point. Thus every multivariate observation is visualized as a computer-drawn face. This presentation makes it easy for the human mind to grasp many of the essential regularities and irregularities present in the data.’
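The recipe is simple enough to improvise in a few lines of plotting code. The sketch below is a toy version of the idea, not Chernoff’s own 1973 scheme: it maps just three made-up variables per observation onto head width, eye size, and mouth curvature.

```python
# A toy version of a Chernoff face, not Chernoff's 1973 recipe: three
# variables per observation (scaled to the range 0..1) control head width,
# eye size, and mouth curvature. All mappings here are invented for
# illustration.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Ellipse

def draw_face(ax, row):
    head_width, eye_size, smile = row
    # Head: an ellipse whose width tracks the first variable.
    ax.add_patch(Ellipse((0, 0), 0.6 + 0.4 * head_width, 1.0, fill=False))
    # Eyes: circles whose size tracks the second variable.
    for x in (-0.18, 0.18):
        diameter = 0.05 + 0.10 * eye_size
        ax.add_patch(Ellipse((x, 0.15), diameter, diameter, fill=False))
    # Mouth: a parabola whose curvature tracks the third variable
    # (0 gives a frown, 1 a smile).
    xs = np.linspace(-0.2, 0.2, 50)
    ax.plot(xs, -0.25 + (smile - 0.5) * (0.04 - xs ** 2) * 10, color="black")
    ax.set_xlim(-0.6, 0.6)
    ax.set_ylim(-0.7, 0.7)
    ax.set_aspect("equal")
    ax.axis("off")

data = np.random.default_rng(0).random((6, 3))  # six observations, three variables
fig, axes = plt.subplots(1, 6, figsize=(12, 2))
for ax, row in zip(axes, data):
    draw_face(ax, row)
plt.show()
```

Chernoff’s actual scheme drove many more features than this toy’s three; the point is only to show how a row of numbers becomes a face.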
‘The Use of Faces to Represent Points in K-Dimensional Space Graphically’ is one of the few statistics papers that is visually goofy, rather than arid.
One page is filled with eighty-seven cartoon faces, each slightly different. Some faces have little beady eyes, others have big, startled, wide-awake peepers. There are wide mouths, little dried-up ‘I’m not here, don’t notice me’ mouths, and middling mouths. Another page shows off some of the cartoony variety that’s possible: roundish simpleton heads, jowly alien-visitor heads, and a smattering of noggins that look froggy. Elsewhere, the study perhaps inevitably includes conventional statistics machinery – charts of numbers, differential and integral calculus equations, and plenty of technical lingo.
Chernoff discovered, by experiment, that people could comfortably interpret a face that expresses quite large amounts of data. ‘At this point,’ he wrote, ‘one can treat up to eighteen variables, but it would be relatively easy to increase that number by adding other features such as ears, hair, [and] facial lines.’
The world has gone on to employ Chernoff faces a little, but not yet a lot. A 1981 report in the Journal of Marketing, for example, used them to display corporate financial data, with this explanation: ‘From Year 5 to Year 1, the nose narrows as well as increases in length, and the eccentricity of the eyes increases. Respectively, these facial features represent a decrease in total assets, an increase in the ratio of retained earnings to total assets, and an increase in cash flow.’
A note at the very end of Chernoff’s 1973 paper hints at a practical reason why his idea would not catch on immediately: ‘At this time the cost of drawing these faces is about 20 to 25 cents per face on the IBM 360-67 at Stanford University using the Calcomp Plotter. Most of this cost is in the computing, and I believe that it should be possible to reduce it considerably.’
Chernoff, Herman (1973). ‘The Use of Faces to Represent Points in K-Dimensional Space Graphically.’ Journal of the American Statistical Association 68 (342): 361–68.