
Humble Pi


by Matt Parker


  Feynman came to realize that the one in one hundred thousand probability was more the result of wishful thinking by management than a ground-up calculation. The thinking seemed to be that if the shuttle was going to transport humans, it needed to be that safe, so everything would be engineered to that standard. Not only is that not how probabilities work, but how could they even calculate such long odds?

  It is true that if the probability of failure was as low as 1 in 100,000 it would take an inordinate number of tests to determine it (you would get nothing but a string of perfect flights from which no precise figure, other than that the probability is likely less than the number of such flights in the string so far).

  – Appendix F: Personal observations on the reliability of the Shuttle by R. P. Feynman, from Report to the President by the PRESIDENTIAL COMMISSION on the Space Shuttle Challenger Accident, 6 June 1986

  Far from getting a string of faultless test flights, NASA was seeing signs of possible failure during tests. There were also some non-critical failures during actual launches which did not cause any problems with the flight itself but showed that the chance of things going wrong was higher than NASA wanted to admit. They had calculated their probability based on what they wanted and not on what was actually happening. But the engineers had used the evidence from testing to try to calculate the actual risk, and they were about right.

  When humankind puts its mind to it and doesn’t let its judgement be clouded by what people want to believe, humans can be pretty good at probability. If we want to …

  EIGHT

  Put Your Money Where Your Mistakes Are

  What counts as a mistake in finance? Of course, there are the obvious ones, where people simply get the numbers wrong. On 8 December 2005 the Japanese investment firm Mizuho Securities sent an order to the Tokyo Stock Exchange to sell a single share in the company J-COM Co. Ltd for ¥610,000 (around £3,000 at the time). Well, they thought they were selling one share for ¥610,000 but the person typing in the order accidentally swapped the numbers and put in an order to sell 610,000 shares for ¥1 each.
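  A transposed price and quantity is exactly the kind of error a pre-trade sanity check can catch. A minimal sketch in Python (the function and threshold are hypothetical, not Mizuho's or the exchange's actual checks):

```python
def looks_like_fat_finger(quantity, price, last_traded_price, max_deviation=0.5):
    """Flag an order whose price strays too far from the last trade.

    Hypothetical pre-trade check: it knows nothing about intent, but an
    order priced wildly away from the market is probably a typo.
    """
    deviation = abs(price - last_traded_price) / last_traded_price
    return deviation > max_deviation

# The intended order: 1 share at ¥610,000 -- passes.
assert not looks_like_fat_finger(1, 610_000, 610_000)
# The transposed order: 610,000 shares at ¥1 -- flagged.
assert looks_like_fat_finger(610_000, 1, 610_000)
```

  The intended order sails through; the transposed one, priced almost 100 per cent below the last trade, is flagged before it ever reaches the exchange.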

  They frantically tried to cancel it, but the Tokyo Stock Exchange was proving resistant. Other firms were snapping up the discount shares and, by the time trading was suspended the following day, Mizuho Securities were looking at a minimum of ¥27 billion in losses (well over £100 million at the time). It was described as a ‘fat fingers’ error. I would have gone with something more like ‘distracted fingers’ or ‘should learn to double-check all important data entry but is probably now fired anyway fingers’.

  The wake of the error was wide-reaching: confidence dropped in the Tokyo Stock Exchange as a whole, and the Nikkei Index fell 1.95 per cent in one day. Some, but not all, of the firms which bought the discount stock offered to give them back. A later ruling by the Tokyo District Court put some of the blame on the Tokyo Stock Exchange because their system did not allow Mizuho to cancel the erroneous order. This only serves to confirm my theory that everything is better with an undo button.

  This is the numerical equivalent of a typo. Such errors are as old as civilization. I’d happily argue that the rise of civilization came about because of humankind’s mastery of mathematics: unless you can do a whole lot of maths, the logistics of humans living together on the scale of a city are impossible. And for as long as humans have been doing mathematics, there have been numerical errors. The academic text Archaic Bookkeeping came out of a project at the Free University of Berlin. It is an analysis of the earliest script writing ever discovered: the proto-cuneiform texts made up of symbols scratched on clay tablets. This was not yet a fully formed language but a rather elaborate bookkeeping system. Complete with mistakes.

  These clay tablets are from the Sumerian city of Uruk (in modern-day Southern Iraq) and were made between 3400 and 3000 BCE, so over five thousand years ago. It seems the Sumerians developed writing not to communicate prose but rather to track stock levels. This is a very early example of maths allowing the human brain to do more than it was built for. In a small group of humans you can keep track of who owns what in your head and have basic trade. But when you have a city, with all the taxation and shared property that it requires, you need a way of keeping external records. And written records allow for trust between two people who may not personally know each other. (Ironically, online writing is now removing trust between humans, but let’s not get ahead of ourselves.)

  Some of the ancient Sumerian records were written by a person seemingly named Kushim and signed off by their supervisor, Nisa. Some historians have argued that Kushim is the earliest human whose name we know. It seems the first human whose name has been passed down through millennia of history was not a ruler, a warrior or a priest … but an accountant. The eighteen existing clay tablets which are signed Kushim indicate that their job was to control the stock levels in a warehouse which held the raw materials for brewing beer. I mean, that is still a thing; a friend of mine manages a brewery and does exactly that for a living. (His name is Rich, by the way, just in case this book is one of the few objects to survive the apocalypse and he becomes the new oldest-named human.)

  Kushim and Nisa are particularly special to me not because they are the first humans whose names have survived but because they made the first ever mathematical mistake, or at least the earliest that has survived (at least, it’s the earliest one I’ve managed to find; let me know if you locate an earlier error). Like a modern trader in Tokyo incorrectly entering numbers into a computer, Kushim entered some cuneiform numbers into a clay tablet incorrectly.

  From the tablets we can find out a bit about the maths that was being used so long ago. For a start, some of the barley records cover an administration period of thirty-seven months, which is three twelve-month years plus one bonus month. This is evidence that the Sumerians could have already been using a twelve-month lunar calendar with a leap month once every three years. In addition, they did not have a fixed number-base system for numbers but rather a counting system using symbols which were three, five, six or ten times bigger than each other.

  Just remember: a big dot is worth ten small dots. And those other things.
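  A counting system like this can be thought of as mixed radix: each unit is a fixed multiple of the one below it, but the multiplier changes from level to level. A toy decomposition in Python (the unit factors here are illustrative, picked from the 3, 5, 6 and 10 mentioned above, not the genuine Sumerian unit sequence):

```python
def to_mixed_radix(n, factors=(10, 6, 10, 3)):
    """Decompose n into units of a mixed-radix counting system.

    Each unit is `factor` times bigger than the one below it; the
    factors here are only illustrative, not the real Sumerian units.
    """
    sizes = [1]                      # unit sizes: 1, 10, 60, 600, 1800
    for f in factors:
        sizes.append(sizes[-1] * f)
    digits = []
    for size in reversed(sizes):
        digits.append(n // size)
        n %= size
    return digits                    # largest unit first

# 1234 = 2*600 + 3*10 + 4*1
assert to_mixed_radix(1234) == [0, 2, 0, 3, 4]
```

  Unlike a fixed base-ten system, you cannot read off a digit's value from its position alone; you have to know the whole ladder of unit sizes, which is one more opportunity for a tired scribe to slip up.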

  Once you get through the alien number system, the mistakes are so familiar they could have been made today. On one tablet Kushim simply forgets to include three symbols when adding up a total amount of barley. On another one the symbol for one is used instead of the symbol for ten. I think I’ve made both those mistakes when doing my own bookkeeping. As a species, we are pretty good at maths, but we haven’t got any better over the last few millennia. I’m sure if you checked in on a human doing maths in five thousand years’ time, the same mistakes would still be being made. And they’ll probably still have beer.

  Both Nisa and Kushim have signed off on a maths error in this tablet.

  Sometimes when I drink a beer I like to remember Kushim working away in the beer warehouse with Nisa checking up on them. What they, and others like them, were doing led to our modern writing and mathematics. They had no idea how important they, and beer, ended up being for the development of human civilization. Like I said before, living in cities was one of the things which caused humans to rely on maths. But which part of city living is recorded in our longest-surviving mathematical documents? Brewing beer. Beer gave us some of humankind’s first calculations. And beer continues to help us make mistakes to this very day.

  Computerized money mistakes

  Our modern financial systems are now run on computers, which allows humans to make financial mistakes more efficiently and quickly than ever before. As computers have developed they have given birth to modern high-speed trading, where a single customer within a financial exchange can put through over a hundred thousand trades per second. No human can be making decisions at that speed, of course; these are the result of high-frequency trading algorithms where traders have fed requirements into the computer programs they have designed to automatically decide exactly when and how to make purchases and sales.

 
  Traditionally, financial markets have been a means of blending together the insight and knowledge of thousands of different people all trading simultaneously; the prices are the cumulative result of the hivemind. If any one financial product starts to deviate from its true value, then traders will seek to exploit that slight difference, and this results in a force to drive prices back to their ‘correct’ value. But when the market becomes swarms of high-speed trading algorithms, things start to change.

  In theory, the result of high-frequency trading algorithms should be the same as the results gained by high-frequency trading people – to synchronize prices across different markets and reduce the spread of values – but on an even finer scale. Automatic algorithms are written to exploit the smallest of price differences and to respond within milliseconds. But if there are mistakes in those algorithms, things can go wrong on a massive scale.

  On 1 August 2012 the trading firm Knight Capital had one of its high-frequency algorithms go off script. The firm acted as a ‘market maker’, which is a bit like a glorified currency exchange, but for stocks. A high-street currency exchange makes money because currencies will be sold at a lower price for the convenience of a quick sale. The exchange will then hang on to that foreign money until it can sell it at a higher price to someone who comes in later and asks for it. This is why you will see tourist currency exchanges with rather different buy and sell prices for the same currency. Knight Capital did the same thing, but with stocks, and could sometimes resell a stock it had just purchased in under a second.
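  The market-maker's margin is simply the gap between the price it buys at (the bid) and the price it sells at (the ask). A toy illustration of the currency-exchange analogy (prices in cents so the arithmetic stays exact; the numbers are made up):

```python
def spread_profit(bid, ask, quantity):
    """Profit from buying at the bid and reselling at the ask.

    Toy version of the currency-exchange analogy: the market maker
    pockets the spread in exchange for holding the stock (and its
    risk) in between. Prices are in cents to keep the sums exact.
    """
    return (ask - bid) * quantity

# Buy 1,000 shares at $10.00 (1,000 cents), resell at $10.02.
assert spread_profit(1_000, 1_002, 1_000) == 2_000  # $20 on the round trip
```

  The profit per share is tiny, which is why market makers rely on enormous volume and very fast resale, and why an algorithm that gets the two sides of the spread backwards loses money at the same furious rate.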

  In August 2012 the New York Stock Exchange started a new Retail Liquidity Program, which meant that, in some situations, traders could offer stocks at slightly better prices to retail buyers. This Retail Liquidity Program received regulatory approval only a month before it went live, on 1 August. Knight Capital rushed to update its existing high-frequency trading algorithms to operate in this slightly different financial environment. But during the update Knight Capital somehow broke its code.

  As soon as it went live the Knight Capital software started buying stocks across 154 different companies on the New York Stock Exchange for more than it could sell them for. It was shut down within an hour but, once the dust had settled, Knight Capital had made a one-day loss of $461.1 million – roughly as much as the profit they had made over the previous two years.

  Details of what exactly went wrong have never been made public. One theory is that the main trading program accidentally activated some old testing code which was never intended to make any live trades – and this matches the rumour that went around at the time that the whole mistake was because of ‘one line of code’. Whatever the case, an error in the algorithms had some very real real-world consequences. Knight Capital had to offload the stocks it had accidentally bought to Goldman Sachs at discount prices and was then bailed out by a group including investment bank Jefferies in exchange for 73 per cent ownership of the firm. Three-quarters of the company gone because of one line of code.

  But that is just the result of some bad programming. And let’s be honest: finance is not the only situation where poorly written code can cause problems. Bad code can cause problems almost anywhere. Automatic-trading algorithms get extra interesting in a financial setting when they start to interact. Allegedly, the complex web of algorithms all trading between themselves should keep the market stable. Until they get caught in an unfortunate feedback loop and a new financial disaster is produced: the ‘flash crash’.

  On 6 May 2010 the Dow Jones Index plummeted by 9 per cent. Had it stayed there, it would have been the biggest one-day percentage drop in the Dow Jones since the crashes of 1929 and 1987. But it didn’t stay there. Within minutes, prices bounced back to normal and the Dow Jones finished the day only 3 per cent down. After a bumpy start to the day, the crash itself happened between 2.40 p.m. and 3 p.m. local time in New York.

  Try to spot where everyone’s heart stopped.

  What a twenty minutes it was. Two billion shares with a total volume of over $56 billion were traded. Over twenty thousand trades were at prices more than 60 per cent away from what the stock was worth at 2.40 p.m. And many of these trades were at ‘irrational prices’ as low as $0.01 or as high as $100,000 per share. The market had suddenly gone mad. But then, almost as quickly, it got a hold of itself and returned to normal. A burst of extreme excitement which ended as fast as it started, it was the Harlem Shake of financial crashes.

  People are still arguing about what caused the flash crash of 2010. There were accusations of a ‘fat finger’ error, but no evidence of this has come to light. The best explanation I can find is the official joint report put out by the US Commodity Futures Trading Commission and the US Securities and Exchange Commission on 30 September 2010. Their explanation has not been universally accepted but I think it’s the best we’ve got.

  It seems that a trader decided to sell a lot of ‘futures’ on a Chicago financial exchange. Futures are contracts to buy or sell something in the future at a pre-agreed price; these contracts can themselves then be bought and sold. They’re an interesting derivative financial product, but the complexities of how futures work are not relevant here. What is relevant is that the trader decided to sell 75,000 such contracts called E-Minis (worth around $4.1 billion) all at once. This was the third biggest comparable sale within the previous twelve months. But while the two bigger sales had been done gradually over the course of a day, this sale was completed in twenty minutes.

  Sales of this size can be made in a few different ways and, if they are done gradually (as overseen by a manual trader), they are normally fine. This sale used a simple selling algorithm for the whole lot, and it was based solely on the current trading volume, with no regard for what the price may be or how fast the sales were being made.
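  A volume-keyed execution rule like that can be sketched in a few lines (the names and the participation rate are illustrative, not the actual algorithm):

```python
def shares_to_sell(volume_last_minute, remaining, participation_rate=0.09):
    """Sell a fixed fraction of recent market volume, ignoring price.

    Illustrative sketch only. Because volume is the sole input, churn
    between high-frequency traders inflates measured volume, which
    makes this rule sell even faster -- a feedback loop.
    """
    target = int(volume_last_minute * participation_rate)
    return min(target, remaining)

# Quiet market: sell 9% of 100,000 contracts' recent volume.
assert shares_to_sell(100_000, 75_000) == 9_000
# Frenzied market: volume explodes, so the rule dumps everything left.
assert shares_to_sell(10_000_000, 75_000) == 75_000
```

  The flaw is visible in the signature: price never appears. When the market panics and volume spikes, the rule reads the panic as permission to sell faster still.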

  The market was already a bit fragile on 6 May 2010, with the Greek debt crisis growing and a General Election taking place in the UK. The sudden, blunt release of the E-Minis slammed into the market and sent high-frequency traders haywire. The futures contracts being sold soon swamped any natural demand and the high-frequency traders began to swap them around among themselves. In the fourteen seconds between 2:45:13 and 2:45:27 over 27,000 contracts were passed between these automatic traders. This alone equalled the volume of all other trading.

  This chaos leaked into other markets. But then, almost as quickly as it started, the markets bounced back to normal as the high-frequency trading algorithms sorted themselves out. Some of them had safety-switch cut-offs built in which suspended their trading when prices moved around too much and would restart only after what was going on had been checked. Some traders assumed something catastrophic had happened somewhere in the world which they had not yet heard about. But it was just the interplay of automatic trading algorithms. The big short-circuit.

  The fly in the algorithm

  I own a copy of the ‘world’s most expensive’ book. Sitting on my desk right now is a copy of The Making of a Fly. It is a 1992 academic book about genetics and was once listed on Amazon at a price of $23,698,655.93 (plus $3.99 postage).

  But I managed to buy it at a pretty serious discount of 99.9999423 per cent. As far as I know, The Making of a Fly never sold for $23 million; it was merely listed at that price. And even if it had sold, a lot of people consider one of Leonardo da Vinci’s journals, which Bill Gates purchased for $30.8 million, as the most expensive book ever sold. Clearly, as well as having a penchant for non-transitive dice, Bill and I also share one for expensive reading material. I believe that The Making of a Fly holds the record for the highest-ever legitimate asking price for a not-one-of-a-kind book. Thankfully, my copy cost me only £10.07 (about $13.68 at the time). And the shipping was free.

  The most expensive book I didn’t pay full price for.

  The Making of a Fly hit its peak price in 2011 on Amazon when new copies were available for sale in the US by only two sellers, bordeebook and profnath. There are systems which let sellers set a price algorithmically on Amazon, and it seems that profnath enacted the simple rule ‘make the price of my book 0.17 per cent cheaper than the next cheapest price’. They most likely had a copy of The Making of a Fly and had decided they wanted to sell it by being the cheapest listing on Amazon, by a small margin. Like a Price is Right contestant who guesses $1 more than someone else, they’re a jerk but they’re within the rules.

  The seller bordeebook, however, wanted to be more expensive by a decent margin, and their rule was probably along the lines of ‘make the price of my book 27 per cent more than the cheapest other option’. A possible explanation for this is that bordeebook did not actually have a copy of the book but knew that if anyone purchased through them they would have enough of a margin to be able to hunt down and buy a cheaper copy which they could then resell. Sellers like this rely on their excellent reviews to attract risk-averse buyers happy to pay a premium.

  Had there been one other book at a set price, this would all have worked perfectly: profnath’s copy would be slightly cheaper than the third book and bordeebook’s would be way more expensive. But because there were only two books, the prices formed a vicious cycle, ratcheting each other up: 1.27 × 0.9983 = 1.268, so the prices were going up by about 26.8 per cent each time the algorithms looped, eventually reaching tens of millions of dollars. Evidently, neither of the algorithms had an upper limit to stop if the price became ridiculously high. Finally, profnath must have noticed (or their algorithm did have some crazy-high limit) because their price went back down to a much more normal $106.23, and bordeebook’s price quickly fell into alignment.
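  The two pricing rules can be simulated directly using the ratios given in the text (the starting price is illustrative, borrowed from profnath's eventual sane listing):

```python
# Ratios from the text: profnath undercuts the cheapest rival slightly;
# bordeebook prices itself 27.0589 per cent above the cheapest option.
PROFNATH_RATIO = 0.9983
BORDEEBOOK_RATIO = 1.270589

profnath = 106.23                       # illustrative starting price
bordeebook = profnath * BORDEEBOOK_RATIO
loops = 0
while profnath < 23_698_655.93:               # the recorded peak price
    profnath = bordeebook * PROFNATH_RATIO    # undercut bordeebook
    bordeebook = profnath * BORDEEBOOK_RATIO  # leapfrog back over
    loops += 1
# Each loop multiplies both prices by 0.9983 * 1.270589, about 1.268,
# so a roughly $106 book passes $23 million in a few dozen loops.
```

  Exponential growth does the rest: with prices compounding by about 26.8 per cent per cycle, and the algorithms likely running daily, it only takes a couple of months to go from a three-figure price to an eight-figure one.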

  The outrageous price for The Making of a Fly was noticed by Michael Eisen and his colleagues at the University of California, Berkeley. They use fruit flies in their research and so legitimately needed this book as an academic reference. They were startled to see two copies for sale at $1,730,045.91 and $2,198,177.95, and every day the prices were going up. Biology research was evidently put to one side as they started a spreadsheet to track the changing Amazon prices, untangling the ratios profnath and bordeebook were using (bordeebook was using the oddly specific ratio of 27.0589 per cent) – once again proving that there are very few problems in life which cannot be solved with a spreadsheet.

 
