Randomistas

by Andrew Leigh


  Sham surgery dates back to 1959, when a group of Seattle doctors became sceptical of a technique used to treat chest pain by tying tiny knots in chest arteries.4 They performed the real operation on eight randomly chosen patients, and simply made incisions in the chests of another nine. The study found that the technique had no impact, and the surgery was phased out within a few years.

  In recent years, sham surgery has shown no difference between a control group and osteoporosis patients who have bone cement injected into cracked vertebrae (a procedure known as vertebroplasty).5 Sham surgery has even been performed by neurosurgeons, who found that injecting fetal cells into the brains of patients suffering from Parkinson’s disease had no more effect than the placebo treatment, in which patients had a small hole, known as a burr hole, drilled into the side of their skulls.6

  The most stunning sham surgery result came in 2013. After the finding that knee surgery didn’t help older patients with osteoarthritis, a team in Finland began to wonder about the knee surgery performed for a torn meniscus, the piece of cartilage that provides a cushion between the thighbone and shinbone. Their randomised experiment showed that among middle-aged patients, surgery for a torn meniscus was no more effective than sham surgery.7 This operation, known as a meniscectomy, is performed millions of times a year, making it the most common orthopaedic procedure in countries such as Australia and the United States.8 While some surgeons acknowledged the enormous significance of the finding, others were not so receptive.9 An editorial in the journal Arthroscopy thundered that sham surgery randomised trials were ‘ludicrous’. The editors went so far as to argue that because no ‘right-minded patients’ would participate in sham surgeries, the results would ‘not be generalizable to mentally healthy patients’.10

  Yet sham surgeries are growing in importance, as people realise that the placebo effect in surgery is probably bigger than in any other area of medicine. A recent survey of fifty-three sham surgery trials found that the treatment outperformed the placebo in only 49 per cent of cases. But in 74 per cent of cases, patients appeared to respond to the placebo.11 In other words, three out of four patients feel that a surgery has made them better, even though half of the evaluated surgeries don’t work as intended. The results suggest that millions of people every year are undergoing surgeries that make them feel a bit better – yet they would feel just as good if they had undergone placebo surgery instead.

  Such a huge placebo effect is probably explained by the fact that surgery is a more invasive procedure than other medical interventions, and by the particularly high status of surgeons. As the joke goes, people are waiting in the cafeteria line in heaven when a man in a white coat cuts in and takes all the food. ‘Who’s that?’ one asks. ‘It’s just God,’ another replies. ‘He thinks he’s a surgeon.’12 Yet the results of sham surgery trials suggest that the profession is far from infallible. For nearly half of the procedures that have been evaluated in this way, the surgeon might as well have started by asking the patient: ‘Would you prefer the full operation, or should we just cut you open, play a few easy-listening tracks and then sew you back up again?’

  Ethical questions will continue to be one of the main issues confronting sham surgery. In the 1990s one surgical text stated baldly that ‘sham operations are ethically unjustifiable’.13 To confront this, researchers have gone to extraordinary lengths to ensure patients understand what is going on. In the Houston knee surgery trial, patients were required to write on their charts: ‘On entering this study, I realize that I may receive only placebo surgery. I further realize that this means that I will not have surgery on my knee joint. This placebo surgery will not benefit my knee arthritis.’ Surgeons explain to each patient that the reason for the randomised trial is that the world’s leading experts truly do not know whether the surgery works, a situation known as ‘clinical equipoise’. Because we are uncertain about the results of the treatment, it is possible that those who get the sham surgery may in fact be better off than those who get the real surgery.

  Despite the advocacy of surgeons such as Peter Choong, sham surgery remains in its infancy. A study of orthopaedic surgeries in Sydney hospitals found that only about one-third of procedures were supported by a randomised trial.14 Sydney surgeon Ian Harris points out that patients sometimes regard aggressive surgeons as heroic and conservative surgeons as cowardly. Yet ‘if you look beyond the superficial you often find that the heroic surgeon will have bad results . . . it is harder, and possibly more courageous, to treat patients without surgery’.15 Harris notes that more aggressive surgeons are less likely to be criticised and less likely to be sued – and get paid a lot more.

  Pittsburgh orthopaedic surgeon John Christoforetti tells of how the randomised evidence led him to advise a patient not to seek knee surgery for a meniscal tear. The man responded by going online and giving the surgeon a one-star rating and a rude comment. The patient firmly believed he needed the operation. ‘Most of my colleagues,’ Christoforetti says, ‘will say: “Look, save yourself the headache, just do the surgery. None of us are going to be upset with you for doing the surgery. Your bank account’s not going to be upset with you for doing the surgery. Just do the surgery.”’16 Sometimes it can be easier to ignore the evidence than to follow it.

  *

  In the Bible, the book of Daniel tells the story of an early medical experiment. King Nebuchadnezzar is trying to persuade Daniel and three other young men from Judah to eat the royal delicacies. When Daniel replies that they would prefer a vegetarian diet, he is told that they may end up malnourished. To settle the matter, the king agrees that for ten days the four young men will eat only vegetables, and will then be compared with youths who have eaten the royal delicacies. At the end of the experiment, Daniel and the other three are in healthier condition, so are allowed to remain vegetarian.

  Daniel’s experiment wasn’t a random one, since he and his colleagues chose to be in the treatment group. But the Bible’s 2200-year-old experiment was more rigorous than the kind of ‘pilot study’ we sometimes still see, which has no comparison group at all.

  In the ensuing centuries, randomised medical trials steadily advanced. In the 1540s French surgeon Ambroise Paré was a battlefield surgeon charged with tending to soldiers who had been burned by gunpowder. For these men, the chances of survival were grim. A few years earlier, in the Battle of Milan, Paré had found three French soldiers in a stable with severe burns. As he recounted in his autobiography, a passing French soldier asked if there was any way of curing them.17 When Paré said there was nothing that could be done, the soldier calmly pulled out his dagger and slit their throats. Paré told him he was a ‘wicked man’. The soldier replied that if it had been him in such pain, he hoped someone would cut his neck rather than let him ‘miserably languish’.

  Now Paré was responsible for an even larger group of burned soldiers. A bag of gunpowder had been set alight, and many Frenchmen had been wounded. He began applying the remedy of the day – boiling oil mixed with treacle. But at a certain point, he ran out of hot oil and switched to an old Roman remedy: turpentine, oil of roses and egg white. The next morning, when he checked the two groups of soldiers, Paré found that those who had been treated with boiling oil were feverish, while those who had received the turpentine (which acted as a disinfectant) had slept well. ‘I resolved with myself,’ he wrote, ‘never so cruelly to burn poor men wounded with gunshot.’

  By the standards of today, Paré’s experiment has its flaws. Suppose he had begun treating the most badly burned soldiers first, and then moved on to those with lighter injuries. In that case, we might expect those treated with oil to be in a worse condition, regardless of the effect of the remedy. Yet while Paré’s study was imperfect, medicine continued to inch towards more careful analysis. Two centuries after Paré, Lind would conduct his scurvy experiment on a dozen patients who were ‘as similar as I could have them’.

  An important step on the road towards today’s medical randomised trials was the notion that patients might be more inclined to recover – or at least to report that they were feeling better – after seeing a doctor. In 1799 British doctor John Haygarth became frustrated at the popularity of a quack treatment known as ‘Perkins tractors’. The tractors were simply two metal rods, to be held against the patient’s body to ‘draw off the noxious electric fluid’ that was supposedly causing the pain. In an experiment on five rheumatic patients, Haygarth showed that wooden rods performed just as well as Perkins tractors, giving rise to the idea of the placebo.18

  The placebo, Haygarth pointed out, was one reason why famous doctors might produce better results than unknown ones. If authoritative doctors evoked a larger placebo effect, he reasoned, then their patients might be more likely to recover, even if their remedies were useless. And indeed, the air of authority was highly prized by doctors of the time, despite the poor quality of their remedies. One of the main treatments used by doctors was bloodletting, which involved opening a vein in the arm with a special knife, and served only to weaken patients.19 It wasn’t until the early 1800s that a randomised trial of bloodletting was conducted on sick soldiers. The result was a 29 per cent death rate among men in the treatment group and a 2 per cent death rate in the control group.20 Medicine’s bloody history is memorialised in the name of one of the discipline’s top journals: The Lancet. Before there was evidence-based medicine, there was eminence-based medicine.21

  In nineteenth-century Vienna, high-status doctors were literally costing lives.22 At a time when many affluent women still gave birth at home, Vienna General Hospital largely served underprivileged women. The hospital had two maternity clinics: one in which babies were delivered by female midwives, and the other where babies were delivered by male doctors. Patients were admitted to the clinics on alternate days. And yet the clinics had very different health outcomes. In the clinic run by midwives, a mother’s chance of death was less than 1 in 20. In the clinic run by doctors, maternal mortality was 1 in 10: more than twice as high. Patients knew this and would beg not to be admitted into the doctor-run clinic. Some would give birth on the street instead of in the doctors’ clinic, because their chance of survival was higher.

  To Ignaz Semmelweis, the doctor in charge of records, the results were puzzling. Because the two clinics admitted patients on alternate days, the health of the patients should have been similar. Indeed, it was almost as though the Vienna Hospital had set up a randomised trial to test the impact of the two clinics – and discovered the doctors were doing more harm than good. In trying to uncover reasons for this, Semmelweis first observed that midwives delivered babies while women lay on their sides, while doctors delivered babies while women lay on their backs. But when the doctors tried adopting side delivery, it didn’t help. Then he noted that when a baby died, the priest walked through the ward with a bell; he theorised that this might be terrifying the other mothers. But removing the priest’s bell also had no impact.

  Then a friend of Semmelweis was poked by a student’s scalpel while doing an autopsy, and died. Noticing that his friend’s symptoms were similar to those of many of the mothers who died, Semmelweis theorised that doctors might be infecting mothers with ‘cadaverous particles’, causing death by puerperal fever. He insisted that doctors wash their hands with chlorine from then on, and the death rate plummeted. Only thanks to Semmelweis and an accidental randomised trial did it become safer to give birth attended by a Viennese doctor than on the streets.

  And yet, like Lind’s findings, Semmelweis’s insistence on hand-washing was rejected by many medical experts of the time.23 The germ theory of disease was yet to be developed. Many doctors were insulted by the suggestion that the hands of gentlemen like themselves were unclean, and by the implication that they were responsible for infecting their patients. After Semmelweis left the Vienna General Hospital, chlorine handwashing was discontinued.

  In the mid-1800s, large elements of medicine remained profoundly unscientific. Addressing the Massachusetts Medical Society in 1860, physician Oliver Wendell Holmes Sr said, ‘I firmly believe that if the whole materia medica [body of medical knowledge], as now used, could be sunk to the bottom of the sea, it would be all the better for mankind, and all the worse for the fishes.’24 As historian David Wootton noted in Bad Medicine, his 2006 book on the history of medical missteps: ‘For 2,400 years patients have believed that doctors were doing good; for 2,300 years they were wrong.’25

  *

  Slowly medical researchers came to rely less on theory and more on empirical tests. At the end of the nineteenth century, diphtheria was the most dangerous infectious disease in the developed world, killing hundreds of thousands of people annually.26 To test the impact of serum treatment, Danish doctor Johannes Fibiger devised a randomised trial.27 As in the Vienna maternity clinics, patients were assigned to alternate treatments on alternate days. He found that patients given the serum were nearly four times less likely to die. The demand for Fibiger’s treatment was so great that in 1902 the Danish government founded the State Serum Institute to produce and supply the vaccine to its citizens.

  In the coming decades, randomised medical trials became more common. In the 1930s researchers suggested that the risk of investigators biasing their results could be significantly reduced if the person administering the drugs did not know which was the control and which was the treatment. Trials in which the identity of the treatments was hidden from both the patient and the administering doctor became known as ‘double-blind’ studies. In one telling, the term came from blindfold tests that the Old Gold cigarette company carried out to promote its products.28

  In the 1940s a randomised trial showed that antibiotics did not cure the common cold.29 A trial in 1954 randomly injected 600,000 US children with either polio vaccine or salt water.30 The vaccine proved effective, and immunisation of all American children began the following year. The 1960s saw randomised trials used to test drugs for diabetes and blood pressure, and the contraceptive pill.31 Strong advocates of evidence-based medicine, such as Alvan Feinstein and David Sackett, argued that the public should pay less attention to the prestige of an expert and more to the quality of their evidence.

  One of the best-known advocates of evidence-based medicine was Scottish doctor Archie Cochrane, whose early training was as a medical officer in German prisoner-of-war camps during World War II. In one camp, Cochrane was the only doctor to 20,000 men. They were fed about 600 calories a day (one-third of what is generally considered a minimum daily intake). All had diarrhoea. Epidemics of typhoid and jaundice often swept the camp. When Cochrane asked the Nazi camp commanders for more doctors, he was told: ‘Nein! Aerzte sind überflüssig.’ (‘No! Doctors are superfluous.’)32 Cochrane was furious.

  But over time, Cochrane’s anger softened. When he considered which British men lived and died, Cochrane came to understand that his medical expertise had little impact. He did his best, but was up against the limits of 1940s therapies. As Cochrane later acknowledged, what little aid doctors could provide was largely ineffective ‘in comparison with the recuperative power of the human body’. This was particularly true when he cared for tuberculosis patients, tending to them in the clinic before officiating at their funerals (‘I got quite expert in the Hindu, Moslem, and Greek Orthodox rites’).

  After the war, Cochrane wrote, ‘I had never heard then of “randomised controlled trials”, but I knew there was no real evidence that anything we had to offer had any effect on tuberculosis, and I was afraid that I shortened the lives of some of my friends by unnecessary intervention.’33 Cochrane realised then that the Nazi officer who had denied him more doctors might have been ‘wise or cruel’, but ‘was certainly right’.

  Reading Cochrane’s memoirs, it is hard not to be struck by his honesty, modesty and tenderness. He jokes that, ‘It was bad enough being a POW, but having me as your doctor was a bit too much.’34 At another point, he tells the story of the night the Germans dumped a young Russian soldier into the ward. The man’s lungs were badly infected; he was moribund and screaming. Cochrane had no morphine, only aspirin, which did nothing to stop the Russian crying out. Cochrane did not speak Russian, nor did anyone else on the ward. Eventually, he did the only thing he could. ‘I finally instinctively sat down on the bed and took him in my arms, and the screaming stopped almost at once. He died peacefully in my arms a few hours later. It was not the pleurisy that caused the screaming but loneliness. It was a wonderful education about the care of the dying.’35

  In the final decades of his life, Cochrane challenged the medical profession to regularly compile all the relevant randomised controlled trials, organised by speciality. In 1993, four years after Cochrane’s death, British researcher Iain Chalmers did just that. Known at the outset as the Cochrane Collaboration – and today simply as Cochrane – the organisation systematically reviews randomised trials to make them accessible for doctors, patients and policymakers. Today, Cochrane reviews are one of the first places that doctors will go when they encounter an unfamiliar medical problem. Chalmers also created the James Lind Alliance, an initiative that identifies the top ten unanswered questions for dozens of medical conditions: its aim is to guide future researchers towards filling in the gaps.

  Thanks to the work of past medical randomistas, new drugs must now follow an established path from laboratory to market. Since the late 1930s, when an experimental drug killed more than a hundred Americans, most countries require initial safety testing to be done on animals. Typically, this involves two species, such as mice and dogs.36 If a drug passes these tests, then it moves into the clinical trial phase. Phase I trials test safety in humans, based on fewer than a hundred people. Phase II trials test the drug’s efficacy on a few hundred people. Phase III trials test effectiveness in a large group – from several hundred to several thousand – and compare it with other drugs. If a drug passes all these stages and hits the market, post-marketing trials monitor its impact in the general population and test for rare adverse effects.

 
