Lewis stopped for a brief moment to greet me, as he stood in his cutaway morning coat at the operating table in the laboratory massaging the heart of a dog with one hand . . . On several occasions I walked along Oxford Street with Lewis back to his lodgings and then returned to the laboratory. Night after night for weeks we measured the time intervals of the electrocardiograms of cats and dogs down to a ten-thousandth of a second under various experimental conditions. He taught me how to burn the midnight oil . . . he was one of the best teachers I have ever had, a hard task master with a brain as sharp as a razor.7
This work culminated in ‘a truly magnificent volume’, The Mechanisms and Graphic Registration of the Heartbeat, 529 pages long with 400 figures and more than 1,000 references.
As a young man Lewis had come under the influence of the leading physiologists of the day, one of whom, E. H. Starling, summed up what was to become the main difference between the Lewis and Horder methods of practising medicine: ‘This is what I regard as the University spirit, not simply diagnosing a patient and deciding what to do for him in order to earn our fee, but what we can get out of his case in order to do better next time.’ Lewis’s biographer, Arthur Holman, elaborates:
All Lewis’s research had this factor of applying the experimental method to clinical problems and over the years he variously called this ‘progressive medicine’, ‘experimental medicine’ until he eventually adopted the phrase ‘clinical science’. He had a passionate belief that clinical science was just as good as any other science, and it would be established as a University discipline . . . one has to remember that in the 1930s in Britain, the concept of a full-time life-long career in clinical research was distinctly unlikely . . . when he started his campaign, full-time research was regarded rather as a refuge for those unable to withstand the strains of a consultant’s life.8
The science in question was essentially the application of the methods of physiological investigation to man. For two hundred years physiologists had been cutting up animals, investigating how their hearts beat and their nerves worked. Now, in the form of clinical science, precisely the same approach was to be applied to patients. Its appeal was obvious. Horder’s medicine of ‘clinical methods’ could not progress. It could be refined and added to, but essentially its knowledge base was grounded in the autopsy room of the late nineteenth century. Clinical science, by contrast, had apparently limitless possibilities: investigating, as Lewis did, the abnormal rhythms of the heart, or, as his youthful protégés such as McMichael did, what precisely happened to the circulatory system following a substantial loss of blood. This was ‘new’ knowledge, out of which might come ‘better’ understanding of disease and perhaps even ‘better’ treatments. This at least was the view that had inspired Thomas Lewis and a handful of others and had culminated in 1935 in the opening of the Postgraduate Medical School at the Hammersmith.
But John McMichael was to take Lewis’s concept of clinical science one small but definitive step further, which probably more than anything else explains what a truly radical departure it was to become. In December 1943, at a meeting at London’s University College Hospital chaired by Thomas Lewis, McMichael presented the research he had been conducting, in which catheters were inserted into the heart to measure the fall in pressure following blood loss. At the end of the presentation Lewis described the work as ‘startling’ and strongly hinted that McMichael should abandon it. ‘The study sent shock waves through medical London, as many physicians regarded the technique as unethical, even immoral.’9
The Rubicon crossed by McMichael at this meeting requires some elaboration because it is so essential to subsequent developments. The technique of manipulating a catheter into the chambers of the heart would rightly be considered ‘asking for trouble’, not to say life-threatening, as it could potentially precipitate a fatal disturbance of the heart rhythm. Further, the knowledge gained from McMichael’s experiment could reasonably be described as ‘trivial’, certainly from a therapeutic perspective, as the treatment of low blood pressure due to haemorrhage simply requires the replacement of blood. The precise mechanism by which the blood pressure falls, so accurately measured by the catheter placed inside the heart, is irrelevant.
This, it seems, was not Lewis’s idea of clinical science, but here was the rub. If clinical science was to progress it certainly could not place internal constraints upon itself, but must be capable of always pushing at the boundaries of the technically feasible. Here, then, is the decisive moment when the focus of medicine shifts from the Horder view of a professional contract – where the doctor’s sole concern is the best interests of the patient as an individual – to one where the welfare of the patient is subordinated to the progress of science. In this new world, patients become ‘interesting clinical material’ on whom the ambitious young doctor performs his experiments with a view to publication in a prestigious medical journal. Here is how one young doctor puts it: ‘A lot of the research you do is of no benefit to patients, and there is a real possibility you can do them harm. So, in order to do research you have got to close your eyes to some extent, or at least take calculated risks with those on whom you run the experiments.’10
Whatever reservations Lewis might have had about his protégé’s aggressive experimental approach, time and again, as medical progress accelerated, it was to be vindicated. Thus, coincident with McMichael’s experiments on the mechanisms of the fall in blood pressure following substantial blood loss, in 1944 Alfred Blalock performed the first ‘blue-baby’ operation to correct the congenital defect of Fallot’s tetralogy, an operation that within a few years would lead to the triumph of open-heart surgery. Surgeons obviously had to know in advance the precise nature of the anatomical defect they were operating on, and the only way this could be done was by using McMichael’s technique of introducing a catheter into the heart, through which a dye could be injected to illuminate the abnormality within. Similarly, Sheila Sherlock’s ‘liver aspiration’ may not have been of much benefit to any of those who underwent it but, soon after the publication of her paper, she was being referred dozens of cases of jaundice. With the accumulating experience of their management, she rapidly became the world’s leading expert in liver disease. As for Eric Bywaters, his meticulous study of ‘crush syndrome’ might have seemed pointless, as all his patients died, but at the end of the war his knowledge of kidney failure attracted to the Hammersmith others like Wilhelm Kolff, whose dialysis machine would in time be able to save those who would otherwise have died from this ‘previously unreported syndrome’.
In the ten years following the end of the war, the situation in which research was regarded ‘as a refuge for those unable to withstand the strains of a consultant’s life’ was completely reversed. Now the ambitious doctor’s only hope of advancement was as an investigative scientist in the mould of John McMichael. Under his leadership the Postgraduate Medical School became the dominant medical institution in the country, driven forward by an extraordinarily optimistic belief in medical progress. The many achievements during this time encompassed an exhilarating range of medical research, including investigating the ‘new’ antibiotics, the ‘wonder drug’ cortisone, the treatment of childhood leukaemia and the study of thyroid function with radioisotopes. From the Hammersmith the gospel of clinical science spread outwards, so that before long every teaching hospital had adopted its tenets.11
The contribution of clinical science to the post-war medical achievement was to create an atmosphere within which it was possible to believe that the most difficult of problems might eventually be soluble.
There is one further critical but neglected aspect of its legacy. It is now very difficult in retrospect to understand how the pioneers in the early years of treatment for childhood leukaemia or the ‘black’ years of renal transplantation carried on despite the enormous suffering they inflicted and the high mortality rate from their interventions. Why did they persevere? This is a complex question but part of the answer lies in Human Guinea Pigs by Maurice Pappworth, published in 1967.12 Pappworth came from Liverpool, where he trained under Lord Cohen, who, like Tommy Horder, was a brilliant clinician. In time Pappworth became the standard-bearer of the tradition of clinical methods and coached several generations of young physicians to pass their postgraduate exams on principles set out in his book A Primer of Medicine, in which he consistently emphasised the superiority, when making a diagnosis, of the traditional clinical skills of history taking and examination over the tests and investigations vigorously promoted by the clinical scientists.13
Pappworth opened his Human Guinea Pigs by citing the views of Sir William Heneage Ogilvie, senior surgeon at Guy’s: ‘The science of experimental medicine is something new and sinister, for it is capable of destroying in our minds the old faith that we, the doctors, are the servants of the patients whom we have undertaken to care for and the complete trust that they can place their lives or the lives of their loved ones in our care.’14
This ‘sinister’ aspect of clinical science, where patients become ‘human guinea pigs’, Pappworth illustrates with numerous examples of experiments on infants, pregnant women, the mentally ill, prisoners, the old and the dying. They are variously cruel, dangerous or purposeless. Here the cardiac catheterisation popularised by John McMichael becomes in the hands of a group of doctors from Birmingham a superior form of torture, in which patients must sit on a bicycle with a mask on their face and catheters coming out of their arms to permit the pressure within their hearts to be recorded. Not a pleasant experience, but as Pappworth points out, the crucial point is that all these patients were seriously ill, suffering from anaemia, an overactive thyroid, or various forms of obstructive lung disease. Not only would they not have benefited from these experiments, but neither would anyone else, because the knowledge acquired usually had little value, other than providing the opportunity for those conducting the experiments to further their careers by writing up the results in a scientific journal.
Human Guinea Pigs outraged academic physicians and Pappworth paid a heavy price. He was ostracised by the medical establishment and denied the Fellowship of the Royal College of Physicians right up until a year before his death. This was the inevitable reverse side of the coin of clinical science, where the necessity for doctors to perform experiments as a requirement for their own advancement led to the sort of degenerate ‘scientism’ that was the antithesis of the Horderian concept of a ‘personal’ relationship between doctor and patient. Nonetheless, this medical ruthlessness was an indispensable requirement when it came to that perseverance necessary for pushing forward the boundaries of medicine. The ideology of clinical science encouraged a sort of emotional disconnectedness, without which the pioneers would never have persisted with their experimental therapies.
3
A CORNUCOPIA OF NEW DRUGS
The newly qualified doctor setting up practice in the 1930s had a dozen or so proven remedies with which to treat the multiplicity of different diseases he encountered every day: aspirin for rheumatic fever, digoxin for heart failure, the hormones thyroxine and insulin for an underactive thyroid and diabetes respectively, salvarsan for syphilis, bromides for those who needed a sedative, barbiturates for epilepsy, and morphine for pain. Thirty years later, when the same doctor would have been approaching retirement, those dozen remedies had grown to over 2,000. The medical textbook he had bought as a student – the first edition of Cecil’s Textbook of Medicine, published in 1927 – had, by the time he purchased the fourteenth edition in 1960, changed out of all recognition, as its chief editor Paul Beeson subsequently observed:
In going through the first edition, one cannot fail to be impressed by the paucity of available drugs. Many medicines used in 1927 have simply disappeared, such as strychnine, the arsenicals, tincture of capsicum, tincture of ginger, dilute hydrochloric acid, boric acid, and bromide preparations. Only about thirty drugs mentioned in the first edition are still in use today.
Dr Beeson then proceeds to enumerate the therapeutic cornucopia of new drugs that are mentioned in the fourteenth edition. They include:
. . . 86 anti-infective agents, 5 antihistamines, 10 synthetic steroids, 35 other hormone preparations, 9 drugs affecting blood coagulation, 13 anti-epileptic drugs, 31 cytotoxic or immunosuppressive agents, 18 analgesics, 11 sedatives, 39 drugs affecting the autonomic nervous system, 15 nutrients, 11 diuretics and 7 new preparations for the treatment of poisoning.1
In parallel with this massive increase in the range of treatments our doctor was now able to prescribe, his perception of the role of medicine had utterly changed. He would have been, when qualifying in the 1930s, a ‘therapeutic nihilist’, not only aware that there was little to offer his patients, but doubtful that there ever would be. He had, after all, spent time in the autopsy room and seen the terrible ravages of disease on human organs, against which no remedy could prevail. As the great William Osler, Regius Professor of Medicine at Oxford from 1905 to 1919, had expressed it: ‘We work by wit and not by witchcraft, and while our patients have our tenderest care and we must do what is best for the relief of their sufferings, we should not bring the art of medicine into disrepute by quack-like promises to heal or by attempts to cure “continuate and inexorable maladies”.’2
Osler’s profoundly influential views were themselves the outcome of the struggle, going back to the 1830s, to purge from the practice of medicine dubious and unproven remedies. For Osler the purpose of medicine was not to make people better, which was unrealistic, but rather to correctly diagnose what was amiss and to give a prognosis as to the likely outcome of the illness. Thus pneumonia he described as being ‘a self-limiting disease, which can neither be aborted nor cut short by any known means at our command. The young practitioner must bear in mind that patients are more often damaged than helped by the promiscuous drugging that is only too prevalent.’
Our doctor had watched this intellectually rigorous but nihilistic view of medicine’s possibilities melt away almost before his eyes, as every year brought new and extraordinary drugs to treat the previously untreatable. He had long since ceased to be a therapeutic nihilist; now his expectation – and that of his patients – was that for virtually every ill there should be a pill.
So what transformed the paucity of remedies of the 1930s into the cornucopia of the 1960s? It is natural to assume there must have been some scientific development that made it possible for scientists to design chemicals to correct the defects of function caused by disease. But that is not what happened. Rather, as illustrated time and again in the account of the ‘definitive moments’, most drugs were discovered by accident: Fleming’s chance observation that led to penicillin; Hench’s surprising discovery of the astonishing effects of cortisone in rheumatoid arthritis; or Laborit’s astute perception of the ‘euphoric quietude’ in his surgical patients, which led to chlorpromazine. Alternatively, drugs used to treat one condition were ‘accidentally’ found to relieve another, or ‘accidentally’ found to have side-effects that could be turned to therapeutic advantage. Even drugs that emerged from screening programmes were ‘accidental’, because it could not have been anticipated which few out of the hundreds of thousands of chemicals tested might prove to be effective against tuberculosis or cancer. Indeed, the origins of virtually every class of drug discovered between the 1930s and the 1980s can be traced to some fortuitous, serendipitous or accidental observation. It could not have happened any other way, for the following reasons.
It is obvious that drugs, being chemicals, must work by interfering in some way with the chemical composition of cells, either the constituents of the walls that surround them, the process of manufacturing proteins within them, or perhaps the chemical transmitters that connect the function of one cell to another. Clearly then, if a chemist were to intentionally design a drug for the treatment of some illness he would have to know at a cellular level the defect that, hopefully, his chemical would correct. For this he would have to know something about the microscopic world of the cell, but – and it is an astonishing thing – during the period of the therapeutic revolution the knowledge of how the cell worked was virtually non-existent.
So, if the impetus for the therapeutic revolution could not come from an understanding of the chemistry of the cell and how it might be changed by drugs, then it had to come from the other side of the equation – the chemistry of the drugs themselves. Here the situation was very different. By the 1930s chemistry was a highly sophisticated science, in which it was possible to determine the composition of any chemical, the varying amounts of carbon, hydrogen, oxygen and sulphur it contained, its structure and how its atoms were bound together, and above all how one chemical could be changed into another.
In essence, the therapeutic revolution started with a ‘lead’ – a chance observation that some chemical seemed to have some effect on a disease. Then the research chemists played around with it, the fecundity of chemistry being such that it is possible to synthesise literally thousands of related compounds from a single lead. Then it was time to experiment, giving the chemicals to those suffering from an illness (or an animal ‘model’ in which it is simulated) to see what happens. The range of chemical variations is so vast that even if there is little understanding of what is wrong at a cellular level or how a synthesised chemical might put it right, the likelihood is that sooner or later one is going to hit the jackpot.
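A back-of-the-envelope calculation makes the logic of this ‘jackpot’ plain. The figures in the sketch below are purely illustrative assumptions, not drawn from the text: suppose each analogue synthesised from a lead independently has some small chance of showing useful activity, and a few thousand analogues are tested.

```python
# A purely illustrative sketch: the probabilities and counts below are
# assumptions chosen for the arithmetic, not figures from the text.
p_hit = 1 / 1000   # assumed chance that any one analogue shows activity
n = 5000           # assumed number of analogues synthesised from one lead

# Probability that at least one of the n analogues is active,
# treating each test as an independent trial.
p_at_least_one = 1 - (1 - p_hit) ** n
print(f"P(at least one hit) = {p_at_least_one:.3f}")  # about 0.993
```

On these assumptions, even a one-in-a-thousand chance per compound gives better than 99 per cent odds of at least one success, which is why elaborating thousands of variants from a single chance ‘lead’ could pay off without any understanding of the cell.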
This is not to suggest that this process is ‘unscientific’. On the contrary, science is involved at every stage. Chemistry is a sophisticated science and the methods for synthesising new chemical compounds require great skill and ingenuity. The investigation and assessment of the effect of chemicals in altering symptoms of disease requires a rigorous and systematic scientific method. But the crucial point remains that the identification of the original ‘lead’ could only come about by accident.