The expansion of the range of treatments required the creation of four entirely new categories of specialist: gastroenterology (the gut), endocrinology (hormones), medical oncology (cancer) and clinical pharmacology (drugs). The numbers of kidney specialists increased almost four-fold to cope with the rising demands for dialysis and transplantation; the number of cardiologists almost doubled, primarily because of the widening range of treatments for coronary heart disease; the number of haematologists quadrupled to treat the now curable leukaemias and lymphomas and other cancers of the blood; the number of psychiatrists almost doubled because mental illness was now a treatable disorder, and so on.1
Such statistics by themselves cannot convey the optimism and enthusiasm of this period, which requires one to imagine the situation in an ordinary district hospital staffed by thirty or so consultants – general physicians, surgeons, anaesthetists, paediatricians and pathologists. From 1970 onwards the hospital expands, recruiting specialists with an interest in the new treatments of, for example, childhood cancer, kidney failure and heart disease. A further ophthalmic surgeon is employed who, trained in the use of the operating microscope, can perform the new ‘intraocular lens implant’ operation following cataract surgery. There is also another orthopaedic surgeon, who may have spent time learning from John Charnley how to do hip replacements. These new consultants, young and bright and keen to make their mark at the cutting edge of their specialties, then set about building up their departments, acquiring new equipment – for the gastroenterologist a couple of endoscopes, for the cardiologist pacing devices and an echocardiography machine to examine the internal structure of the heart. Further, the hospital now has the facilities and expertise, thanks to its newly built intensive-care unit, to deal with the sort of seriously ill patients, such as those with kidney failure or following major trauma, who previously would have needed to be referred elsewhere. Medicine has matured into a highly sophisticated enterprise, with the intellectual energy and resources to deal with the whole range of human illness.
And yet just as medicine’s dramatic upward spiral started quite suddenly after the war, it was, by the close of the 1970s, almost equally suddenly coming to an end. For, as Thomas Mann has his hero observe so acutely in his great novel Buddenbrooks, the light of the stars we see shining brightly in the heavens has taken millions of years to reach us, by which time the energy that originally generated it has become exhausted. Similarly, the light of medicine’s success that was now shining so brilliantly was generated by the scientific endeavour of the previous thirty years. Where were the new ideas, the fresh shoots of research and innovation that would maintain that momentum?
Several apparently disconnected events combined to suggest that the seemingly relentless onward and upward march of medical progress had come up against some invisible barrier. Thus in 1978 Colin Dollery, Professor of Clinical Pharmacology at the Postgraduate Medical School, chose The End of an Age of Optimism as the title of his monograph for the prestigious Rock Carling Fellowship. The Postgraduate Medical School, it will be recalled, had in the post-war years been the nursery for the revolutionary new creed of ‘clinical science’. Dollery had joined the staff in 1960 as John McMichael’s protégé, investigating the drugs that would very soon, by making hypertension a treatable disease, permit the prevention of strokes. Simultaneously, in the same hospital, the cardiothoracic surgeon Bill Cleland was performing the first open-heart surgery in Britain and Ralph Shackman was doing the first kidney transplants. In his monograph, Professor Dollery wistfully recalls these momentous times and then turns his attention to the realities of 1978:
Problems seem larger, and solutions to them more elusive . . . the morality and cost-effectiveness of scientific medicine has been challenged . . . many people, including some of the most senior of the medical research hierarchy, are pessimistic about the claims of future advance. The age of optimism has ended.2
Why was the ‘age of optimism’ coming to an end?
In the following year James Wyngaarden, in his presidential address to the Association of American Physicians in Washington, DC, suggested an explanation encapsulated in the title of his speech ‘The Clinical Investigator as an Endangered Species’: ‘There has been a declining interest in medical research amongst medical students and young doctors for several years,’ he claimed, ‘clearly visible to the heads of professorial departments who increasingly find the recruitment pool smaller each year.’ This trend, said Wyngaarden, could be clearly seen in the falling numbers of traineeships awarded by the National Institutes of Health to doctors wishing to undertake postdoctoral research. Over the previous ten years it had declined by a half, from a peak of 3,000 in 1968 to a mere 1,500.3
A year later came the first recognition that the great bastion of the post-war therapeutic revolution, the pharmaceutical industry, was also in trouble. There was ‘A Dearth of New Drugs’, the editor of the prestigious science journal Nature observed.4 In a more detailed analysis, Dr Fred Steward of Birmingham’s Aston University observed that the rate of introduction of ‘new chemical entities’ (NCEs), i.e. genuinely new drugs, had dropped off sharply from over seventy a year in the 1960s to less than twenty in the 1970s. ‘The identification of many biologically important chemicals starting after the war may have created for a period a fruitful basis for innovation but has subsequently showed diminishing returns,’ he observed. Nor was it just proving harder to come up with genuinely new drugs. An analysis of the most recent NCEs found only a third that seemed to offer even ‘moderate therapeutic gain’.5
There was one further ominous sign. For almost 100 years physicians, surgeons and family doctors who wanted to stay in touch with the most recent developments in medicine had subscribed to The Medical Annual, which, as its name implies, provided every year a summary of the most recent innovations. This was a prestigious and authoritative publication whose editor, Sir Ronald Bodley-Scott, was ‘a man of penetrating intelligence, great clinical skill and an inordinate capacity for hard work’, renowned for tackling ‘the supreme challenge of treating childhood leukaemia’. It was an honour to be invited by Sir Ronald to contribute to The Medical Annual, which thanks to vigorous editing was both lucid and informative. Its purpose was obviously to keep doctors ‘up to date’, but it has subsequently acquired the status of an important historical document, serving as a contemporaneous commentary on the major moments of post-war medicine as they unfolded.6
Then in 1983 the format of The Medical Annual suddenly changed. It no longer aspired to keep its readers up to date and at the cutting edge of medical ideas. Its contents became ‘educational’, directed primarily to general practitioners and their trainees, with articles such as ‘Patient Participation in General Practice’ and ‘Changing Behaviour – Smoking’. In this dreary and attenuated form it limped on for the next few years before finally expiring.
So, in the five years since Colin Dollery had declared the ‘Age of Optimism’ to be ‘coming to an end’, the president of the Association of American Physicians had declared the clinical scientist to be an ‘endangered species’, Nature had drawn attention to ‘the dearth of new drugs’ and The Medical Annual had abandoned its role as the bulletin of the post-war medical achievement. The significance of these events is not difficult to grasp. The main pillars of the post-war medical achievement – clinical science, medicinal chemistry and, as will be seen, for rather different reasons, technological innovation – were in trouble. The implications were obvious. The relentless rise of the post-war years was coming to an end. This pivotal moment in the history of post-war medicine has until now hardly been commented on. Clearly it merits further examination.
2
THE DEARTH OF NEW DRUGS
When, in 1995, Richard Wurtman of the Massachusetts Institute of Technology reviewed the record of the previous fifty years of drug innovation, he observed: ‘Successes have been surprisingly infrequent during the past three decades. Few effective treatments have been discovered for the diseases that contribute most to mortality and morbidity.’ Whereas the number of NCEs was running at around seventy a year throughout the 1960s, by 1971 it was down to less than thirty a year, a position from which it has never recovered. Nonetheless, even thirty new drugs a year might still seem a respectable rate, as cumulatively they would be expected to have a significant impact on many illnesses. But it is not that simple. Many of the ‘new’ drugs introduced since the early 1970s were just more expensive treatments for diseases already taken care of by older and cheaper medicines.1
The common explanation for this decline in innovation was the tightening of safety regulations in the aftermath of the thalidomide disaster. It is difficult nowadays to imagine just how non-existent such regulations used to be. The Parisian psychiatrists Delay and Deniker started treating their schizophrenia patients with chlorpromazine within months of its having been synthesised by the pharmaceutical company Rhône-Poulenc. As for its most important derivative, the antidepressant imipramine, only a few weeks elapsed between its synthesis and first administration to patients, without any toxicity tests, any study of its pharmacology in the body or any formal clinical trials. But imipramine was one of the last drugs introduced in this way. In 1961 reports started appearing, first in West Germany and Australia and then from around the world, of children being born without arms and legs whose mothers had been prescribed the sleeping pill thalidomide early in pregnancy. Missing limbs are a very prominent deformity and the pictures of thalidomide victims as they grew up over the next twenty years acquired in the public imagination a sort of symbolic significance, a metaphor of the negligence and avarice of the pharmaceutical industry.2
The momentum to introduce legislation requiring stricter testing of new drugs became unstoppable. In Britain from 1969 onwards initial toxicity testing in animals became mandatory, followed by several stages of clinical trials in humans before the drug could be approved for release to the general public. This naturally made the whole process of innovation much more complicated and therefore expensive and, claimed the pharmaceutical industry, unnecessarily so. It is never possible to be absolutely certain whether the results of toxicity tests of drugs on animals are necessarily applicable to humans, so in order to at least give the appearance of thoroughness – as inevitably some drugs will cause unexpected side-effects – the regulatory authorities insisted the pharmaceutical industry produce enormous quantities of data. This was a prolonged business. By 1978 the ‘development time’ for each new drug had increased to around ten years, while the ‘development costs’ had escalated from £5 million in the 1960s to £25 million in the mid-1970s to a staggering £150 million by the 1990s. Inevitably this acted as a disincentive to innovation and, it is alleged, several useful drugs were ‘lost’ on the way, having failed one or other of the required toxicity tests. This close correlation between the rise in regulation and the decline in the rate of innovation is self-evident, and one has to presume that over-regulation had, if not exactly killed off the golden goose, certainly reduced her production of golden eggs.3
There was, however, an equally important explanation for the ‘dearth of new drugs’. The most extraordinary aspect of the post-war therapeutic revolution is how it occurred in the absence of the most basic understanding of disease processes – of what, for example, was happening to cause the airways to constrict during an attack of asthma, or the functioning of the neurotransmitters in the brains of patients with schizophrenia. This ocean of ignorance had been bridged by the facility with which pharmaceutical research chemists could synthesise chemical compounds in their millions, which could then be investigated for any potential therapeutic effect.
But the pharmaceutical companies realised that sooner or later they would start to run out of new chemicals to test in this way. From the mid-1960s onwards there was a hope that it should be possible to replace this rather crude method of drug discovery with something altogether more elegant and ‘scientific’. Certainly pharmaceutical researchers now knew much more about the biochemical workings of the cell and had identified many of the chemical transmitters in the brain and elsewhere by which one cell communicated with another. So it seemed much better, rather than stumbling around in the dark hoping to chance upon some unexpected discovery, to exploit this new-found knowledge and deliberately design drugs to fulfil a defined function. This approach was not exactly new. George Hitchings and Gertrude Elion had discovered a whole string of drugs such as azathioprine by purposefully designing drugs to interfere with the synthesis of DNA. But it was Sir James Black’s two classic discoveries, first of propranolol (which blocked the beta receptors in the heart, thus relieving the symptoms of angina) and then of cimetidine (which blocked the histamine receptors in the gut, thus reducing the amount of acid secretions and allowing ulcers to heal), that convinced many that the future lay with ‘designing drugs’.4
In a curious paradox, this ‘scientific’ approach to drug discovery has turned out to be much less fruitful than was hoped, particularly when compared to the blind, random methods it was intended to replace. The philosophical rationale was that if the problems of human disease could be explained at the most fundamental level of the cell and its genes and proteins, it should then be possible to correct whatever was wrong. Though intuitively appealing, this approach presupposes that it is actually possible, given the complexity of biology, to ‘know’ enough to be able to achieve this. By contrast, the earlier mode of drug discovery, blind and dependent on chance as it might be, did at least allow for the possibility of the unexpected. Or, put another way, this scientific approach to drug discovery could never have led to penicillin or cortisone.
It would be wrong to suggest that the scientific road to discovery from the mid-1970s onwards has not produced some genuinely useful drugs. Its successes include, most recently, a vaccine against the chronic liver infection hepatitis B, and ‘triple therapy’ for the treatment of AIDS.5 But by the mid-1990s the list of the top ten ‘blockbuster’ drugs – the ones that generate the billions of dollars of revenue that sustain the industry’s profitability – featured, for the most part, new or more expensive variants of the antibiotics, anti-inflammatories and antidepressants originally introduced twenty or more years earlier.6 They might well be more effective, have fewer side-effects or be easier to take, but with the occasional exception none can be described as making a significant inroad into previously uncharted therapeutic areas in the way that the discovery of chlorpromazine, for example, transformed the treatment of schizophrenia. There was enormous optimism that biotechnology might generate a further cornucopia of new drugs, but, again with the occasional exception, these compounds – insulin, growth hormone, factor VIII – turned out to be no better therapeutically than those they replaced. They are certainly a lot more expensive.
The most striking feature of many of the most recently introduced drugs is that there is considerable doubt about whether they do any good at all. Thus, there was much hope that the drug finasteride, ‘scientifically designed’ to block the metabolism of testosterone and thus shrink the size of the prostate, would reduce the need for an operation in those in whom the gland is enlarged. This would indeed have been a significant breakthrough but, as an editorial in the New England Journal of Medicine observed, ‘the magnitude of the change in symptoms [of patients] is not impressive’.7 Similarly, a new generation of drugs for the treatment of epilepsy, based on interfering with the neurotransmitter GABA, was dismissed by an editorial in the British Medical Journal as having been ‘poorly assessed’, with no evidence that they were any better than the anti-epileptic drugs currently in use.8 New treatments for multiple sclerosis and Alzheimer’s disease appear to offer such marginal benefits that their ‘clinical cost-effectiveness falls at the first hurdle’.9
Frustrated at the failure to find cures for serious diseases like cancer and dementia, the pharmaceutical industry has been forced to look elsewhere for profitable markets for its products. This explains the rise of so-called ‘lifestyle’ drugs, whose prime function is to restore those social faculties or attributes that tend to diminish with age: Regaine for the treatment of baldness, Viagra for male impotence, Xenical for obesity and Prozac for depression. The pharmaceutical industry may have blamed the ‘dearth of new drugs’ on over-regulation, but the problem seems to run much deeper. It should still have been able to come up with genuine breakthrough drugs irrespective of the new, stringent regulatory requirements, but despite investment in research on a scale greater by orders of magnitude than that of the halcyon days of the 1950s and 1960s, they have not materialised. This dispiriting analysis is vulnerable to the charge of oversimplification, but it is confirmed by the one truly objective measurement of the fortunes of the pharmaceutical industry – its performance in the marketplace. Thanks to the ‘blockbuster drugs’, the industry remained profitable, but the twin pressures of massive research costs (£6 billion was spent by the top ten companies in 1994 alone) and the imminent prospect that the patent protection on many of the more profitable products would expire around the time of the millennium undermined the viability of many previously gilt-edged companies, leaving them no alternative other than to submerge their identity in a rash of massive billion-pound mergers: Glaxo with Wellcome, Smith, Kline & French (SKF) with Beechams, Upjohn of the United States with Pharmacia of Sweden, Sandoz with Ciba, and so on.10 Reflecting on this merger mania, John Griffin, formerly director of the Association of the British Pharmaceutical Industry, has observed: ‘These companies are “ideas poor”, resorting to finding new uses and novel delivery systems for established active products whose patent expiry is imminent . . . real innovations are very obviously not coming from those companies involved in merger mania, whose management currently appears unable to think radically or constructively.’11