The contrasting fortunes of the pharmaceutical industry before and after the 1970s are underpinned by the profound paradox of an apparent inverse relationship between the scale of investment in research and drug innovation. Recognising this, the pharmaceutical industry in the early 1990s decided to reorient its approach to drug discovery, using automated techniques to screen millions of chemical compounds for their biological activity, hoping to identify the ‘lead compounds’ that might have the sort of genuinely novel therapeutic effect that could form the basis for new drugs. This reversion – albeit with techniques much more sophisticated than in the past – to the process by which the important drugs of the 1940s and 1950s were discovered is obviously highly significant, though whether it will ‘deliver the goods’ remains to be seen.12
3
TECHNOLOGY’S FAILINGS
Fire was the ‘original technology’, acquired for man by Prometheus, who had stolen it from the gods. Zeus was not amused and directed that Prometheus be bound to a rock with chains, to be visited there daily by an eagle who fed off his liver. The punishment may seem a bit harsh, but in one sense Zeus was right: technology is double-edged. It confers prodigious powers, yet such power can also be enslaving, controlling the actions of those who possess it.
Technology was out of step with the major trends of the End of the Age of Optimism. The 1980s were an important decade: for diagnostic imaging (with important developments in CT and MRI scanning, ultrasound and similar techniques);1 for ‘interventional radiology’ (with angioplasty, the dilation of narrowed arteries with plastic catheters);2 and for ever more sophisticated methods of endoscopy, culminating in the remarkable technical achievement of minimally invasive surgery.3
Nonetheless, against the background of these innovations, the general and probably correct perception of medical technology is that it is out of control. The discussion that follows examines the consequences with three examples in ascending order of seriousness: firstly ‘over-investigation’ (the overuse of diagnostic technology); secondly, the false premises, and promises, of foetal monitoring; and lastly, the role of intensive care in needlessly prolonging the process of dying.
The Misuse of Diagnostic Technology
The ever-perceptive Peter Medawar, Nobel Prize winner for his contribution to transplantation, observed that when people spoke about the ‘art and science’ of medicine they invariably got them the wrong way round, presuming the ‘art’ to be those aspects that involved being sympathetic and talking to the patient, and the ‘science’ to be the difficult bit of interpreting the results of sophisticated tests that permits the correct diagnosis to be made. The reverse is the case, argued Medawar. The real ‘science’ in medicine is the thorough understanding of the nature of a medical problem that comes from talking at length to the patient, and performing a physical examination to elicit the relevant signs of disease. From this old-fashioned, Tommy Horder style of medicine it is usually possible to infer precisely what is wrong in 90 per cent of cases. By contrast, the technological gizmos and arcane tests that pass for the ‘science’ of medicine can frequently be quite misleading. The logic of Medawar’s argument leads to the playful paradox that the more tests doctors can do, the less ‘scientific’ (in the sense of generating reliable knowledge) medicine becomes. And throughout the 1970s doctors did ‘do’ more tests, twice as many at the end of the decade as at the beginning, resulting in the description of an entirely new syndrome of ‘medical vampirism’, in which so much blood was taken from patients for tests while they were in hospital that they became anaemic, in some instances requiring a blood transfusion.4 ‘The comforting, if spurious, precision of laboratory results has the same appeal as a lifebelt to the weak swimmer,’ an editorial in The Lancet noted in 1981, before going on to enumerate the several reasons why doctors performed so many unnecessary tests: there was the ‘just-in-case test’ requested by junior doctors ‘just in case’ the consultant might ask for the result; the ‘routine test’ whose results hardly ever contributed to the diagnosis; and the ‘ah-ha test’ whose results were known to be abnormal in certain conditions and which was ordered ‘to advertise the cleverness of the clinician’.5
This fetishisation of technical data was part of a more generalised phenomenon where the modern physician had become a doctor with technically specialised diagnostic skills. Thus it was no longer sufficient for the gastroenterologist to know a lot about gut diseases; he had also to be skilled in passing the endoscope down into the stomach and up into the colon. Nor was it sufficient for the cardiologist to rely on his traditional skills with the stethoscope, as he also had to acquire the necessary manipulative skills of the ‘catheter laboratory’, passing catheters into veins and arteries to measure the pressures within the heart.
There is, of course, no reason why gastroenterologists or cardiologists should not possess these skills, but they can easily become an end in themselves, a way of gathering information that might just as well be gleaned by simpler means. There is, for example, little difficulty in establishing the diagnosis of a peptic ulcer by the traditional clinical methods of taking a history and examining the patient, but for the modern gastroenterologist any patient with stomach pains merits an endoscopy to visualise the ulcer, as well as a further endoscopy after treatment to see if it has healed. This inappropriate use of investigational techniques was, argued one of their number, Michael Clark of St Bartholomew’s Hospital, a sign of intellectual degeneration. ‘The young men of the 1960s became gastroenterologists because it was an expanding speciality with an intellectual challenge to understand more about the gut and apply this to clinical practice,’ he wrote, ‘but the young gastroenterologist of today is only happy if he can learn another endoscopic technique: the excitement of the 1960s has been replaced by the decade of the Peeping Tom.’6
The great virtue of endoscopy for the gastroenterologists was that it earned them a lot of money. The standard fee for a private consultation in Britain is around £100, but if the specialist throws in an endoscopy, graded by the insurance companies as an ‘intermediate operation’, he can make four times that sum. (In private medical systems such as that of the United States, the endoscope and ‘catheter lab’ generate 80 per cent of the specialist’s income.) This phenomenon of ‘over-investigation’ – the performing of large numbers of tests on patients whose medical problems are quite straightforward – may seem a fairly trivial matter, but it is costly and, more seriously, it introduces an alien element into the medical encounter, downgrading the importance of wisdom and experience in favour of spurious objectivity.
Foetal Monitoring: Technology and a Shot in the Foot
The success of technology in so many fields of medicine encouraged doctors to believe there must be a technical solution to every problem; that, for example, foetal monitoring during labour would prevent death or damage to the baby. The argument was as follows: the shift from home to hospital deliveries had coincided with a decline in both maternal and infant mortality rates, from which one might quite naturally infer that, thanks to medical intervention, childbirth was becoming ever safer for both mother and baby. Nonetheless, babies still died during labour (approximately 3,000 a year in the United States) while several times that number (approximately 15,000) were born with severe forms of brain damage, such as cerebral palsy. Such misfortunes, it was legitimate to presume, arose because the foetus was deprived of oxygen during the stress of labour, so further medical intervention to determine when it was ‘distressed’ might act as a red-alert system, prompting an emergency Caesarean to avert disaster. ‘Since the stress of labour is clearly capable of causing foetal death, it seems not unreasonable to assume that labour may also be a factor in producing brain damage,’ observed two protagonists of this view, obstetricians Edward Quilligan and Richard Paul of the University of Southern California in 1974.7 The inference was indeed ‘not unreasonable’, and appeared to be supported, they pointed out, by crude experiments on monkey foetuses which, while still within the womb, were deprived of oxygen by separating the placenta from the side of the mother’s uterus. Following birth, they were killed and their brains examined, apparently revealing a particular pattern of damage ‘identical to that seen in human subjects who are afflicted with cerebral palsy’.8
Two technological developments in the late 1960s would, it was hoped, by improving on the traditional methods of assessing ‘foetal distress’, alert the obstetrician to the possibility that the foetus was being deprived of oxygen and thus prevent the catastrophe of cerebral palsy. The first was a monitor strapped to the mother’s abdomen to give a continuous read-out of the heart rate of the foetus, providing objective evidence of the rapid ‘accelerations’ or ‘decelerations’ that can occur when the foetus is in trouble. The second was a needle placed in the baby’s scalp soon after labour had begun, just as it was starting its descent down the birth canal, through which small quantities of blood could be withdrawn and their acidity measured, a useful warning sign that the baby was being deprived of oxygen and thus vulnerable to brain damage. Clearly the initial costs of purchasing the necessary equipment and training the nursing staff would be considerable – estimated at around $100 million for the United States – but, argued Quilligan and Paul, this would be offset by financial savings – estimated at $2 billion – in the long-term care of brain-damaged children if their numbers were to be halved by foetal monitoring technology.9
Throughout the 1970s, obstetricians, convinced by these compelling arguments, introduced foetal monitoring on a wide scale, only to elicit a strong backlash from the ‘natural childbirth’ movement representing the interests of pregnant women. The problem was that no matter how plausible the arguments might be in its favour, foetal monitoring has a seriously adverse impact on many women’s experience of labour. The mother’s mobility has to be severely restricted for the monitor readings to be reliable, requiring her to lie flat on her back for long periods. Meanwhile she might have one arm connected to an intravenous drip, while a cuff is strapped to the other to keep an eye on her blood pressure. She is in effect immobilised. Such irksome restraint imposed by foetal monitoring is also unphysiological and, by denying the mother the opportunity to move around freely and adopt different positions, prolongs labour unnecessarily.
And so to the crucial question, did it work? Yes, claimed Quilligan and Paul, markedly reducing complications during labour, albeit at the cost of a considerable increase in the numbers of births by Caesarean section, as the monitor tended to be ‘oversensitive’, producing readings suggesting the baby was in distress when it was not.10
The more that time passed, the less convincing these results seemed to be. Foetal monitoring was not quite the exact science its protagonists had claimed, failing (it emerged) to detect 84 per cent of the babies who suffered some degree of oxygen deprivation during birth, while ‘conversely most of the infants who were thought to be in foetal distress were vigorous’. By the early 1980s the British Medical Journal, in marked contrast to its enthusiastic endorsement of the aspirations of foetal monitoring a decade earlier, had become disillusioned by its many technical difficulties. ‘The foetal heart rate pattern correlates poorly with the acid-base balance (the acidity of the blood obtained through the scalp needle) . . . foetal outcome depends not only on the correct interpretation of data but also on appropriate action by the staff in the obstetric unit.’11
The vogue for foetal monitoring would, like other medical fashions, probably have slowly withered away, were it not for the intervention of the lawyers. The drawback of foetal monitoring, which was not well appreciated when it was first claimed to prevent ‘adverse outcomes’ such as cerebral palsy, is that when children are born so affected it is ‘not unreasonable’ for the parents to assume negligence on the part of the obstetrician for failing to act on the evidence of an ‘abnormal’ heart reading (and in court virtually any reading, in the hands of a hostile expert witness, could be shown to be ‘abnormal’, undermining the original claims that it provided an objective assessment of the child’s progress).
In Britain between 1983 and 1990 the number of cases where such negligence was alleged tripled, as did the scale of the financial compensation paid out, an average of £700,000 per case. Litigation against obstetricians, who constitute only 2.5 per cent of medical practitioners, now accounts for 30 per cent of the legal costs and damages sustained by the profession.12
This is clearly a most invidious situation. The birth of each and every ‘less than perfect’ child can, with the help of a clever lawyer, be blamed on the negligence of the obstetrician in charge. Their only defence is to deny the rationale upon which foetal monitoring had originally been conceived, that oxygen deprivation at birth is a common and preventable cause of brain damage – which it is not. While the maternal and foetal mortality rates have fallen continuously from the 1950s onwards, the number of cases of cerebral palsy has remained virtually unchanged. This can only mean that the majority of cases – probably 90 per cent – of cerebral palsy cannot result from events occurring during childbirth, but must be caused by some abnormality of the development of the brain much earlier in pregnancy. The whole episode had been ‘a catastrophic misunderstanding’, according to one obstetric journal, where the expectation that foetal monitoring could prevent brain damage in children was based on ‘false analogy and assumptions’. Obstetricians had ‘shot themselves in the foot’.13
The most curious aspect of this saga is that right from the beginning dispassionate observers had warned obstetricians of the ‘false assumptions’ behind foetal monitoring, and indeed these should have been clear to obstetricians themselves. They would have known from their personal experience that not all babies subsequently shown to have cerebral palsy had experienced particularly difficult or complicated labours; but the profession was seduced into thinking otherwise by the promise of the power of technology to provide solutions.14
Technology and the High Cost of Dying
The third and most significant misuse of technology is the use of life-sustaining technologies to prolong the process of dying. The principles of intensive care, pioneered by Dr Bjorn Ibsen in the Copenhagen polio epidemic of 1952 to keep children alive long enough for the strength of their respiratory muscles to recover, may save thousands of lives a year, but by the mid-1970s they had also become diverted into a means of prolonging – at enormous cost – the pain and misery of terminal illness. Thus a United Press Agency bulletin describing General Franco’s final illness in 1975 reported:
At least four mechanical devices are being used in the battle for General Franco’s survival. A defibrillator attached to his chest shocks his heart back to normal when it slows or fades; a pump-like device helps push his blood through his body when it weakens; a respirator helps him breathe and a kidney machine cleans his blood. At various times in his 25-day crisis General Franco has had tubes down his windpipe to provide air, down his nose to provide nourishment, in his abdomen to drain accumulated fluids, and in his digestive tract to relieve gastric pressure. The effort in itself is remarkable considering he has had three major heart attacks. He has undergone emergency surgery twice, once to patch a ruptured artery to save him from bleeding to death, the second time to remove most of an ulcerated and bleeding stomach for the same reason. He has taken some four gallons of blood transfusion. His lungs are congested . . . his kidneys are giving out and his liver is weak. Paralysis periodically affects his intestines . . . he suffers occasional rectal bleeding. Blood clots have formed and spread in his left thigh. Mucus accumulates uncontrollably in his mouth.15
General Franco, being an important man, might have been expected to have received preferential treatment, but this account of his dying days is little different from that of thousands of patients who have had the misfortune to spend their last moments on a modern-day intensive-care unit, where, as one organ system fails after another, its function must be taken over by some technological means in the increasingly unlikely anticipation of eventual recovery. This is a costly business. By 1976 one-half of medical expenditure in the United States was incurred in the last sixty days of a patient’s life. ‘The furore over the high economic costs of dying parallels concern over its high emotional cost,’ observed Muriel Gillick of the Hebrew Rehabilitation Center for the Aged in Boston, commenting on a report in the New York Times that showed ‘a significant segment of the public believes that doctors cruelly and needlessly prolong the lives of the dying [for reasons] of avarice and a passion for technology, which leads them to use procedures to excess, unmindful of the suffering they may inflict on patients’.16
The fault was certainly not all on the side of the doctors, who, pressurised by relatives or fearful of subsequently being charged with negligence, felt they had little alternative other than to demonstrate that ‘no stone had been left unturned’. Paralleling the Church’s last rites, medicine too now had its last rite – the compulsory period on the ventilator without which a patient was not allowed to die in hospital. Thus an analysis of the outcome in almost 150 patients severely ill with cancer who had been admitted to the intensive-care unit of one hospital in southern Florida over a two-year period found that more than three-quarters of those who had survived to go home had died within three months.17
Such misuse of intensive-care facilities is a telling sign of the degree to which medical technology has spiralled out of control. There was nothing that could be done about it. By 1995, twenty years after General Franco’s grisly demise, expenditure on intensive care in the United States had escalated to $62 billion (equivalent to 1 per cent of the nation’s GNP), one-third of which – $20 billion – was being spent on what had euphemistically come to be known as PIC, or ‘potentially ineffective care’. The consequences for those on the receiving end of PIC, who were ‘hopelessly entrapped by machinery more sophisticated than the ethics governing its use’, are poignantly illustrated by the parental description of the six months spent by a premature baby, Andrew, in a paediatric intensive-care unit: