
The Rise and Fall of Modern Medicine


by James Le Fanu


  The most reassuring of the possible explanations is that this represented a ‘catch-up’ phenomenon, where the superiority of Prozac over earlier types of antidepressant encouraged doctors to prescribe them more widely to those whose depression in the past might have gone untreated. They were further encouraged to do so by an officially sanctioned change in the classification of depression, considerably widening the scope for those who might benefit from this class of drugs. Prior to 1980 psychiatrists distinguished between ‘melancholia’, a severe, protracted, often lifelong – if fluctuating – gloominess of spirit, and the much commoner ‘reactive depression’, with similar but less severe symptoms, usually brought on in reaction to an adverse life event – unemployment, marital breakdown, bereavement and so on. This reactive depression clearly warranted support and sympathy, with the general expectation that it would resolve in time without the need for any specific medication. But then the American Psychiatric Association, during one of its periodic revisions of the diagnostic criteria of mental illness, abolished the distinction between ‘melancholia’ and ‘reactive’ depression in favour of conceiving of depression as a continuum from ‘major’ to ‘minor’ – with ‘minor’ being defined as a dysphoric mood lasting for more than a fortnight with sleep disturbance, loss of appetite, fatigue, crying and ‘feeling sorry for oneself’. These symptoms are so common that virtually anybody could be diagnosed as suffering from depression warranting medical treatment at some stage or other of their lives.13

  The covert ‘branding’ of psychological traits as psychiatric illnesses (shyness as social phobia), as outlined above, was a further contributory factor, as psychiatrist William Appleton describes: ‘Those with eating, sexual and posttraumatic stress disorders [became] candidates for Prozac . . . and the timid, those with low energy and low self esteem, those who are irritable, or perfectionists, or suffering from a general malaise or unhappiness; in short, anyone – sick or not – may benefit from the civilizing effects of Prozac.’14

  The proposition that so wide a spectrum of psychological disturbances might be treatable with the same drug captures the brilliance of the two-pronged marketing strategy that lay behind Prozac’s success. This entailed narrowing the wide range of mental afflictions to a single cause – that they are all due to the same disturbance of brain chemistry, a deficit of the neurotransmitter serotonin – and simultaneously expanding the numbers of those who might benefit from having that deficit corrected by taking Prozac or its equivalent. Thus a widely publicised and seemingly authoritative survey in the 1990s claimed that about one in three adults in the US was suffering from mental illness warranting treatment.15 The message for family doctors attending industry-sponsored meetings and symposia could not have been simpler – the problem of mental illness is much more prevalent than previously supposed and they would be doing their patients a favour by identifying those who might benefit from being on medication.

  There were dissenters, of course. ‘Many of those with depression are not depressed at all,’ observed Alastair Santhouse, consultant in psychological medicine at London’s Guy’s Hospital, writing in the British Medical Journal: ‘more detailed questioning reveals a familiar pattern in which the patient lacks a sense of purpose in life with no goals or aspirations.’16 And while no doubt many are grateful to Prozac for helping them through an emotional crisis in their lives, it is almost absurd to suppose that so wide a range of mental states can have a single cause – or indeed that it is possible to prescribe brain-altering chemicals to people without running into trouble. And here two problems in particular have emerged in the last decade: they can induce a severe ‘withdrawal syndrome’, and increase (paradoxically) the risk of suicide, especially in adolescents.

  The massive popularity of Prozac (and its several equivalents) is predicated on the assumption that they can be taken for six to nine months during an episode of depression and then discontinued safely. For some, discontinuation may result in a return of psychological symptoms, usually attributed to a resurgence of their previous depressive illness and taken to indicate the need to continue their medication. This sounds plausible, except that there are those for whom it clearly does not apply: no matter how long they have been on medication, they find it very difficult to ‘come off’. This graphic account of a woman trying to ‘come off’ the antidepressant Seroxat after five years is strongly suggestive of some form of withdrawal syndrome: ‘I was in physical and emotional turmoil. The nausea returned along with flu-feelings, aches, blinding dizziness, exhaustion, rapid and painful successive electric shocks . . . Most disturbing was the onset of suicidal thoughts and violent nightmares in which I saw members of my family hurt. For weeks I was unable to leave my bed.’17

  The manufacturers, GlaxoSmithKline, always insisted that this type of withdrawal syndrome was extremely rare, pointing to the findings of clinical trials which showed an incidence of about 1 in 500. Subsequently, in response to much adverse publicity, they were obliged to acknowledge that this underestimated its frequency 125-fold: the withdrawal syndrome actually affected one in four of those taking the drug.18

  Next there is the powerful impression among parents whose children tragically take their own lives soon after starting antidepressant medication that the drugs might have exacerbated their emotional distress. This, for obvious reasons, is almost impossible to prove for any individual case, but when the regulatory authorities investigated the matter further it emerged that previously unpublished findings from clinical trials confirmed a small increased chance of suicidal thoughts – prompting them to withdraw approval for their use in adolescents, other than in the most severe cases.19

  These cautionary lessons illuminate how the phenomenal success of Prozac and similar drugs concealed the sheer implausibility of supposing that all manner of mental problems – from minor to major, in teenagers and in adults – could warrant the same treatment. And, further, how readily the findings of clinical trials can be creatively presented to conceal the harmful consequences of such a proposition.

  But for all that, there is no disputing that for the vast majority Prozac is highly effective – or is it? In 2007 psychologist Irving Kirsch invoked the Freedom of Information Act in the United States to compel the pharmaceutical companies and the drug regulatory authorities to release the data from all the trials conducted. This predictably revealed a series of dodgy practices – withholding the results of those trials that failed to produce the correct answer, together with the self-explanatory practices of ‘salami slicing’ and ‘cherry picking’. Reviewing the full data, Kirsch and his colleagues ‘were led to the inescapable conclusion that these are no more than active placebos with very little therapeutic benefit’.20

  There are of course counter-arguments, of which the commonest is to acknowledge that the scientific evidence of all those clinical trials may indeed be flawed and biased but to maintain nonetheless that ‘everyone knows they work in clinical practice’. And some of those with depression are indeed highly sensitive to these drugs in ways that would support the hypothesis that depression is primarily due to a chemical imbalance of serotonin in the brain. But the substantial point remains that the Prozac saga illustrates the powerful influence of Big Pharma in persuading psychiatrists and doctors that they understand much more about the workings of the mind than they really do, and that whatever is amiss can be readily fixed.

  Statins for All

  It will be recalled how, following the protracted effort to implicate ‘high fat’ meat and dairy foods as the main cause of the epidemic of heart disease (as described in ‘Seduced by the Social Theory’), the drug companies had ‘snatched victory from the jaws of defeat’. Certainly, the effort to encourage tens of thousands of people, in the largest and most costly trials in the history of medicine, to adopt a ‘healthy’ diet had zero effect in reducing their subsequent chances of a heart attack. But the central principle endured, that raised cholesterol levels in the blood predisposed to heart disease which could be prevented by taking potent cholesterol-lowering medicines such as cholestyramine. This led to the notion of ‘the lower the cholesterol the better’, where millions of people whose cholesterol level was just ‘above the average’ would also benefit from having it lowered. But this would require a drug that, like Prozac, was easy to take and with a ‘favourable side-effect profile’.

  That drug was lovastatin, the first of the statins, introduced in 1987, prompting the medical experts of the day involved in the National Cholesterol Education Program to advise that everyone should ‘know their number’ – visit their doctor to have their cholesterol level determined and, where appropriate, take medication for life to lower it.

  By the mid-1990s lovastatin, together with the several ‘me-too’ variations of its competitors, was generating revenues of $3 billion a year. Fifteen years later that had mushroomed to $26 billion – making statins by far the single most profitable class of drugs ever discovered. Thus the rhetoric of the Social Theory, that people could ‘take control’ of their health by their own efforts and thus avoid having to take drugs or undergo bypass surgery for heart disease, had transmuted into its antithesis, where vast numbers who were otherwise healthy were now committed to taking potent drugs indefinitely.

  Big Pharma may have turned the ‘cholesterol consciousness’ generated by the diet-heart thesis to its advantage, but two further factors would drive the ascendancy of the statins. First, several drug companies, recognising the bounteous potential of lovastatin, promptly came up with half a dozen sequels and, in the subsequent scramble to secure a share of the market, ingeniously transformed the clinical trials necessary to demonstrate their efficacy into a marketing strategy. The trials were given catchy names such as Excel, Prosper and Care and organised on a massive scale involving up to 10,000 patients recruited from many different academic centres. This had the advantage of encouraging brand loyalty, where cardiologists, generously rewarded for participating in one or other of the trials, might reasonably be expected to carry on prescribing them – and encourage others to do likewise.21

  The cumulative effect, with the favourable results of the trials being announced with great razzmatazz at medical conferences attended by tens of thousands of specialists, generated an almost evangelical enthusiasm for the statin project. And as time passed the findings of these trials proved ever more impressive, strikingly so: by 2004 the Heart Protection Study had ‘overturned conventional wisdom’ to demonstrate that statins worked for everyone – young and old alike, men and women, those with normal and raised cholesterol and so on. These findings, claimed Professor Rory Collins of Oxford University, suggested that tripling the numbers of those taking statins in Britain from 1 to 3 million would save 10,000 lives a year.22

  The further, yet more influential factor in the propagation of the doctrine of salvation through statins would be the drive to establish ‘clinical practice guidelines’, where panels of ‘experts’ deliberate together to determine the optimal treatment for any condition. The guidelines then become ‘official policy’, with the possibility of financial penalties for doctors who fail to adhere to them. Here the question, on which the fortunes of the statin class of drugs turned, was whether their value in those with high cholesterol levels could be extended to the vastly greater numbers of those whose cholesterol was merely above ‘normal’ – and, if so, where the cut-off point for initiating treatment should be set.

  Perhaps predictably, successive sets of guidelines forced the level of a ‘normal’ cholesterol ever downwards, resulting in 2001 in a quantum leap in those eligible for treatment with statins in the United States – up from 13 million to 30 million.23 Two years later a further revision would increase that figure to 40 million. This generated some controversy when it was pointed out that the relevant panel of experts had failed to report any potential ‘conflicts of interest’ – for good reason, as it subsequently transpired that six of the nine experts on the panel had received research grants or consultancy fees from at least three of the drug companies involved in manufacturing statins.24,25

  In Britain comparable guidelines required that those dropping in to see their family doctor for any reason, and irrespective of their age, would have their cholesterol checked and commence treatment where it was deemed ‘appropriate’. This was perhaps a step too far, as when statins are routinely prescribed to the fit and healthy and things ‘go wrong’, it is easy to make the connection. Those previously accustomed to taking regular daily exercise were struck down by muscular aches and pains, reducing them to a state of decrepitude. Meanwhile the mentally alert suffered memory lapses, loss of concentration and depressed mood of such severity as to suggest they might be developing incipient dementia.26,27,28 Then once the penny dropped and the statins were discontinued, within a few weeks the decrepit regained their mobility and the incipiently demented their minds.

  The dramatic accounts implicating statins in this pattern of sudden physical or mental decline and seemingly miraculous recovery attracted a lot of attention, raising the question of how many of the chronic and insidious symptoms experienced by those on long-term statins might similarly be attributed to this class of drug.29 Curiously or not, there is scarcely a hint of such problems in the findings of the clinical trials, where less than 1 per cent of the participants report nerve or muscle complaints. This would seem to be a considerable underestimate, contradicted by the subsequent finding of, for example, a twenty-six-fold increased risk of nerve damage (polyneuropathy) in those treated with statins for two or more years.30,31

  And so the bottom line beckons. Two decades have elapsed since the launch of lovastatin, so what is the payback for that annual $26 billion expenditure on statins? There are strong theoretical grounds for supposing that the statins are not nearly as effective as portrayed, not least because the pattern of the rise and fall of heart disease over the past fifty years is strongly suggestive of an underlying (and as yet unknown) biological cause. For the vast majority of those taking statins, the 75 per cent who are otherwise healthy but designated as ‘high risk’ because of their ‘raised’ cholesterol levels or associated risk factors, the largest-ever review, examining the results of eleven controlled trials and published in 2010, concludes, perhaps surprisingly, that statins do not prolong life; that is, they have no effect on ‘all-cause mortality’.32 So whatever small advantage there might be in reducing the chances of a heart attack is offset by the increased risk of dying from other causes. This verdict is unlikely to be reversed. By contrast, for men aged less than seventy who have a previous history of heart problems, the most favourable of the clinical trials reveal that statins do indeed ‘save lives’, reducing the risk of dying by almost a third. Put another, less dramatic way, for one hundred of those in this category taking statins, ninety-two will still be alive five years later, compared to eighty-eight of those in a control group taking a placebo.33,34
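  The arithmetic linking these two framings can be made explicit. The sketch below is a rough reconstruction from the figures quoted above; the ‘relative risk reduction’, ‘absolute risk reduction’ and ‘number needed to treat’ are standard epidemiological measures introduced here for illustration, not terms the author uses.

$$
\begin{aligned}
\text{deaths per 100 over five years:}\quad & 12 \text{ (placebo)} \;\rightarrow\; 8 \text{ (statin)}\\
\text{relative risk reduction} &= \frac{12 - 8}{12} \approx 33\% \quad \text{(‘almost a third’)}\\
\text{absolute risk reduction} &= 12\% - 8\% = 4 \text{ percentage points}\\
\text{number needed to treat} &= \frac{1}{0.04} = 25 \text{ men treated for five years to avert one death}
\end{aligned}
$$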

  Currently the prospects of the pharmaceutical industry could scarcely seem rosier, with the prodigious wealth generated by the blockbuster phenomenon providing the financial muscle – and influence – to shape the priorities of medicine to its own advantage. And there is no reason, one would suppose, why that situation should not prevail for the foreseeable future. But that is certainly not how it appears to those who must take a realistic view of the industry’s prospects in the long term – its potential investors. So while those annual revenues have soared ever upwards over the past ten years, simultaneously and paradoxically share prices have halved, slashing $850 billion from the stock market value of the top fifteen companies.

  The most important of the several reasons for this pessimistic interpretation of Big Pharma’s future is that some of its most profitable blockbusters are set to ‘fall off a cliff’ over the next few years as they come off patent and are replaced by generic equivalents. This could result, it is estimated, in the drug companies losing a quarter of their annual revenue, $200 billion worth, with no further statin-type blockbusters in the pipeline to plug the gap.35

  Then there is the undesirable combination of escalating research costs and declining (or static) drug innovation. So while the industry’s collective investment in research and development (R&D) rose from $2 billion in 1980 to $43 billion by 2006, the number of new drugs approved by the regulatory authorities has remained roughly the same, at around twenty a year – most of which are me-toos, with less than one-fifth designated as new molecular entities (i.e. genuinely novel compounds).

  This ever-widening discrepancy between the scale of R&D and its ‘returns’ is due to the same combination of factors as outlined in ‘The Dearth of New Drugs’. These include the ‘low-hanging fruit problem’, where the easy therapeutic advances have already been made; the ‘better than the Beatles problem’, where it is difficult to improve on the efficacy of those drugs already discovered; and the ‘cautious regulator problem’, where the regulatory authorities, in the aftermath of the thalidomide tragedy and similar more recent episodes, have imposed ever stricter (and more costly) criteria for drug approval.36

  Thus in retrospect it appears that the blockbuster era, for all the phenomenal revenues it generated, could offer only a temporary reprieve from these deep-seated structural issues. Or, as a senior researcher at Eli Lilly observed in 2010: ‘We may be moving closer to a pharmaceutical “ice age” and the potential extinction of the industry, at least as it exists today.’37

  Nonetheless there is no reason in principle why the potential of medicinal chemistry should be exhausted and, contrary to such gloomy predictions, Big Pharma has recently found a way to circumvent the implications of those patent-expiring blockbusters ‘falling off a cliff’. The calculation is simple and the implications for its future are profound: rather than creating a market for relatively costly drugs (such as Lipitor at £35 for a month’s treatment) that will be taken by millions, switch to producing very expensive drugs (at £20,000 for a course of treatment) that will be taken by tens of thousands.

 
