Non-steroidal anti-inflammatory drugs (NSAIDs)
The serious side-effects associated with steroid treatment encouraged the drug companies to search for a safer compound, resulting in the non-steroidal anti-inflammatory drugs: phenylbutazone, indomethacin and ibuprofen.
Phenylbutazone: The powerful and widely used analgesic amidopyrine was found to have the potentially lethal side-effect of markedly reducing the white blood cell count, exposing those taking it to the danger of serious infection. The drug company Geigy sought to minimise this side-effect by making the drug in an injectable form on the grounds that smaller doses would be safer. As amidopyrine was insoluble, it had to be coupled with a solvent, the most effective of which was found to be an acidic analogue – phenylbutazone. When it was subsequently discovered that the blood levels of the solvent were much higher than those of the active ingredient amidopyrine, an astute research chemist wondered whether the solvent might be an effective anti-inflammatory drug in its own right. This was duly investigated, leading Geigy to market the solvent on its own under the trade name Butazolidine.5
Indomethacin and ibuprofen: These two drugs – the prototypes of a vast family of NSAIDs – were discovered as part of a ‘blind’ screening programme for anti-inflammatory drugs. The starting point was two chemicals believed to be involved in inflammation – serotonin and carboxylic acid. Hundreds of analogues were made and tested for their ability to reduce the swelling of a rabbit’s paw injected with an irritant material. Indomethacin, the most potent of 350 indole compounds, was introduced in 1963, and ibuprofen, selected after the screening of 600 compounds, was introduced a year later.6
Hydroxychloroquine
While on National Service, Dr Francis Page was posted to the tropics, where he observed that patients afflicted with the skin disorder discoid lupus erythematosus appeared to improve when they started taking the anti-malarial drug mepacrine. Back in England he used the drug in eighteen patients, two of whom also suffered from rheumatoid arthritis and reported a marked improvement in their symptoms. It was thus only logical to test anti-malarial drugs formally in the treatment of rheumatoid arthritis, the most effective of which turned out to be hydroxychloroquine.7,8
Penicillamine
In 1963 an American scientist observed that D-penicillamine, an agent related to penicillin and used for the removal of copper from the tissues of patients with Wilson’s disease of the liver (caused by copper toxicity), also separated out the components of rheumatoid factor – the immunological marker of rheumatoid arthritis. It was believed at the time (incorrectly) that rheumatoid factor was directly involved in the disease process, so a trial of penicillamine was duly instituted. As with the introduction of gold, the rationale of the treatment may have been spurious, but the drug was nonetheless shown to modify the disease effectively.9
Methotrexate
Methotrexate was first used as an anti-cancer drug because it is structurally related to, and therefore an inhibitor of, the vitamin folic acid, which plays an essential role in cell metabolism. This led to its use not only in cancer but also in the proliferative skin disorder psoriasis, where it was noted to improve the arthritis that may accompany the condition. By analogy it was thought it might relieve the joint pain associated with rheumatoid arthritis, and this was confirmed in 1962.10
Allopurinol
Allopurinol was originally intended to increase the potency of the anti-cancer drug 6-mercaptopurine by preventing its breakdown into inactive metabolites; the enzyme it blocks, xanthine oxidase, also converts the chemical xanthine to uric acid. This suggested an alternative use for the drug as a treatment for gout, because the acute painful swelling of the joints characteristic of the condition is caused by the deposition of uric acid crystals. As allopurinol reduces the level of uric acid in the blood, it should theoretically prevent attacks of gout – as indeed it does.11
In summary, despite the vast corpus of science encompassing immunology and genetics that underpins our knowledge of the rheumatological disorders, ultimately all the useful drugs were discovered by mistake, by serendipity or through ‘blind’ screening.
APPENDIX II
THE PHARMACOLOGICAL REVOLUTION IN PSYCHIATRY
The pivotal year in the history of psychiatry was 1952. In Paris the French psychiatrists Jean Delay and Pierre Deniker reported the response to chlorpromazine of Giovanni A. – a 57-year-old labourer with schizophrenia – who, after a mere three weeks, was well enough to be discharged from hospital.1 The same year in Britain a young German-born psychologist, Hans Eysenck, published The Effects of Psychotherapy: An Evaluation, in which he drew attention to the complete absence of any objective evidence for the therapeutic efficacy of Freudian psychoanalysis.2 The historian Edward Shorter comments:
If there is one central intellectual reality at the end of the century, it is that the biological approach to psychiatry – treating mental illness as a genetically influenced disorder of brain chemistry – has been a smashing success. Freud’s ideas which dominated psychiatry in the first half of the century are now vanishing like the last snows of winter.3
Other drug discoveries, in addition to chlorpromazine, contributed to the ‘smashing success’ of post-war psychiatry. This period also saw the decline of psychoanalysis and its eclipse by the ‘talking therapy that works’ – cognitive therapy.
In the post-war years the treatment of mental illness was revolutionised by four groups of drugs: chlorpromazine for schizophrenia (see pages 74–83), lithium for manic depression, the antidepressants for depression, and benzodiazepines such as Valium for anxiety. The discovery of each was entirely fortuitous and unrelated to any understanding of the underlying mechanisms of mental illness.
Lithium
John Cade first described the value of lithium in manic depression in the Medical Journal of Australia in September 1949. His first patient, Mr W. B., was ‘a male aged fifty-one years who had been in a state of chronic manic excitement for five years – restless, dirty, destructive, mischievous, generally regarded as the most troublesome patient on the ward. After the start of treatment in March 1948 . . . he settled down and left hospital three months later on indefinite leave with instructions to take his medication twice daily. He was soon back working happily at his old job.’ Mr W. B. subsequently became ‘lackadaisical’ about taking his medication and became steadily more irritable and erratic, requiring his readmission to hospital where again he ‘settled down’ within a fortnight of restarting lithium.4
Cade traced the origins of his discovery of the effects of lithium to the three and a half years he had spent as a Japanese prisoner of war where, he observed, the psychiatrically ill among his fellow captives ‘appeared to be sick people in the medical sense’. Might manic depression result from intoxication of the brain by high levels of some chemical? If so, what might it be? With the end of the war he returned to Australia to become medical superintendent of the Repatriation Hospital in Bundoora, an outer suburb of Melbourne. There in his laboratory – ‘the pantry of a still vacant ward . . . with a bench, sink, a few jars of chemicals and guinea pigs that were looked after as family pets’ – Cade started by injecting the urine taken from schizophrenic and manic patients into the abdomens of the guinea pigs, hoping that he might be able to identify some abnormality in the urine of the severely psychiatrically ill that would upset the psyche of his guinea pigs. Regrettably the guinea pigs all died. This placed something of a blight on his primitive research programme, so Cade turned to an investigation of the various components of urine – urea, uric acid, creatinine – to see which might be responsible. One of these, uric acid, is relatively insoluble and unsuitable for injections, so Cade substituted its more soluble salt, lithium urate. At one point he decided to inject the lithium alone into the guinea pigs with the following result:
After a period of about two hours the animals, although fully conscious, became unresponsive to stimuli . . . Those who have experimented with guinea pigs know to what extent a ‘startle’ reaction is part of their makeup. It was even more startling to the experimenter [Dr Cade] to find that after the injection of a solution of lithium they could be turned on their backs and that, instead of their usual frantic behaviour they merely lay there and gazed placidly back at him.
After a fortnight’s self-administration to investigate its potential toxicity, John Cade gave the drug to nineteen patients – ten with mania, six with schizophrenia and three with psychotic depression. It had no effect on the depressives and slightly calmed the schizophrenics, but it had an extraordinary effect in mania, as described in the case of Mr W. B.5
Lithium was the first ‘miracle’ drug for the treatment of mental illness. And a miracle it remains, as fifty years on its mode of action is no clearer now than it was when Cade first discovered it. Its introduction into psychiatric practice was delayed for the best part of two decades for several reasons. First, the Medical Journal of Australia, in which Cade’s report was published, was not widely read. This is how British psychiatrist David Rice first came across it:
It was about 1952/53, when I was in charge of Graylingwell Hospital, Chichester. I had at that time two particularly difficult and overactive patients with long manic illnesses. In those days our pharmacological armamentarium was pretty limited . . . I would have liked to have given each of these chaps ECT but the relatives wouldn’t allow it. We were pondering on what we should do when an Australian registrar produced a scruffy crumpled sheet from the Journal of the Australian Medical Association with Cade’s article in it. I felt we had nothing to lose so decided to try it.6
Second, lithium had a reputation for being highly toxic. It had been widely used for several years in the United States as a salt substitute in the treatment of patients with raised blood pressure, until reports in the Journal of the American Medical Association in 1949 indicated serious and indeed lethal side-effects. John Cade was apparently unaware of these developments, luckily as it turned out, as otherwise it is unlikely he would have given lithium to his manic patients. The notion that lithium was dangerous certainly discouraged its general acceptance.7
From 1952 onwards the benefits of lithium were championed by a young Danish psychiatrist, Mogens Schou, who also had a personal interest in the treatment, as he subsequently recalled: ‘Perhaps more than most scientists I have been granted the privilege of reaping the fruits of my labour. A number of family members have been treated with lithium with signal effect [Schou himself was among them]; they might have been hospitalised or dead if lithium treatment had not come round.’ Lithium was finally given a licence for use in the United States in 1970, twenty years after Cade’s original description of its effect on Mr W. B.’s mania.8
Antidepressants: Tricyclics, Selective Serotonin Reuptake Inhibitors (SSRIs) and Monoamine Oxidase Inhibitors (MAOIs)
The first antidepressant – imipramine – arose directly out of the research programme that had led to the discovery of chlorpromazine. Roland Kuhn – a 38-year-old psychiatrist (and a disillusioned psychoanalyst) at Munsterlingen Hospital, Switzerland – requested from the drug firm Geigy supplies of imipramine, one of the drugs synthesised as part of that programme, intending to see whether it might be similarly – or more – effective in patients with schizophrenia. Regrettably it seemed to make many of them worse, ‘converting quiet, chronic patients into agitated whirlwinds of energy’. Sometime in 1955 the decision was made to do the logical thing and give the same drug to patients with depression to see whether its energising properties might make them a bit more cheerful. The results were dramatic. In Kuhn’s words: ‘The patients became generally more lively, their low depressive voices sounded stronger. They appeared more communicative. If the depression had manifested itself in a dissatisfied, plaintive or irritable mood a friendly, contented and accessible spirit comes to the fore.’ At visiting hours the patients’ relatives were astonished at the change, declaring they hadn’t seen them this well for a long time.9
In the spring of 1958 Geigy launched imipramine as Tofranil, the first of many tricyclic antidepressants (so called because of their three-ringed chemical structure, which differs by only two atoms from chlorpromazine). And just as the mode of action of chlorpromazine in blocking the dopamine receptors only became clear ten years after its introduction, so the mode of action of imipramine – that it blocks the reuptake of the neurotransmitter 5HT – was not established until 1960, a full five years after Kuhn’s original observations.10
In the 1980s the popularity of the tricyclics was eclipsed by the Selective Serotonin Reuptake Inhibitors or SSRIs such as Prozac (fluoxetine), which had fewer side-effects. Their manner of discovery was, however, exactly the same as that of the tricyclics: they were identified as part of a screening programme of the antihistamine-type drugs that gave rise to chlorpromazine.11
The tricyclics and SSRIs were an accidental spin-off from a programme where drugs were first synthesised and then tested for possible therapeutic efficacy. By contrast, the MAOIs arose – like chlorpromazine – from a chance felicitous clinical observation that a drug used in the treatment of one condition, in this case tuberculosis, had side-effects that might be put to good use in another.
In 1944 the Germans had used a new type of fuel – hydrazine – to propel their V2 rockets over southern England. Come the end of the war hydrazine thus became available relatively cheaply, so pharmaceutical companies bought it up to use as a starting material for investigation of its possible therapeutic properties, even though it was not an easy compound to work with, being flammable, caustic, extremely poisonous and explosive. At the time drug companies routinely tested all chemicals to see if they might be effective against tuberculosis, and two of the hydrazine derivatives – isoniazid and iproniazid – were found to be so. When eventually they were introduced as a treatment for tuberculosis, iproniazid was observed to have the side-effect in some patients of inducing euphoria, or, as it was put colourfully at the time, patients ‘danced in the halls though there were holes in their lungs’. One 29-year-old woman ‘started to notice unusual energy and marked increase of appetite at the end of her second week of treatment. This condition lasted several weeks; and in one of the interviews during this period she said, “The day is not long enough for all the things I want to do.”’ The therapeutic potential of iproniazid as a treatment of depression was not appreciated initially. It was only later that it was found to inhibit monoamine oxidase – an enzyme that breaks down neurotransmitter chemicals in the brain – and an American psychiatrist, Dr Nathan Kline, conducted the definitive study which led to its marketing as the antidepressant Marsilid.12
Benzodiazepines
The fourth pillar of the pharmacological revolution in post-war psychiatry was formed by the benzodiazepines, of which Valium (diazepam) is the best known. These are known as the ‘minor’ tranquillisers, to distinguish them from the ‘major’ ones such as chlorpromazine, so effective in controlling the agitation associated with schizophrenia. Symptoms of ‘minor’ anxiety are a common reason for seeking medical attention, and the extraordinary success of the benzodiazepines, leading to their massive overprescription in the 1960s and 1970s, lay in the fact that, unlike the barbiturates they superseded, they did not have a strong sedative action and were very safe. They could thus be prescribed with impunity for the ubiquitous mild psychological symptoms that people brought to the surgery.13
These, the commercially most successful drugs of all time, very nearly went undiscovered. Inspired by the success of chlorpromazine, Leo Sternbach of Hoffmann La Roche decided to try to find a completely new type of tranquilliser and began with a class of compounds which he had synthesised twenty years earlier as part of his postdoctoral studies at the University of Cracow in Poland. He synthesised compounds structurally related to the antihistamine group of drugs from which chlorpromazine had been derived, none of which had any special tranquillising effect. In 1957 it was decided to close down the research programme:
The laboratory benches were covered with dishes containing crystalline samples . . . the working area had shrunk almost to zero and a major clean-up operation was in order. My coworker, Earl Reeder, drew my attention to a few hundred milligrams of two products which had not been submitted for pharmacological testing at the time so we submitted them for pharmacological evaluation. We thought the expected negative result would complete our work . . . Little did we know that this was the start of a programme which would keep us busy for many years.14
After a few days Sternbach was rung by his pharmacologist to be informed ‘that the compound possessed unusually interesting properties in the tests for the preliminary screening of tranquillisers’. This last-minute discovery generated much excitement and raised the question of why only this drug seemed to work as a tranquilliser. Its structure was duly reanalysed and turned out to be not what had been anticipated. Rather, while sitting on the bench it had been transformed into an entirely different type of chemical – a benzodiazepine. Its precise mode of action remained unclear for a further twenty years until 1977, when benzodiazepine receptors were found in the brain which, it is thought, influence the action of the neurotransmitter GABA.15