Written in Bone
The middle ear, deep within the temporal bone, extends from the eardrum to the wall of the inner ear. Across this space, three small bones (ossicles) work together as a mechanism to transmit the vibrations from the eardrum to the inner ear. If the tiny joints between each of the three bones (malleus, incus and stapes—hammer, anvil and stirrup) are not functioning, again, the individual will be deaf. Fusion of the foot of the stirrup to the wall of the inner ear is another indicator of deafness. There are of course many other reasons why someone may be deaf, but this is some of the anatomical evidence we can read in the skull.
Deafness as a result of inner ear malformation (within the petrous part of the temporal bone) is trickier to identify and requires the anthropologist to be prepared to literally drill down into the very dense bone that developed around the embryonic otic capsule, the precursor of the inner ear. This is a fascinating little area of bone that is already formed to adult size at birth and, it is believed, does not remodel thereafter. The otic capsule is a gem for stable isotope analysis—analysis of the levels of elemental isotopes such as oxygen, nitrogen and phosphorus that can produce an elemental signature in our tissues. As this tiny bone is laid down from the building blocks of the mother’s diet when she was pregnant, it can give scientists information about the food she was eating and the source of the water she was drinking at the time when her baby’s inner ear was forming, which may in turn point them to whereabouts in the world she was living.
If a solitary skull turns up unexpectedly, regardless of how blindingly obvious it may be that it is human, the police must have this confirmed by a qualified expert before they can decide how to proceed. We were once sent a photograph of a skull that had been found by police on some wasteland. It was a very good copy, but it was apparent from the teeth that it was a cast. The fact that it turned up in November, shortly after Hallowe’en, may have been a clue to what it was doing there.
However, it is not unusual for isolated heads or skulls to be dredged up by fishing boats. When this happens it presents the skipper with a difficult decision, because if human remains are found in a catch, the whole catch has to be disposed of. So there are serious implications for their livelihood. For this reason, I am sure many finds must go unreported.
When a skull (minus its mandible) was noticed sitting in plain sight on a harbour wall on the west coast of Scotland, it was clear that a compromise had been found by one captain. It had obviously been brought up from the sea—there were barnacles attached to the surface—and placed there deliberately so that someone would report it to the authorities. The skull was photographed by a police officer, who sent us the picture requesting confirmation that it was human, which, of course, it was.
We were then asked to date the skull (in terms of estimating how long the individual had been dead), to define any distinguishing characteristics and to take a sample of bone for DNA analysis. That the skull belonged to a male was evident from the well-developed ridging above the eyes, the size of the mastoid processes and the prominent external occipital protuberance at the back. We believed he had been in his late teens to early twenties as his teeth showed limited wear. There was no dental work. The sutures had not started to close and, at the base of the skull, there was a gap still visible between his sphenoid and occipital bones. This gap is called the spheno-occipital synchondrosis (one of my favourite anatomical names) and it closes by around eighteen years of age in males.
The labs were unable to obtain a DNA profile from the bone. All things considered, our suspicion was that this was not a recent death. We sent a section of bone for C14 radiocarbon dating and it was returned with an estimation that this man had been dead for six to eight hundred years. Whoever he was, bless him, he was not of forensic relevance. It was likely that coastal erosion had uncovered an ancient grave and the bones had been washed out to sea, only to be returned to shore via a fishing net.
Skulls that come in with the tide or are recovered from a catch of fish are often represented by the neurocranium alone. The facial bones are more delicate and tend to be damaged by dredgers or as they bounce around on the sea bed. A skull cap may be all we have, but there is still a lot we can tell from a brain box.
2
The Face
Viscerocranium
“The face is a picture of the mind with the eyes as its interpreter”
Cicero
Statesman, 106–43 BC
There are two parts of our bodies that we are generally quite happy to have on public display at all times: our hands and our faces, both of which we use to express ourselves and to communicate. But it is the face on which we focus, and that we converse with, and therefore the face by which the majority of us instantly recognize each other.
However, in cultures where the face may be habitually covered, or when, for whatever reason, we are accustomed to concentrating on a different part of the body, it is interesting that our means of identifying our fellow humans adapts accordingly. Recently an oncology nurse told me that she had spent so much time over the years trying to find the veins on the backs of patients’ hands that she had come to recognize them by their hands and jewellery as much as by their faces.
Not long ago I was invited to a conference in Riyadh by the Saudi Arabia Society for Forensic Science. This was my first visit to that part of the world. I was told it wasn’t necessary for me to wear a burka, niqab or gloves, but out of respect for the local customs I donned the abaya, the traditional black women’s robe, and shayla, or scarf, thereby courteously covering both my body and my hair but leaving my face and hands visible.
I actually found it quite comfortable to be dressed in the same way as the other women—it was like being part of a sisterhood, almost—and being virtually inconspicuous to the men at the conference. One of the western attendees had chosen not to adopt the country’s dress code and, even though she was perfectly modestly attired, she received some quite hateful and vicious comments from fellow male delegates in the corridors of the conference hotel. They would hiss at her that she was a disgrace and she should cover her hair.
This was probably the first time I was ever personally conscious of gender hierarchy on a cultural scale. I have been very fortunate in my career to have been largely oblivious of any sex or gender discrimination. I put this down to the fact that my parents never told me I was a girl. Yes, my father expected me to be able to bake a good rhubarb crumble, but he also expected me to be capable of helping him French polish a dining table and shoot, gut and skin rabbits.
In the worlds of the military and the police, often viewed as misogynistic, I can honestly say I have never been aware of having been treated differently because of my double X chromosomes. It may just be that I am sufficiently bombastic or heedless of it not to have noticed, or perhaps I have just been lucky. The only two occasions when I suspected that my involvement was merely a nod towards “EDI”—equality, diversity and inclusion—were in fact both in academic settings. I handled those in a manner that ensured the two male senior managers concerned never gave me any further trouble. It helps being an anatomist: you can legitimately use terminology that is normal in our line of work but quite discomfiting for others. In both meetings, when it became clear that a question was being asked of me simply because I was the only woman in the room, I inquired, very politely, whether they were interested in my response or simply in the presence of my uterus. Of course they were mightily embarrassed, and assured me it was my opinion they wanted to hear. But interestingly, neither of them ever asked me a question in that way again.
At the conference in Saudi Arabia, women were required to sit on one side of the lecture theatre and men on the other, with a very clear demarcation between the two. It was here that I noticed something quite remarkable as I observed the interpersonal relationships between the women who chose to wear the niqab, and who were thus completely obscured except for their eyes. When they entered the room, I was surprised to see that they were able to recognize their friends from a considerable distance away, even though they were sitting down, their faces were covered, and they were all dressed in the same black clothing with no distinctive jewellery on show. I commented on this to a male Saudi colleague of mine, who could not explain how they were able to identify each other so easily. He invited me to his home to meet his wife and ask her.
My colleague’s wife confirmed that she, too, had no trouble recognizing her niqab-clad friends but, as is often the case with skills we develop in infancy and take for granted, she could not pinpoint how she did it. We could only do what all good scientists do when they encounter something they can’t explain: investigate. My male friend and I got a group of Saudi female scientists together and began to design an in-country experiment that would analyse the ability of Saudi women to distinguish between friends and strangers dressed in the full niqab.
Their first challenge was to assemble a large enough sample size. Even though the research team was entirely female, the culture of distrust among the potential participants hampered progress. Despite the team’s adherence to all the research ethics, and the reassurance that the images required would be destroyed at the end of the study and that no third party would be given access to them, many of the women approached were nervous about having their photographs taken for identification purposes.
Using eye-tracking software, we wanted to analyse what women were looking at as they encountered other women in full veil, some of them known to them and others not, as a means of establishing the cues they were seeking to capture. We know from existing research that we identify familiar uncovered faces by focusing on the inverted triangle that delineates the eyes, nose, mouth and chin. Our group, however, had only the eyes, the overall shape and size of a person and their gait on which to base an opinion. It seems that when the face is covered, it is not just the eyes that are important triggers for identification, but also the imperfect ways in which we sit, walk or gesture.
As the study is still ongoing we don’t have a definitive answer as yet, but if we get to the bottom of it, understanding and learning how to use this skill could prove extremely useful to organizations such as the security services.
The face, or viscerocranium, the smaller of the two parts of the skull, consists of three regions: the upper region for the forehead and eyes, the middle for the nose and cheeks and the lower area for the mouth, teeth and chin. The viscerocranium is where the tissues associated with many of our senses are housed, including sight, hearing, taste and smell. As these are formed before we are born, there is a controlled amount of growth associated with their development. The eye sockets are already large at birth because, as discussed in Chapter 1, the eyes form as a direct outgrowth from the brain and so mature very early.
The different working parts of the middle and inner ear are virtually adult-sized by the time we are born and our sense of smell is very well developed, although the collecting chamber for odours and aromas, the nose, will keep growing, like the external bits of our ears, throughout our lives. That’s why old men seem to have such large ears. But the biggest growth occurs around our mouth, as most (though not all) babies are born without teeth.
By and large we are all pretty good at recognizing the faces of people we know, but research shows that we are largely rubbish at recalling the face of a stranger we have met only fleetingly. I am the constant butt of my family’s humour as I frequently fail to remember people I have met many times. The most infamous example was at an office-warming for our lawyer’s firm, where I introduced myself to one of the partners, only to be told that he had been a dinner guest at our house.
But even that pales in comparison to my legendary faux pas following my return from my second mission to Iraq. With Aberdeen airport fogbound, my plane was diverted to Edinburgh and my husband decided to drive there to pick me up. As I strode purposefully across the concourse, two excited little blonde girls came running towards me shouting, “Mummy! Mummy!” which, thankfully, was enough of a clue for me to swiftly recognize them as my children. Their father, however, was nowhere to be seen. He was, in fact, standing behind me, hands on hips, shaking his head in disbelief because I had just walked right past him. It is relevant to the extent of my embarrassment that by this time, my husband and I had known each other for over twenty-five years. I hadn’t recognized him because he was sporting a goatee beard he hadn’t had when I’d last seen him, which I have to say rather suited him.
I spend conferences staring at people’s chests (not clever) trying to read their name badges and I am sure there are people who must consider me a dreadful snob in the mistaken belief that I have deliberately ignored them. Such ineptitude is not only embarrassing but can only be seen as a significant failing in someone whose career is based on the identification of the human, or what remains of them. What can I say? Names stick in my mind, not faces.
There is a select group of individuals, to which it goes without saying I will never belong, who have well above-average ability to remember and recognize faces, even if they have only encountered them once. Most of us can remember about 20 per cent of the people we meet, but these “super-recognizers” can recall in the region of 80 per cent. Such an innate skill is, not surprisingly, in high demand in the intelligence and security world and also in the commercial market for private clients ranging from casinos to football clubs. The day may come quite soon when this human talent is replaced by automated facial recognition technology but until then, super-recognizers have proved hugely valuable to the police in cases as diverse as gang violence and sexual assault. Recently super-recognition was used to help identify the men behind the poisoning in Salisbury of the former Russian military intelligence officer Sergei Skripal and his daughter Yulia.
The classification of super-recognizers emerged from an entirely different field of research: a clinical psychology experiment which was studying the opposite end of the spectrum: prosopagnosia. This is a clinical condition, sometimes described as face blindness, where people have extreme difficulty identifying faces. It can be enormously debilitating. A parent may not be able to pick their child up from school because they cannot recognize their offspring. Some sufferers cannot even recognize their own face on being shown a photograph of themselves. Prosopagnosia is an inherited condition; it can also be acquired through stroke or traumatic brain injury. You can take a quiz online to see where you lie on the prosopagnosia–super-recognizer spectrum. Most of us will be somewhere in the middle, with the vast majority proving better at recognizing their husband than I am.
However good or otherwise we are at recognizing our fellow humans, we can be briefly wrong-footed by natural changes in their appearance caused by ageing, or weight gain or loss, or by deliberate superficial alterations. Genetics, of course, play a significant role in determining how we look throughout our lives, but most of us modify our appearance to some degree on a fairly regular basis. We might swap our glasses for contact lenses, put on make-up, grow a beard or moustache or transform our hair colour. But these temporary cosmetic adjustments do not fundamentally alter the underlying structure of our faces. As a general rule, few of us will change to such an extent that we cannot be recognized by those who have known us intimately in the past. However, when we start to modify the framework, by, for example, shaving off the point of our chin, or acquiring cheek implants or veneers on our teeth, recognition can become more challenging. Such extreme forms of disguise have been integral to the plot of many a Hollywood movie.
Face transplants, once the domain of science fiction, are now a reality, if still a very rare procedure. Patients who have suffered severe disease, injuries or burns may be offered skin graft transplantation using tissue from a donor (including muscle, skin, blood vessels, nerves and, in some cases, even bone). In this surgery two fundamental alterations collide, with the creation of a new scaffold supporting somebody else’s face producing something of a chimera. The operation neither restores the individual to their former appearance, nor bestows the appearance of the donor on the recipient. The result is a mix of the two, with the surgical process itself contributing significant additional alterations.
Such cutting-edge surgery is considered only when all other avenues have been exhausted. It carries a significant risk of rejection, which means that the patient must remain on immunosuppressants for the rest of their life, and involves many ethical, psychological and physical issues that will affect not only the recipient but also the donor’s family and friends.
Face transplants are still a very new field—the first successful partial transplant was performed in France as recently as 2005 and the first successful full face transplant five years later in Spain—and to my knowledge none of these patients have to date come to the attention of forensic anthropologists. But it can only be a matter of time. It is just one more example of how crucial it is for us to remain open to the myriad possibilities surrounding successful identification and to approach each case free of preconceptions.
A disfigured face is extremely debilitating and isolating in a society that sets so much store by how we look. Anaplastology, the branch of medicine concerned with prosthetics, has been addressing more localized facial disfigurements since it developed as a specialism in the aftermath of the First World War in response to the need to help injured servicemen reintegrate with society. Replacement noses were probably the earliest prosthetics, required to repair faces ravaged by either warfare or syphilis. Prosthetics were originally carved out of inert materials, including ivory, metal and wood, which were gradually replaced by more realistic plastic and then latex alternatives.
Today, the sophistication of artificial eyes, noses and ears is exceptional. Noses can be designed to closely replicate the damaged version (unless the patient takes the opportunity to go for a new shape) and eyes and ears are painstakingly constructed to mirror the patient’s other eye or ear, so that their face remains relatively unaltered and symmetrical.