Just one casualty of this is that it has become very nearly impossible to sustain principles in public. For unless a principle works identically well for everybody all the time, there are going to be some people who benefit from it and some who are comparatively disadvantaged by it. Where those at a disadvantage may once have been somewhere in the ignorable distance, today they can always be there right in front of you. To speak in public is now to have to find a way to address or at least keep in mind every possible variety of person, with every imaginable kind of claim – including every imaginable rights claim. At any moment we might be asked why we have forgotten, undermined, offended or denied the existence of a particular person and others like them. It is understandable that the generations now growing up in these hyper-connected societies worry about what they say and expect other people to be equally worried. It is also understandable that before the critical potential of an entire world, an almost limitless amount of self-reflection – including weighing up your own ‘privileges’ and rights – might appear one of the very few tasks that could be successfully attempted or achieved.
Difficult and contentious issues demand a great amount of thought. And a great amount of thought often necessitates trying things out (including making inevitable errors). Yet to think aloud on the issues which are most controversial has become such a high risk that on a simple risk/reward ratio there is almost no point in anyone taking it. If someone who is a man says that they are a woman and would like you to refer to them as a woman, then you can weigh up your options. On the one hand you could just pass the test and get on with your life. On the other hand you could get labelled a ‘phobe’ and have your reputation and career destroyed. How to decide?
Although a variety of thinkers have set a certain amount of the weather, the ferocious winds of the present do not come from academic philosophy or social science departments. They emanate from social media. It is there that assumptions are embedded. It is there that attempts to weigh up facts can be repackaged as moral transgressions or even acts of violence. Demands for social justice and intersectionality fit fairly well into this environment, for no matter how recherché the demand or cause, people can claim to be seeking to address it. Here is a system of ideas that claims to be able to address everything, including every grievance. And it does so while encouraging people to focus almost limitlessly upon themselves – something which users of social media do not always need to be encouraged to do. Better still, if you feel at any point anything less than 100 per cent satisfied with your life and circumstances, here is a totalistic system to explain everything, with a whole repository full of elucidations as to what in the world has kept you back.
Silicon Valley is not morally neutral
As anybody who has spent any time there will know, the political atmosphere in Silicon Valley is several degrees to the left of a liberal arts college. Social justice activism is assumed – correctly – to be the default setting for all employees in the major companies, and most of them, including Google, put applicants through tests to weed out anyone with the wrong ideological inclinations. Those who have gone through these tests recount that there are multiple questions on issues to do with diversity – sexual, racial and cultural – and that answering these questions ‘correctly’ is a prerequisite for getting a job.
It is possible that there is some guilty conscience at work here, for the tech companies are rarely capable of practising what they are so willing to preach. For instance, Google’s workforce is only 4 per cent Hispanic and 2 per cent African-American. At 56 per cent, whites are not over-represented compared to the wider population. But Asians, who account for just 5 per cent of the US population, make up 35 per cent of Google staff, and their share has grown steadily at the expense of white employees.5
Perhaps it is the cognitive dissonance this creates which makes the Valley wish to course-correct the world, since it can’t course-correct itself. The major tech companies now each employ thousands of people on six-figure salaries whose job is to try to formulate and police content in a way which is familiar to any student of history. At one recent conference on Content Moderation, leading figures from Google and Facebook suggested that the former currently has around 10,000 people employed to moderate content and the latter as many as 30,000.6 And these figures are more likely to grow than to remain static. Of course this is not the task that Twitter, Google, Facebook and others particularly expected to perform when they were started. But once they found themselves having to perform such tasks, it is unsurprising that the presumptions of Silicon Valley began to be imposed on the rest of the world online (other than in countries like China, where Silicon Valley realizes that its writ does not run). Otherwise, on each of the hot-button issues of the day, what is being imposed is not local custom or even the most fundamental values of existing societies, but the specific views that exist in the most social-justice-obsessed square miles in the world.
On each of the maddening issues of our time – sex, sexuality, race and trans – the Valley knows what is right and is only encouraging everyone else to catch up. It is why Twitter is capable of banning women from its platform for tweeting ‘Men aren’t women’ or ‘What is the difference between a man and a transwoman’.7 If people are ‘wrong’ on the trans issue in this way, then Silicon Valley can ensure that they do not have a voice on its platforms. Twitter claimed that the above tweets, for instance, constituted ‘hateful conduct’. Meanwhile accounts which attack so-called ‘TERFs’ (trans-exclusionary radical feminists) are allowed to stay up. At the same time as the feminist campaigner Meghan Murphy was ordered by Twitter to delete the two tweets above, Tyler Coates (an editor at Esquire magazine) had no problem getting thousands of re-tweets for a tweet simply saying ‘Fuck Terfs!’8 In late 2018 Twitter’s ‘hateful conduct policy’ changed so that Twitter could permanently ban people from the platform if they were found to have ‘deadnamed’ or ‘misgendered’ trans people.9 So the moment a person says that they are trans and announces a change of name, anybody who calls them by their previous name or refers to them by their previous gender has their account suspended. Twitter has decided what does and does not constitute hateful conduct, and has decided that trans people need protecting from feminists more than feminists need protecting from trans activists.
The tech companies have repeatedly had to come up with jargon to defend decision-making which is political, and always in one particular direction. The funding website Patreon has a ‘Trust and Safety team’ which is supposed to monitor and police the suitability or otherwise of ‘creators’ using Patreon as a crowd-funding resource. According to the company’s CEO, Jack Conte:
Content policy and the decision to remove a creator page has absolutely nothing to do with politics and ideology and has everything to do with a concept called ‘Manifest, Observable, Behaviour’. The purpose of using ‘Manifest, Observable, Behaviour’ is to remove personal values and beliefs when the team is reviewing content. It’s a review method that’s entirely based on observable facts. What has a camera seen. What has an audio device recorded. It doesn’t matter what your intentions are, your motivations, who you are, your identity, your ideology. The Trust and Safety team only looks at ‘Manifest, Observable, Behaviour’.10
It is a ‘sobering responsibility’ according to Conte, because Patreon are aware that they are talking about taking away an individual’s income when they ban them from using Patreon. But it is one that his company has exercised repeatedly, and in each known case against people who are believed to have the ‘wrong’ manifest, observable behaviour by being on the wrong side of the Valley on one of the new dogmas of the day. The tech companies can constantly be caught displaying such dogmas – often in the most bizarre ways imaginable.
Machine Learning Fairness
In recent years the Valley has not just adopted the ideological presumptions of intersectionalists and social justice warriors. They have embedded them at a level so deep that this provides a whole new layer of madness in any society which imbibes them.
In order to correct bias and prejudice it is not enough simply to go through the procedures outlined in the ‘Women’ chapter. Unconscious bias training may be able to make us distrust our own instincts and may even show us how to rewire our pre-existing behaviour, attitudes and outlook. It may make us pay attention to our own privileges, check them against the privileges or disadvantages of others and then choose where we can legitimately place ourselves in any and all existing hierarchies. Paying attention to the intersections may make people more aware of when they need to be silent and when they may be allowed to speak. But all of these are only corrective measures. They cannot start us off from a place of greater fairness. They can only correct us once we are on our error-strewn way.
And that is why the tech companies are putting so much of their faith in ‘Machine Learning Fairness’ (MLF). For Machine Learning Fairness doesn’t just take the whole process of judgement-making out of the hands of prejudiced, flawed, bigoted human beings. It does so by handing judgement over to the computers, which it ensures cannot possibly learn from our own biases. It does this by building into the computers a set of attitudes and judgements that have probably never been held by any human being. It is a form of fairness of which no human being would be capable. Yet it is only since users started to notice that something strange was going on with some search engine results that the tech companies have felt the need to explain what MLF is. Understandably they have tried to do so in as unthreatening a manner as possible, as though there is nothing much to see here. Whereas there is. An awful lot.
Google has intermittently posted, removed and then refined a video attempting to explain MLF in as simple a way as possible. In Google’s best shot to date at laying out what they are doing, a friendly young female voice says ‘Let’s play a game’, then invites viewers to close their eyes and picture a shoe. A sneaker, a smart gentleman’s brogue and a high-heeled shoe all come up on the screen. Although we may not know why, the voice says that all of us are biased towards one shoe over the others. If you are trying to teach a computer how to think of a shoe, this is a problem. And the specific problem is that you may introduce the computer to your own shoe biases. So if your perfect shoe is a high heel then you will teach that computer to think of high heels when it thinks of shoes. A complex web of lines alerts the viewer to how complicated this could all get.
Machine learning is something that helps us ‘get from place to place’ online. It is what allows an internet search to recommend things to us, advise us on how to get somewhere and even translate things. In order to do this, human beings used to have to hand-code the solutions to problems which people were asking to have solved. But machine learning allows computers to solve problems by ‘finding patterns in data’:
So it’s easy to think there’s no human bias in that. But just because something is based on data doesn’t automatically make it neutral. Even with good intentions it’s impossible to separate ourselves from our own human biases. So our human biases become part of the technology we create.
Consider shoes again. A recent experiment asked people to draw a shoe for the benefit of the computer. Since most people drew some variation of a sneaker, the computer – learning as it went along – did not even recognize a high-heeled shoe as a shoe. This problem is known as ‘interaction bias’.
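To make the mechanism concrete, here is a minimal sketch – not Google’s code, and with invented ‘drawing’ features rather than real images – of how a classifier trained only on the shoes people happened to draw ends up rejecting a high heel:

```python
# A toy illustration of 'interaction bias', assuming hypothetical hand-made
# feature vectors: each drawing is reduced to (heel_height, sole_flatness).
# Because the people supplying the drawings mostly drew sneakers, the learned
# notion of "shoe" drifts towards sneakers and a high heel is rejected.

def centroid(points):
    """Average of a list of (heel_height, sole_flatness) pairs."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def distance(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

# Training data: what people actually drew (overwhelmingly sneakers),
# plus a few scribbles that were labelled as not being shoes at all.
drawn_shoes = [(0.1, 0.9), (0.2, 0.8), (0.1, 0.95), (0.15, 0.85)]
not_shoes = [(0.0, 0.1), (0.9, 0.1)]

shoe_centre, other_centre = centroid(drawn_shoes), centroid(not_shoes)

def looks_like_a_shoe(item):
    return distance(item, shoe_centre) < distance(item, other_centre)

high_heel = (0.9, 0.2)  # tall heel, very little flat sole
print(looks_like_a_shoe((0.12, 0.9)))  # True  - a sneaker is recognised
print(looks_like_a_shoe(high_heel))    # False - the high heel is not
```

Nothing here is specific to shoes, of course: any system trained on what people happened to supply will inherit the pattern of what they supplied.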
But ‘interaction bias’ is not the only type of bias about which Google are worried. There is also ‘latent bias’. To illustrate this, consider what would happen if you were training a computer to know what a physicist looks like and in order to do so you showed the computer pictures of physicists from the past. The screen runs through eight white male physicists, starting with Isaac Newton. At the end they show Marie Curie. It demonstrates that in this instance the computer’s algorithm will have a latent bias when searching for physicists, which in this case ‘skews towards men’.
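The same point can be reduced to a toy calculation. The counts below simply restate the video’s example – eight men followed by Marie Curie – rather than any real training set, and the ‘model’ is nothing more than a majority vote, but it shows where the skew comes from:

```python
from collections import Counter

# Invented stand-in for the video's example: eight male physicists
# (Newton onwards) followed by Marie Curie.
training_examples = ["man"] * 8 + ["woman"]

counts = Counter(training_examples)
print(counts)                       # Counter({'man': 8, 'woman': 1})
print(counts.most_common(1)[0][0])  # 'man' - the learned picture of a
                                    # physicist skews towards men
```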
A third and final bias (for the time being) is ‘selection bias’. The example here is training a computer model to recognize faces. We are asked, ‘Whether you grab images from the internet or your own photo library, are you making sure to select photos that represent everyone?’ The photos which Google presents are of people in headscarves and people who are not, people of all skin colours and people of very different ages. Since many of the most advanced tech products use machine learning, the voiceover reassures us, ‘We’ve been working to prevent that technology from perpetuating negative human bias.’ Among the things they have been working on is preventing ‘offensive or clearly misleading information’ from appearing at the top of search results and providing a feedback tool for people to flag ‘hateful or inappropriate’ autocomplete suggestions.
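Again, a few lines are enough to show what the check amounts to. The figures below are invented for illustration; the point is simply comparing the make-up of whatever photos were selected against the make-up of the people the model is supposed to serve:

```python
# Hypothetical figures: the photos that happened to be grabbed for a face
# model versus the mix of people expected to use it.
photos_selected = {"lighter skin": 900, "darker skin": 100}
intended_users = {"lighter skin": 0.6, "darker skin": 0.4}

total = sum(photos_selected.values())
for group, user_share in intended_users.items():
    photo_share = photos_selected[group] / total
    print(f"{group}: {photo_share:.0%} of photos vs {user_share:.0%} of users")
# lighter skin: 90% of photos vs 60% of users
# darker skin: 10% of photos vs 40% of users
# -> the under-sampled group is the one the model will most often get wrong.
```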
‘It’s a complex issue’, we are reassured, and there is no ‘magic bullet’. ‘But it starts with all of us being aware of it so we can all be part of the conversation. Because technology should work for everyone.’11 Indeed, it should. But it is also giving everyone a very predictable set of Silicon Valley’s own biases.
For instance, if you search for Google’s own example (‘Physicists’) on their image search, there is not much that can be done about the lack of female physicists. The machine appears to have got around this problem by emphasizing other types of diversity. So although the first image to come up on Google when searching for ‘physicists’ is of a white male physicist using chalk on a blackboard at Saarland University, the second is of a black PhD candidate in Johannesburg. By photo four we have got onto Einstein and photo five is Stephen Hawking.
Of course there is something to be said for this. Very few people would want any young woman to think that she couldn’t become a physicist just because historically there has been a predominance of men in the field. In the same way, very few people would want a young man or woman of one race or another to think that a particular field was closed to them because people of their skin colour had not been dominant in it before. However, on any number of searches what is revealed is not a ‘fair’ view of things, but a view which severely skews history and presents it with a bias from the present.
Consider the results of a simple search such as one for ‘European Art’. There is a huge range of images that could come up on any Google Images search for those words. And it might be expected that the first images to come up would be the Mona Lisa, Van Gogh’s Sunflowers or something similar. In fact the first image that comes up is by Diego Velázquez. This may not be so surprising, though the specific painting chosen might be. For the first image to come up on ‘European Art’ is not Las Meninas or the portrait of Pope Innocent X. The Velázquez portrait that comes up as the first painting offered to someone searching for ‘European Art’ is his portrait of his assistant, Juan de Pareja, who was black.
It’s a tremendous portrait, but perhaps a surprising one to put first. Skipping further along the first row of images, the other five are all of the type you might be hoping to get if you have typed in this term, including the Mona Lisa. Then we have a Madonna and Child (the first so far), and it is a black Madonna. Next there is a portrait of a black woman from something called ‘people of colour in European art history’. The line she is on finishes with a group portrait of three black men. Then another line with another two portraits of black people. And then a painting by Vincent van Gogh (his first appearance so far). And so it goes, on and on. Each line presents the history of European art as consisting largely of portraits of black people. Of course this is interesting, and it is certainly ‘representative’ of what some people today might like to see. But it is not remotely representative of the past. The history of European art is not a fifth, two-fifths or a half about black representation. Portraits by or of black people were very unusual until recent decades when the populations of Europe began to change. And there is something not just strange but sinister in this representation of the past. You can see how in the mind of a machine taught to be ‘fair’ it could seem that this would constitute adequate representation of different groups. But it is simply not a truthful representation of history, Europe or art.
Nor is this a one-off with Google. A request to find images relating to ‘Western people art’ offers a painting of a black man (from ‘Black people in Western art in Europe’) as the first picture. And from there the dominant selection is paintings of Native American people.
If you tell Google that you would like to see images of ‘Black men’, the images that come up are all portrait photos of black men. Indeed, it takes more than a dozen rows before anybody who isn’t black appears. By contrast a search for ‘White men’ first throws up an image of David Beckham – who is white – but the second is of a black model. From there every line of five images has either one or two black men in it. Many of the images of white men are of people convicted of crimes, with taglines such as ‘Beware of the average white man’ and ‘White men are bad’.
As you begin to go down this rabbit hole the search results become increasingly absurd. Or at least they are absurd if you are expecting to get what you asked for, though you can very soon work out in which direction the skewing runs.
If you search on Google Images for ‘Gay couple’, you will get row after row of photos of happy, handsome gay couples. Search for ‘Straight couple’, by contrast, and at least one or two images in each line of five will be of a lesbian couple or a couple of gay men. Within just a couple of rows of images for ‘Straight couple’ there are actually more photographs of gay couples than of straight ones, even though ‘Straight couple’ is what the searcher has asked for.