The analog recording is a physical impression, while the digital recording is a series of choices. The former is as smooth and continuous as real time; the latter is a series of numerical snapshots. The record has as much fidelity as the materials will allow. The CD has as much fidelity as the people programming its creation thought to allow. The numbers used to represent the song—the digital file—are perfect, at least on their own terms. They can be copied exactly, and infinitely.
In the digital recording, however, only the dimensions of the sound that can be measured and represented in numbers are taken into account. Any dimensions that the recording engineers haven’t taken into consideration are lost. They are simply not measured, written down, stored, and reproduced. It’s not as if they can be rediscovered later on some upgraded playback device. They are gone.
Given how convincingly real a digital recording can seem—especially in comparison with a scratchy record—this loss may seem trivial. After all, if we can’t hear it, how important could it be? Most of us have decided it’s not so important at all. But early tests of analog recordings compared to digital ones revealed that music played back on a CD format had much less of a positive impact on depressed patients than the same recording played back on a record. Other tests showed that digitally recorded sound moved the air in a room significantly differently than analog recordings played through the same speakers. The bodies in that room would, presumably, also experience that difference—even if we humans can’t immediately put a name or metric on exactly what that difference is.
So digital audio engineers go back and increase the sampling rates, look to measure things about the sound they didn’t measure before, and try again. If the sampling rate and frequency range are “beyond the capability of the human ear” then it is presumed the problem is solved. But the problem is not that the digital recording is not good enough—it is that it’s a fundamentally different phenomenon from the analog one. The analog really just happens—the same way the hands of a clock move slowly around the dial, passing over the digits in one smooth motion. The digital recording is more like a digital clock, making absolute and discrete choices about when those seconds are changing from one to the next.
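To make the contrast concrete, here is a minimal sketch of discrete sampling in Python, assuming a pure 440 Hz tone and a CD-style rate of 44,100 snapshots per second (both values are chosen for illustration, not taken from the text above):

```python
import math

def analog_signal(t):
    """A stand-in for a continuous sound wave: a pure 440 Hz tone."""
    return math.sin(2 * math.pi * 440 * t)

def sample(signal, sample_rate_hz, duration_s):
    """Reduce a continuous signal to discrete numerical snapshots.
    Whatever happens between two snapshots is simply not recorded."""
    n_samples = int(sample_rate_hz * duration_s)
    return [signal(i / sample_rate_hz) for i in range(n_samples)]

# A CD-style recording keeps 44,100 snapshots per second; the wave's behavior
# between those instants is not approximated or compressed, it is discarded.
snapshots = sample(analog_signal, sample_rate_hz=44_100, duration_s=0.001)
print(len(snapshots), snapshots[:3])
```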
These choices—these artificially segmented decision points—appear very real to us. They are so commanding, so absolute. Nothing in the real world is so very discrete, however. We can’t even decide when life begins and ends, much less when a breath is complete or when the decay of a musical note’s echo has truly ended—if it ever does. Every translation of a real thing to the symbolic realm of the digital requires that such decisions be made.
The digital realm is biased toward choice, because everything must be expressed in the terms of a discrete, yes-or-no, symbolic language. This, in turn, often forces choices on humans operating within the digital sphere. We must come to recognize the increased number of choices in our lives as largely a side effect of the digital; we always have the choice of making no choice at all.
All this real and illusory choice—all these unnecessary decision points—may indeed be a dream come true for marketers desperate to convince us that our every consumer preference matters. But it’s not their fault. They are merely exploiting digital technology’s pre-existing bias for yes-or-no decisions.
After all, the very architecture of the digital is numbers; every file, picture, song, movie, program, and operating system is just a number. (Open a video or picture of a loved one in your text editor to see it, if you’re interested.) And to the computer, that number is actually represented as a series of 1’s and 0’s. There’s nothing in between that 1 and 0, since a computer or switch is either on or off. All the messy stuff in between yes and no, on and off, just doesn’t travel down wires, through chips, or in packets. For something to be digital, it has to be expressed in digits.
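As a small illustration of that claim, the sketch below uses a short, invented string of bytes as a stand-in for any file, and prints the same data first as one big number and then as nothing but 1s and 0s:

```python
# Take a few bytes (as any picture, song, or program ultimately is) and show
# the same data as a single integer and as a string of binary digits.
data = "orange".encode("utf-8")      # invented stand-in for any file's contents

as_one_number = int.from_bytes(data, byteorder="big")
as_bits = " ".join(format(byte, "08b") for byte in data)

print(as_one_number)   # the "file" as a single number
print(as_bits)         # the same "file" as nothing but 1s and 0s
```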
It’s in that translation from the blurry and nondescript real world of people and perceptions to the absolutely defined and numerical world of the digital where something might be lost. Exactly where in the spectrum between yellow and red is that strange shade of orange? 491 terahertz? A little more? 491.5? 491.6? Somewhere in between? How exact is enough? That’s anyone’s call, but what must be acknowledged first is that someone is, indeed, calling it. A choice is being made.
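Here is a rough sketch of what that “calling” looks like in practice: a quantization step that snaps a continuous measurement onto whichever grid of values someone has decided to keep. The frequency and the step sizes are invented for illustration:

```python
def quantize(value, step):
    """Snap a continuous measurement to the nearest representable value.
    The size of `step` is a human decision, not a fact about the color."""
    return round(value / step) * step

orange_thz = 491.57    # an invented "true" frequency for that shade of orange

print(quantize(orange_thz, 1.0))    # snapped to whole terahertz
print(quantize(orange_thz, 0.1))    # snapped to tenths: finer, but still a choice
```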
This isn’t a bad thing; it’s just how computers work. It’s up to the cyborg philosophers of the future to tell us whether everything in reality is just information, reducible to long strings of just two digits. The issue here is that even if our world is made of pure information, we don’t yet know enough about that data to record it. We don’t know all the information, or how to measure it. For now, our digital representations are compromises—symbol systems that record or transmit a great deal about what matters to us at any given moment. Better digital technology merely makes those choices at increasingly granular levels.
And while our computers are busy making discrete choices about the rather indiscrete and subtle world in which we live, many of us are busy, too—accommodating our computers by living and defining ourselves in their terms. We are making choices not because we want to, but because our programs demand them.
For instance, information online is stored in databases. A database is really just a list—but the computer or program has to be able to parse and use what’s inside the list. This means someone—the programmer—must choose what questions will be asked and what options the user will have in responding: Man or Woman? Married or Single? Gay or Straight? It gets very easy to feel left out. Or old: 0–12, 13–19, 20–34, 35–48, or 49–75? The architecture of databases requires the programmer to pick the categories that matter, and at the granularity that matters to his or his employer’s purpose.
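A minimal sketch of how those choices get baked in, using an in-memory SQLite table whose fields and options are invented for illustration:

```python
import sqlite3

# The fields and options below are invented; the point is that anyone who
# doesn't fit the programmer's list simply can't be recorded.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE profiles (
        name        TEXT NOT NULL,
        status      TEXT CHECK (status IN ('married', 'single')),
        age_bracket TEXT CHECK (age_bracket IN ('0-12', '13-19', '20-34', '35-48', '49-75'))
    )
""")

# A user who fits the categories is stored without complaint.
conn.execute("INSERT INTO profiles VALUES ('Alice', 'single', '20-34')")

# A user who doesn't fit is rejected: the schema, not the person, decides.
try:
    conn.execute("INSERT INTO profiles VALUES ('Bob', 'it''s complicated', '76+')")
except sqlite3.IntegrityError as err:
    print("Rejected by the schema:", err)
```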
As users, all we see is a world of choice—and isn’t choice good? Here are one hundred possible looks for your mail browser, twenty possible dispositions each with twenty subsets for you to define yourself on a dating site, one hundred options for you to configure your car, life insurance, or sneaker. When it doesn’t feel overwhelming, it feels pretty empowering—at least for a while. More choice is a good thing, right? We equate it with more freedom, autonomy, self-determination, and democracy.
But it turns out more choice doesn’t really do all this. We all want the freedom to choose, and the history of technology can easily be told as the story of how human beings gave themselves more choices: the choice to live in different climates, to spend our time doing things other than hunting for food, to read at night, and so on. Still, there’s a value set attending all this choice, and the one choice we’re not getting to make is whether or not to deal with all this choice.
Choice stops us, requiring that we make a decision in order to move on. Choice means selecting one option while letting all the others go. Imagine having to choose your college major before taking a single course. Each option passed over is an opportunity cost—both real and imagined. The more choices we make (or are forced to make), the more we believe our expectations will be met. But in actual experience, our pursuit of choice has the effect of making us less engaged, more obsessive, less free, and more controlled. And forced choice is no choice at all, whether for a hostage forced to choose which of her children can survive, or a social network user forced to tell the world whether she is married or single.
Digital technology’s bias toward forced choices dovetails all too neatly with our roles as consumers, reinforcing this notion of choice as somehow liberating while turning our interactive lives into fodder for consumer research. Websites and programs become laboratories where our keystrokes and mouse clicks are measured and compared, our every choice registered for its ability to predict and influence the next choice.
The more we accept each approximation as accurate, the more we reinforce our machines’ and their programmers’ use of these techniques. Whether it’s an online bookstore suggesting books based on our previous selections (and those of thousands of other consumers with similar choice histories), or a consumer research firm using kids’ social networking behaviors to predict which ones will someday self-identify as gay (yes, they can do that now), choice is less about giving people what they want than getting them to take what the choice-giver has to sell.
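Here is a toy sketch of the “similar choice histories” idea, with invented users and titles. Real recommendation engines are far more elaborate, but the underlying move is the same: your next choice is assembled out of the overlaps in everyone’s previous ones.

```python
# Invented purchase histories; "you" is the person being marketed to.
histories = {
    "you":   {"Book A", "Book B"},
    "user1": {"Book A", "Book B", "Book C"},
    "user2": {"Book A", "Book D"},
}

def recommend(target, histories):
    """Suggest titles chosen by people whose past choices overlap with yours."""
    mine = histories[target]
    suggestions = {}
    for user, theirs in histories.items():
        if user == target:
            continue
        overlap = len(mine & theirs)        # how similar our past choices are
        for title in theirs - mine:         # what they chose that I haven't
            suggestions[title] = suggestions.get(title, 0) + overlap
    return sorted(suggestions, key=suggestions.get, reverse=True)

print(recommend("you", histories))   # ['Book C', 'Book D']
```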
Meanwhile, the more we learn to conform to the available choices, the more predictable and machinelike we become ourselves. We train ourselves to stay between the lines, like an image dragged onto a “snap-to” grid: It never stays quite where we put it, but jerks up and over to the closest available place on the predetermined map.
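The snap-to grid itself is a tiny decision rule, as this sketch (with an arbitrary grid spacing) suggests:

```python
def snap_to_grid(x, y, grid=10):
    """Move a freely placed point to the nearest spot the grid allows.
    Wherever you actually put it is replaced by the closest permitted position."""
    return round(x / grid) * grid, round(y / grid) * grid

print(snap_to_grid(23, 48))   # (20, 50): not where you put it, but where the grid permits
```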
Likewise, through our series of choices about the news we read, feeds to which we subscribe, and websites we visit, we create a choice filter around ourselves. Friends and feeds we may have chosen arbitrarily or because we were forced to in the past soon become the markers through which our programs and search engines choose what to show us next. Our choices narrow our world, as the infinity of possibility is lost in the translation to binary code.
One emerging alternative to forced, top-down choice in the digital realm is “tagging.” Instead of a picture, blog entry, or anything being entirely defined by its predetermined category, users who come upon it are free (but not obligated) to label it themselves with a tag. The more people who tag it a certain way, the more easily others looking for something with that tag will find it. While traditional databases are not biased toward categorizing things in an open-ended, bottom-up fashion, they are capable of operating this way. They needn’t be limited by the original choices programmed into them but can be programmed instead to expand their dimensions and categories based on the tags and preferences of the people using them. They can be made to conform to the way people think, instead of demanding we think like they do. It’s all in the programming, and in our awareness of how these technologies will be biased if we do not intervene consciously in their implementation.
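A minimal sketch of that bottom-up approach, with invented item names and tags:

```python
from collections import defaultdict

# No programmer decides the categories in advance; whatever labels users apply
# become the ways the item can later be found.
tags_by_item = defaultdict(list)    # item -> every tag anyone has applied to it
items_by_tag = defaultdict(set)     # tag  -> items findable under that tag

def tag(item, label):
    tags_by_item[item].append(label)
    items_by_tag[label].add(item)

tag("photo_1138", "sunset")
tag("photo_1138", "orange")
tag("photo_1138", "orange")         # repeated tags strengthen the association

# The more people tag it a certain way, the easier it is to find it that way.
print(items_by_tag["orange"])                      # {'photo_1138'}
print(tags_by_item["photo_1138"].count("orange"))  # 2
```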
Meanwhile, we are always free to withhold choice, resist categorization, or even go for something not on the list of available options. You may always choose none of the above. Withholding choice is not death. Quite the contrary: it is one of the few things distinguishing life from its digital imitators.
IV. COMPLEXITY
You Are Never Completely Right
Although they allowed us to work with certain kinds of complexity in the first place, our digital tools often oversimplify nuanced problems. Biased against contradiction and compromise, our digital media tend to polarize us into opposing camps, incapable of recognizing shared values or dealing with paradox. On the net, we cast out for answers through simple search terms rather than diving into an inquiry and following extended lines of logic. We lose sight of the fact that our digital tools are modeling reality, not substituting for it, and mistake its oversimplified contours for the way things should be. By acknowledging the bias of the digital toward a reduction of complexity, we regain the ability to treat its simulations as models occurring in a vacuum rather than accurate depictions of our world.
Thanks to its first three biases, digital technology encourages us to make decisions, make them in a hurry, and make them about things we’ve never seen for ourselves up close. Furthermore, because these choices must all be expressed in numbers, they are only accurate to the nearest decimal place. They are approximations by necessity. But they are also absolute: At the end of the day, digital technologies are saying either yes or no.
This makes digital technology—and those of us using it—biased toward a reduction of complexity.
For instance, although reality is more than one level deep, most of what lives on our digital networks is accessible with a single web search. All knowledge is the same distance away—just once removed from where we are now. Instead of pursuing a line of inquiry, treading a well-worn path or striking out on an entirely new one, we put a search term in a box and get back more results than we can possibly read. The pursuit itself is minimized—turned into a one-dimensional call to our networks for a response.
On the one hand, this is tremendously democratizing. The more accessible information becomes in a digital age, the less arbitrary its keepers can be about whom they let in and whom they keep out. Many playing fields are leveled as regular people gain access to information formerly available only to doctors, physicists, defense contractors, or academics.
It’s not just that the data is in unrestricted places—it’s that one no longer needs to know quite how to find it. The acquisition of knowledge used to mean pursuing a prescribed path and arriving at the desired knowledge only when that path finally led there. The seeker had to jump through the hoops left by his predecessors. Now, the seeker can just get the answer.
And in some cases—many cases even—this is a terrific thing. A cancer patient doesn’t need ten years of medical training to read about a particular course of chemotherapy, a citizen doesn’t need a law degree to study how a new tax code might affect his business, a student doesn’t need to read all of Romeo and Juliet to be able to answer questions about it on a test. (Well, at least it feels like a great thing at the time.) We only get into trouble if we equate such cherry-picked knowledge with the kind one gets pursuing a genuine inquiry.
In today’s harried net culture, actually sitting down to read an entire Wikipedia article on a subject—especially after we’ve found the single fact we need—seems like a luxury. We hardly remember how embarrassing (and how likely to earn a failing grade) it was to be caught having used an encyclopedia article as the source for a paper, even back in middle school. It’s not just that teachers considered using encyclopedias and plot summaries cheating. Rather, it was generally understood that these watered-down digests of knowledge deny a person the learning that takes place along the way. Actually reading the scenes in a Shakespeare play, or following the process through which Mendel inferred genetics from the variations in his garden pea plants, promotes experiential learning. It re-creates the process of discovery, putting the researcher through the very motion of cognition rather than simply delivering the bounty.
Is this an old-fashioned way of acquiring knowledge? Indeed it is. And it’s not essential for every single fact we might need. Figuring out the sales tax rate in Tennessee needn’t require us to revisit the evolution of the state’s tax code. Thank heavens there’s an Internet making such information a single search away.
But not everything is a data point. Yes, thanks to the digital archive we can retrieve any piece of data on our own terms, but we do so at the risk of losing its context. Our knee-jerk, digital-age reaction against academic disciplines finds its footing in our resentment of centuries of repressive hierarchies. Professors, gurus, and pundits made us pay for access to their knowledge, in one way or another. Still, although they may have abused their monopolies, some of the journeys on which they sent us were valid. The academic disciplines were developed over centuries, as each new era of experts added to and edited the body of what they considered to be essential knowledge. By abandoning the disciplines—however arbitrarily they may have been formulated—we disconnect ourselves from the multigenerational journey leading up to this moment. We are no longer part of that bigger project, nor do we even know what it is we are rejecting.
In the more immediate sense, facts devoid of context are almost impossible to apply sensibly. They become fodder for falsely constructed arguments on one side or the other of the social or political spectrum. The single vote of a politician is used to describe his entire record, a single positive attribute of caffeine or tobacco receives attention thanks to public relations funding, and a picture of a single wounded child turns public opinion against one side in a conflict rather than against war itself.
Both sides in a debate can cherry-pick the facts that suit them—enraging their constituencies and polarizing everybody. In a digital culture that values data points over context, everyone comes to believe they have the real answer and that the other side is crazy or evil. Once they reach this point, it no longer matters that the opposing side’s facts contradict one’s own: True believers push through to a new level of cynicism where if the facts are contradictory, it means they are all irrelevant. The abundance of facts ends up reducing their value to us.
As a result, we tend to retreat into tribes, guided primarily by our uninformed rage. And we naturally hunger for reinforcement. Television news shows rise to the occasion, offering shouting matches between caricatured opposites competing for ratings. Elected officials are ridiculed as “wonks” for sharing or even understanding multiple viewpoints, the history of an issue, or its greater context. We forget that these are the people we’re paying to learn about these issues on our behalf. Instead, we overvalue our own opinions on issues about which we are ill informed, and undervalue those who are telling us things that are actually more complex than they look on the surface. They become the despised “elite.”
Appropriately used, however, the data-point universe can yield uniquely valuable connections and insights. Thousands or millions of amateurs, working through problems together or independently, can link to one another’s results and sources. Instead of depending on a top-down academic discipline (which may be more committed to the preservation of old heroes than the solving of new problems), researchers can discern which sources are most valuable to their peers right now. Information can be structured and restructured in real time, catered to new challenges unforeseen by yesterday’s academics. Physics and biology no longer need to live in different departments of the university, and James Joyce can appear on the same virtual library shelf as a text on chaos math. In a hypertext-driven information space, everybody’s library can look different every day.
To exploit the power of these new arrangements of data, we must learn to regard them as what they are: untested models whose relevance is at best conditional, or even personal. Each arrangement is more like your own brain’s way of organizing some pieces of information for a very particular task. It is not a substitute for knowledge of that realm; it is just a new entry point. Which is not to suggest this way of approaching information isn’t novel, or even powerful.