The Transparent Society

by David Brin


  • Sandia National Laboratories is developing robots the size of large cockroaches, called MARVs (miniature autonomous robotic vehicles). These could be used to inspect nuclear power plants or enemy lines on a battlefield.

  • In 1995, an Advanced Research Projects Agency (ARPA) study group proposed the idea of “surveillance dust”: each particle, equipped with a miniature parachute, would carry a tiny microphone and infrared detector. Sprinkled over a battlefield or disaster zone, this dust—a micro-electro-mechanical system (MEMS)—would float for four or five hours, transmitting information on enemy locations. In a separate endeavor, ARPA suggested developing a hand-sized micro-unmanned-aerial-vehicle (UAV) that could fly for an hour and cover distances of up to 16 kilometers, using microturbine engines that have already been developed.

  • Researchers at Tokyo University and Tsukuba University plan to implant microprocessors and microcameras into living cockroaches, to help search for victims in earthquake rubble.

  • At the Institute for Microtechnology in Mainz, Germany, researchers developed a 1-inch microhelicopter that weighs one-hundredth of an ounce. Alan Epstein of MIT fabricated a jet engine the size of a shirt button for the U.S. Navy.

  So if gnat cameras are not yet a proven technology, they certainly seem plausible at this point. Should we try to limit such developments? Remember, keeping such tools out of the hands of your neighbors will not prevent the military and other influential power centers from using them.

  Such developments may prompt the free market to develop antignats, designed to seek and destroy those little flying (or crawling) interlopers. Antignats would have a simpler mission—homing in on gnat cam traces such as sound vibrations or radio emissions—but they will have to patrol relentlessly. Therefore, they must be cheap and far more numerous. Another defensive technique will be to sweep rooms with electromagnetic pulses, aimed at disabling any unshielded trespassers, until the gnats are made pulseproof, that is. A captured gnat cam might also be hacked (electronically dissected) to learn its programming and/or point of origin, and possibly even be turned against its former masters. (There is an old saying in spy circles: “He who goes into enemy territory is forever tainted for having been there.”)

  Similarly, in the software world, watcher agents will be dispatched to spy on opposing companies, nations, or individuals; but there will be danger that other software entities might hack into the watchers and control what they take back to their masters. In effect, watchers must contain their own encrypted “genetic code” to check and make sure they have not been tampered with.
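
  To make this tamper-evident “genetic code” concrete, here is a minimal Python sketch of one conventional way to do it, using a keyed hash (HMAC). Everything in it (the key, the agent body, the function names) is invented for illustration; a real mobile-agent system would face subtler attacks, since a hostile host can corrupt what the watcher saw, not just its code.

    import hmac
    import hashlib

    # The dispatcher tags the watcher's code with a keyed digest before
    # sending it out. On return, recomputing the digest with the same key
    # exposes any modification made while the agent was away.
    SECRET_KEY = b"known only to the watcher's master"  # hypothetical key

    def sign_agent(agent_code: bytes) -> bytes:
        """Compute the agent's tamper-evident tag (HMAC-SHA256)."""
        return hmac.new(SECRET_KEY, agent_code, hashlib.sha256).digest()

    def verify_agent(agent_code: bytes, tag: bytes) -> bool:
        """Return True only if the returning agent's code is unmodified."""
        expected = hmac.new(SECRET_KEY, agent_code, hashlib.sha256).digest()
        return hmac.compare_digest(expected, tag)  # constant-time compare

    watcher = b"def watch(): ...collect and report..."  # stand-in agent body
    tag = sign_agent(watcher)

    tampered = watcher + b" # injected by a hostile host"
    assert verify_agent(watcher, tag)
    assert not verify_agent(tampered, tag)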

  At first sight, this ebb and flow of spying and counterspying resembles war, but on another level it begins to sound like the world of parasitism in nature! In the long run, might the result be more like an ecosystem, mimicking the biological world as time goes on? Gnats, preyed upon by mites, which are attacked by amorphously programmed amoebae, which are plagued by viruses.... How long before such a rapidly evolving world sloshes out of the narrowly programmed confines that we designed, spreading in ways that we never intended, or imagined?

  Perhaps it would be best to tread along this brave new path slowly and cautiously. To accomplish this, we might pass laws against gnat cams. That approach will fail. Alternatively, we might try to limit this micro-arms race by encouraging people to feel less paranoid in the first place. That can happen only if each person feels he or she already knows most of what is going on.

  Stepping back from far-out speculation, we have already seen some transparency-related tools coming of age. Take the Witness Program that we mentioned back in chapter 1—a project of the Lawyers Committee for Human Rights, conceived by rock star Peter Gabriel and funded by the Reebok Foundation. Its aim, you will recall, is to improve the documentation and communication of human rights conditions around the world. The program offers private rights groups the tools of mass communication, such as handheld video cameras and fax machines. Other organizations have begun supplying cameras to groups within the United States, so they might form neighborhood crime watch teams, or else hold the police accountable within their communities. Even if the cameras don’t shrink to gnat size, we might as well face the fact that they are here, in droves, in swarms, and here to stay.

  Cameras aren’t the only kind of surveillance technology to loom ominously. In addition to developments cited in chapters 1, 2, and 3, consider:

  • A computer can be tapped by tracing the electromagnetic radiation given off by its video monitor, using the building’s water pipes as an antenna. (“Tempest” is the term for such techniques, which a spy can use to eavesdrop on electronic activities inside a building, even through walls, even when the system is not linked to the outside world.) Future computers are supposed to be “tempest safe.” But only a fool would declare this arms race over.

  • A “micropower impulse radar,” developed at Lawrence Livermore National Laboratory (to measure fusion reactions caused by the lab’s powerful Nova laser), may soon retail for as little as $25, finding applications in burglar alarms; automobile obstacle detectors; automobile air-bag deployment; and pipe, wire, or sewer line detection through walls or soil. It will also add to the list of “eyes” that can peer at us, even through fog or gloom of night.

  • So you use anonymous remailers to reconvey all your messages, so that nobody (except the remailer owner) can trace your identity? Better be careful. Experts in linguistic analysis are developing effective ways to detect and appraise the spelling and grammar patterns that are unique to each individual. According to one prominent cypherpunk, “even today, where people use anonymous e-mail, analysis of style and word usage could probably identify many of the authors, if people cared enough to look.”
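
  As a toy illustration of the kind of analysis that cypherpunk describes, consider the following Python sketch. It profiles authors by their relative use of common function words; the marker list and corpus names are invented, and real stylometry draws on far richer features, but the principle is the same.

    from collections import Counter
    import math

    # Style markers: common function words whose frequencies are habitual
    # and hard for a writer to consciously disguise.
    MARKERS = ["the", "of", "and", "to", "a", "in", "that", "is",
               "it", "for", "but", "with", "which", "not", "however"]

    def style_vector(text):
        """Relative frequency of each marker word in the text."""
        words = Counter(text.lower().split())
        total = sum(words.values()) or 1
        return [words[m] / total for m in MARKERS]

    def cosine(u, v):
        dot = sum(a * b for a, b in zip(u, v))
        norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
        return dot / norm if norm else 0.0

    def attribute(anonymous_text, known_corpora):
        """Name the known author whose style the anonymous text most resembles."""
        anon = style_vector(anonymous_text)
        return max(known_corpora,
                   key=lambda name: cosine(anon, style_vector(known_corpora[name])))

    # known_corpora = {"alice": alices_posts, "bob": bobs_posts}
    # print(attribute(anonymous_email, known_corpora))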

  Here is another thought-provoking piece of news. A new kind of artificial nose has been developed by Caltech professor Nathan S. Lewis, tracking electrical resistance across an array of polymer sponges—each one absorbent to a different range of molecular types—to detect, recognize, or classify an almost infinite variety of odors, each with its own unique spectral response. Such devices will cheaply and tirelessly monitor home or office for air quality, or watch out for the telltale scent of drugs, perfumes, or possibly even a person’s characteristic aroma. Tools like these, if monopolized by some government agency, corporation, or secretive clique, would make ludicrous any chance of walking about anonymously or unrecognized.
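
  The recognition step such a device performs can be sketched in a few lines. In this hypothetical Python example, where the sensor counts and resistance values are invented, each odor leaves a fingerprint of resistance changes across the polymer array, and an unknown sample is labeled by its nearest reference fingerprint.

    import math

    # Each odor's fingerprint: fractional resistance change on six
    # hypothetical polymer sensors. All numbers are made up.
    REFERENCE_ODORS = {
        "coffee":   [0.12, 0.80, 0.05, 0.33, 0.61, 0.07],
        "gasoline": [0.90, 0.10, 0.75, 0.02, 0.15, 0.44],
        "perfume":  [0.05, 0.40, 0.52, 0.88, 0.09, 0.31],
    }

    def distance(a, b):
        """Euclidean distance between two resistance-change fingerprints."""
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def classify(sample):
        """Nearest-neighbor match against the reference library."""
        return min(REFERENCE_ODORS,
                   key=lambda name: distance(sample, REFERENCE_ODORS[name]))

    # A noisy reading still lands nearest its true fingerprint.
    print(classify([0.10, 0.77, 0.09, 0.30, 0.65, 0.05]))  # -> coffee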

  Recall that some strong privacy advocates want to apply controls on companies that collect information on vast numbers of people for commercial purposes. They suggest this may be achieved by establishing “ownership” rights for information of, by, and about each individual. I have already predicted failure for this effort, because it flies in the face of the basic human drive to gather as much knowledge as we possibly can. And now technical means seem to be coalescing that will allow information collection and dispersal free of interference by any law. There are reports of “floater” information sets, semiamorphous data clusters that drift among many different memory loci at any given time, allowing each system operator to deny local responsibility or ownership. These techniques are still tentative, but if they achieve full potential, it may become impossible to outlaw the “possession” of contraband information, since anyone will be able to read (or add to) such databases without accepting liability or responsibility for them. In a sense, the knowledge will have a life of its own, in cyberspace.
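
  Whatever form real “floaters” take, a technique that already exists captures the legal sleight of hand involved. The Python sketch below, which is purely illustrative, splits a record into random-looking shares, so that no single host stores anything intelligible, yet anyone who gathers every share can reconstruct the record.

    import os

    def split(record, n):
        """Split a record into n shares; any n-1 of them are pure noise."""
        shares = [os.urandom(len(record)) for _ in range(n - 1)]
        final = record
        for s in shares:
            final = bytes(a ^ b for a, b in zip(final, s))
        return shares + [final]

    def combine(shares):
        """XOR all shares back together to recover the record."""
        out = bytes(len(shares[0]))
        for s in shares:
            out = bytes(a ^ b for a, b in zip(out, s))
        return out

    secret = b"a record nobody admits to holding"
    shares = split(secret, 4)          # four hosts, each holding random bytes
    assert combine(shares) == secret   # only the full set reveals anything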

  Whether or not this dodge ultimately proves practical, there will always be a fallback position for “data impresarios.” They can turn to the banking (and now information) havens—countries that specialize in selling confidentiality to anyone who can afford the price. All that a data-hungry consortium would have to do is hire some consultants under the table, to do the actual fact collecting, while the corporation itself preserves deniability. Cypher-enthusiast Eric Hughes calls this “crypto arbitrage”—moving secret transactions to sites with the fewest regulatory impediments. Hughes depicts the phenomenon as intrinsically liberating for individuals who want to whisper to each other at great distance without being overheard. But the real winners will be massive institutions. Once a computer in the Cayman Islands has everybody’s SSN, or bankruptcy records stretching back more than seven years, or copies of “protected” medical records, how will the strong privacy crowd hope to get such information back? What is to stop government agencies from using the same dodge as corporations: hiding their darkest secrets overseas, out of reach of inspection by society’s agents of accountability?

  If encryption becomes a universal norm, how will we know if the databases are there at all?

  How ironic that many strong privacy advocates support the national money-laundering cartels as refuges against government tyranny. Yet, until transparency floods through Berne and Vaduz, the concept of data privacy is guaranteed to be a pathetic joke. You can shout that “personal information is personal property” until the Moon spirals away toward the Milky Way, but that won’t change a thing as long as big shots can shelter their databases beyond reach of all civilized norms.

  Every technological advance we have talked about so far in this section is fairly mundane and predictable. (Yes, including gnat-sized cameras.) But now let’s ponder some that are more speculative and potentially disturbing. For instance, what conceivable breakthrough might turn out to be the ultimate transparency tool of all time?

  How about a truly effective lie detector?

  I’m not talking about the ill-famed “polygraph.” In 1997 the U.S. Supreme Court began hearing yet another round of arguments that this venerable device was at last ready to take its place among respectable tools of jurisprudence, despite a spotty record that has given the whole concept an aura of crackpot magicianship. That reputation deterred many futurists from considering what might happen if a truly effective technique were ever found, one allowing people reliably to separate fact from fabulation.

  Clearly, this is one of the most tantalizingly difficult problems faced by humanity. Trying to distinguish truth from deceit may be one mental activity to which we devote even more gray matter than making guesses about the future! Each of us can recall many painful episodes in life when we pondered worriedly about another person’s veracity, or else sweated it out while someone else paid us the same acute scrutiny.

  I would never underplay the difficulty of inventing a truly effective lie detector. Scientists have found that human beings are especially talented prevaricators. I mentioned earlier how Robert Wright’s book The Moral Animal lays out the current theory that deception became a crucial skill in the human mental inventory as we evolved. Moreover, it has been demonstrated that an important part of this skill is deftness at fooling ourselves! In other words, we can lie much more convincingly if we somehow manage to create a vivid set of realistic supporting images and feelings in our minds, almost as if the whopper we just told were actually true. Making up tales—and believing in them, at least temporarily—is a favorite human pastime.

  This makes the task of designing an effective lie detector challenging, to say the least. And yet, there appears to be some growing enthusiasm for the idea. As researchers sift and parse the brain’s workings down to neuron-by-neuron analysis, who can guarantee that they won’t discover some “verity locus” that we all share, or some set of telltale autonomic signs that might even be detected and read from afar?

  If such a device is ever developed, there will certainly be a variety of reactions. The late columnist Mike Royko once wrote an essay titled “Let’s Give Lying the Respect It Deserves,” claiming that without lies we would have chaos, rioting, and a collapsed economy. If our leaders told the truth, they would most likely all be out of jobs, and we would all be nervous wrecks. Stressing the diametrically opposite perspective in Radical Honesty: How to Transform Your Life by Telling the Truth, Brad Blanton says, “Lying is the major source of all human stress.” Blanton recommends eliminating even the “little white lies” that seem to smooth life’s daily encounters at work, on the street, or at the breakfast table. For most of us, a less radical midway opinion may be more typical. We want to catch evil, but to leave enough slack for the little prevarications that ease life along.

  In any event, the point here is not that a foolproof lie detector is desirable, or even that it is likely, only that it may be plausible. If a truth machine ever did appear, it would have tremendous potential, either for beneficial use or wretched abuse.

  If restricted to the hands of just a few, it could be a tyrant’s dream come true.

  If distributed instead to the world’s billions, it would surely be a bloody damned nuisance, one that we’d all need time to adjust to.

  But we would adapt. And the machine would then be any despot’s worst nightmare.

  Related to lie detection, but even more disturbing, is the concept of proclivities profiling. Suppose it became possible to combine a suite of factors, from both nature and nurture, and from them draw a statistically valid inference regarding which individuals in society are likely, or predisposed, to commit crimes?

  If the very idea of this question scrapes a raw nerve, good! It ought to. History shows that people have an ancient and pervasive habit of judging others by whatever standards are fashionable at a given moment, quickly pigeonholing them into categories, then shunting their destinies toward peasantry, isolation, persecution, or even death. Most of us will admit falling all too often for this nasty temptation to stereotype those around us in daily life. And yet, some progress can be seen in the fact that our official morality at last rails against it. According to modern sensibilities in the neo-West, it is sinful to prejudge a person merely for being a member of some group, even if there is a statistical correlation suggesting he or she may be somewhat likely to commit transgressions in the future.

  But think about it. Whether you believe that behavior is influenced most by genes or by upbringing, or whether (like most of us) you figure both play a complex, synergistic role in the mystery of individualization, who would be willing to bet their life savings that researchers won’t find sets of highly significant correlations in the near future?

  It could be a combination of blood type with an inherited allergy, a history of beatings in the home, plus having watched too many Chuck Norris movies while eating a particular brand of cheese puffs.... The important point is that, in a world filled with curious and inventive sleuths, equipped with fantastic laboratory and computational tools, any such correlations that do exist will be ferreted out during the next few decades. Even if some connections later prove spurious—lacking true underlying causality—they will draw lots of attention, so we had better be ready for the howling storm of public opinion when the announcements come.
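
  The inevitability of spurious findings deserves a quick demonstration. In the hypothetical Python sketch below, every “factor” is deliberately pure random noise, unrelated to the outcome, yet screening enough of them still turns up a handful that look predictive, through the well-known multiple-comparisons effect.

    import random

    random.seed(1)
    PEOPLE, FACTORS = 1000, 200

    # A 10 percent base rate of the outcome, and 200 coin-flip "factors."
    outcome = [random.random() < 0.1 for _ in range(PEOPLE)]
    factors = [[random.random() < 0.5 for _ in range(PEOPLE)]
               for _ in range(FACTORS)]

    def outcome_rate(flags, carrier):
        """Outcome rate among people whose factor flag equals carrier."""
        group = [o for o, f in zip(outcome, flags) if f == carrier]
        return sum(group) / len(group)

    # A naive sleuth flags any factor whose carriers show a rate at least
    # 30 percent above noncarriers. Every such "discovery" is an accident.
    suspicious = [i for i, f in enumerate(factors)
                  if outcome_rate(f, True) > 1.3 * outcome_rate(f, False)]
    print(len(suspicious), "of", FACTORS, "pure-noise factors look predictive")

  The same arithmetic applies whether the candidate factors are blood types, childhood histories, or brands of cheese puffs.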

  How should we react when some group of researchers announces that it can profile people according to their proclivities toward violence, or other unsavory behavior? One supposes that this analysis would not be the same as the Love, American Style dating skit, in which boys were judged according to their actual past behavior. Rather, these profiles may stick to people who have, as yet, done nothing wrong!

  An automatic (and in many ways admirable) response would be to forbid proclivities profiling from the very start. But that opens up a real can of worms, including issues of free speech and squelching open scientific enquiry. Any high moral position that rests on suppressing knowledge stands on shaky ground.

  Moreover, if violence is sickness, might we somehow help the predisposed in advance, offering special training to control their inner drives? What happens when a set of profiles results in the actual avoidance of a certain number of crimes, sparing some potential victims from injury or death? In fact, let’s take another look at Megan’s Law, and related codes that require the registration of sex offenders’ histories and addresses, warning neighbors if a former sexual predator starts living in their midst. Surely such laws fall into the category we are talking about, branding people because they manifest a higher likelihood of committing future crimes. In this case, the increased danger is demonstrable, based on past transgressions, and potentially severe enough to merit strong measures. But isn’t that exactly what you’d expect to see, the first time such profiling procedures work their way into law? Once the basic principle is established, we are only arguing over details.

  This issue used to lurk at the fringes of science, promoted by enthusiasts with marginal credentials, whose agendas were sometimes redolent of racism. But the field has progressed considerably, to the extent that Psychiatric Annals devoted an entire volume to “Psychiatric Aspects of Wickedness.”

  Let me emphasize that my feelings about this are just as mixed and muddled as any reader’s. The potential for abuse is horrifying, yet it is tempting to imagine how many of the harmful people one sees in the news—from spouse abusers to serial killers—might have led better lives if they had been offered a choice of eclectic and caring medical helpers, long before they proceeded to wreak havoc around them. On the one hand, proclivities testing could be used as a powerful tool for the suppression of diversity. And yet, the most harmful possible outcome isn’t always what happens when we stumble into a new and unavoidable branch of science.

  Again, I find this prospect unnerving, even terrifying. Nevertheless, it’s clear that such tools will only be made far worse if held closely by a secretive cabal. If proclivities profiling is inevitable, we may be better off sharing responsibility for it, arguing passionately over what the techniques imply, and reflecting on how best to minimize the harm they may do.

 
