Smart Mobs
Pervasive media bring the power of surveillance together with the powers of communication and computation. Other people will be snooping on those who use mobile and pervasive media. In some cases, the snooping will be consensual and mutually beneficial. In other cases, it will be everything feared by Orwell and more—tele-torture, to pick one horrible possibility (combining the satellite-tracked ankle cuffs used on some offenders today with the remotely controllable electrical shocking device used in some dog collars). There are important questions about pervasive surveillance:
• Who snoops whom? Who has a right to know that information?
• Who controls the technology and its uses—the user, the government, the manufacturer, the telephone company?
• What kind of people will we become when we use the technology?
The kind of world we will inhabit for decades to come could depend on the technical architecture adopted for the emerging mobile and pervasive infrastructure over the next few years. For example, if the power to encode information as a shield against surveillance is vested in billions of individuals and literally built into the chips, the situation that arises is radically different from a world in which a few have the power to snoop on many. That power is what is at stake in political conflicts over encryption laws.
Although the issue is most often cast as “privacy,” arguments over surveillance technology are about power and control. Will you be able to use the capabilities of smart mob technologies to know everything you need to know about the world you walk through and to connect with those groups who could benefit you? Will you be allowed to cooperate with anyone your wearable computer helps you choose? Or will others know everything they need to know about you through the sensors you encounter and information you broadcast? Different answers to those questions lead to different kinds of futures. The answers will be determined in part by the way the technology is designed and regulated in its earliest stages.
A few years after I encountered VR at NASA, and a few miles north, I met a fellow who thought about the opposite of virtual reality. He wanted to make computers, not the real world, disappear. His name was Mark Weiser, and although he built on the work of predecessors, he is acknowledged for asking the first critical questions about the technology he was helping bring into existence: “If the computational system is invisible as well as extensive, it becomes hard to know what is controlling what, what is connected to what, where information is flowing, how it is being used, what is broken (versus what is working correctly, but not helpfully), and what are the consequences of any given action (including simply walking into a room).”6
“Here, carry this pad,” Weiser said when I visited him in 1994, handing me an artifact I didn’t understand at the time. It fit nicely in the palm of my hand and had a small screen. As we walked from room to room in the Xerox Palo Alto Research Center, fabled birthplace of the personal computer, a large screen in the rooms we entered showed our location and the location of other researchers around the lab.
Weiser smiled frequently. He wore red suspenders. I quickly learned that any computer we approached in any part of the laboratory displayed his personal computer files when he walked up to it and put his PARCpad down. “Ubiquitous computing,” or “ubicomp,” Weiser called it, “is invisible, everywhere computing that does not live on a personal device of any sort but is in the woodwork everywhere.”7 The notion of a future where every desk, wall, home, vehicle, and building possesses computational powers was radical for 1988, when the research effort started. Weiser insisted that the implications of such a future were serious enough to consider long in advance.
Weiser, former chief technologist at PARC, died in 1999, on the cusp of the era he had foreseen. In 1991, Weiser declared: “The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it.”8 Knowing that it would take decades for technology and economics to catch up with his extrapolations, Weiser asked in a provocative Scientific American article how our lives might change if every object and environment contained microchips that could communicate with each other and with mobile devices.9
When Weiser and I performed the primary social ritual of the modern worker—the consumption of caffeinated beverages—he pointed out PARC’s online coffeepot. The communal coffeepot has played a historic role in the development of pervasive computing: A PARCpad affixed to the coffeepot signaled other people nearby via the local network whenever a fresh pot was ready. The introduction of this simple sensor catalyzed coffeepot conversations among the researchers. There are now uncounted thousands of webcams in the world. The first one was aimed at a coffeepot. Researchers at the University of Cambridge wanted to see if a fresh pot was ready without walking down the hall, so they aimed a digital camera at the coffeepot and rigged it to send periodic snapshots. Because they sent the pictures via the Web, the Cambridge researchers also made the coffeepot visible to anyone else in the world who cared to look.10 Millions of people did. Since then, webcams have proliferated. The first network-connected examples of ambient intelligence were associated with a social networking ritual. Online coffeepots were early smart mob technologies.
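The mechanism was simple enough to sketch. Here is a minimal version, in Python, of that coffeepot signal, assuming a weight sensor on the pot and a list of subscribers on the local network; the threshold, the readings, and the names are invented for illustration, not a reconstruction of the PARC rig.

    import time

    FRESH_THRESHOLD = 900  # grams; assume a full pot weighs about a kilogram

    def pot_weight_readings():
        # Simulated sensor readings; a real rig would poll hardware on the pot.
        yield from [120, 150, 980, 970]

    def notify(subscribers, message):
        # Stand-in for the signal the PARC system sent over the local network.
        for person in subscribers:
            print(f"to {person}: {message}")

    def watch(subscribers):
        was_fresh = False
        for weight in pot_weight_readings():
            is_fresh = weight >= FRESH_THRESHOLD
            if is_fresh and not was_fresh:  # announce only the fresh-pot transition
                notify(subscribers, "Fresh pot is ready")
            was_fresh = is_fresh
            time.sleep(0.1)  # polling interval

    watch(["alice", "bob"])

The interesting design choice, then as now, is the transition test: the point is to announce the event of a fresh pot, not to stream raw sensor data at people.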
Weiser forecast that computers of the twenty-first century would become invisible in much the same way electric motors did in the early 1900s: “At the turn of the century, a typical workshop or factory contained a single engine that drove dozens or hundreds of different machines through a system of shafts and pulleys. Cheap, small, efficient electric motors made it possible first to give each machine or tool its own source of motive force, then to put many motors into a single machine.”11 For the better part of a century, people have lived among invisible electric motors and thought nothing of it. The time has come to consider, as Weiser urged us to do, the consequences of computers disappearing into the background the way motors did.
It wasn’t until I started looking into smart mobs that I saw the connection between ubicomp and two of the characters I had encountered ten years ago, when I wrote about the emerging field of virtual reality. The central idea of VR—that computer graphics and sensor-laden clothing would enable people to immerse themselves in lifelike artificial worlds— fascinated even those who didn’t care about computers. It was a metaphor for the way computers and entertainment media were surrounding people with artificial worlds. The past ten years of VR have not been as exciting as the original idea was or as I had thought they would be. Whether or not researchers ever succeed in creating truly lifelike worlds, many of the technologies, capabilities, and issues that grew out of VR research contributed to the development of smart mob components such as pervasive computing and wearable computers. Sometimes, a technological development appears to dead-end, when it is really in the process of sidestepping.
When I first looked into the origins of virtual reality, I came across a curious book that was part science, part art, part futurist manifesto. In 1991 I took a train and then a bus to the University of Connecticut at Storrs to see what the book’s author, Myron Krueger, had built with analog electronic circuits and video cameras in a room behind the university’s natural history museum.12 He had been working on something he called “artificial reality”—the title of his 1983 book—since the late 1960s.13 The enabling technologies to manifest his ideas properly wouldn’t come along for decades. As an artist and an engineer, he was able to look beyond the horizon of what new media would do for people to glimpse what it might do to people. In 1977, he wrote something about “responsive environments” that speaks directly to those who attempt to build “smart rooms” and pervasive computing:
We are incredibly attuned to the idea that the sole purpose of our technology is to solve problems. It also creates concepts and philosophy. We must more fully explore these aspects of our inventions, because the next generation of technology will speak to us, understand us, and perceive our behavior. It will enter every home and office and intercede between us and much of the information and experience we receive. The design of such intimate technology is an aesthetic issue as much as an engineering one. We must recognize this if we are to understand and choose what we become as a result of what we have made.14
I was reminded of another VR researcher when I started rethinking pervasive computing: Warren Robinett, a soft-spoken fellow with a touch of a southern drawl who proposed that head-mounted displays could be used to extend human senses instead of immerse them in an artificial environment. Robinett had designed the software for NASA’s VR prototypes. One evening in 1991, when I was visiting the University of North Carolina VR lab in Chapel Hill, Robinett asked, “What if you could use VR to see things that are normally beyond human perception?” At that time I was editor of the Whole Earth Review, so I commissioned Robinett to write an article. While I was researching smart mobs, I was surprised to find Robinett’s article cited as one of the first descriptions of what is now known as “augmented reality.”15 Robinett proposed connecting the head-mounted display to a microscope, telescope, or a video camera equipped with gear that could make infrared, ultraviolet, or radio frequencies visible.
Today’s research on “smart rooms” and “digital cities” uses computation and communication to extend the idea of “responsive environments,” as Krueger forecast. Today’s “wearable computing” addresses Robinett’s proposal to use computer-aided media to extend human capabilities. These different technical approaches have radically different political consequences.
Alex Pentland, now Academic Head of the MIT Media Lab, pursued both the responsive environment (Krueger) and the extended senses (Robinett) approaches when he directed the “Smart Rooms” and “Smart Clothes” projects, which he described as examples of “The Dance of Bits and Atoms”:
There is a deep divide between the world of bits and the world of atoms. Current machines are blind and deaf; they are unaware of us or our desires unless we explicitly instruct them. Consequently, only experts use most machines, and even they must spend most of their time battling arcane languages and strange, clunky interface devices.
The broad goal of my research is to merge these worlds more closely and intimately, primarily by giving machines perceptual abilities that allow them to function naturally with people. Thus machines must be able to recognize people’s faces, know when they are happy or are sick, and perceive their common working environment. I call this Perceptual Intelligence, a type of situation awareness. Roughly, it is making machines know who, what, where, when, and why, so that the devices that surround us can respond more appropriately and helpfully.
To develop and demonstrate this idea, my research group and I are actively building Smart Rooms (i.e., visual, audio, and haptic interfaces to environments such as rooms, cars, and office desks) and Smart Clothes (i.e., wearable computers that sense and adapt to the user and their environment). We are using these perceptually-aware devices to explore applications in health care, entertainment, and collaborative work.16
“Bits and atoms” is a major theme at MIT’s Media Lab. Ivan Sutherland started it in 1965 with his dramatic statement that “the ultimate display would, of course, be a room within which the computer can control the existence of matter. A chair displayed in such a room would be good enough to sit in. Handcuffs displayed in such a room would be confining, and a bullet displayed in such a room would be fatal.”17 While others at Media Lab work in the “Things That Think” or “Tangible Bits” programs—ways to create Sutherland’s chair, if not the hypothetical bullet—Pentland and his colleagues built the first smart room in 1991.18
Cooltown and Other Informated Places
On a warm October day in 2001, I drove to the end of a country road. Atop a hill, past the “caution livestock” sign, behind a security gate, I found a bright green, fragrantly fresh-cut lawn surrounded by unpopulated, summer-brown foothills stretching to the horizon. Except for occupying its own hill, IBM’s Almaden Research Laboratory was a low-key affair. Jim Spohrer signed me in and escorted me into what one journalist had called “Big Blue’s Big Brother Lab.”19 We talked on our way to his office.
Spohrer had taken a sabbatical from Apple Computer’s Learning Communities group in 1994 with the intention of finding something new to work on. He was particularly interested in the future of education. Walking on a trail, he asked a fellow hiker the name of a plant. “The hiker said that he didn’t know, but his friend probably did. While I waited for the friend to come down the trail, I realized that I had a cell phone and a computer. It occurred to me that if I could add a global positioning system, then the person who knew the plant could geo-code the message. Why not make the entire world into a geo-spatial informational bulletin board? I got back to Apple and started building prototypes.”20
What emerged was a proposed infrastructure called WorldBoard. In 1996, Spohrer wrote:
What if we could put information in places? More precisely, what if we could associate information with a place and perceive the information as if it were really there? WorldBoard is a vision of doing just that on a planetary scale and as a natural part of everyday life. For example, imagine being able to enter an airport and see a virtual red carpet leading you right to your gate, look at the ground and see property lines or underground buried cables, walk along a nature trail and see virtual signs near plants and rocks.21
Spohrer raised the bar for technical difficulty by wanting to see the information in its context, overlaid on the real world. WorldBoard, Spohrer noted, combined and extended the ideas of Ivan Sutherland, Warren Robinett, and Steven Feiner. Sutherland had invented computer-generated graphics in his MIT Ph.D. thesis in 1963.22 Computer graphics came a long way in forty years, from Sutherland’s first stick figure displays to today’s computer-generated feature films. Another prototype that Sutherland developed in the 1960s, the “head-mounted display,” took a less dramatic development path.23 Sutherland realized that synchronized computer displays, presented optically to each eye and yoked to a device for tracking the user’s location and position, could create a three-dimensional computer graphic either as an artificial world or as an overlay on the natural world.
One of Sutherland’s prototypes used half-silvered mirrors that enabled the computer to superimpose graphical displays on physical environments. While most VR researchers pursued “immersive” VR, Steven Feiner at Columbia University continued the line suggested by Sutherland’s semitransparent mirrors. The Columbia group in the early 1990s worked on models of an office of the future in which head-mounted displays superimposed information on physical components. A repair technician, for example, could use such a system to see a wiring diagram projected on the machine, or a plumber could see through a wall to the location of the main pipes.24
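The geometry behind such overlays is compact enough to show. The Python sketch below (using numpy) applies the standard pinhole projection to map a tracked point in the world, say the plumber’s hidden pipe, to the pixel where its label belongs; the camera parameters, head pose, and pipe location are all illustrative assumptions, not details of Feiner’s or Sutherland’s actual systems.

    import numpy as np

    # Camera intrinsics: focal length in pixels and image center (illustrative).
    K = np.array([[800.0,   0.0, 320.0],
                  [  0.0, 800.0, 240.0],
                  [  0.0,   0.0,   1.0]])

    def project(point_world, R, t):
        """Map a 3-D world point to the pixel where its overlay belongs,
        given the tracked head pose (rotation R, translation t)."""
        p_cam = R @ point_world + t   # world frame -> camera frame
        if p_cam[2] <= 0:
            return None               # behind the viewer; draw nothing
        u, v, w = K @ p_cam
        return u / w, v / w

    # Viewer at the origin looking down the +z axis; a hidden pipe 3 m away.
    R, t = np.eye(3), np.zeros(3)
    pipe = np.array([0.5, -0.2, 3.0])   # meters, hypothetical location
    print(project(pipe, R, t))          # pixel coordinates for the label

Everything hard about augmented reality hides inside R and t: the head tracker must supply them quickly and accurately enough that the label appears glued to the pipe rather than swimming around it.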
Spohrer set out to assemble “the technology of augmented reality, the art of special effects, and the culture of the information age” to make a “planetary chalkboard for twenty-first-century learners, allowing them to post and read messages associated with any place on the planet.”25 It is not hard to imagine a server computer storing information associated with every cubic meter of the earth’s surface; computer memory is cheap. Geographic positioning systems could make handheld or wearable devices location-aware. Wireless Internet access would mean that a user could access the server computer and add or receive information about specific geographic locations.
WorldBoard servers would define computer codes that could be used to associate information of all kinds with the six faces of a virtual cube, one meter on a side. A user’s device would combine the coordinates of the cube’s location and one of the cube’s faces with a channel number, along with a password, then transmit or receive information about that place through a mobile device. That transmitted information could be spatial coordinates for projecting a virtual overlay onto an object in space, or an animation, text, music, spreadsheet, or voice message. The client software that runs on users’ devices would include “a mobile capability to author and access the information associated with places on a planetary scale. A location-aware device with navigation, authoring, and global wireless communication capabilities would be needed.”26
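A minimal sketch in Python can make the addressing scheme concrete: quantize a position to its enclosing one-meter cube, combine it with a face, a channel, and a password, and use the result as the key under which content is stored. The coordinate convention, the field names, and the in-memory dictionary standing in for a server are my own illustrative assumptions, not Spohrer’s specification.

    import math

    FACES = ("north", "south", "east", "west", "top", "bottom")

    def cube_key(x_m, y_m, z_m, face, channel):
        # Snap a position (in meters) to the one-meter cube that contains it.
        assert face in FACES
        cube = (math.floor(x_m), math.floor(y_m), math.floor(z_m))
        return (cube, face, channel)

    board = {}  # stand-in for a planetary-scale WorldBoard server

    def post(key, password, content):
        board[key] = {"password": password, "content": content}

    def read(key, password):
        entry = board.get(key)
        return entry["content"] if entry and entry["password"] == password else None

    key = cube_key(4851210.3, 552984.7, 12.2, "top", channel=7)
    post(key, "opensesame", "Virtual trail sign: coast live oak")
    print(read(key, "opensesame"))

The appeal of the scheme is that the key is computable from the world itself: any device that knows where it is, and which way it faces, can derive the address of the information posted there.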
When I started looking for similar research, I found it everywhere. In 2001, researchers at the Social Mobile Computing Group in Kista, Sweden, presented their GeoNotes system, which enables people to annotate physical locations with virtual notes, to add signatures, and to specify access rights.27 Jun Rekimoto and his colleagues at Sony described in 1998 “a system that allows users to dynamically attach newly created digital information such as voice notes or photographs to the physical environment, through mobile/wearable computers as well as normal computers. . . . Similar to the role that Post-it notes play in community messaging, we expect our proposed method to be a fundamental communication platform when mobile/wearable computers become commonplace.”28 It isn’t clear yet which standard will dominate, but it is clear that first-rate scientists and major institutions all over the world are working on ways to link information and places.
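In rough form, a GeoNotes-style note is just a location, some text, a signature, and an access list. The following Python sketch shows that data structure and its access check; the field names and the coordinates (near Kista) are illustrative assumptions, not the published system’s schema.

    from dataclasses import dataclass, field

    @dataclass
    class GeoNote:
        lat: float
        lon: float
        author: str          # plays the role of the GeoNotes "signature"
        text: str
        readers: set = field(default_factory=set)  # empty set means public

        def visible_to(self, user):
            # Access rights: a public note, or the user is on the reader list.
            return not self.readers or user in self.readers

    note = GeoNote(59.4045, 17.9494, "carol", "Good lunch spot behind this building",
                   readers={"alice", "bob"})
    print(note.visible_to("alice"), note.visible_to("mallory"))  # True False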
After the global geo-coding infrastructure and the client software, the third element of Spohrer’s vision was “WorldBoard glasses,” which would make it possible to perceive information in place, “co-registered” with the physical environment so that it would look like a perfect overlay. When Spohrer moved to IBM’s research laboratory, he brought his vision with him. Several professional-quality posters on the walls of his office illustrated different Almaden research initiatives into “Digital Jewelry,” “Location-Based Services,” “WorldBoard,” and “Wearable Computing.” We walked down the hall to the office of Ismail Haritaoglu, who handed me the prototype of what he called the “InfoScope: Link from Real World to Digital Information Space.”29