Smart Mobs

by Howard Rheingold


  Haritaoglu gave me an off-the-shelf hand-held computer with an off-the-shelf digital camera attachment, connected to a stock model digital cellular phone. Haritaoglu pointed out some signs on the wall outside his office. I picked one in Chinese, which I don’t read. Following his directions, I pointed the lens of the device in my hand at the sign on the wall, clicked the shutter, pressed some buttons on the telephone, and in a few seconds, the English words “reservation desk” appeared on the screen of the Info-Scope. “We use computer-vision techniques to extract the text from the sign,” Haritaoglu explained. “That requires processor power.” The telephone sent the picture to a computer on IBM’s network, which crunched the numbers to parse the characters out of the image, crunched the numbers to translate the text, and sent it back to the device in my hand. In the near future, there will be sufficient processor power to enable the device itself to crunch the translation, but that won’t matter so much when all the processing power you want is available online, wirelessly.
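  A rough sketch of the round trip Haritaoglu describes: the handheld sends the photo to a remote server, the server does the character extraction and translation, and the text comes back. The endpoint URL, field names, and response format below are assumptions for illustration; the book does not document IBM's actual protocol.

```python
# Minimal client-side sketch of the Info-Scope round trip (hypothetical
# server endpoint and JSON shape; not IBM's actual service).
import requests

SERVER_URL = "https://example.com/infoscope/translate"  # assumed endpoint

def translate_sign(image_path: str, target_lang: str = "en") -> str:
    """Send a photo of a sign to a server that extracts the text from the
    image and translates it, then return the translated string."""
    with open(image_path, "rb") as f:
        response = requests.post(
            SERVER_URL,
            files={"image": f},
            data={"target": target_lang},
            timeout=30,
        )
    response.raise_for_status()
    return response.json()["translation"]  # e.g. "reservation desk"
```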

  After we left Haritaoglu’s office, Spohrer told me about research into “attentive billboards”—display screens that use optical recognition techniques to learn where people are looking and to detect characteristics of the people who look at the billboard: “There’s a display at the checkout counter for people waiting in line,” Spohrer explained. “The billboard looks back at them as they gaze at ads and news, extracts information about their sex, age, and race, and adjusts the display accordingly. When grandma walks up, it can show the knitting advertisement, and when you walk up in your leather gear, it can show a motorcycle. Attentive billboards can recognize where you are looking and even extract your facial expression to guess whether you are happy or sad.”

  When I started investigating the combination of mobile communication and pervasive computing, it didn’t take long to discover the R&D hotspots; they are always the places where the authors of the most interesting papers work. I began to believe that a new technological infrastructure really is in the process of emerging when I saw how IBM, Hewlett-Packard (HP), Nokia, Ericsson, Sony, and DoCoMo conduct similar R&D. From Almaden, the trail led to Silicon Valley’s CoolTown, Helsinki’s Virtual Village, Stockholm’s HotTown, and a couple of labs in Tokyo.

  CoolTown, HP’s pervasive computing effort, is built around the Web as the universal medium for linking physical and virtual worlds. CoolTown is in the same building where Bill Hewlett and David Packard’s offices are enshrined as they left them. After a ritual pilgrimage to the eerily time-frozen temples of the founders, I came to a door marked by a highway sign that said “CoolTown City Limits.” Gene Becker, a strategist for HP’s Internet and Mobile Systems Lab and a maestro of the studiously casual demo, welcomed me into what looked like an ordinary meeting room.

  Becker pointed his modified Kyocera Smartphone at the projector, and the room’s Web page popped up on the wall screen. “We call it ‘e-squirting’ when we transmit URLs from our personal devices to another device in the environment,” he explained. The projector and printer each had radio-linked Web servers built into them. “Imagine walking into any meeting room in this building, or the world, and displaying your presentation on the screen, or printing documents on the local printer. CoolTown is a test bed for a future in which every person, place, or thing can be connected wirelessly, anywhere in the world, through the Web.”
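  As a thought experiment, “e-squirting” can be reduced to a few lines: the phone hands a URL to a device that runs its own small web server, and the device fetches and renders the content. The host name, path, and parameter below are made up for illustration; CoolTown's real protocol is not specified in the text.

```python
# Hypothetical sketch of squirting a URL at a Web-present projector or
# printer; the /display path and "url" field are illustrative assumptions.
import requests

def squirt(device_host: str, url: str) -> None:
    """Ask a device with an embedded web server to fetch and show (or
    print) the content at the given URL."""
    requests.post(f"http://{device_host}/display", data={"url": url}, timeout=5)

# Example: squirt("projector.meeting-room.example", "http://example.com/slides")
```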

  I asked Becker about the neo-retro aluminum radio console on a table in the corner. Becker pointed his phone at it, and music started playing. “You can play your own music from any radio that’s equipped to communicate with you. Stick a Web server inside a device, and suddenly the Web and browsers become your universal remote control for that device.”

  CoolTown researchers use barcode readers, radio-frequency identity tags, wireless Internet links, Web servers on chips, infrared beams, handheld computers, and mobile telephones to create an ecology of “Web-present objects.” Although the original ubicomp researchers knew it would take a decade for the price of chips to drop low enough, they didn’t know in 1988 that the Web would come along to provide a worldwide infrastructure. By assigning URLs and wireless Web servers to physical objects, HP researchers are looking at what happens to life in a city, a home, and an office when the physical world becomes browsable and clickable.

  Think of all the public places where inexpensive chips could squirt up-to-the-second information of particular interest to you—such as the time your flight leaves and animated directions to your destination in an unfamiliar city—directly to your phone. “You could look through a physical bookstore, tune into ‘virtual graffiti’ associated (through the Web) with every book, and read reviews from your book club or see how people who like the same books you like rate this one,” said Becker. Point your handheld computer at a restaurant, and find out what the last dozen customers said about the food. Point your device at a billboard, and see clips of the film or music it advertises, and then buy tickets or download a copy on the spot. Not only will products and locations have Web sites, but many will have message boards and chat rooms.

  Recognizing that it’s impractical to put signal beacons everywhere in the world, and cognizant of the privacy implications of location-based services, CoolTown researchers came up with virtual beacons called Websigns.30 Websigns are a combination of information and geo-coded coordinates, stored in a database available through the Web, like WorldBoard except your mobile device doesn’t interact directly with a world map on a server in real time. Instead, you download the entire database of all the local Websigns to your mobile device. You could easily store information about tens of thousands of up-to-date locations for an entire city on a handheld device. Your device knows where it is located at all times. It looks at the database, and without signaling anyone but you, it tells you what virtual beacons are available. Nobody but you knows exactly where you are when you query the database because it’s in your hand, not out on the Web. CoolTown’s use of “semantic location” for Websigns is an existence proof that privacy protection can be designed into potentially intrusive technology.
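  The privacy property comes from where the computation happens. A minimal sketch, assuming the full beacon database has already been downloaded to the handheld, looks something like this; the lookup runs entirely on the device, so no server learns the user's position.

```python
# Local lookup of virtual beacons (Websigns), sketched under the assumption
# that the whole geo-coded database is already stored on the handheld.
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class Websign:
    label: str
    lat: float
    lon: float
    url: str

def _distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 6_371_000 * 2 * asin(sqrt(a))

def nearby_websigns(signs, my_lat, my_lon, radius_m=100):
    """Return the beacons within radius_m of the device, computed locally."""
    return [s for s in signs if _distance_m(my_lat, my_lon, s.lat, s.lon) <= radius_m]
```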

  Who owns access to your devices, either to push information at you or to pull information from you? Some of the answers will emerge from political processes, but many of them are sensitive to technical design decisions. In that regard, the designs that dominate early in the growth of a technology can have disproportionate power over the way the technology will affect power structures and social lives. What control will you have over whose sensors and beacons can talk or listen to your device? Who will have the right and the power to leave messages in places? HP asserts that using the Web as a standard for connecting mobile and pervasive technologies is essential for maintaining open and affordable access.

  Becker was candid: “We don’t want any company to gain unfair architectural control over how the physical and virtual worlds are connected. That’s one reason why we’re moving some of the software we’ve developed into an open source development community. We want to see a world like the early days of the Web, where anybody with the skill and interest and some ideas can create novel applications for themselves or their friends or make a business out of it.” If today’s mobile telephone morphs into something more like a remote control for the physical world, social outcomes will depend on whether the remote control device’s software infrastructure is an open system, like the Web, or a closed, proprietary system.

  My previous exploration of VR research made it easier to identify the most interesting recent explorations of the “magical glasses” Robinett and Spohrer dreamed of using. I discovered that in 1997, a group of computer scientists at Columbia University, led by Steven Feiner, collaborating with Anthony Webster of Columbia’s Graduate School of Architecture, Planning, and Preservation, made a navigable virtual model of the Columbia campus, “a prototype system that combines the overlaid 3-D graphics of augmented reality with the untethered freedom of mobile computing.”31 Wearing the proper apparatus enabled users to access information about specific places as they strolled around the campus. The real-time position-sensing capabilities necessitated headgear and a backpack full of equipment. I learned that at Keio University outside Tokyo, Professor Scott Fisher of DoCoMo’s mobile communications laboratory had assembled a similar backpack and headgear to create an immersive experience of place-specific information.

  Fisher’s “Wearable Environmental Media” platform is what led me to walk around the Keio University campus with a headful of gear and a heavy backpack.32 Precise visual co-registration of virtual images on the physical world requires knowing not only where the user is located to within a few millimeters but also where the user’s eyes are directed. This makes for a complicated and heavy prototype. It’s a strange experience the first time you put on a helmet that covers your eyes and then watch the world around you through binocular television screens. I took a step, and the co-registration wasn’t millimeter-perfect, so the lawn I saw wasn’t exactly where my foot felt it to be. The sense of encapsulation, of being able to see the world well enough to navigate it, but viewing it only through the intermediation of cameras, is key to the experience of wearable computers.

  If a user doesn’t require magical glasses but accesses WorldBoard or CoolTown by glancing at the screen of a handheld device, prototypes can be built from off-the-shelf components today. Without the requirement that the experience of information in places be immersive, the investigation shades into the slightly different research field of context-aware mobile phones. Context-aware device research is part of a multi-industry effort to anticipate a market for location-based services via mobile telephones. We begin to move out of the world of dreamy-eyed futurizers and into the product cycle.

  Location, Location, Location

  Knowing our exact geographic location is one form of context awareness in which machines are better than humans. Location-aware services have been growing since NTT launched DoCo-Navi in 1999, providing real-time maps and directions on handheld devices. By mid-2001, DoCo-Navi users in Japan were generating between 500,000 and 800,000 daily mapping requests.33 As for location services in the United States, according to an August 2001 story in the Washington Post, twenty-year-old Joe Remuzzi has a global positioning system (GPS) with 2 million points of interest programmed into it, which not only lets him check restaurants in his vicinity but also groups them by cuisine: “Especially when I’m going to concerts far, far away, I’m almost like a local,” Remuzzi said. “Like it shows where the Cajun restaurants are.”34 GPS navigators also became available in most high-end U.S. rental cars by 2002.

  A form of location awareness is built into cellular phone systems. When you turn on your mobile telephone, it transmits a radio signal with an identifier. Cellular antennae located every few miles listen for these signals and thus are able to relay calls to the proper recipients. When you move out of range of one cell, your call is transferred to the control of another cell. By triangulating the signals from nearby cells, it is possible to locate a telephone within a few hundred feet in cities. In other words, every cell phone generates a record of where it has been.
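  To make the idea concrete, here is a toy illustration, not the carriers' actual algorithm: given a few towers that hear the phone and a rough distance estimate to each, average the tower positions, weighting nearer towers more heavily. Real networks use timing and signal measurements with more careful math.

```python
# Toy position estimate from cell towers: an inverse-distance-weighted
# centroid, a deliberate simplification of real network triangulation.
def estimate_position(towers):
    """towers: list of (x, y, estimated_distance_m) tuples, one per tower
    that hears the phone.  Returns a rough (x, y) fix in the same units."""
    weights = [1.0 / max(d, 1.0) for _, _, d in towers]
    total = sum(weights)
    x = sum(w * tx for w, (tx, _, _) in zip(weights, towers)) / total
    y = sum(w * ty for w, (_, ty, _) in zip(weights, towers)) / total
    return x, y

# Example: estimate_position([(0, 0, 300), (1000, 0, 500), (500, 900, 400)])
```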

  More accurate positioning than cell triangulation is possible, to within ten to fifteen meters, through the use of global positioning system chips. The U.S. government developed GPS, which triangulates radio signals from twenty-nine orbiting satellites. Until recently, the U.S. military introduced errors into the data to prevent anyone but the U.S. military from using GPS to obtain locations closer than one hundred meters. In May 2000, the U.S. government ended GPS scrambling, and a civilian market for GPS started to blossom. The U.S. government has ordered all mobile telephones sold in the United States to become location-aware by 2005, for the purpose of improving emergency services. In 2002, Japan’s KDDI and Okinawa Cellular announced plans to market a GPS-equipped telephone that can sense the direction in which it is pointed, as well as its location.35

  When I zoom back to a wide view of an urban area in the age of mobile and pervasive technology, I can envision meshes of private and public devices, beacons, kiosks, appliances, place-based information sources and bulletin boards, traffic sensors, and transit services—citywide systems, some designed from the top down, others grown from the bottom up. Cities are places of massive information flows, networks, and conduits and myriad transitory information exchanges. Enthusiasts of “digital cities” are trying to understand the dynamics of computationally pervasive cities populated by mobile communicators in order to consciously design architectures that promote conviviality as well as safety and convenience.36

  I came across several different flavors of urban virtualization in Helsinki: the grassroots, open-source-oriented Helsinki Arena 2000, the top-down Helsinki Virtual Village, and the social networks of four Internet dudes who called their project Aula. Risto Linturi described a project supported by HP Bristol Labs, Helsinki Telephone, and a company called Arcus, a system to integrate mobile location data in real time. The system is envisioned as

  a distributed messaging environment where all moving vehicles such as buses and taxis could be shown as corresponding avatars with their links in the model. . . . In the virtual Helsinki you can meet your friends as avatars just as you meet them in the real Helsinki. In the virtual world you just do not have to leave home when it is raining or snowing heavily. You can use the same popular meeting points, such as in front of Stockman’s warehouse or at the Lasipalatsi Clock Tower. You may even experience the same crowds together and possibly get to know some other people in these crowds.37

  More recently, the state of California created the Center for Information Technology Research in the Interest of Society (CITRIS) to design “pervasive, secure, energy-efficient, and disaster-proof information systems, delivering new kinds of vital data that people put to use quickly . . ., highly distributed, reliable, and secure information systems that can evolve and adapt to radical changes in their environment, delivering information services that adapt to the people and organizations that need them. . . . We call such systems Societal-scale Information Systems.”38

  The attacks of September 11, 2001, stimulated new directions in “intelligent city” design:

  The key lies in developing and deploying technologies that will tie infrastructure components together into a system that’s far smarter and more self-aware than anything we have today. Engineers, security consultants and authorities on counterterrorism are working hard to weave together the threads of this technological fabric, which will be pervaded by instruments that can sense harmful chemicals in a reservoir, relay critical data about a damaged building’s structural integrity to rescue workers, help map escape routes or streamline the flow of electricity in a crisis. These high-tech networks—joined with simulation tools, enhanced communications channels, and safer building designs—could go a long way toward creating an “intelligent city,” where danger can be pinpointed and emergency response directed.39

  To get a sense of the scope of smart mob infrastructure, zoom from the citywide view to a close-up of the objects, buildings, and vehicles in the city. The growing ability of mobile devices to read barcodes and to communicate with the coming generations of radio chips that will replace barcodes is making it possible to click on the real world and expect something to happen.

  The Marriage of Bits and Atoms

  The barcode—that enigmatic band of stripes printed on most manufactured products—was an early bridge between physical and virtual worlds. The idea originated in the 1930s with a Harvard business student who invented an “automated grocery system” using punch cards. His idea did not catch on.40 The modern barcode dates to 1949 and was developed by Norman Woodland, a graduate student and teacher at the Drexel Institute of Technology. The technology lay dormant until 1973, when Woodland’s design for IBM was chosen by the grocery store industry and later named the Universal Product Code. In 1981, the U.S. Army began using it to label its equipment. Today, Federal Express is the world’s largest user of the barcode. Five billion codes are scanned every day in 140 countries.41

  Among the many changes made possible by barcodes was a transformation of manufacturing worldwide from a warehouse system to a “just-in-time” system; as automobiles and other component-based systems (including grocery store inventories) are assembled, barcodes and data networks coordinate the manufacture and shipment of future components in tightly synchronized streams. Wal-Mart achieved dominance largely through its global, instantaneous inventory management system.

  When you add a barcode scanner or a radio frequency identity tag reader to a handheld device, it becomes easy to link a Web page or other online process to a tag that is physically associated with a place or object. Today, people can point a reader at an object and view relevant content on the screen of a pocket computer or hear spoken information by means of text-to-speech through a cell phone. A company called Barpoint allows users of existing cell phones, pagers, and wireless computers to swipe a barcode with a portable reader or use a telephone to call an automated service and enter the barcode of any item through the keypad.42 The Barpoint service then provides pricing information and offers to complete an electronic order for the item. This simple capacity might set the stage for significant shifts in power between consumers, retailers, manufacturers, and online merchants. For example, widespread use of wireless handheld devices could turn every bookstore on earth into a showroom for Amazon.com.
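  A sketch of the lookup that a Barpoint-style service performs, with a hypothetical endpoint and response shape standing in for the real one, which the text does not describe:

```python
# Barcode-to-Web lookup sketch; the URL, parameter, and JSON keys are
# illustrative assumptions, not Barpoint's actual interface.
import requests

PRICE_SERVICE = "https://example.com/lookup"  # assumed endpoint

def lookup_product(upc: str) -> dict:
    """Given the digits scanned or keyed in from a barcode, return pricing
    and ordering information for that item."""
    response = requests.get(PRICE_SERVICE, params={"upc": upc}, timeout=10)
    response.raise_for_status()
    return response.json()  # e.g. {"title": ..., "best_price": ..., "order_url": ...}
```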

  Barcodes require a line of sight to laser readers and must be read one at a time, and the information they encode cannot be changed dynamically. In the 1980s, researchers started looking at radio frequency identity (RFID) tags as electronic successors to the barcode. RFID tags store, send, and receive information through weak radio signals. Active tags contain tiny batteries and can send signals more than one hundred feet, depending on power and radio frequency. Because of the batteries, active tags are the more expensive kind; today they are used for tracking cattle, for tagging merchandise in stores (those bulky plastic anti-theft devices contain small RFID tags, and the gates near store exits are tag readers), and in automatic toll systems for automobiles.

 
