Pixels and Place


by Kate O'Neill


  Perhaps a lesson to take away is that there might be opportunity in exposing the customer experience to a little randomness, as long as it doesn’t interfere with the customers’ intentions. A little unexpected cross-sell of something charming, a quirky-but-fun site feature, something surprising and fresh—these types of experiments with commercial randomness might be worth trying in your environment, to see how customers respond. Because with all of the filtering we’re presented with, the savvy shoppers out there might be picking up on the sometimes heavy-handed crafting of custom-tailored experiences. And maybe, just maybe, we’re all overdue for a little serendipity anyway.

  The overemphasis on personalization just may be making serendipity more interesting. Showing a customer a little randomness alongside their personalized recommendations serves two purposes: First, it provides a little relief to the person seeing the recommendations, since they’re so used to being creeped out by hyper-targeted content. Second, it offers the chance of a spark of inspiration: something the person may have meant to look up and forgot. And for you as a marketer, it serves as an interesting baseline. It’s a control, and it can help you gauge how effective the personalization is.
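
  To make the idea of randomness-as-control a little more concrete, here is a minimal sketch in Python of how a small share of random items might be swapped into a personalized feed and tagged for measurement. The names (blend_recommendations, serendipity_rate) and the catalog items are invented for the example; this illustrates the general approach rather than any particular recommendation engine.

    import random

    def blend_recommendations(personalized, catalog, serendipity_rate=0.1, seed=None):
        """Swap a small share of random catalog items into a personalized list,
        tagging each slot with its source so later click or conversion tracking
        can compare the random 'control' slots against the personalized ones."""
        rng = random.Random(seed)
        blended = []
        for item in personalized:
            if rng.random() < serendipity_rate:
                # Swap in something the recommendation engine did not pick for this user.
                alternatives = [c for c in catalog if c not in personalized]
                blended.append({"item": rng.choice(alternatives), "source": "random"})
            else:
                blended.append({"item": item, "source": "personalized"})
        return blended

    # Example: compare click-through on "random" vs. "personalized" slots
    # to estimate how much lift the personalization actually delivers.
    print(blend_recommendations(["tea kettle", "travel mug"],
                                ["wool socks", "novel", "tea kettle", "candle"],
                                serendipity_rate=0.25, seed=42))

  Tracking the two sources separately is what turns the sprinkle of serendipity into a usable baseline.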

  It is often said that we are drowning in information. (We’re drowning in misinformation, too.44) But I think we are adapting more quickly than we realize. We are developing increasingly sophisticated filters to tune out what we don’t want to see, hear, or know. But why are we susceptible to being in a bubble? Precisely because we are not seeing the meaning of what we take in.

  Humanlike Nuances

  There are subtleties about being human and dealing with humans that we sometimes underestimate.

  In the age of machine learning/artificial intelligence/cognitive computing, we have to ask what it means to be human, and what is uniquely human as opposed to a learnable response.

  More and more often, machines are accomplishing feats of intelligence and reasoning once thought to be human-dominated, from winning a game of Go, whose exhaustive possibilities seem to require something like intuition to evaluate,45 to extrapolating patterns from a single example,46 and more. Something is going on there that will surely inform the way we richly experience our surrounding environments.

  Empathy, for example, seems as if it would be one of those traits, but there are examples of AI being taught to show empathy, or at least make decisions on factors that resemble intuition and human insight. A machine-learning algorithm has been able to identify tweets sent under the influence of alcohol.47 Google has been feeding romance novels to its artificial intelligence engine to give it a framework for the nuances of emotion and empathy, in hopes that this will make it more conversational.48

  Logical analysis has limits. We rely on complex systems of intuition, empathy, and insight in making ethical choices, and passing these burdens to artificial intelligence doesn’t yet seem possible.49 But that doesn’t mean it won’t happen, and it doesn’t mean there aren’t developments in the meantime that use humanlike interactions in technology to aid our human experience.

  For example, someone built an app that gives computer-generated directions the way a human would.50 Or might.

  Image via Walc

  Walc is an app that uses nearby landmarks to orient the user throughout the directions. Take a left after the Apple Store, walk towards Macy’s, and so on.

  It’s not necessarily context-aware, but the app was built with an awareness of the context a human might be in when needing directions—navigating through an unfamiliar part of town, or looking for cues that they’re on the right track.
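
  The general idea behind landmark-based directions is easy to sketch: find the landmark nearest to each turn and phrase the instruction around it. The Python below is a hypothetical illustration only; the landmark list, coordinates, and function names are invented for the example, and this is not a description of how Walc itself works.

    import math

    # Hypothetical landmark data; a real app would pull these from a places API.
    LANDMARKS = [
        {"name": "the Apple Store", "location": (40.7637, -73.9730)},
        {"name": "Macy's", "location": (40.7508, -73.9885)},
    ]

    def nearest_landmark(point, landmarks):
        """Return the landmark closest to a turn point (lat/lon tuples).
        Plain Euclidean distance is a rough but adequate proxy over a few blocks."""
        return min(landmarks,
                   key=lambda lm: math.hypot(point[0] - lm["location"][0],
                                             point[1] - lm["location"][1]))

    def humanize_step(maneuver, turn_point, landmarks=LANDMARKS):
        """Phrase a routing step around a nearby landmark instead of street names."""
        return f"{maneuver} after {nearest_landmark(turn_point, landmarks)['name']}"

    print(humanize_step("Take a left", (40.7636, -73.9732)))
    # -> Take a left after the Apple Store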

  The evolution of integrated experiences will inevitably involve algorithms and artificial intelligence, but there’s still something to be said for weaving in those qualities we consider to be uniquely human, even as some of those characteristics become less unique to us.

  The Guided Experience Economy: Artificial Experience and Conversational Commerce

  There’s a lot of talk about the “experience economy,” and I do think experiences are going to be increasingly important to design for—hence, this book—but I think there’s a more interesting facet of this overall economy that gets at something more transformative.

  If you combine the notion of the experience economy with what we’re seeing of conversational commerce, then you end up with something a little more like these:

  Voice

  Bots

  Chat

  Social

  Some, like Sarah Guo of Greylock Partners, are calling this combination the “conversational economy.” As she wrote in VentureBeat, the term conversational economy encompasses the growth of:

  1) messaging applications, 2) voice-controlled computing, 3) bots and services that sit—or just start—within messaging apps / voice-controlled hardware, and 4) enabling, picks-and-shovels products.51

  Yet the point isn’t so much the conversation. After all, very few people are excited about having conversations with computers. Instead, what it really speaks to (if you’ll pardon the pun) is the richness and agility of the guidance that can be offered through these interfaces. So I’m calling it the “guided experience economy.” As these lightweight and adaptable forms of interaction develop into mainstream usage, we will see some very interesting opportunities across industries for how the guided experience economy can play out both online and offline.

  Icelandair is providing a very helpful proof of concept of what the guided experience economy looks like in practice offline. PSFK.com reported that for a limited time, “Those embarking on an Icelandair Stopover can request the complimentary companionship of an Icelandair employee, who then serves as a field and cultural guide to the wintry wonderland.”

  That facet of the experience economy, offering a physical service that augments the experience of a place, is a placemaking interpretation. But the opportunities are enormous when those placemaking instincts are aided by technology.

  For example, what if you could combine that idea with autonomous/driverless cars and AI developed to personalize recommendations for traveler preferences? You end up with a very interesting beginning to a whole new macro platform that goes beyond delivery services and into AI-guided tourism and more.

  You also end up with some very useful adaptability. During college, when I was a German major, I picked up a job giving German-language guided tours of Chicago. One Saturday morning, I stood in the lobby of the hotel where I was to meet my guests and held up my tour guide sign. It didn’t take long to realize that this particular group was actually Greek, not German. It was probably a simple data entry error by whoever took the booking—after all, German and Greek were most likely directly next to each other in the list of languages on some tour order database screen. But with my employer’s office closed on Saturdays, it was too late to swap out tour guides; and although no one in the group spoke English, one member happened to speak German. So I gave the tour in German, and he translated, and we all had a great time.

  It’s a fun story to be able to tell, although not all such stories would have charming outcomes. Perhaps an intelligent assistant able to deliver information in a variety of languages on a self-driving tour bus would improve the guests’ experience. Then again, perhaps the mix-up made the tour more fun than a correctly assigned guide would have.

  It’s also a delightful example of how the story of technology’s role in human life is so often about being helpful until it’s spectacularly and wildly unhelpful. Perhaps our role as designers of experience is to anticipate as much as we can of the process so we can alleviate as much unhelpfulness as possible.

  ***

  As some designers have been saying, the new UI is no UI. Or it’s at least a user interface that differs significantly from the windows-based screen interfaces we spent most of the ’80s, ’90s, and 2000s thinking of as “computers.” Although touch, gesture, and haptic feedback in general have been around in various forms for decades (who had one of those vibrating game joysticks back in the day?), their uses have become far more common since the popularization of touch screen devices like the iPhone.

  That’s all part of the shift, but aside from the evolution toward touch-based and haptic interfaces (especially in VR), voice is a natural and faster mode of interaction and navigation for many everyday needs. Voice-based devices like Amazon Echo and Google Home are the beginning of what is likely to be a wave of adaptive assistants for the home and beyond in the coming years. And chat-based interactions are mobile-friendly and well suited to situations where speaking aloud would be a more socially awkward way of interacting.

  Now what to do with all of that in placemaking?

  An app-driven experience in a place that people visit only rarely, like a museum, a zoo, or a ballpark, can be cumbersome to initiate. These places often rely on QR codes that link the visitor to a download of the app, and then the visitor has to fumble through remembering their device password, setting up an account, and so on.

  Photos I took at the Irish Hunger Memorial

  As a straightforward example, here’s an opportunity I encountered recently where conversational interaction, or something other than a QR code leading to an app, could have been useful. I happened upon the Irish Hunger Memorial while walking through New York City’s Battery Park City area, and since the Irish famine was part of my ancestors’ experience, I wanted to find out more about the memorial and the famine it commemorates. Attached to the wall of the memorial was a laminated sign that offered a downloadable app, complete with a QR code linking to the download.

  There were several problems with this: First, I’d deleted the QR code reader app off my phone some time ago due to lack of use and lack of relevance, so I didn’t have the wherewithal to first download a reader, then scan, download, and set up a dedicated museum app. Most visitors to monuments, memorials, and even museums are just not going to do it. Asking visitors to download a dedicated app demands a disproportionate level of involvement and commitment for what is most likely going to be a one-time visit. Second, I didn’t have a very strong cell signal, and my battery was getting low; so doing all that downloading would have been too resource-intensive for my circumstances.

  On the other hand, a chat-based interface would have been ideal. The sign could have offered a short code visitors could text to start the interaction, with prompts for requesting more information. Even directing me to social media accounts where there might be some explanatory multimedia would have been helpful and less resource-intensive than downloading an app (or two apps, in situations like mine). And that might have earned the accounts an ongoing follower. (For more on museum opportunities, see “Museums and Interpretation: Holding Space for an Idea.”)
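
  As a rough illustration of what that short-code interaction might look like behind the scenes, here is a minimal Python sketch of a keyword-to-reply router. Everything in it (the keywords, the reply copy, the function name) is hypothetical; a real version would sit behind whatever inbound-message webhook the SMS provider offers.

    # A sketch of the keyword-driven SMS flow imagined above. The short code,
    # keywords, and reply text are all hypothetical; in a real deployment this
    # function would be wired to an SMS gateway's inbound-message webhook.

    REPLIES = {
        "HUNGER": ("The Irish Hunger Memorial commemorates the Great Famine of "
                   "1845-1852. Reply HISTORY for background or VISIT for visitor info."),
        "HISTORY": "Reply 1 for causes of the famine, 2 for emigration to New York.",
        "VISIT": "Reply HOURS for opening hours or MAP for a walking map.",
    }

    def handle_inbound_sms(body: str) -> str:
        """Route an incoming text message to the appropriate canned reply."""
        keyword = body.strip().upper()
        return REPLIES.get(keyword, "Welcome! Text HUNGER to learn about the memorial.")

    print(handle_inbound_sms("hunger"))

  The appeal is the low barrier to entry: no reader app, no download, no account, just a text message and a reply.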

  For retailers, the applications are abundantly clear. (See “Retail: Transcending the Transactional and Creating Value Beyond the Purchase” for more on this.) But for other industries, it might take a little more creativity. How will museums make use of conversational chatbots? How might healthcare make use of voice assistants? We’re already seeing an inkling of the answer: Amazon Echo can now give light medical assistance in diagnosing symptoms. “Alexa, I have a headache. What should I do?” “Do you have a fever?” “No.” “Take aspirin.” That may not be exactly how that conversation goes, but it’s the gist. And other industries will follow suit.

  CHAPTER TEN

  Considerations for Meaningful Human Experience Design

  When you’re dealing with a topic that’s changing every single day, it’s a challenge to write a book in a way that will allow it to remain relevant for years to come. But there are elements that are common to the design of meaningful human experiences that I’ve observed and validated over years with a wide variety of clients, and these elements are timeless. So this next section takes a step back from looking at specific examples of technologies and implementations to look instead at some frameworks and tools that can help you and your organization design more meaningful experiences that integrate and transcend physical and digital surroundings.

  Some of the key elements of meaningful human experience design are:

  Intentionality (which you could also call Purpose)

  Dimensionality

  Metaphors and Cognitive Associations

  Value and Emotional Load

  Alignment and Effectiveness

  Adaptation and Iteration

  These elements really come to life when considered alongside the process for cultivating meaning in experience overall. I have developed what I call the Meaningfulness Model to help experience-makers of all kinds (marketers, entrepreneurs, placemakers, etc.) achieve greater meaning through strategy and design.

  The Meaningfulness Model is intended to be iterative, as the arrows suggest. Loosely, you’d start in the top center, with purpose, and work your way around. It’s never that simple or neat, though, because insights often show up unannounced, out of sequence; and real life doesn’t always stick to the plan. That’s okay—it can still be helpful as a directional tool, as a guide for making good decisions and moving toward greater insights and greater success.

  As you go through the process of experimenting and learning what helps you fulfill your objectives and what doesn’t, what resonates with customers and what doesn’t, what keeps visitors coming back and what doesn’t, and so on, you can adapt the technologies you put into place to integrate physical and digital experience in more and more meaningful ways.

  In the next sections, I’ll look more closely at intentionality, dimensionality, semantic layer and literal messaging, metaphors and cognitive associations, and emotional load. These elements, particularly when explored through an iterative learning approach like the Meaningfulness Model, will help focus your efforts toward achieving memorable and meaningful experiences that integrate the realities of physical surroundings and available technology in seamless ways.

  Intentionality, or Purpose

  One of the key ingredients in meaning-making is purpose, and because we know this, it stands to reason that one of the foundational elements in the Meaningful Human Experience Design framework must be related to purpose. But I like to call this “intentionality” because this element is not only about having a clear sense of what we want to do or what we hope to achieve, but also about committing to being intentional at each step of the way toward that outcome.

  Even in my strategy workshops, I usually start by asking clients to spend just a few minutes working with me to formulate a statement that encompasses the objective for the day, for the project, or for the campaign. Sometimes that takes the shape of something like: “Define strategy and develop a testing roadmap for the launch of the XYZ product,” or “Identify relevant messages to use in sales calls with different stakeholders.”

  In other words, what would success look like? We’re setting our intention, and that helps keep the meeting on track. That also leads to clarity around whether what we’re discussing now is getting us there. Discussions that might have seemed off-topic can sometimes lead to uncovering roadblocks to the outcomes.

  However you approach this in your environment for your projects, it’s just important that you do it. You might call it purpose; you might just call it strategy. You could do this as a task force or as an individual. You don’t need to stick to any sort of script. What matters is that each time we set out to design the experiences that human beings are going to have with our ideas, our brands, and our places, we are giving honest and thoughtful consideration to the dynamics that are at work or will be at work in the bigger picture and are trying to articulate how we hope to influence the outcome.

  Thinking about the role of place in experience design presupposes some purpose anyway; a bathroom and a kitchen have quite different human experience design considerations.

  Also, the human experience of the place is bound to have some sense of stillness within it or movement through it, as discussed in the earlier section, “Movement Versus Stillness.” I either want to be there or I don’t, and if I don’t want to be there (such as in a hospital), I need to sense that you empathize with that. I want to feel you’ve designed my experience to be as easy and as painless as possible. That goes into the journey mapping (also discussed earlier in “Maps: Mapping Physical Place Versus Maps of Experience: Empathy Maps, Journey Maps”), the considerations of process and progress and flow and movement through.

  You can design that awareness into the physical elements of the hospital building itself, into the training given to the receptionist, into the website, into the wayfinding signage. But it also goes into the technology considerations, such as whether to use an electronic medical records system that provides patient access to test results and a simple appointment scheduling interface. It goes into the data management decisions that pertain to security and file storage. It goes into the policies that protect the patient data, and when it is done well, it goes into the culture of the hospital staff, too. It goes everywhere, because purpose is at its best when it is more than a mission statement; it is at its best when it provides the foundation for the whole operation. (Operation! No pun intended.)

  There has to be some awareness of purpose in experience design to be able to employ appropriate technology. Otherwise, it’s too easy to get caught up in what’s trendy. You cannot force a solution onto a problem you haven’t taken the time to define, and the result is bound to feel clunky. (See my example in “The Guided Experience Economy: Artificial Experience and Conversational Commerce.”)

 
