UX Week 2014 Keynote: Josh Clark on the Future of Digital Product Design

    Josh Clark’s particular mix of practical realism and thoughtful reflection is an ideal match for UX Week. Josh is the author of the book Tapworthy: Designing Great iPhone Apps, and frequently gives workshops and talks at conferences around the world. In this interview, Josh and I talk about the current trends in digital product design, and where those trends might be heading.

    Jesse James Garrett: Josh, tell me a little bit about how you got to where you are and how your work has evolved in recent years.

    Josh Clark: My career path naturally didn’t begin with internet technology or mobile or multi-device stuff. It’s funny. I have a 14-year-old daughter, and when I was her age, the stuff that we do now didn’t even exist. Which makes me wonder, when she is my age, what new things are going to exist that don’t now. But I started out as a storyteller, which is very much still what I think I do now.

    I began as a journalist, as a documentary producer for public television. One of the great things about that was learning how to tell a formal linear story in dramatic terms. But then the Internet burst on the scene, and all of a sudden, there was this really accessible new form of storytelling, very democratic in terms of access, where the rules hadn’t yet been decided, they were very much evolving. So I really jumped in with both feet.

    I do think we are storytellers; we craft and shape and mold stories and behaviors. I think that those stories have grown especially personal as our devices have grown more personal. So, for the last several years, the experience that’s been exciting me has been the mobile experience, and that’s where I’ve been focusing my efforts. What’s exciting about mobile is how much it has let us start to make digital experiences more physical. We’re etching digital interfaces onto these slabs that we carry out into the world, which has I think led us to what feels like the next great revolution of user experience and digital computing, which is embedded computing in the environment around us. An environment where our interfaces aren’t just screens, but literally all the objects and places we encounter. It’s both exhilarating and terrifying on a lot of levels.

    JJG: It almost seems paradoxical that as hardware's influence over user interaction has been reduced, with the shift to touch interfaces, we've been able to let software manifest in more physical ways.

    JC: It really has. I think that one of the big challenges of adapting to touch screens, for example, is that before, we had purely visual screen interfaces, where we were concerned with whether it visually made sense. Then touch suddenly brought in the centuries-old influence of industrial design. It’s not just how your pixels look but how they feel in the hand. So on the one hand, software is what’s giving hardware and dumb objects superpowers (or promises to), and yet our interactions still have to be physical because these are after all physical objects, and our brains have evolved over millions of years to experience the world through a physical interface.

    JJG: Like you, I have always seen design through the lens of storytelling. One way that I sometimes put it is that every product tells a story: about itself, about the person who uses it, and about the world that the product and the user inhabit. That story comes through in every aspect of the design. How have you seen the stories that you’re telling through your design work change over time?

    JC: The very first generation of the web was still fairly passive. There was interaction, but it was at the level of a Choose Your Own Adventure book. Ultimately the experience is what we prepared for you. But what has become especially exciting and daunting and a little scary is how data has begun to change everything.

    With mobile, we suddenly had this device that was throwing off tons of data about what’s around us, through things like GPS or audio or camera or compass, gyroscope, now fingerprint reader. These devices are just full of sensors that gather data and share it, and suddenly we’re able to get these experiences that are truly interactive and contextual. And if we do it correctly it’s also predictive, which makes us nervous. The idea is that if we give these algorithms enough data, whether that’s from our phones or our search history or someplace else, our products can finally tell a story that the user wants to hear. But I think it’s important that we actually remember who the protagonist in the story is. I think it’s easy as technologists for us to think that our technology is the protagonist, but it’s not just about the story of the technology or the story of the company behind it, but importantly, perhaps I would say most importantly, the story of the user. Are we making choices that address their needs?

    With mobile, I see so often that it’s easy to confuse context with intent. We know where they are. We can even guess what they’re doing. We can even listen in, if the user gives us permission, on what’s happening around them. But it’s dangerous to use those relatively dumb cues to assume intent. It’s important, as we get to this really exciting kind of data-driven storytelling, that we honor intent but not assume it.

    JJG: There is definitely a sort of church of big data, which holds that with enough data plus enough computing power plus the right algorithm, all truths will be revealed. I think that there’s a danger there, of people projecting interpretations onto that data that may be unfounded. And then they feel so confident in that simply because of the scale of the data they have. Then they decide that they don’t need to do any qualitative validation to make sure that those interpretations are correct. Or worse, they don’t even necessarily acknowledge that within their algorithms are hypotheses about people’s motivations and the why of what they do.

    JC: Data can lull you into a sense of reassurance that your hypotheses and belief systems are ironclad when that’s not necessarily so, especially when you’re getting into the complexities of large patterns of huge numbers of human beings and their behaviors.

    JJG: I think that an algorithm can have an agenda, and that agenda may not even be something that the people responsible for creating the algorithm and interpreting the results it generates are conscious of. They may not even be aware of the way in which they have built their biases into the machinery that they use to find these patterns.

    JC: I think it’s really important to remember that software is ideological. There are politics to software, in terms of the values and biases that you embed into it. And in a business where we design and shape paths of behavior, it’s important to acknowledge what those values are, so we can be conscious of them as much as possible. I think that the more you can keep your eyes open to understand what those motivations and agendas are, the better served you’ll be. I think there’s a natural human impulse to assume that our initial guess was right, and so we’re sometimes blind to our own agendas or biases.

    JJG: It’s also possible that this kind of bias could emerge from the interaction of different algorithms working together. So you couldn’t even point your finger at a particular piece of code and say here’s where the bias was built into the system. Instead it came out dynamically as a result of running the system.

    JC: The more complex, the more surprises. Behind all this is also just our enthusiasm for making something that’s new. The other thing I’m seeing is this idea of passive interfaces, where the systems are smart enough to do things on our behalf. Let’s create an automated home that will just do the right stuff for us or Google Now that will just anticipate what we want to do. And it’s something that makes people a little bit anxious. So now the question is, how do we reassure humans that they are in control of this thing as we start to give machines more decision power?

    JJG: I think it’s interesting that the more independent agency these tools and systems seem to have, the more sensitive we become to the personality traits that those systems project. If your system is taking a lot of action on your behalf without your explicit consent, is it better for that to be something that is cute and cuddly, so you feel safer, more in control?

    JC: I think that this idea of creating very anthropomorphized interfaces is the third rail of user experience. It’s super-tricky to get right. The risk is that you’ll make something that is just totally inappropriate in some way or another. And it’s hard especially when you’re addressing large audiences. The personalities that are great for one person irritate another. So, I think that deploying something like a Siri or Google’s Voice stuff, that’s tricky. How do you choose the right voice? How do you choose the right manner? Siri is a little sassy but generally not too much. Those are really tricky, especially if you start to give them actual human attributes like the voice.

    I think an important piece of this emotional context is the idea of simply feeling in control. We need to talk about this stuff because a lot of people, I think probably the majority of people, find it really unsettling and don’t like it. As computers start to have the capability of controlling the most intimate places of our lives or even being embedded in our bodies, we have to be conscious that the ideology we should bring to our software is that humans are in control, that there’s always an off button.

    But there’s also the development of social interfaces, and I mean that in a couple of ways. One is social between devices, where networked gadgets can start to share information and make corrections based on their own systems. They’re talking and doing things on our behalf without having to interfere with our lives or interrupt us with mundane or dull details. So these devices have social interfaces with us, but also social interfaces with other machines, and the machines themselves become social in some way. What happens when you can follow your house on Facebook or your car on Twitter, or your running shoes start following you? What are the possibilities there, and how do we talk to those devices? There are already chat interfaces exploring the next gateway, using simple commands that let us talk to them in the way that we talk to one another all the time.

    But with all these interfaces, I don’t think it has to be all or nothing. I think we tend to think in technology circles that this new thing will replace that old thing. But usually, there are long periods of coexistence. It’s not that tablets are going to kill laptops. They’re going to coexist for a long time. We have options, and it makes our job as user experience designers much more complicated.

    JJG: Thanks Josh, and we look forward to your keynote at UX Week in September!
