Steve Fadden is a UX researcher and a lecturer at the UC Berkeley School of Information. He has wide and varied experience in academia and the enterprise space, having worked for the likes of Lockheed Martin, PeopleSoft, Intel, Dell, and Salesforce, where he was Director, Analytics UX Research. Steve has recently joined Google as a Research Manager.
We recently caught up with Steve to talk mainly about UX research, the implications of emerging technologies like AI and AR/VR for UX research, and, most importantly, the ethical issues that confront UX designers and the guidelines they need to be aware of.
Steve has been a regular at the UX India conference, where he is extremely popular with participants because of his accessibility, his zeal to share his experience, and his willingness to mentor.
Abhay: Tell us something about your academic and professional background.
Steve: I am a Ph.D.-trained engineering psychologist. I study how people think, learn, decide, and perform, and I also think about how systems are designed. It’s really about the interaction between systems design, from both a hardware and software perspective, and cognitive psychology and perception.
I got my Ph.D. back in the 1990s, as the internet was starting to become more well-known and software was becoming something that people – lay people, I guess – had access to, in addition to specialists. My initial research was all around aviation: studying the design of a cockpit and its instruments, and how that design influenced performance and perception. Since then, I’ve worked in aerospace, enterprise technology, and consumer technology, and I had a period of time when I worked in consulting and academia, doing mostly grants and funded projects that focused on research and systems to improve the performance of operators – everything from air traffic controllers, to pilots, to nuclear plant security specialists, all the way to students and staff, in terms of helping people learn and helping people train. And then over the last 7 or 8 years, I’ve been back in technology, having a really great time trying to understand how to design systems to be more learnable and understandable for people in analytics – helping people understand data and the relationships in the data, and also understanding what signals we tend to find in data that give us an understanding of how people are learning, thinking, and feeling.
I forgot to mention my teaching. I’ll just say that I’ve also been teaching pretty much since I completed my Ph.D. I’ve always made sure to teach on the side, and I’ve been fortunate to be able to teach at the University of California, Berkeley for the past 4 years, I think.
Abhay: That’s a great introduction.
Steve: Thanks!
Abhay: I’m interested in UX research, and one of the questions that comes up is: do you think UX research done remotely can be as effective as face-to-face interaction with users?
Steve: Yeah, it’s an interesting question, and I think it’s really complex. My short answer to this question – and probably to every question you’re going to ask me – is: it depends. Ultimately, if the research you’re doing is more focused on ethnography, on understanding culture and situated cognition (how people think in certain contexts), I think there’s a huge opportunity for in-person, face-to-face interactions, so that you can actually see how the whole person is reacting, and you as a researcher gain a much deeper respect and understanding for the context those people are in.
Humans always work in concert with their environment. There’s this idea of distributed cognition, and situated cognition, which hold that the environment and our tools program how we think. We don’t necessarily act 100% autonomously; we’re always influenced by our contexts. So, I think for a researcher to have a good understanding of that, it’s important to be able to see that interaction face-to-face.
Having said that, if you’re trying to understand, for example, how a person thinks through a task, or how a person performs (usability), or you’re trying to get a better understanding of how a person would react to a system or a proposed concept, I don’t think it’s necessarily critical to be face-to-face. It’s always nice to be face-to-face, but I think you can get 80-90% of what you need through a technology-mediated interaction like the one we’re having right now in an online remote call.
There are also some benefits to not being face-to-face, and Indi Young, the author of Practical Empathy, writes about some of these benefits, which include things like not having to worry about body language, and not unintentionally cueing a person toward an answer you hope to get (we all have biases, and we all have desires to hear certain information more than other information). I think when you are in a purely verbal channel, or a channel that’s not completely immersive and face-to-face, there’s the opportunity to focus in on the message without worrying as much about biases. Of course, bias always comes in – you can hear me talking, hesitating, and struggling to find the right words. But when you combine that with face-to-face, you might see facial expressions, you might see the way I hold my posture, and that might actually distract from the main messages we’re talking about. Alternatively, if you were looking to understand emotion, or trying to understand how I perform with a system, it would be great to see how I do that.
…as companies try to make decisions and organizations try to make improvements faster, I think remote technology mediation really affords us a lot more than it detracts…
So, I think remote research will always have a place, and honestly, as companies try to make decisions and organizations try to make improvements faster, I think remote technology mediation really affords us a lot more than it detracts. Being able to talk to people in different countries all in the same day is something that’s really not possible in person on a budget. I do think that remote interactions in general can be very effective, but again, it depends on what you’re hoping to get out of your research.
Abhay: Right! I actually wanted to ask you: does exploratory or ethnographic research play a part in enterprise projects that are essentially about building back-end software products and services, or is it mainly about user testing – i.e., you build prototypes, you build software, and then you test them out with users? Which type of research plays a larger part in your day-to-day work?
Steve: Yeah, again, it’s going to be one of those “it depends” types of answers. I think the role of understanding the influence of the environment, of other people, and of technology on behavior and thinking can’t be overstated. So, whenever you’re doing any kind of research, I think there’s always a benefit to an exploratory aspect of it. Even if you’re doing what many would consider “simple” – and I use quotes to call it simple – like a simple usability study, there are still so many contextual factors that matter. Let’s pretend I’m testing an app: when is the app being used? Where is it being used? If I’m with a customer or a person who’s trying to use the app, is that even consistent with the context they usually use it in?
So, I’ve worked in the field of analytics for the last five and a half years in my current role, and for many, many years before that. If a person is trying to understand data, or data visualizations, or meaning from data, and they’re doing it at a large desktop computer in a quiet office environment, that’s a very different context than trying to understand data in a parking lot with bright solar glare, where the sun is beaming down and it’s hard to see the mobile device. It’s important to understand those things, and it’s also important, if I’m doing a usability study, to understand whether the tasks I’ve contrived for the study are remotely realistic for the person. I might say “try to accomplish tasks a, b, and c” when the person I’m working with has never actually done that, or when it’s not completely accurate in terms of how they try to accomplish their work.
It’s much better for me to understand that at the start than to just run through a script of “try to do the following things with our tool.” I think exploratory research always has a place, and really the question becomes one of how well a researcher can convince their stakeholders that doing exploratory research, even in a usability study setting, will ultimately have a much bigger payoff than not doing it.
I think exploratory research always has a place, and really the question becomes one of how well a researcher can convince their stakeholders that doing exploratory research, even in a usability study setting, will ultimately have a much bigger payoff than not doing it.
It’s really not just about gathering data on how many users completed a task, or how many seconds or minutes a transaction took. Those metrics might be important for an organization, but if you don’t have the context around them, you lose so much valuable information. And so, to anticipate where a version of this question can go, I can say that for the most part, in my day-to-day life and in the work I’ve been doing with various research teams, a majority of our research is really more exploratory.
I remember back in the 1980s and 1990s, at some companies we were just fighting for usability to be recognized as something important, and the way to identify and assess the usability of a product was through usability testing. Well, now we’re 20-30 years in the future, and I think a lot of organizations acknowledge that usability is just as important as regression testing, automated functional tests, or any kind of test you would do with software and hardware. So UX research, I think, has moved on to the next big question, which is: what are the strategic and exploratory things we should be doing that influence where our companies are going?
Abhay: What about all these tools that are now coming up – analytics tools that build behavioral models of users as they interact with digital touch-points, mobile applications, etc.? There are a number of tools out there. Do you think these tools help? Do you make use of them in your work?
Steve: Sure. Yeah, we use them. I would argue that I see them used less by UX researchers and more by product managers and quantitative researchers – so, data scientists, but also usage analysts. A well-rounded UX researcher will use all the tools at their disposal to develop a better understanding of the people they’re designing for. We use a lot of the analytics tools primarily to understand questions of quantity: How many people are doing this? Where do they do it? How often do they do it? As a percentage of the user population, how representative is this kind of behavior if a person clicks on a certain thing or goes down a certain path? A former manager of mine had a great saying: “Analytics is great in terms of telling you the what, but it doesn’t tell you the why.” His phrase was that you need both qualitative and quantitative research to help you fill in the numbers. He called it coloring by numbers, and I always think back to the coloring books I had as a child, where the book has an image, maybe of a tree, and the trunk of the tree has a number 1 on it, and you look at the key and the key says 1 = brown. So, you take the brown crayon, and you color with it.
I think the same thing happens with quantitative and qualitative research. We have the analytics, but we don’t really understand the color. So, you might have a conversion funnel that says X% of your users are going down this path and Y% complete the path, but you don’t know why. You don’t know why some users complete the path and others don’t. You don’t know why it’s these kinds of users versus another kind of user. So, you need the qualitative research – ideally ethnography, field research, observational research, deep interviewing, listening, and “deep hanging out” – those types of activities, to really understand why people are doing what they’re doing. And even if you are only looking at analytics, you will still have a desire to understand the why behind the numbers. I’m fortunate to work for an organization that has a lot of the data embedded so I can see it in aggregate, but I still need to start talking to, observing, and working with people one-on-one to understand why.
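To make the “what” side of that concrete, here is a minimal sketch – with a hypothetical event log and made-up step names, not any particular analytics product – of the kind of funnel numbers such a tool reports, and of everything those numbers leave out:

```python
# A minimal funnel sketch over a hypothetical event log:
# one row per (user_id, step) event. The step names are illustrative.
import pandas as pd

events = pd.DataFrame({
    "user_id": [1, 1, 1, 2, 2, 3, 3, 3, 4],
    "step":    ["visit", "signup", "purchase",
                "visit", "signup",
                "visit", "signup", "purchase",
                "visit"],
})

funnel_steps = ["visit", "signup", "purchase"]
total = events.loc[events["step"] == funnel_steps[0], "user_id"].nunique()

for step in funnel_steps:
    users = events.loc[events["step"] == step, "user_id"].nunique()
    # Analytics tells us *how many* made it to each step --
    # it says nothing about *why* the rest dropped off.
    print(f"{step}: {users} users ({100 * users / total:.0f}% of {total})")
```

Running this prints 100% at “visit”, 75% at “signup”, and 50% at “purchase” – exactly the X% and Y% Steve describes, with the “why” still missing.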
Abhay: Do you think there will ever come a time when these tools become so sophisticated that they can actually start generating real user personas on their own, based on behavioral models? Do you think we are headed there?
Steve: I absolutely think that they can do a lot more than they are doing today, and when I take away my need to defend my job and think about it rationally (chuckles), I’m skeptical that a system will be able to make the decisions necessary to reflect what we as people value and see as important. There’s so much research showing that a lot of algorithms and machine learning systems are biased, and those biases are just a natural byproduct of building the model. So, a person needs to come in and govern that process, and to start questioning: this system has segmented my users into three groups – do the groups make sense? Even if I’m doing something as simple as a cluster analysis, I as the researcher still need to decide how many clusters we should have. Should we have three? Should we have four? Should we have five? Typically, you look through the data and ask: does three make sense? Does it make more sense to have four? And you start coming up with an underlying model, a framework from a psychological or behavioral perspective, that gives meaning to the data. So, I don’t really believe that analytic systems will do everything we as researchers do today, but I really think there’s an opportunity for them to be almost more of a partner than a tool.
I’m skeptical that a system will be able to make the decisions necessary to reflect what we as people value and see as important.
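As an illustration of the judgment call Steve describes, here is a minimal sketch – using made-up usage data and assuming scikit-learn is available – of comparing three, four, or five clusters. The metric offers a hint, but a human still has to decide whether the groups map onto anything behaviorally meaningful:

```python
# A minimal sketch of choosing k in a cluster analysis.
# The data here is random and purely illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Pretend each row is a user and each column a behavioral feature
# (e.g. sessions per week, features used, mobile vs. desktop share).
X = rng.normal(size=(200, 3))

for k in (3, 4, 5):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    score = silhouette_score(X, labels)
    # The score is a hint, not an answer: the researcher still has to ask
    # whether k groups correspond to a meaningful behavioral framework.
    print(f"k={k}: silhouette={score:.3f}")
```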
So one of the things we struggle with is, like you mentioned, personas. When we look at doing personas, there’s a need to segment data. I could interview, let’s say, 10 people who fit some kind of target population and start rounding out what I think those personas are. But if I really want to understand these folks, I should come up with a framework that explains why group A is different from group B. One way to do that could be context of use: I have one group of people who use my app on their mobile device in a parking lot, and another group who use my product in a desktop environment in an office, and they’re always separate like that. That kind of contextual segmentation might make sense. But on the flipside, I may have people who use features a, b, and c of my product in one context, and features d, e, and f in another context. So, maybe task-based personas make more sense in that world.
And then another way might be goals – what are people trying to do? If you look at an analytics user, maybe I’m trying to get insights about my data; maybe I’m trying to find patterns that I didn’t know were there. That’s a very different goal from being responsible for cleaning the data, where I’m just looking for inaccuracies and need to clean them up to make sure that the patterns and insights, or the graphs and charts being created, are as accurate as possible. Cleaning data is different from trying to get insights from data, and that might also be different from using the data to justify a decision. So, maybe goal-based personas make more sense.
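A toy example – with entirely hypothetical users and attributes – shows how the same people group differently depending on which key you segment by:

```python
# Hypothetical users, each tagged with a context of use and a goal.
from collections import defaultdict

users = [
    {"name": "A", "context": "mobile/field",   "goal": "get insights"},
    {"name": "B", "context": "desktop/office", "goal": "clean data"},
    {"name": "C", "context": "mobile/field",   "goal": "clean data"},
    {"name": "D", "context": "desktop/office", "goal": "get insights"},
]

def segment(users, key):
    """Group user names by the chosen segmentation key."""
    groups = defaultdict(list)
    for u in users:
        groups[u[key]].append(u["name"])
    return dict(groups)

# Context-based personas put A with C; goal-based personas put A with D.
print(segment(users, "context"))  # {'mobile/field': ['A', 'C'], 'desktop/office': ['B', 'D']}
print(segment(users, "goal"))     # {'get insights': ['A', 'D'], 'clean data': ['B', 'C']}
```

Neither grouping is wrong; which one is useful is precisely the framing decision Steve argues a tool can’t make on its own.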
I don’t know that even some of the most sophisticated machine learning tools would be able to figure out: should I do a goal-based persona? Should I do a context-based persona? So, I think the human will always be needed in that loop, but maybe we’ll be able to make decisions faster and better than by doing so many things manually.
Abhay: Do you think we’ll be able to create bots or robots that actually do research or ask questions like humans?
Steve: I think so. At least, I think there will be almost a kind of triage: I have 30 things I need to do as a researcher – maybe I can get a bot to do 10 of them? For example, if you look at a lot of the conversational AI that’s out in the marketplace, like the basic bots used in service, there are good examples: if you’re on your phone talking to a phone tree, it asks you a bunch of questions before you actually talk to a human, and, if they’re doing it well, by the time you get to the human, they have a better understanding of you and the problem you’re encountering. There are many systems online where you go to a website for a product and a little chat window shows up that pretends to be a human but is really a bot.
I think we do a lot of things manually that could be handled by bots – screening, for example. I want to make sure I have the right users in my usability study, and I have no doubt that a bot would be able to do that. And even basic questions – back in the 1960s there was a system called ELIZA, which was a bot that mimicked the questions a therapist would ask: “How are you feeling?”, “Why do you feel that way?”, “Tell me more…”. It wasn’t getting any direct guidance or feedback, but there was at least anecdotal research that people actually felt better after they talked to this bot, which they completely knew was a bot – they had no delusion that it was a human. So, I think bots can get us part of the way, but I don’t think they can get us all the way, and the parts that are hard are understanding when it’s time to fork, when it’s time to – you know, like you’re doing as an interviewer. A bot wouldn’t be able to do that.
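For readers curious how little machinery ELIZA-style reflection actually requires, here is a toy sketch. The patterns are hypothetical, and the original used a far richer script, but the core idea is the same: simple pattern matching that turns a statement back into a question, with no real understanding:

```python
# A toy ELIZA-style responder: reflect a statement back as a question.
import re

RULES = [
    (re.compile(r"i feel (.*)", re.I),  "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I),    "How long have you been {0}?"),
    (re.compile(r"because (.*)", re.I), "Is that the real reason?"),
]

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            # Fill any captured phrase into the canned question.
            return template.format(*match.groups())
    return "Tell me more..."  # default prompt when nothing matches

print(respond("I feel stuck on this project"))  # Why do you feel stuck on this project?
print(respond("The deadline moved"))            # Tell me more...
```

What the sketch cannot do is decide when to fork or dig deeper – the interviewer’s judgment Steve points to.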
Abhay: Yeah, I think ELIZA – at that time they used to call them expert systems.
Steve: Yeah, that’s right.
Abhay: Interesting that you mentioned machine learning, because my next question is about emerging tech like AR/VR, AI, machine learning, and blockchain. Which of these do you think is going to have the maximum impact on the design of human-computer interactions?
Steve: Yeah. That’s a really interesting question, and it’s fascinating to me. The reason I pause is that a number of students I’ve worked with are working in these areas, because these are the emerging fields. Again, I think from an HCI perspective the answer is going to be: it depends. If we look at AR/VR, there are some really nice proofs of concept and some breakthroughs in therapeutic applications – helping people through post-traumatic stress, helping people with phobias. AR and VR have been wonderful at helping to extinguish things like fear of heights, for example.
Whereas if you look at AI and ML, I think that field is really wide open. And while AI and ML types of systems have been around for years, I think we’re really starting to see their ubiquity. Ubiquitous computing is starting to become a thing with AI and ML and the various sensor technologies being used. And when I think about HCI implications, there’s the question of who’s going to be receiving the benefit, or who’s going to be interacting with the system directly – the AR user, the VR user. Then there’s also the question of who’s going to be governing that. Who’s going to be configuring those systems? Who’s going to be deploying them? Who’s going to be maintaining them? Who’s going to be watching over – hopefully – the ethics of those systems and how they’re used? So, there are lots of new opportunities for new forms of HCI, but I don’t think it will be revolutionary. It feels like it will be more evolutionary. This progression from a keyboard, to a mouse pointing device, to swipe- and gesture-based interfaces, to maybe completely kinesthetic interfaces that just respond to the way we hold our bodies – those don’t really feel revolutionary to me. So, I think HCI will continue to evolve, but I don’t know that it’s going to evolve radically.
…there are lots of new opportunities for new forms of HCI, but I don’t think it will be revolutionary. It feels like it will be more evolutionary.
And with things like blockchain, I think that’s more of an administrative type of thing – watching over a secure digital ledger. I don’t know the degree to which that’s actually going to have an impact on typical people day-to-day. I absolutely think it will have a revolutionary impact on security, finance, pharmaceuticals, and healthcare types of environments, but I don’t know that it’s going to have a huge HCI impact per se.
Abhay: You mentioned ethics – are there any guidelines that designers can follow? Recently there have been a lot of lapses from big tech companies like Facebook and Google, who aggregate users’ data, and then that data leaks out into the hands of people who don’t exactly have good intentions. Do you think user experience has a part to play there, and are there any guidelines designers can follow?
Steve: Yeah, I think this is a huge area of opportunity. And there are two people I want to plug. One is a co-worker of mine – her name is Kathy Baxter, and she works as a research architect at Salesforce. She has a great two-part blog article on Medium called “How to Build Ethics into AI,” and I would strongly recommend that anybody who’s interested check out her articles.
While I’m not aware of any standards – or, I don’t know how to phrase it, I don’t know of a list of laws saying we should do this in order to ensure ethics – there are lots of ethical frameworks. Going back in the United States, there’s the Belmont Report, which drives how we do research with humans. It directly influences medical research and clinical trials, but it also influences psychology research and things like usability testing.
There are lots of ethical frameworks, … , the Belmont Report, which drives how we do research with humans. It directly influences medical research and clinical trials, but it also influences psychology research and things like usability testing.
So, there are frameworks out there for ethics. In terms of the implications for big data, I think that’s emerging. There was a report that recently came out from an organization – I think the Norwegian Consumer Council – called Deceived by Design. It’s basically about the use of dark patterns and how they push people to engage in activities that don’t protect their privacy – they basically give their data up for free. I think the work that Kathy Baxter has posted, and Deceived by Design, are really raising awareness of the need for a better set of ethics and guiding rules.
Tristan Harris is the other person who comes to mind. I don’t know him, but he’s a former Google design ethicist who is evangelizing the need for ethics in the context of not just AI, but how we design systems – especially systems that are driven by data. So, I guess the short answer is that I don’t see any one code that researchers and designers can follow, but there are many frameworks: again, the Belmont Report, and various codes like the APA’s (the American Psychological Association) in the United States; UXPA has a code of ethics as well. I think those are great places to start, and there are two principles that I always try to remember: one is beneficence and the other is non-maleficence – never try to do any harm, and think about ways that you can actually do good with your design.
I think those are great places to start, and there are two principles that I always try to remember: one is beneficence and the other is non-maleficence – never try to do any harm, and think about ways that you can actually do good with your design.
And I think there’s a risk when designers and researchers design toward a KPI or a metric. There’s a great book called “Weapons of Math Destruction” by Cathy O’Neil on how algorithms basically promote inequality and institutionalize bias, and there are tons of great examples in that book.
I think we as designers and researchers need to realize that when we anchor our design and research to a magic number – maybe it’s adoption, maybe it’s revenue, maybe it’s reduced attrition – if we just blindly follow that, we lose sight of the whole human, right?
I think we as designers and researchers need to realize that when we anchor our design and research to a magic number – maybe it’s adoption, maybe it’s revenue, maybe it’s reduced attrition – if we just blindly follow that, we lose sight of the whole human…
And so, if I don’t care about anything other than adoption, I can create addictive experiences. If I care only about revenue, I can engage in designs that are deceptive and cause people to pay money when they didn’t intend to. So, I think being aware of these risks is really important.
Abhay: Yeah, I think I’ve heard about the dark patterns report – the Norwegian one – and I’ll check out the others that you mentioned. Thanks so much, Steve, for speaking with me.