
Cognitive Bias in UX Research: A Survival Guide

Alessandra Rodrigues Eismann
February 16th, 2023

In UX research, we want to obtain objective insights into the behavior, needs, and motivations of users. UX research matters for UX design and development because it ensures that decisions are always made with the user in mind. However, since UX research is done by humans, these insights are always affected by cognitive bias. How can we, as UX researchers, minimize the influence of these biases? And what is cognitive bias anyway?

What is cognitive bias?

A cognitive bias is an unconscious error in thinking that leads to misinterpretation of information from the environment and reduces the rationality and accuracy of decisions and judgments (Ruhl, 2021).

This definition sounds quite abstract at first. A typical everyday example of a cognitive bias is the attribution error (Ross, 1977). It refers to the tendency to attribute others’ behaviors to their personalities and attitudes (“The person in front of me at the checkout was quite curt in his conversation with the cashier, what an unfriendly person.”), while attributing one’s own behaviors to situational factors (“I don’t feel like making small talk with the cashier at all, I’ve had a long and tiring day and I just want to go home to my couch.”). The attribution error can be explained by the fact that we have a lot of information about our own behavior and motivations, but far less about other people’s. It is therefore much easier for us to justify our own behavior situationally than to justify the behavior of other people.

In psychological research, many different types of cognitive bias have already been discovered and studied. If you want to be convinced of just how many there are, take a look at the Cognitive Bias Codex and discover in how many ways our thinking and reasoning are biased:

Illustration: The Cognitive Bias Codex by John Manoogian (here with more detailed information on the individual bias types)

Looking a little deeper into the Cognitive Bias Codex, one might conclude that humans are fundamentally illogical or faulty in their thinking and reasoning. However, cognitive biases are human – they arise from the psyche’s (not necessarily conscious) attempt to keep cognitive load as low as possible. It can be argued that our psyche avoids thoroughly analyzing large amounts of information over and over again, and therefore makes use of cognitive biases (Shatz, 2023) – simply because it is more economical.

Even though cognitive biases can lead to (minor or even major) misjudgments, they have a crucial advantage: since our available attention and decision-making capacity are limited, cognitive biases help us use our attention in a resource-efficient way and make decisions quickly and easily (Dwyer, 2018). In that sense, cognitive biases are not so much biases as cognitive shortcuts that allow us to decide more quickly and efficiently (Ruhl, 2021). It would simply be too exhausting to thoroughly analyze all available information, or even seek out new information, for every decision. These cognitive shortcuts often lead to helpful and efficient decisions – but often enough to less helpful ones. And those should be avoided as much as possible in situations where objective facts matter, for example in UX research!

Cognitive Bias in UX Research

As mentioned before: the phenomenon of cognitive bias is human. All stakeholders go into UX research with their personalities, experiences, attitudes, and their own cognitive biases. It helps to be aware of this, but eliminating their influence altogether is not attainable (Hall, 2019).

Nevertheless, we should try! In UX research, we want to gain insights that are as objective as possible. Through knowledge and a few minor tweaks, we can reduce the influence of cognitive biases among UX researchers, test subjects, and other stakeholders. This brings us closer to more objective results in UX research – and to our goal of delivering great usability and user experience. In the following sections, I will present a few specific types of cognitive bias in UX research and give suggestions on how to reduce their influence.

Confirmation Bias

Confirmation bias (Wason, 1968) refers to the tendency to select and interpret information in a way that confirms our own expectations and assumptions. For example, when we research a particular topic, we tend to “cherry-pick” the information that confirms our previous assumptions. This is cognitively resource-efficient – we don’t have to question and possibly rework our assumptions but can simply continue to believe what we already believed, now feeling confirmed by the (distorted) facts. In UX research, it might look like this: a UX researcher looks at an app before a series of usability tests. He likes certain aspects of the app very much, while others strike him as problematic. He goes into the tests with these personal impressions and pays more attention to the aspects that confirm them, while neglecting aspects that contradict them.

It is therefore very important for UX researchers to go into usability tests as unbiased as possible and to set aside personal opinions as best they can. It also helps to be aware of confirmation bias and to question whether the testing really confirmed one’s own assumptions or whether perhaps completely different findings were obtained. Another option is to bring project-independent UX researchers into the project for the next tests, since they may not yet have as many assumptions of their own about the test object.

Framing effect

The framing effect describes how different ways of phrasing and presenting information can influence people’s judgment and decision-making behavior (see Müsseler & Rieger, 2017, p. 652). It makes a difference whether one asks:

“What do you find most annoying about the user interface?” or “What did you find positive about the user interface? What did you find negative?”. The first question nudges the participant toward rating the user interface negatively, while the second invites them to consider both positive and negative aspects of the user interface.

When presenting results, there is also a difference between saying:

“80% of the users had no problems finding their way around the prototype” or: “A total of 20% of the users had problems navigating the prototype!”. The first statement might lead listeners to think that the prototype is already quite straightforward and intuitive for users, while the second implies that the prototype is not yet straightforward and intuitive enough – even though the results are the same!

There is a fine line between getting the right information and steering participants’ responses in a certain direction. To prevent the framing effect, prepare a list of questions before the interview or test and phrase them as neutrally as possible. If this is not possible, highlight both the negative and positive sides of the aspect in question. In addition, gather feedback from colleagues to find out whether the questions are specific yet neutral enough. When presenting the results, UX researchers should take care to present the findings as objectively and fact-based as possible and to derive recommendations for further action from concrete observations.
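To make the framing point concrete, here is a minimal sketch in Python with hypothetical numbers and a hypothetical helper name: both statements above describe the same underlying data, and reporting the raw counts together with both percentages keeps the presentation neutral.

```python
# Minimal sketch with hypothetical numbers: the same raw result yields both
# the positive ("80% had no problems") and the negative ("20% had problems")
# framing. Reporting raw counts alongside both percentages keeps it neutral.

def framing_report(successes: int, total: int) -> str:
    """Format a task result with both framings and the underlying counts."""
    success_rate = successes / total
    failure_rate = 1 - success_rate
    return (
        f"{successes} of {total} participants ({success_rate:.0%}) had no problems "
        f"navigating the prototype; {total - successes} of {total} "
        f"({failure_rate:.0%}) ran into problems."
    )

print(framing_report(successes=8, total=10))
# -> 8 of 10 participants (80%) had no problems navigating the prototype;
#    2 of 10 (20%) ran into problems.
```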

Primacy-Recency effect

The primacy-recency effect describes the tendency that, from a series of information, people best remember what is presented at the beginning (primacy effect) or toward the end (recency effect) (Stangl, 2023). In UX research, this can mean that after an interview, the first and last statements of the interview partner are remembered particularly well, while the statements from the middle part of the interview are remembered less well. The primacy-recency effect can also occur across a series of interviews or tests: the UX researcher remembers the first and last interview of the series particularly well. This can lead to the first and last impressions from the UX research being particularly influential.

To reduce the influence of the primacy-recency effect, it is important to take detailed notes during the tests or interviews. It is also an option, with the consent of all participants and in consideration of data privacy, to make video recordings of the tests or interviews so that certain sections can be revisited afterwards. Regardless of how the tests or interviews have been documented, it also helps to look at the gathered data in a different order after the tests or interviews have been finished. That way, important aspects that came up primarily in the middle sessions may get more attention.
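As a small illustration of reviewing the material in a different order, here is a minimal sketch in Python; the folder and file naming are assumptions for the example, not part of any specific tool.

```python
# Minimal sketch, assuming session notes are stored as files such as
# "session_01.md" in a notes folder: reviewing them in a randomized order
# helps counteract the primacy-recency effect during analysis.
import random
from pathlib import Path

notes_dir = Path("usability_test_notes")           # hypothetical folder of session notes
sessions = sorted(notes_dir.glob("session_*.md"))  # chronological (conducted) order

review_order = list(sessions)
random.shuffle(review_order)                       # analyze in a different order than conducted

for note in review_order:
    print(f"Next session to review: {note.name}")
```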

Social desirability bias

Social desirability bias occurs when participants adapt their responses to prevalent sociocultural norms and expectations for fear of social condemnation (Bortz & Döring, 2016, pp. 232-233). For instance, let us imagine that person A donates to a charity from time to time, while person B regularly throws trash on the street. Person A is likely to exaggerate the frequency of their donations, while person B is likely to understate how often they throw trash on the street (or even deny it altogether). Since humans are social creatures, they generally want to present themselves in the best light (Hall, 2019).

This type of bias occurs in UX research especially when participants are asked to make statements about socially desirable or undesirable behavior. It can also occur when superiors or other colleagues of the participant are present in the research sessions. In these cases, it is particularly important to emphasize that the data collected is anonymous and will be treated confidentially. It should also be emphasized that UX researchers do not aim to judge participants in any way.

Another way to reduce the influence of social desirability is to use control scales such as the SDS-CM (Lück & Timaeus, 1997), the German version of the Marlowe-Crowne Social Desirability Scale-C. The scale is used to assess a subject’s tendency towards social desirability bias. A subject with a strong social desirability bias would presumably answer the item “I am always polite, even to unpleasant people.” with “Yes”, whereas he or she would answer “No” to the item “Sometimes I insist on compensation and cannot forgive and forget.” Such a control scale makes it possible to factor out the influence of social desirability in quantitative data or to exclude subjects with a strong social desirability bias from the analysis. It can also be used with qualitative data to estimate the potential influence of social desirability. Using control scales to measure social desirability is particularly valuable when it is already apparent in advance that the UX research will deal with topics prone to this kind of distortion.
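To illustrate that last point, here is a minimal sketch in Python (pandas) with hypothetical item names, response codings, and cut-off value, showing how a control score could be computed and used to flag participants with a strong social desirability tendency before quantitative analysis; the actual scoring rules and thresholds should be taken from the scale’s documentation.

```python
# Minimal sketch, assuming hypothetical column names and codings: compute a
# simple social desirability control score from yes/no control items and flag
# participants whose score exceeds a chosen cut-off before analyzing UX data.
import pandas as pd

# 1 = answer in the socially desirable direction, 0 = otherwise
# (for reversed items such as "Sometimes I insist on compensation ...",
# a "No" would already be coded as 1 during data preparation)
responses = pd.DataFrame({
    "participant": ["P1", "P2", "P3", "P4"],
    "sd_item_1":   [1, 1, 0, 1],
    "sd_item_2":   [1, 0, 0, 1],
    "sd_item_3":   [1, 1, 1, 1],
})

sd_items = [c for c in responses.columns if c.startswith("sd_item_")]
responses["sd_score"] = responses[sd_items].sum(axis=1)

CUTOFF = 3  # hypothetical threshold; in practice taken from the scale's norms
flagged = responses[responses["sd_score"] >= CUTOFF]
retained = responses[responses["sd_score"] < CUTOFF]

print("Flagged for strong social desirability:", flagged["participant"].tolist())
print("Retained for analysis:", retained["participant"].tolist())
```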

Hawthorne effect

The Hawthorne effect is less a cognitive bias and more a bias that arises from the social dynamics between researcher and subject. It dates back to experiments conducted in the 1920s and 1930s at the Western Electric Company’s Hawthorne Works in Cicero, Illinois (Roethlisberger et al., 1976). In these experiments, the researchers tried to find out how to increase the workers’ performance. To do this, they ran through various experimental conditions in which certain factors (including working hours, breaks, and pay) were varied. It turned out that productivity increased in every condition, even when working conditions objectively deteriorated. This could be explained by the fact that the mere presence and attention of the researchers, and the knowledge of being a subject in an experiment, changed the workers’ behavior – an effect now referred to in psychology as the Hawthorne effect (Stapf, 2021).

The Hawthorne effect is particularly strong in situations in which UX researchers observe people in their everyday (work) life. The subjects know they are being observed and adjust their behavior accordingly. They may joke around less with their colleagues and pay particular attention to working in a concentrated manner and carrying out their tasks productively.

How can you counteract the Hawthorne effect? Transparent communication is important here: it should be clear to participants what the goal of the research is, that the insights gathered are confidential, and that UX researchers are not judging their behavior. It can also help to actively encourage participants to go about their normal daily routines (Hall, 2019). In addition, observers should try to observe quietly and fade into the background as much as possible. In remote observations, all observers except the moderator should join the meeting with their cameras turned off. The more the observers fade into the background, the more the participants will forget that they are being observed.

Conclusion

I hope I was able to give you a brief insight into a few types of cognitive bias that are important for UX research. You now know significantly more about the topic, but be aware that it takes a lot of practice and repetition to train yourself to think more openly and in a less biased way. And be careful not to fall prey to the bias blind spot (Pronin et al., 2002) – the tendency to perceive cognitive biases much more strongly in others than in oneself and to judge oneself as much less influenced.

Bibliography

Bortz, J., & Döring, N. (2016). Forschungsmethoden und Evaluation: Für Human- und Sozialwissenschaftler (5th ed.). Springer.

Hall, E. (2019). Just Enough Research (2nd ed.). A Book Apart.

Lück, H., & Timaeus, E. (1997). Soziale Erwünschtheit SDS-CM. Zusammenstellung sozialwissenschaftlicher Items und Skalen (ZIS). https://doi.org/10.6102/ZIS170

Müsseler, J., & Rieger, M. (Eds.). (2017). Allgemeine Psychologie (3rd ed.). Springer. https://doi.org/10.1007/978-3-642-53898-8

Pronin, E., Lin, D. Y., & Ross, L. (2002). The Bias Blind Spot: Perceptions of Bias in Self Versus Others. Personality and Social Psychology Bulletin, 28(3), 369–381. https://doi.org/10.1177/0146167202286008

Roethlisberger, F. J., Dickson, W. J., & Wright, H. A. (1976). Management and the Worker: An Account of a Research Program Conducted by the Western Electric Company, Hawthorne Works, Chicago. Harvard University Press.

Ross, L. (1977). The Intuitive Psychologist And His Shortcomings: Distortions in the Attribution Process. In Advances in Experimental Social Psychology (Vol. 10, pp. 173–220). Elsevier. https://doi.org/10.1016/S0065-2601(08)60357-3

Ruhl, C. (2021, May 4). What Is Cognitive Bias? Simply Psychology. www.simplypsychology.org/cognitive-bias.html

Shatz, I. (2023). Cognitive Biases: What They Are and How They Affect People. Effectiviology. https://effectiviology.com/cognitive-biases/#Who_experiences_cognitive_biases

Stangl, W. (2023). Primacy-Recency-Effekt. In Lexikon der Psychologie. https://lexikon.stangl.eu/10733/primacy-recency-effekt

Stapf, K.-H. (2021). Hawthorne-Effekt. In M. A. Wirtz (Ed.), Dorsch – Lexikon der Psychologie. Dorsch. https://dorsch.hogrefe.com/stichwort/hawthorne-effekt

Wason, P. C. (1968). Reasoning about a Rule. Quarterly Journal of Experimental Psychology, 20(3), 273–281. https://doi.org/10.1080/14640746808400161

