
The hype surrounding AI shows no signs of abating: according to a study by Bitkom Research, the proportion of German citizens who use tools such as ChatGPT, Microsoft Copilot, or Gemini at least occasionally rose from 40% in 2024 to 67% in 2025[1]. More and more companies are integrating conversational AI into their products, which makes it increasingly important for UX professionals to understand how users interact with conversational AI and what characterizes a good user experience. In this article, I discuss the factors that influence the user experience of conversational AI and how it differs from that of other interactive products.
What is conversational AI?
“Conversational artificial intelligence (AI) refers to technologies, such as chatbots or virtual agents, that users can talk to. They use large volumes of data, machine learning and natural language processing to help imitate human interactions, recognizing speech and text inputs and translating their meanings across various languages.” [2]
The two-component model of user experience
To answer the question of what exactly constitutes the user experience of conversational AI, it is helpful to understand what constitutes the user experience of interactive products in general. The quality of the user experience can be divided into what is known as pragmatic and hedonic quality[3].
Pragmatic quality refers to the task-related quality of a product. Can I use this product to complete my tasks? How effectively and efficiently can I complete my tasks with it? Pragmatic quality consists of usability and perceived usefulness. In contrast, hedonic quality refers to the non-task-related aspects of a product. High hedonic quality is present when the product sparks the curiosity of users, is perceived as attractive, or is fun to use[4].
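To make the two-component model more concrete, here is a minimal Python sketch, not taken from this article, of how pragmatic and hedonic quality are commonly scored in UX practice. It loosely follows the short User Experience Questionnaire (UEQ-S), where the first four items measure pragmatic quality and the last four hedonic quality; the assumption here is that responses have already been recoded to the usual -3 to +3 scale.

```python
# Sketch of UEQ-S-style scoring: items 1-4 -> pragmatic quality,
# items 5-8 -> hedonic quality, each recoded to -3..+3.
from statistics import mean

def ueq_s_scores(items: list[float]) -> dict[str, float]:
    """Compute pragmatic, hedonic, and overall quality scores."""
    if len(items) != 8:
        raise ValueError("UEQ-S has exactly 8 items")
    pragmatic = mean(items[:4])   # task-related quality
    hedonic = mean(items[4:])     # non-task-related quality
    return {
        "pragmatic": pragmatic,
        "hedonic": hedonic,
        "overall": mean([pragmatic, hedonic]),
    }

scores = ueq_s_scores([2, 3, 2, 1, 1, 2, 0, 1])
print(scores)  # pragmatic 2.0, hedonic 1.0, overall 1.5
```

Separating the two scores is the point of the model: a product can be efficient but joyless (high pragmatic, low hedonic) or delightful but impractical, and the two call for different design responses.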
A study on the user experience of ChatGPT[5] showed that users rated pragmatic quality highly when they felt that ChatGPT enabled them to work more efficiently and achieve better results. Pragmatic quality was also rated highly when users found the output to be particularly relevant, useful, accurate, and detailed.
Users perceived the hedonic quality as high when they found the outputs surprising or impressive. The feeling of being supported by ChatGPT in creative activities or entertained by the outputs also had a positive effect on the hedonic quality.
An interesting factor in the hedonic quality of conversational AI is that we can “talk” to it, for example in written chats or in spoken dialogues with voice assistants. Because conversational AI often resembles a human conversation partner, we tend to attribute other human characteristics to it as well. This phenomenon, known as anthropomorphism, is what I examine in more detail below, along with the role it plays in the user experience.
What is anthropomorphism?
Anthropomorphism can be defined as “the attribution of human experiences or traits to non-human beings”[6]. We tend to anthropomorphize animals, objects, and systems in order to better understand their behavior and integrate them into our own human experience, for example when we attribute human feelings to animals (“My cat is offended”) or assume that technical devices have a will of their own (“My laptop is trying to annoy me today”).
People tend to anthropomorphize conversational AI rather quickly, as these systems deliberately mimic human communication patterns. They respond continuously and appear attentive, simulating conversational dynamics such as alternating between speaking and listening. They also use social cues: they refer to themselves using “I” and respond positively to user requests. For example, ChatGPT promptly responds to the question “How are you?” with “Thanks for asking! I’m fine—ready to help you. How are you?”
A study has shown that users tend to trust AI chatbots more when they exhibit human characteristics[7]. In addition to imitating conversational dynamics and natural language, chatbots are often deliberately designed to appear human, for example by giving them a name, an icon, or an attributed personality. This further increases emotional trust in the technology[8]. The more such human characteristics are present, the more accurate and less risky users perceive the chatbot’s responses to be[9].
This additional trust can become problematic if the AI hallucinates, i.e., provides false or overly simplified information. UX professionals should therefore be aware of the influence anthropomorphic cues have on users. At the same time, product teams should communicate transparently about the weaknesses and limitations of conversational AI, as this too can increase user trust[10].
People differ in their tendency to anthropomorphize things. While some are quick to attribute human characteristics to animals and objects, others do so rarely[11]. In her master’s thesis, my colleague Carla examined how this tendency to anthropomorphize affects how credible we find AI personas and how much empathy we develop toward them. If you would like to learn more about this, please take a look at her blog article about this topic.
Since we tend to anthropomorphize conversational AI and interact with it as we would with a human conversation partner, its communication style becomes a crucial factor in a good user experience. It significantly determines how pleasant, understandable, and appealing the interaction is perceived to be.
Communication style
A study[12] examined how users react to different communication styles of chatbots when there are problems or errors in customer service. The results showed that chatbots that used a socially oriented style—rather than communicating in a purely task-oriented manner—increased user satisfaction, trust, and loyalty toward the company. The perceived warmth in the conversation was particularly important, as it mitigated negative emotions caused by the error in customer service.
Another study examined how users perceive empathetic behavior by chatbots. In the experiments that were conducted, empathetic chatbots often triggered a feeling of social presence and increased the perceived quality of the information provided. Overall, this led to greater satisfaction with the chatbot. At the same time, it became clear that empathy was not beneficial in every situation. When under time pressure, the study participants found empathetic responses irritating, which worsened their experience with the chatbot. The study makes it clear that communicative elements such as empathy should be used in a targeted and context-dependent manner in order to achieve positive effects on user satisfaction[13].
It is therefore advisable for UX professionals to consider not only efficiency but also the emotional impact of the communication style when designing interactions with conversational AI. However, exactly how it should be designed depends on the context of use. Therefore, it makes sense to conduct UX research before the design phase to find out who the users are, what goals they want to achieve in their interaction with conversational AI, and in what context they do so. This is exactly what my colleague Carla discusses in her blog article “Chatbot Research 101,” which will be published here on the blog soon.
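As an illustration of such context-dependent design, here is a minimal, hypothetical Python sketch of how a product team might switch between a task-oriented and a socially oriented system prompt, reflecting the findings above that warmth helps after a service failure but irritates users under time pressure. The prompt texts and the function name are my own illustrative assumptions, not taken from the cited studies.

```python
# Hypothetical sketch: selecting a chatbot's communication style by context.
TASK_ORIENTED = (
    "You are a customer-service assistant. Answer concisely and factually; "
    "focus on resolving the request in as few steps as possible."
)

SOCIALLY_ORIENTED = (
    "You are a customer-service assistant. Acknowledge the customer's "
    "feelings, apologize for any inconvenience, and use a warm, personal "
    "tone before explaining the solution."
)

def pick_system_prompt(service_failure: bool, time_pressure: bool) -> str:
    """Return a communication-style prompt for the given usage context."""
    # Under time pressure, empathetic phrasing was perceived as irritating,
    # so fall back to a purely task-oriented style.
    if time_pressure:
        return TASK_ORIENTED
    # After a service failure, perceived warmth mitigates negative emotions.
    if service_failure:
        return SOCIALLY_ORIENTED
    return TASK_ORIENTED

# The selected prompt would be passed as the system message to the
# underlying language model.
prompt = pick_system_prompt(service_failure=True, time_pressure=False)
print("socially oriented" if prompt == SOCIALLY_ORIENTED else "task oriented")
```

In a real product, the context signals (detected service failure, session urgency) would of course come from UX research and runtime data rather than from two boolean flags.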
Conclusion
How positively users perceive their experience with conversational AI depends on various factors. On the one hand, it must offer high pragmatic quality, meaning that users must feel that they can efficiently complete their tasks with conversational AI and perceive the results as relevant and useful. At the same time, hedonic quality also plays a role: the user experience is perceived as particularly good when the interaction is fun or entertaining. Hedonic quality is particularly influenced by the fact that conversational AI mimics human interaction, which makes it easy for us to anthropomorphize it. That’s why UX professionals need to consider how anthropomorphic features and communication style affect the user experience when designing conversational AI.
References
[1] Wintergerst, R. (2025). Künstliche Intelligenz: Der Blick der Deutschen auf die neue Technologie. Bitkom Research. https://bitkom-research.de/node/1192
[2] What is conversational AI? (2021, September 28). IBM. https://www.ibm.com/think/topics/conversational-ai
[3] Hassenzahl, M. (2007). The hedonic/pragmatic model of user experience. Towards a UX Manifesto, 10–14.
[4] Burmester, M., Hassenzahl, M., & Koller, F. (2002). Usability ist nicht alles – Wege zu attraktiven Produkten (Beyond Usability – Appeal of interactive Products). I-Com, 1(1), 32–40. https://doi.org/10.1524/icom.2002.1.1.032
[5] Skjuve, M., Følstad, A., & Brandtzaeg, P. B. (2023). The User Experience of ChatGPT: Findings from a Questionnaire Study of Early Users. Proceedings of the 5th International Conference on Conversational User Interfaces, 1–10. https://doi.org/10.1145/3571884.3597144
[6] Anthropomorphismus. (2000). In Spektrum.de (Ed.), Lexikon der Psychologie. Spektrum Akademischer Verlag. https://www.spektrum.de/lexikon/psychologie/anthropomorphismus/1097
[7] Li, J., Wu, L., Qi, J., Zhang, Y., Wu, Z., & Hu, S. (2023). Determinants Affecting Consumer Trust in Communication With AI Chatbots: The Moderating Effect of Privacy Concerns. Journal of Organizational and End User Computing, 35(1), 1–24. https://doi.org/10.4018/JOEUC.328089
[8] Gkinko, L., & Elbanna, A. (2023). Designing trust: The formation of employees’ trust in conversational AI in the digital workplace. Journal of Business Research, 158, 113707. https://doi.org/10.1016/j.jbusres.2023.113707
[9] Cohn, M., Pushkarna, M., Olanubi, G. O., Moran, J. M., Padgett, D., Mengesha, Z., & Heldreth, C. (2024). Believing Anthropomorphism: Examining the Role of Anthropomorphic Cues on Trust in Large Language Models (Version 1). arXiv. https://doi.org/10.48550/ARXIV.2405.06079
[10] Glassberg, I., Ilan, Y. B., & Zwilling, M. (2025). The key role of design and transparency in enhancing trust in AI-powered digital agents. Journal of Innovation & Knowledge, 10(5), 100770. https://doi.org/10.1016/j.jik.2025.100770
[11] Epley, N., Waytz, A., & Cacioppo, J. T. (2007). On seeing human: A three-factor theory of anthropomorphism. Psychological Review, 114(4), 864–886. https://doi.org/10.1037/0033-295X.114.4.864
[12] Cai, N., Gao, S., & Yan, J. (2024). How the communication style of chatbots influences consumers’ satisfaction, trust, and engagement in the context of service failure. Humanities and Social Sciences Communications, 11(1), 687. https://doi.org/10.1057/s41599-024-03212-0
[13] Juquelier, A., Poncin, I., & Hazée, S. (2025). Empathic chatbots: A double-edged sword in customer experiences. Journal of Business Research, 188, 115074. https://doi.org/10.1016/j.jbusres.2024.115074