Blog

AI Personas: Empathy-Amplifier or false pretences?

Thomas Immich
April 18th, 2024

Unlike Amazon Echo or Meta's offerings, ChatGPT is highly disruptive. But why is that? ChatGPT reached 100 million users faster than any product before it and depends on no other technologies. It has also offered surprisingly little "ethical slippery slope" so far. The main reason, however, according to Sam Altman himself, seems to be ChatGPT's fantastic UX. Good news for all UX professionals.

 

"If I had to choose a point in time when AI became truly disruptive, I would choose the release of ChatGPT. However, it wasn't the underlying AI models that made the difference, but the user-friendliness of the application!" – Sam Altman, CEO of OpenAI, on the Lex Fridman Podcast

 

Questioning the Old

Looking at the Generative AI wave, we live in disruptive times and must consequently rethink long-unquestioned concepts. This includes established UX methods that have guided us through the product development jungle for years. One of these methods to question is the "persona" method. Its hypothesis is that a fictional character allows us to put ourselves into the shoes of the users and thereby feel increased empathy for their pains and gains. A persona is, and always was, primarily a communication artifact that lifts the team onto a common foundation: if the persona enables all team members to adopt its perspective, then all team members automatically share that very same perspective. This may sound trivial, but it is a noteworthy effect, because channeling discussions and fostering a more united pursuit of a common goal ultimately leads to better work results.

Critique of the Persona Method

Whether the persona method has worked well so far I leave to each individual reader, as I don't think it can be answered per se. However, there has always been a certain scepticism towards the method throughout my entire 20+ year career. Algorithmically thinking software engineers in particular usually prefer quantitative analyses based on "hard data" over settling upon seemingly arbitrarily chosen characteristics of people who, at the end of the day, don't even exist. The funny part? Well, in my view, it remains to be clarified whether some software engineers might have more empathy for data than for people 🙂

[Illustration: a skeptical look]

Honestly, I assume that the aversion to personas results less from a general lack of empathy and more from the diffuse yet correct feeling of being confronted with only part of the truth. If a persona is defined as 32 years old and female, a software engineer might rightly wonder: "Would a 28-year-old male user fail at using the software just because he has a Y chromosome? Would the user merely need to age four years to then understand the software?" Of course not.

Authority and Fallibility of UX Research

Fortunately, such questions will not make a holistically and systemically thinking person lose their faith in the method, as functional clarity and algorithmic processing are not the core mechanisms behind empathy and perspective-taking. Nonetheless, the uneasy feeling remains: at some point, a UX research professional (or a team thereof) decided to distill a certain age and gender from a pool of many questionnaires, many ages and many genders, to finally arrive at a fixed set of "slices" in the shape of one or more personas. Here we encounter a much deeper issue: all design decisions based on a persona ultimately rely on the non-calculable authority and skill set of just a few UX research professionals… who are psychologists, yes. But after all, they are still fallible, if I'm not mistaken.

AI is fallible, too

To connect back to Generative AI, especially Large Language Models (LLMs): there is actually no longer a need for user researchers to manually distill personas from a set of user research data, with the risk of making mistakes along the way. With the help of Generative AI, we can arrive at a collection of personas without detouring through low-level UX research work. As I dryly noted in my blog post last year: humans are fallible, and so is AI. But the fallibility of AI, at least in this case, does not result from opinion biases or complexity overload.

A UX researcher who implicitly wishes to advocate for a more sustainable world will inevitably and perhaps even unconsciously emphasize those persona traits that he or she regards as particularly important for a sustainable design decision. The persona is then biased and “opinionated”.

But who guarantees that AI-generated personas aren’t biased and “opinionated”? I think, in the end, it can be boiled down to: “Prompts and grounding data make the music,” and a prompt or grounding data set that introduces a tendency towards, for instance, sustainability in the input will naturally also consider or even emphasize these topics in the output.

The design leadership experiment

In autumn 2023, my team at Centigrade and I had the opportunity to conduct a highly exciting experiment. BOSCH's design leadership team invited us to spend an entire afternoon giving a workshop on the intersection of Generative AI and design. We leveraged a software tool capable of generating usable personas via AI, namely LeanScope AI. Together with our Centigrade partner NUILAND, we built a cabinet similar to a ticketing machine, and our Centigrade developers modded a kiosk version of the software. Instead of tickets, however, this machine printed out AI-generated persona posters. We then divided the designers into six groups of ten participants each. Each group was allowed to generate its own persona to rethink a BOSCH product of its choice in light of the GenAI wave.

And now for the twist: what the participants did not know was that we had not only visually transformed LeanScope into a kiosk mode, but had also modified the prompts responsible for generating the personas. We overrode the master prompt so that, no matter what persona role the user requested, the generated persona was highly biased towards sustainability. For instance, if you entered "electrician" as a role, the resulting persona was frustrated about having to throw away old cables that were still usable. Funnily, if you entered "vampire", the resulting persona was frustrated that, although he wanted to coexist peacefully with all creatures of the world, his urge for blood often got the upper hand. After all, a sustainable vampire does not drink more blood than necessary to survive. Comforting to know.
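The override we used can be sketched roughly as follows. Note that the function names and wording below are illustrative, not the actual LeanScope AI prompts:

```python
# Illustrative sketch of a biased persona master prompt (NOT the actual
# LeanScope AI prompt). The sustainability bias is injected regardless of
# which role the workshop participant requests.

BASE_MASTER_PROMPT = (
    "You are a persona generator. Create a realistic persona for the role "
    "'{role}', including goals, frustrations and daily context."
)

# Hidden override appended for the experiment: every persona, whatever the
# role, cares deeply about avoiding waste.
SUSTAINABILITY_OVERRIDE = (
    " Regardless of the role, make the persona strongly motivated by "
    "sustainability: frustrated by waste, keen to repair and reuse, and "
    "reluctant to throw away anything that still works."
)

def build_persona_prompt(role: str, biased: bool = True) -> str:
    """Assemble the master prompt sent to the LLM for a given role."""
    prompt = BASE_MASTER_PROMPT.format(role=role)
    if biased:
        prompt += SUSTAINABILITY_OVERRIDE
    return prompt
```

With such an override, `build_persona_prompt("electrician")` nudges the model towards an electrician who hates discarding usable cables, while the participants only ever see the role field they filled in.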

But back to the experiment: one group had the idea to rethink the toaster as a "breakfast machine". They defined their requested role as "family father with … children." This resulted in the following persona:

[Illustration: the generated persona for the toaster product]

The persona makes the design

Due to the sustainability-tweaked master prompt, the family father was naturally frustrated that he had to throw away lots of toast in the morning because he was often distracted by many other tasks while making breakfast. Due to his kids' varying preferences (from soft to super crispy), he sometimes got the settings wrong, and some of the toast got burnt, which he, as a sustainability-minded person, could hardly bear to throw in the bin.

I guess you can already imagine how this group of designers approached the problem. Neither a faster nor a larger toaster was considered during ideation and brainstorming. It was clear: the toaster had to prevent, by any means, toast from being burnt… the idea of an integrated brightness sensor in the toaster, triggering an automatic switch-off, was almost a "no-brainer."

[Illustration: the anti-waste toaster concept]

What we did at BOSCH during that workshop may have been anything but mainstream and naturally caused a lot of surprise and wonder. But now, just a few months later, the idea of generating personas with the help of AI and talking to them is more tangible than ever.

The role of proper User Research

Along with our experiment, ethical questions arose: is it okay to manipulate AI-generated personas through "opinionated" master prompts, justified merely by "good will" and the ultimate goal of rescuing the planet? In my opinion, what we in fact did was pure "storytelling" with proto-personas to foster inspiration for design thinking workshops. This approach was not even close to valid persona creation accompanied by proper user research. And of course, that is exactly what we communicated to the workshop participants to create full transparency.

In my opinion, now that AI is able to generate relatable and conversable personas, user research becomes more important than ever. Everything that is AI generated needs to be backed with actual research insights from interviews or field studies with real users. This way, wrong AI assumptions can be falsified and true AI assumptions can be reinforced.

Now, you might argue that there are quite a lot of user research insights, and the information gathered might be too large and complex to fit into a single prompt. Well, that's true. However, there is hope, thanks to a technique called "Retrieval Augmented Generation".

Retrieval Augmented Generation

For those who have not heard of RAG: it is a technical approach in which you do not change the underlying LLM to get domain-specific outputs. Instead, you upload some kind of "knowledge library", e.g. a set of PDF documents with interview results, from which so-called "embeddings" are calculated; the most relevant passages are then retrieved and used as input for generating the output.

[Illustration: augmented generation using embeddings]

With these calculated embeddings (which are basically sets of vectors, and the reason why NVIDIA's stock is going crazy) you can identify similarities inside your documents, and with them similarities or differences in the fuzzy statements of interviewed participants.
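The retrieval step can be sketched in a few lines. In this toy version, simple word-count vectors stand in for a real embedding model, and cosine similarity ranks the interview snippets against a query; a production system would call an embedding API and store the vectors in a vector database:

```python
# Minimal sketch of RAG's retrieval step. Toy bag-of-words vectors stand in
# for a real embedding model; the interview snippets below are invented.
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Toy embedding: a word-count vector (stand-in for a learned embedding)."""
    return Counter(text.lower().split())

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine of the angle between two sparse vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Return the k snippets most similar to the query."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine_similarity(q, embed(d)),
                    reverse=True)
    return ranked[:k]

# Interview snippets acting as the "knowledge library"
snippets = [
    "I hate throwing away burnt toast in the morning",
    "The settings are confusing when the kids want different crispiness",
    "I mostly use the toaster on weekends",
]
context = retrieve("frustration about burnt toast", snippets, k=1)
```

The retrieved `context` would then be pasted into the prompt, grounding the generated answer in what real participants actually said.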

OpenAI's Custom GPTs are already ready for RAG. In a small instruction textbox you can describe how your own assistant should act, and in addition you can upload arbitrary documents to enrich your assistant's knowledge base beyond the 8K-token limit.

Leveraging user research insights

If you model a persona like the one I described for the sustainability experiment as a custom GPT augmented with real user research data, you can start chatting with your persona just as you would with real users. You have created a "persona agent". However, the answers of this persona agent may still be flat. If you ask it about its frustrations, you might receive an answer such as: "I do not like to throw away burnt toast."
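A persona agent essentially boils down to a system prompt assembled from the persona's attributes plus the retrieved research insights. The sketch below shows one possible assembly; the persona name and the helper function are hypothetical, not part of any specific product's API:

```python
# Sketch of assembling a "persona agent" system prompt from persona
# attributes and user research insights. The persona "Markus" and the
# function name are illustrative, not a real product's API.

def build_persona_agent_prompt(persona: dict, research_insights: list[str]) -> str:
    """Combine persona attributes and research insights into a system prompt."""
    insights = "\n".join(f"- {insight}" for insight in research_insights)
    return (
        f"You are {persona['name']}, {persona['role']}. "
        f"Answer every question in the first person, staying in character.\n"
        f"Ground your answers ONLY in these research insights:\n{insights}"
    )

prompt = build_persona_agent_prompt(
    {"name": "Markus", "role": "a family father of seven"},
    ["Often distracted while making breakfast",
     "Hates throwing away burnt toast"],
)
```

This string would be used as the agent's instructions (e.g. the instruction textbox of a custom GPT), so every reply stays in character and grounded in the research data.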

Regardless of whether this response is based on facts and proper user research, I believe this statement does little in terms of enhancing empathy. A design professional might be used to thinking deeply about such a statement and might even create mental images of how the poor family father stands stressed in the kitchen in the morning, desperately trying to scrape off the charcoal-like toast with a knife. He might even hear the scratching, scraping sound in his inner ear. But, to make this strong form of perspective-taking accessible to people without experience in this area, we might have to enter the realm of “storytelling,” which involves exaggeration and anticipation to create an “illusion of life”.

Disney’s Illusion of Life

My very first presentation in the UX industry was about motion design and why it helps increase engagement and joy of use. Even then, Disney's "The Illusion of Life" crossed my path, as it does now, nearly 20 years later. The book addresses how to enable people to build as much empathy as possible with fictional characters. I only need to say "Bambi," and it brings tears to the eyes of some listeners.

Here too, one could argue: creating a fictional character to then evoke “real feelings” in the viewer is a pretense of false facts, just to increase revenue.

The significant difference, however, is that it is crystal-clear from minute one that a fictional character is being presented. Also, nobody has to watch Bambi; it is a self-determined act of the viewer: you watch at your own "empathy risk". It would be more critical if someone claimed that the presented character had existed for real when that is not the case.

[Illustration: Disney's "The Illusion of Life"]

The “Show, don’t tell” principle

Applied to the world of language, one principle from The Illusion of Life is the "show, don't tell" principle. Instead of listing the frustrations and motivations of the shown character in detail, you describe the effects of these feelings. Visually, you can of course make use of exaggeration in facial expressions and gestures, but on a purely linguistic level, the only way to create an illusion of life is to write down the "inner voice" of your fictional character. Transferred to UX, this means that the persona agent needs to be explicitly instructed to apply the "show, don't tell" principle.
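In practice, that instruction is just one more rule appended to the agent's base instructions. A minimal sketch, with illustrative wording:

```python
# Sketch of adding a "show, don't tell" rule to a persona agent's
# instructions. The wording of the rule is illustrative.

SHOW_DONT_TELL = (
    "Never state your feelings directly. Instead, show them: narrate a "
    "concrete scene from your daily life, with sensory details (sounds, "
    "smells, your inner voice), that lets the reader infer the feeling."
)

def with_show_dont_tell(base_instructions: str) -> str:
    """Append the storytelling rule to an agent's base instructions."""
    return base_instructions.rstrip() + "\n\n" + SHOW_DONT_TELL

instructions = with_show_dont_tell(
    "You are a family father persona. Answer in the first person."
)
```

It is exactly this kind of appended rule that turns the flat one-liner about burnt toast into a small scene the reader can step into.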

With this tweak of instructions, our family-father agent, when asked for his biggest frustration, will no longer answer with

“I get frustrated when my toast burns.”

but, amazingly enough, with an entire relatable story:

“I was standing in the kitchen, my gaze fixed on the toast that had turned black and was jumping out of the toaster. The smell of burnt bread filled the room, a smell that felt like defeat. Not again, I thought as I pulled the toast out of the toaster. My fingers trembled slightly with exhaustion; you know, the nights are short and the days are long with seven children in the house.”

Conclusion

Yes, AI is very well suited as an empathy amplifier and can be optimally combined with the persona method to dive deep into the inner world of users. Whether this is a pretense of false facts heavily depends on which "source of truth" underlies it. More importantly, the source and an assessment of its truthfulness should be transparently named, just as a film should not pretend to be based on a true story if it is not. In addition, good user research must be integrated continuously to find out which assumptions about the actual users are incorrect and which are correct, so that proto-personas become valid personas.

Designers and users of persona agents need to be explicitly aware that, as an agent user, you go through a "connection" and a "detachment" phase. For more psychological details and research insights on this fascinating topic, I recommend reading the blog post "Can AI increase our empathy towards users" by my colleague Carla. Cheers!

 
