Big Data is considered the trending topic of digitalization. Some even claim that data is the new gold. Yet usage data is a treasure trove that has rarely been used to analyze usability with quantitative methods. What possibilities are there for measurably improving UX with usage data and supporting product owners in their decisions? This blog article gives you a first overview of usage data analysis.
What is usage data analysis?
Usage data analysis means that the analysis focuses on the software and the use of its features, in order to determine whether the intended user flow works and thus to draw attention to bugs as well as superfluous or missing features. As in every step of the UX process, the analysis is based on the needs of the user group, not on those of individual users.
Imagine you have developed a new product. One focus of the development was on UX, because you identified it in advance as a critical success factor. Now a first prototype of the product is ready and you want to know whether you have achieved your UX goals. An immediate way to do this is to look at the data generated when a user interacts with the new product – the usage data. Analyzing and interpreting this data gives you insight into user behavior, the usage flow. By linking the information from several digital devices, you gain a unique overall view.
Usage data analysis and data protection
The issue of data protection is of course also important in this context. Therefore, in the UX area we deliberately speak of usage data analysis, not user data analysis. As in the entire UX process, anonymization and data protection are given top priority. Users are aggregated into personas or user groups. Demographic data is only collected to the extent that it is relevant to the persona and the questions at hand. One example is the degree of experience with the software, which can result in different usage behavior.
How does the quantitative approach differ from the previous measurement of the user experience?
Until now, the evaluation of UX in products has often been limited to qualitative data collection. Users are observed and interviewed while operating the prototype in order to identify potential weaknesses. Sometimes this concept test is supplemented by a questionnaire to obtain at least one quantifiable measure. A user survey alone, however, can be misleading or incomplete. Questionnaire responses often show a tendency towards the middle of the scale, so the statements remain inconclusive. Sometimes users do not know exactly what they want, or have difficulty putting it into words. During observation, the evaluator has to watch the user and simultaneously document where difficulties occur or what is not understood; details are quickly forgotten. The whole process is time-consuming and is often limited to a handful of tests.
Usage data as a supplement for a comprehensive evaluation
In this case, usage data can provide an objective and reliable alternative that also scales at will. Collection can be non-invasive, so users are not disturbed in their normal workflow and do not feel observed. A data analyst can access the data at any time and, thanks to exact timestamps, link it to other events or machine data. Quantifiable measures, such as how frequently an error occurs, can be derived quickly and thus significantly influence prioritization and decision-making for the next development steps.
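To illustrate how quickly such a measure can be derived, here is a minimal sketch that counts error occurrences in a log. The error names are purely hypothetical; in practice the events would come from the timestamped usage data described above.

```python
from collections import Counter

# Hypothetical error events extracted from timestamped usage data.
error_events = ["timeout", "validation", "timeout", "timeout", "crash"]

# Frequency per error type: a quick, quantifiable input for prioritization.
frequency = Counter(error_events)
print(frequency.most_common(1))  # → [('timeout', 3)]
```

A ranking like this is often all a product owner needs to decide which issue to tackle first.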
What are important KPIs of the usage data analysis and how can they be collected?
Key Performance Indicators (KPIs) appear in many areas. Software engineers know them as lines of code or number of bugs; content managers in marketing rather as conversion rates or page views. In the UX area we basically distinguish between two types of KPIs: system-level KPIs and story-level KPIs.
System-level KPIs span all user stories and support strategic decisions, but not specific product decisions. Story-level KPIs are defined for each individual user story and thus provide a basis for decisions on individual features. “Time on Task”, for example, measures the time a user spends on a story, while the “Task Completion Rate” measures the percentage of users who successfully complete a user story. Acceptance criteria can then be defined for each user story, which must be met before it is considered done. By breaking the application’s complexity down to the user story level and prioritizing problems automatically, they can be worked through successively, and the further development of a product remains manageable. One example is a story that appears in the bug-fixing pool. A look at the usage data reveals that no user was ever active in this story. Putting work into fixing the bug would be a waste of time, so this task is not given priority.
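The two story-level KPIs named above can be computed directly from an event log. The following sketch assumes a simple, hypothetical log format with start/complete/abort events per user and story; field names and the schema are illustrative, not a fixed standard.

```python
from datetime import datetime

# Hypothetical event log: each entry marks the start or end of a user story.
events = [
    {"user": "u1", "story": "checkout", "event": "start",    "ts": "2024-05-01T10:00:00"},
    {"user": "u1", "story": "checkout", "event": "complete", "ts": "2024-05-01T10:01:30"},
    {"user": "u2", "story": "checkout", "event": "start",    "ts": "2024-05-01T11:00:00"},
    {"user": "u2", "story": "checkout", "event": "abort",    "ts": "2024-05-01T11:04:00"},
]

def story_kpis(events, story):
    """Mean Time on Task (seconds) and Task Completion Rate for one story."""
    starts, durations, completed = {}, [], 0
    for e in (e for e in events if e["story"] == story):
        t = datetime.fromisoformat(e["ts"])
        if e["event"] == "start":
            starts[e["user"]] = t
        elif e["user"] in starts:
            # Both completes and aborts end an attempt and yield a duration.
            durations.append((t - starts.pop(e["user"])).total_seconds())
            if e["event"] == "complete":
                completed += 1
    attempts = len(durations)
    time_on_task = sum(durations) / attempts if attempts else 0.0
    completion_rate = completed / attempts if attempts else 0.0
    return time_on_task, completion_rate

tot, tcr = story_kpis(events, "checkout")
print(f"Time on Task: {tot:.0f}s, Completion Rate: {tcr:.0%}")
```

An acceptance criterion could then be phrased as, for example, “at least 80 % of users complete the checkout story”, and checked automatically against `tcr`.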
There are many ways to collect KPIs. As mentioned above, a logged timestamp provides insight into the intervals between and the duration of activities. Logging geographical data enables location tracking, which is of course especially interesting for wearables. All other measurements can be anchored directly in the software, for example scroll depth or clicks on specific UI elements.
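Anchoring such measurements in the software can be as simple as a small logging component. The sketch below is one possible shape, not a prescribed API; class and field names are illustrative, and in line with the data protection section above it stores only an anonymized persona group, never an identity.

```python
import json
import time

# Minimal sketch of a usage logger embedded in the software itself.
class UsageLogger:
    def __init__(self, persona_group):
        # Only the anonymized persona/user group is stored, never an identity.
        self.persona_group = persona_group
        self.records = []

    def log(self, story, event, **details):
        # An exact timestamp allows linking events to other machine data later.
        self.records.append({
            "ts": time.time(),
            "group": self.persona_group,
            "story": story,
            "event": event,
            **details,
        })

    def export(self):
        return json.dumps(self.records)

logger = UsageLogger(persona_group="novice")
logger.log("search", "click", element="filter_button")
logger.log("search", "scroll", depth_pct=75)
```

Because every record carries a timestamp and a story identifier, the exported data can later be joined with other event or machine data, as described above.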
Challenges of the quantitative method
Over a usage cycle of, for example, a new piece of software, vast amounts of data can be produced. Large amounts of data are generally popular with data scientists, since they represent a potentially large training set for optimizing models. On the other hand, large amounts of data always carry the danger of data dredging, the aimless trawling of data masses that will always turn up something based on statistical noise alone. They also cost time to extract genuinely useful information from, not to mention the required computing power, and product owners often do not have that time when decisions are due. It is therefore important to avoid collecting too much data by clearly defining the analytical questions in advance.
Data can only be interpreted meaningfully in context
Looking at usage data alone, it is easy to lose sight of the actual user and their needs. Just because a flow looks good does not mean that the user feels well taken care of, and what they miss, they cannot express through clicks. It therefore always makes sense to conduct a short survey or a usability questionnaire in the prototyping phase in order to keep the complete picture in mind. It is important to see data analysis as one tool of the UX process, used specifically to answer questions or support assumptions and cleverly linked to other tools. Only through this linkage can the interpretation of data become meaningful and provide confidence in decisions that have been made or are still to be made.
Usage Data Analysis and Continuous UX
Usage data analysis and interpretation become much easier and more precise if they take place within a previously defined scope. This includes the persona (representative user), the user role, the context of use, the user’s goals and tasks, and the user needs. An application is rarely “good” or “bad” overall; problems usually occur in micro-interactions, which in turn can have a major impact on overall performance. To identify these situations, it helps to split the usage into user stories, which are often only seconds long. These are the central element in the Continuous UX process and serve as the basis for defining the desired UX KPIs. Limited by human nature, a user can only be in one user story at a time. An entire usage sequence, the so-called user journey, can therefore be treated as a simple, non-nested sequence. Multiple user stories can be viewed in context, and interruptions in the user journey are easy to identify. Their visualization is easy to understand for people without a data science background who are familiar with the usage context, such as product owners or UX designers.
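Because a user journey is a flat, non-nested sequence of user stories, spotting interruptions is straightforward. The sketch below assumes a simplified journey representation in which each story carries an outcome; the story names and the rule that any non-completed story counts as an interruption are illustrative.

```python
# A user journey as a flat, non-nested sequence of (story, outcome) pairs.
journey = [
    ("login", "completed"),
    ("search", "completed"),
    ("checkout", "aborted"),
    ("search", "completed"),
]

def find_interruptions(journey):
    """Return the stories at which the user journey broke off."""
    return [story for story, outcome in journey if outcome != "completed"]

print(find_interruptions(journey))  # → ['checkout']
```

In practice such a list, aggregated over many users, highlights exactly the micro-interactions mentioned above where the flow breaks down.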
Overview using the user booklet method
By linking user stories and data in so-called user booklets, product owners are supported in prioritizing user stories, and UX researchers gain more detailed insight into user behavior. At the same time, this insight is far more representative thanks to the scalability and precision of digital, automated data collection. Patterns that recur across products or user groups can be identified more easily. If similarities are found, design principles for subsequent products can even be derived, which in turn makes life easier for UX designers. In addition, the data helps to identify potential bottlenecks in the usage flow more quickly.
Usage data analysis, like all types of data analysis, offers the great advantage of objective findings on the basis of which decisions can be made for the further development process. Quantification and measurability can counteract opinion-based decision-making and lead to more rational decisions.