When running a user test you’ll normally take a qualitative, a quantitative, or a mixed-methods approach. In this post we’ll dive into the three approaches and highlight their differences.
Qualitative data is normally used to inform the design process: what works, what doesn’t work and, more importantly, why it did or didn’t work. Traditional qualitative data consists of observations, quotes from participants, answers to open-ended questions, etc.
Qualitative testing is exploratory and involves a relatively small number of participants. It is often used formatively: you can identify the main usability errors and gather information on what should be improved. During a moderated user test the facilitator can ask the participant follow-up questions and adapt the study as they go, to get insights into specific areas or issues the participant faces. During an unmoderated user test the participant is instead asked questions during the test to reveal why things worked or not. With Preely you can collect various self-reported metrics (see blog post about Metrics). Note that the analysis of qualitative data is often affected by the facilitator’s or analyst’s prior knowledge of the topic, experience in UX, etc.
Quantitative data provide information on one or more performance metrics, reflecting whether a task was easy to perform or not. This offers an indirect assessment of the usability of the design.
Quantitative testing is great for summative evaluations, where you seek to obtain numbers on the experience.
Quantitative tests involve more participants than qualitative ones. The experience is translated into numbers, which is great for benchmarking or for convincing stakeholders, since the results can be tested for statistical significance. Results from quantitative tests (when analyzed correctly) protect against the random noise you often see in qualitative tests. Quantitative testing is often task-based, so make sure each task has a single well-defined answer, and define your success criteria before the test so it is clear when a task or test has been accomplished successfully. Typical quantitative metrics are based on participants’ performance on a given task: task-completion time, success rate, number of errors, etc. (see blog post about Metrics).
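As an illustration, here is a minimal sketch of how those task-based metrics could be summarized. The participant data is entirely hypothetical, and the convention of averaging completion time over successful attempts only is a common choice, not a rule from this post:

```python
# Hypothetical per-participant results for one task:
# (completed, completion_time_seconds, error_count)
results = [
    (True, 42.0, 0),
    (True, 55.5, 1),
    (False, 90.0, 3),
    (True, 38.2, 0),
    (False, 120.0, 2),
]

n = len(results)
successes = [t for done, t, _ in results if done]

# Success rate: share of participants who completed the task.
success_rate = len(successes) / n

# Mean completion time, computed over successful attempts only.
mean_time = sum(successes) / len(successes)

# Mean number of errors across all participants.
mean_errors = sum(e for _, _, e in results) / n

print(f"Success rate: {success_rate:.0%}")            # 60%
print(f"Mean time (successes): {mean_time:.1f} s")    # 45.2 s
print(f"Mean errors per participant: {mean_errors}")  # 1.2
```

With numbers like these you can compare designs directly, e.g. against a benchmark or an earlier iteration of the same task.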
Quantitative data can also capture participants’ perception of usability and UX, e.g. satisfaction ratings such as the System Usability Scale (SUS) and the Net Promoter Score (NPS). However, these scores can be hard to interpret without a reference, so you’ll often see results compared to known standards, a competitor’s design, or a previous design. Quantitative data does not, however, tell you specifically which problems the participants encountered, nor which changes you should make to improve the design. Note that in order to use statistics, the test design needs to be consistent across conditions and adhere closely to the plan, without any changes. Preely is a great tool for exactly this.
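To make the SUS concrete, here is a small sketch of the standard scoring rule: ten statements rated 1–5, where odd-numbered (positively worded) items contribute rating − 1 and even-numbered (negatively worded) items contribute 5 − rating, and the sum is multiplied by 2.5 to give a 0–100 score. The example ratings are made up:

```python
def sus_score(ratings):
    """Standard System Usability Scale score (0-100) from ten 1-5 ratings."""
    if len(ratings) != 10 or not all(1 <= r <= 5 for r in ratings):
        raise ValueError("SUS needs exactly ten ratings between 1 and 5")
    total = 0
    for i, r in enumerate(ratings):
        if i % 2 == 0:
            total += r - 1   # odd-numbered items (1, 3, ...): positively worded
        else:
            total += 5 - r   # even-numbered items (2, 4, ...): negatively worded
    return total * 2.5

# Hypothetical responses from one participant:
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # 85.0
```

A single SUS score still needs a reference to be meaningful, which is exactly why results are usually compared against a known benchmark or a previous version of the design.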
Mixed methods are very often used in user testing. They give you the best of both worlds: qualitative insights that support your quantitative data. For example, you can have quotes and observations backing up your numbers. Note that you still need to follow the paradigm of quantitative testing: be very consistent regarding conditions, adhere closely to the plan, and do not change your test design. Preely manages all this for you automatically, and you avoid facilitator bias.
Before choosing an approach, consider where you are in the development process. Which goals do you want to achieve with the test, and which type of data and insights do you need to reach them? To learn more about the recommended number of participants for each study type, see How many participants do you need for your user test?
Faster. Better. Cheaper.