Usability and UX Metrics in Preely

Did you know that you can track the most common usability and UX metrics in Preely?

Usability and UX metrics enable you to track progress between releases or from one iteration of a concept to the next. You can also use metrics to measure against success or acceptance criteria. They are a great tool for benchmarking, and a necessity when communicating usability and UX to stakeholders and upper management.

Usability and UX metrics are by nature quantitative. We know it might be a bit of a challenge to start working with this type of data if you are used to qualitative research methods and data. That’s why, over the next couple of months, we at Preely will share information, tips, and tricks on how to work with quantitative data. If you didn’t already know it, we collect the most common usability and UX metrics in Preely.

Which metrics you use is up to you and your organization. In Preely we collect the following metrics:

Performance metrics

Performance metrics measure the participant’s performance on the given set of test tasks. This type of data is sometimes also referred to as behavioral data. In Preely we automatically collect:

  • Success rate: Whether the participant completes the task or not
  • Task time (completion time): How long the task takes to complete
  • Error rate: How many errors the participant makes
  • Interaction cost: Paths, clicks, scrolls, typing, and heat maps
  • Learnability: How the above metrics develop over time in a within-subjects study
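
To give you a feel for what these numbers look like in practice, here is a small Python sketch that computes success rate, average task time, and error rate for one task. The data structure is made up purely for illustration and is not Preely’s export format:

```python
# Hypothetical task results for a single test task; not Preely's actual export format.
results = [
    {"participant": "P1", "completed": True,  "seconds": 34.2, "errors": 0},
    {"participant": "P2", "completed": True,  "seconds": 51.8, "errors": 2},
    {"participant": "P3", "completed": False, "seconds": 90.0, "errors": 4},
    {"participant": "P4", "completed": True,  "seconds": 28.5, "errors": 1},
]

n = len(results)

# Success rate: share of participants who completed the task.
success_rate = sum(r["completed"] for r in results) / n

# Task time: average completion time, usually reported for successful attempts only.
times = [r["seconds"] for r in results if r["completed"]]
avg_task_time = sum(times) / len(times)

# Error rate: average number of errors per participant.
error_rate = sum(r["errors"] for r in results) / n

print(f"Success rate:  {success_rate:.0%}")
print(f"Avg task time: {avg_task_time:.1f} s")
print(f"Error rate:    {error_rate:.1f} errors/participant")
```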


Self-reported metrics

Self-reported metrics measure the participant’s perception of the prototype or design, and how they feel about it. This data is often referred to as subjective data or preference data. In Preely you can add questions to your tasks. We provide you with the following options:

  • Rating
  • Likert scale
  • Semantic differential scale
  • Single Ease Question (SEQ)
  • Multiple choice
  • Open-ended question
  • Net Promoter Score (NPS)

When adding self-reported metrics to the test, be aware of the number of questions you ask. Often it’s enough to ask simple satisfaction questions.

For instance, in a formative test you could ask the following question and let participants rate it:

How satisfied were you with the flow/design/product?

Then add a follow-up question like the following and give them the option of a free response:

Why did you give this score?

This way you’ll have both quantitative data (the rating score) and qualitative data (their answer to the question). You can also consider adding the Single Ease Question (SEQ) after each task and asking the participants to explain their score. That way you capture both ease of use and satisfaction.
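
As a small illustration, here is one way you might combine the two in a quick Python sketch (the responses are made up): average the SEQ scores, then read the explanations behind the low ones first.

```python
# Hypothetical paired responses: an SEQ score (1-7) plus the participant's explanation.
responses = [
    (6, "Easy, the button was where I expected it."),
    (2, "I couldn't find the confirmation step."),
    (7, "Straightforward."),
    (3, "The labels confused me."),
]

# The quantitative half: average SEQ score for the task.
avg_seq = sum(score for score, _ in responses) / len(responses)
print(f"Average SEQ: {avg_seq:.1f} / 7")

# The qualitative half: surface the comments behind the low scores.
for score, comment in responses:
    if score <= 3:
        print(f"[{score}] {comment}")
```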

In a summative test, ask questions about the overall experience. Consider using the System Usability Scale (SUS) after the test to cover usability attributes. If you need to assess the business impact, use NPS.
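
Both SUS and NPS have standardized scoring. As a reference, here is a minimal Python sketch of the usual calculations, assuming SUS answers on a 1-5 scale and the NPS question on a 0-10 scale (the example responses are made up):

```python
def sus_score(answers):
    """Standard SUS scoring: 10 answers on a 1-5 scale -> a score from 0 to 100."""
    assert len(answers) == 10
    total = 0
    for i, a in enumerate(answers, start=1):
        # Odd-numbered (positively worded) items contribute score - 1,
        # even-numbered (negatively worded) items contribute 5 - score.
        total += (a - 1) if i % 2 == 1 else (5 - a)
    return total * 2.5

def nps(ratings):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(r >= 9 for r in ratings)
    detractors = sum(r <= 6 for r in ratings)
    return 100 * (promoters - detractors) / len(ratings)

# Made-up example responses.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))   # one participant's SUS answers -> 85.0
print(nps([10, 9, 7, 6, 8, 10, 3, 9]))             # "How likely are you to recommend...?" -> 25.0
```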

Now you should be good to go – and as always, reach out if you have any questions!