Lollypop Design
Behavioral and Attitudinal UX metrics

UX metrics, also known as usability or product experience (PX) metrics, allow you to measure, compare, and track the user experience of a website or app. This enables you to make informed decisions about how to improve the user experience. UX KPIs are generally divided into two categories: behavioral metrics (what users do) and attitudinal metrics (what users say).

Quantitative behavioral metrics measure the actions users take while interacting with a product or service, such as time on task, page views, error rate, and bounce rate.
Qualitative attitudinal metrics gauge user sentiment toward your product or service, including brand association, loyalty, satisfaction, usability, and credibility. These metrics rely on user feedback collected through instruments such as SUS (System Usability Scale), CSAT (Customer Satisfaction), and NPS (Net Promoter Score).

**Behavioral UX metrics**

Understanding how users behave and interact with a product is a fundamental aspect of user research, and task-based usability testing is a widely recognized method for collecting this information. Behavioral UX metrics focus on observing and analyzing user actions during interactions with a digital product. They can be collected through lab testing or analytics tools and provide valuable insights into user activity.

**Time on task**

Time on task, also known as task time or time spent on task, measures the average time users take to complete a specific task within a mobile app or website, usually in absolute units such as seconds, minutes, or hours. Completion times naturally vary between users for the same task. In general, the less time users spend on a task, the better the UX of your product, as the numbers reveal how efficient and intuitive the design is. If your customers spend too much time on tasks in your product, it may indicate that an interaction or function is poorly designed.

Measuring time spent on a task helps you:

  • Identify usability issues to do a design audit on your website or app interface

  • Distinguish between user lag time and system lag time, since users perceive time subjectively while completing a task

Because time on task alone reveals little about how users feel, session recordings can add context by showing individual users' journeys from page to page. You can see how they experience your site, how they behave during tasks, and where they get stuck, distracted, confused, or frustrated.
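As a minimal sketch of how time on task is typically computed, the snippet below averages a set of hypothetical per-user task durations (the numbers are invented for illustration):

```python
# Hypothetical task completion times (in seconds) from five test participants.
task_times = [42.0, 55.3, 38.7, 61.2, 47.5]

# Average time on task: total observed time divided by the number of users.
average_time = sum(task_times) / len(task_times)

print(f"Average time on task: {average_time:.1f} s")
```

In practice you would also look at the distribution (median, outliers), since a single slow participant can skew the average.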

**Pageviews**

Pageviews are an engagement metric used by marketers to indicate the number of pages a user has viewed on a website within a specific period of time. This metric helps identify what content on the website users are interested in and whether they have difficulty finding certain information.

To gain a better understanding of user behavior, it is recommended to combine pageviews with other metrics to provide more context. For example, in mobile apps, a combination of clicks, taps, number of screens, or steps with pageviews can provide a more comprehensive picture of the meaning of each user activity.

**Task Success Rate**

The task success rate is one of the most important UX metrics that UX/UI designers should consider when evaluating the effectiveness of their designs. It measures the percentage of participants who successfully complete a specific task or achieve their goals, which can help designers identify areas for improvement.

For example, completing a checkout process or adding a product to the shopping cart are tasks that can be measured using this metric. However, it's important to keep in mind that the task success rate alone doesn't explain how well users perform tasks or the reasons why they fail them.

For instance, if you conducted a usability test in which 29 out of 50 participants successfully completed a task, the task success rate would be (29/50) * 100 = 58%.
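The calculation above can be expressed as a small helper function (a sketch using the article's own example numbers):

```python
def task_success_rate(successes: int, participants: int) -> float:
    """Percentage of participants who completed the task successfully."""
    # Multiply before dividing to keep the arithmetic exact for whole percentages.
    return successes * 100 / participants

# The article's example: 29 of 50 participants succeeded.
print(task_success_rate(29, 50))  # 58.0
```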

To get accurate results, designers should gather data from as many users as possible. Additionally, it's recommended to track whether users complete the task for the first time, as this can provide valuable UX insights into how their experience changes over time.

**Error rate**

The error rate is the frequency at which users encounter obstacles when using your product and fail to complete a task they intended to do. It indicates how easy or difficult your product is to use. High error rates suggest usability issues, so it's crucial to determine what actions should be considered errors.

These may be related to UI/UX design, additional features, workflow, and more. For example, users might accidentally select the wrong action in the user interface, or make an unsuccessful attempt to fill out lead forms on a landing page.
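Error rate is usually computed as the share of attempts that end in an error. A minimal sketch, using invented numbers for the lead-form example above:

```python
def error_rate(errors: int, attempts: int) -> float:
    """Share of task attempts that ended in an error, as a percentage."""
    return errors * 100 / attempts

# Hypothetical: 12 of 80 lead-form submissions failed.
print(error_rate(12, 80))  # 15.0
```

Before using this in practice, you would first define precisely which actions count as errors, as the article notes.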

**Bounce Rate**

When evaluating user retention, bounce rate is the counterpart to metrics such as time on page. Bounce rate measures the percentage of users who land on a page and then leave without any further interaction, often after abandoning the task they came to do. A high bounce rate may indicate issues with content relevance, user interface design, or usability.
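A common way to compute bounce rate is the ratio of single-page sessions to total sessions; a sketch with hypothetical session counts:

```python
def bounce_rate(single_page_sessions: int, total_sessions: int) -> float:
    """Percentage of sessions that ended without further interaction."""
    return single_page_sessions * 100 / total_sessions

# Hypothetical: 350 of 1,000 sessions bounced.
print(bounce_rate(350, 1000))  # 35.0
```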

You can collect data for behavioral metrics with web or application analytics. Google Analytics is a well-known tool for web analytics, and Mixpanel is a popular tool for application analytics. These tools can track user sessions, session recordings, heatmaps, bugs, and more, making them an easy and inexpensive way to start tracking UX metrics.

Alternatively, you can track these metrics using other UX research methods, such as observation, A/B testing, eye tracking, and usability testing.

To fully understand why users bounce, it is important to combine this metric with some attitudinal metrics. This provides a complete picture of why you are getting these numbers.

**Attitudinal UX metrics**

Attitudinal metrics help you quantify qualitative data and understand how users perceive your product, including what they say and how they feel about it. These metrics can be grouped under labels such as adoption (which features are most used), satisfaction (how much users enjoy the product), credibility (how much users trust the service), and loyalty (how likely users are to use the service again).

**SUS (System Usability Scale)**

The System Usability Scale (SUS), developed by John Brooke in 1986, has become an industry standard for assessing users' perceptions of system usability. It is a 10-question survey in which users rate their agreement with each statement on a scale of 1 to 5, from strongly disagree to strongly agree. SUS scores above 68 are considered above average, while anything below 68 indicates a need for optimization.

While SUS is straightforward to administer and can be applied to small user samples or test cases, it's important to note that the scoring system itself can be complex and lacks diagnostic capabilities. It shouldn't be seen as a replacement for a dedicated user research team, as it doesn't provide in-depth insights into why users may rate a system poorly.

To truly understand and address the reasons behind low SUS scores, it is essential to complement it with data-driven insights from comprehensive user research. This combination empowers designers to uncover the underlying issues affecting system usability and make informed decisions for improvement.
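As a concrete illustration of the scoring, the standard SUS formula subtracts 1 from each odd-numbered (positively worded) item, subtracts each even-numbered (negatively worded) item from 5, and scales the sum by 2.5 to yield a 0-100 score. A sketch with one hypothetical respondent:

```python
def sus_score(responses: list[int]) -> float:
    """SUS score for ten 1-5 responses, using Brooke's standard formula."""
    assert len(responses) == 10
    total = 0
    for i, rating in enumerate(responses, start=1):
        if i % 2 == 1:
            total += rating - 1   # odd items: positively worded
        else:
            total += 5 - rating   # even items: negatively worded
    return total * 2.5            # scale 0-40 sum to 0-100

# Hypothetical respondent who agrees with positive items, disagrees with negative ones.
print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # 85.0
```

Note that an individual score is less meaningful than the average across many respondents.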

**CSAT (Customer Satisfaction Score)**

The Customer Satisfaction (CSAT) Score is a key indicator of the overall level of user satisfaction with your product, from its features to its app functionality.

Typically, CSAT is collected through questionnaires or online surveys, where users rate their experience on a satisfaction scale. They are then asked whether they would recommend the brand, product, or service to their friends, family members, or other contacts. This score provides insights into user sentiment and can offer both a general overview and specific details about how well a product or service fulfills customers' expectations at different stages of the customer journey.

The CSAT scale is commonly based on a Likert scale or a smiley face rating system, ranging from 1 (very unsatisfied) to 5 (very satisfied). To calculate the percentage of satisfied users, divide the total number of satisfied users (those who rated 4 or 5) by the total number of respondents and multiply by 100.
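The calculation described above can be sketched as follows, with hypothetical survey ratings:

```python
def csat(ratings: list[int]) -> float:
    """Percentage of respondents who rated 4 or 5 on a 1-5 scale."""
    satisfied = sum(1 for r in ratings if r >= 4)
    return satisfied * 100 / len(ratings)

# Hypothetical survey of ten respondents; six rated 4 or 5.
print(csat([5, 4, 3, 5, 2, 4, 5, 1, 4, 3]))  # 60.0
```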

**NPS (Net Promoter Score)**

Net Promoter Score (NPS) is a user loyalty metric introduced by Fred Reichheld of Bain & Company. It measures how likely users are to recommend your company, product, or service to others, capturing customer loyalty and the influence of word-of-mouth referrals.

To track NPS, ask users one crucial question: "Would you recommend us to a friend or colleague?" They rate their likelihood on a scale of 0 to 10, from "not at all likely" to "extremely likely."

Based on their responses, users are classified into three categories: detractors (0-6), passives (7-8), and promoters (9-10). Here's what we can infer from this ranking:

  • Promoters (score 9-10): Your most enthusiastic users. They recommend you to others and bring in new customers, increasing customer lifetime value and lowering acquisition costs.

  • Passives (score 7-8): Satisfied, but not fiercely loyal. Attractive offers and incentives are what keep them engaged with your service.

  • Detractors (score 0-6): Highly unsatisfied users who may have serious issues with your product. Address their concerns promptly.
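NPS itself is the percentage of promoters minus the percentage of detractors, yielding a score between -100 and +100. A sketch with hypothetical responses:

```python
def nps(scores: list[int]) -> float:
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return (promoters - detractors) * 100 / len(scores)

# Hypothetical responses from ten users: 5 promoters, 2 detractors, 3 passives.
print(nps([10, 9, 8, 7, 6, 9, 10, 3, 8, 9]))  # 30.0
```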
