Chicago Designers Share How They Measure UX Success

Leaders from Caterpillar, AllCampus and Devbridge share the metrics they've found most important for their businesses, how they balance qualitative and quantitative data and how they translate that data into action.

Written by Taylor Karg
Published on Feb. 22, 2021

When a product is unsatisfying and hard to navigate, users will look to a competitor. So how can UX teams ensure their product is successful in the eyes of the end user? 

Data. At three local companies, the metrics UX teams use to gauge success might differ, but the end goal is always the same: to keep users satisfied. 

At Caterpillar, whose Cat Digital arm builds IoT software for the manufacturer's connected equipment, customer satisfaction (CSAT) surveys and task completion are the most important metrics for measuring UX success. College and university enrollment management platform AllCampus uses a blend of heat maps, click maps, scroll maps, visitor recordings and more to inform the majority of its UX strategy. And at Devbridge, a software design and development company, a combination of qualitative and quantitative data is collected and analyzed to measure UX success. 

To dig deeper into their respective strategies for measuring UX success, Built In Chicago caught up with Head of UX Brian Blomer of Caterpillar, Product Design Manager Sarah Santi of Devbridge and VP of Creative Design Steve Robinson of AllCampus. Here's what they said.

 

Brian Blomer
Head of UX, Cat Digital • Caterpillar

According to Brian Blomer, the head of user experience at Cat Digital, Caterpillar’s digital arm, the best way to translate data into action is through iterative prototypes. Based on the validation, they iterate the prototype again or move it into the backlog for development.

 

What UX metrics have you found to be most important for your business and product?

For me, two of the most important UX metrics are customer satisfaction (CSAT) and time to complete a task. CSAT allows us to capture real feedback from customers about which features are working and which new features should be prioritized. In addition to this rich user feedback, CSAT surveys let us quantify these findings, which leads to better backlog prioritization and an overall improved customer experience. When we have real data about what customers want and what frustrates them, we can add value for them immediately. 
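
CSAT is typically reported as the share of respondents who pick the top two options on a five-point scale, which is what makes it easy to quantify per feature. A minimal Python sketch of that roll-up, using invented survey responses rather than Caterpillar's data:

```python
from collections import defaultdict

def csat_score(ratings):
    """Percent of respondents rating 4 or 5 on a 1-5 scale."""
    satisfied = sum(1 for r in ratings if r >= 4)
    return 100 * satisfied / len(ratings)

# Hypothetical survey responses, tagged with the feature they cover.
responses = [("search", 5), ("search", 4), ("search", 2),
             ("checkout", 3), ("checkout", 2), ("checkout", 4)]

by_feature = defaultdict(list)
for feature, rating in responses:
    by_feature[feature].append(rating)

# Surface the lowest-scoring features first to inform backlog priority.
for feature, ratings in sorted(by_feature.items(),
                               key=lambda kv: csat_score(kv[1])):
    print(f"{feature}: CSAT {csat_score(ratings):.0f}%")
```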

Task completion is also critical. Sometimes we are tempted to focus on the number of screens within a flow. At times that may be the right answer, but really what we are looking for is comprehension and speed to complete a specified task. For e-commerce, we may ask, “Do users have a good understanding of the product and are they able to complete the transaction with relative ease and speed?”
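
Task completion is usually summarized the same way across teams: a completion rate plus a time-on-task figure. A short sketch with hypothetical session data:

```python
from statistics import median

# Hypothetical usability-test sessions: (completed?, seconds to finish).
sessions = [(True, 42.0), (True, 55.5), (False, 120.0), (True, 38.2)]

completion_rate = 100 * sum(ok for ok, _ in sessions) / len(sessions)

# Time on task is usually summarized with the median, since a few
# stuck users can skew the mean badly.
median_time = median(t for ok, t in sessions if ok)

print(f"completion rate: {completion_rate:.0f}%")
print(f"median time on task (successes): {median_time:.1f}s")
```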

 

How do you balance the quantitative data you’re collecting with qualitative data you’re getting directly from users? 

The quantitative data will tell us what the users are doing, but the qualitative tells us why they are doing it. If we’re not careful, we can easily overemphasize the quantitative because it’s so easily available and measurable. However, we should take the data from our quantitative metrics and allow that to drive our UX research discovery, surveys and customer interviews so we can better understand why our customers are behaving the way they are. 

Likewise, hearing a customer tell us what they will do should always be validated through quantitative data. Once a feature is live, users may not always do what they said in a user testing session on a prototype. It’s imperative that we use the right tools and methods to know both what the user will do and why. Lastly, there is so much excellent baseline research that can be applied as best practices for applications, like e-commerce, that we can use as a foundation to build great experiences prior to layering on our own research and analytics.
 

“Hearing a customer tell us what they will do should always be validated through quantitative data.”


How do you translate the data you collect into action? 

To me, the best way to translate data into action is through iterative prototypes. Take your findings and hypothesis, turn them into a solution via a clickable prototype and put it back in front of the user to validate. Based on the validation, iterate again or move it into the backlog for development. Design, test and deploy, again and again. 

New features will create new opportunities and learnings that we can build on. One example I've seen was with a recent searches tool. Through some A/B testing, we learned customers who clicked on a display of a recent search had a much higher conversion rate than our standard search tool; however, the content and verbiage left many customers confused by the experience. We took the data from both quantitative and qualitative research and quickly iterated on a few new design explorations to better illustrate the experience, and through additional user testing we learned that the placement of the design could be improved. 

After these updates and testing, we moved the design into our backlog for development and deployed it several weeks later. After a few weeks of data, the updated experience more than doubled the usage of the original feature. 
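
Blomer doesn't share the underlying traffic numbers, so the counts below are invented, but a conversion-rate comparison like the one he describes typically rests on a two-proportion z-test between control and variant:

```python
from math import sqrt, erfc

def ab_conversion_test(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test on conversion counts from an A/B test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided
    return p_a, p_b, z, p_value

# Hypothetical traffic: standard search vs. recent-searches variant.
p_a, p_b, z, p = ab_conversion_test(conv_a=120, n_a=4000,
                                    conv_b=180, n_b=4000)
print(f"control {p_a:.1%}, variant {p_b:.1%}, z={z:.2f}, p={p:.4f}")
```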

 

Sarah Santi
Product Design Manager • Devbridge

Iterative research is an integral part of the agile development process at Devbridge, a software design and development company. Product Design Manager Sarah Santi said that by learning about the workflow pain points of their targeted users, they’re able to form a cohesive hypothesis to work toward and validate as the development journey goes on.

 

What UX metrics have you found to be most important for your business and product? 

What we hear directly from humans interacting with our product is by far the most valuable information. Conversations will often tell the same story the quantitative data eventually reports. We never allow numbers to override the human experience we hear once the script ends and a deeper connection begins.  

Zooming out, as a digital product consultancy that ships products to market fast without sacrificing process integrity, we have to gather UX metrics efficiently. Efficiency requires focus, so we focus first on the success metrics stakeholders give us on day one of an engagement: What numbers do clients need to call the product effort a success? With that clarified, we then zoom into user value as we begin design. 

We find the ease of completion and error rate to be the most informative and impactful quantitative metrics. These factors quickly pinpoint design weaknesses that ripple into whether those success metrics are met. They are also the two numbers least susceptible to human waywardness: if a user rates a task a “1,” or “no difficulty,” yet we observe them struggling to complete it, we can note the discrepancy. Anonymous metrics don't afford the same insight. 
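
A small sketch of the kind of check Santi describes, using hypothetical moderated-test records: compute an error rate, then flag users whose self-reported ease contradicts what the moderator observed.

```python
# Hypothetical moderated usability-test records: each holds the user's
# self-reported difficulty (1 = "no difficulty") and what we observed.
records = [
    {"user": "u1", "task": "transfer funds", "reported": 1, "errors": 3, "completed": False},
    {"user": "u2", "task": "transfer funds", "reported": 2, "errors": 0, "completed": True},
    {"user": "u3", "task": "transfer funds", "reported": 4, "errors": 2, "completed": True},
]

mean_errors = sum(r["errors"] for r in records) / len(records)
print(f"mean errors per attempt: {mean_errors:.1f}")

# Flag the discrepancies anonymous metrics would hide: users who said
# the task was easy but visibly struggled.
for r in records:
    if r["reported"] <= 2 and (r["errors"] > 1 or not r["completed"]):
        print(f'{r["user"]} rated "{r["task"]}" easy but struggled; follow up')
```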

 

How do you balance the quantitative data you’re collecting with qualitative data you’re getting directly from users? 

Stakeholder success metrics and initial qualitative data gathering fuel hypotheses, goals and outcomes that a quantitative data plan then needs to reveal, prove, disprove and adapt over time. Staying focused on desired outcomes clarifies the selection of methodology.  

We spend heightened effort on qualitative discovery at the beginning of an engagement. During a re-platform and user-expansion effort for an investment tool, we spent time with existing persona members learning about workflow pain points, and with targeted persona members understanding where value could be created. The qualitative findings helped create our hypothesis: increased user adoption through the expansion of services. We then look to validate that hypothesis with Google usage analytics and adapt as we journey on.
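
An adoption hypothesis like that one is usually checked against a simple trend in the usage analytics. A minimal sketch with invented weekly numbers, standing in for whatever the analytics export provides:

```python
# Hypothetical weekly active-user counts pulled from usage analytics,
# used to check the "increased adoption after expanding services" hypothesis.
weekly_active_users = {"2021-W01": 410, "2021-W02": 455, "2021-W06": 540}

baseline = weekly_active_users["2021-W01"]
for week, wau in weekly_active_users.items():
    change = (wau - baseline) / baseline
    print(f"{week}: {wau} active users ({change:+.0%} vs. baseline)")
```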
 

“We find the ease of completion and error rate to be the most informative and impactful quantitative metrics.”


How do you translate the data you collect into action? 

This question is our North Star in determining which research activities to perform and which data to gather: What will these collective efforts teach us, and how will we use them to evolve the product? A simple recent example of metrics-driven UX enhancements came to us via Mixpanel data for a healthcare provider app we're actively evolving. Our client stakeholders hypothesized more usage on mobile devices (70 percent) than on desktop (30 percent). Still, to cover universal usage, we built a responsive web app rather than a native one, but adopted a mobile-first UX approach. After a year of usage metrics, we learned the device split is actually 50/50, and we're now focusing on feature parity across both desktop and mobile. 
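
The device-split finding boils down to counting events per device in an analytics export. The rows and field names below are made up for illustration; this isn't Mixpanel's actual API:

```python
from collections import Counter

# Hypothetical rows from an analytics export (e.g. a CSV dump of events).
events = [
    {"user": "a", "device": "mobile"},
    {"user": "a", "device": "desktop"},
    {"user": "b", "device": "desktop"},
    {"user": "c", "device": "mobile"},
]

split = Counter(e["device"] for e in events)
total = sum(split.values())
for device, count in split.items():
    print(f"{device}: {100 * count / total:.0f}% of events")  # 50/50 here
```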

In subsequent interviews, we've heard from our primary persona that the freedom of choice is a massive win. Customers really appreciate the flexibility to use the platform on any device without sacrificing efficiency.

 

The AllCampus design team.
Steve Robinson
VP of Creative Design • AllCampus

At college and university enrollment management platform AllCampus, data — collected via heat maps, click maps, scroll maps, visitor recordings and more — informs the majority of UX decisions the product designers make. For example, VP of Creative Design Steve Robinson said the team uses click maps to make decisions about the structure of a client's website, which then guides its A/B testing strategy.

 

What UX metrics have you found to be most important for your business and product?

We have myriad tools providing data that help garner insights about user and visitor behavior. These include heat maps, scroll maps, visitor recordings, funnel visualizations, form field completion data, page visits, time on site and, of course, conversion data. The value that each tool provides depends a lot on what we’re trying to learn. 

Heat maps and click maps are very useful for seeing where people are interacting on a site or application. They tell us what navigation links are getting the most traction and what content is resonating most with the audience. Scroll maps tell us how far visitors are scrolling and what percentage of users reach certain parts of a site. This helps us decide on the hierarchy of information. While we can only add so many elements above the fold, we can optimize the layout based on what draws most visitors’ attention. 
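
Scroll maps are built from each visit's maximum scroll depth. A minimal sketch, with hypothetical depths and section offsets, of the percentages a scroll map reports:

```python
# Hypothetical max scroll depths per visit, as fractions of page height,
# plus the offsets where key sections of the page begin.
depths = [0.35, 0.60, 0.95, 0.50, 0.80, 0.25]
sections = {"hero": 0.0, "program list": 0.45, "testimonials": 0.75}

# Share of visitors who scrolled far enough to see each section.
for name, offset in sections.items():
    reached = sum(1 for d in depths if d >= offset) / len(depths)
    print(f"{name}: seen by {reached:.0%} of visitors")
```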

We also look at visitor recordings, where we can see real-time recordings of users navigating a site. This provides more anecdotal information, but reviewing these can also lead to inspiration for larger-scale testing or insight into user behavior that we wouldn’t have been able to glean otherwise.
 

“We get the most out of data by creating feedback loops, where one type of data can inform the direction of testing to gather more information.”


How do you balance the quantitative data you’re collecting with qualitative data you’re getting directly from users? 

Feedback from users can be invaluable when analyzing a product or software platform. In the design and development cycles, you can sometimes miss something because you are so entrenched in the project. 

Seeing actual users interact with the end result can expose those frailties. If there is a general consensus that gets repeated in the feedback, it's usually something you should acknowledge. You can get similar insights from visitor recordings, which might point to an issue a number of users are having or a behavior that's unexpected. That said, we are cautious about making too many knee-jerk decisions based on a very small sample size, so user feedback is often the beginning of larger-scale testing. A/B testing across a wider audience will give you a better pulse on what's working and what isn't. We get the most out of both types of data by creating feedback loops, where one type of data can inform the direction of testing to gather more information. Ultimately, both are absolutely critical to ensuring a quality, optimal product or experience.

 

How do you translate the data you collect into action?

Data informs the majority of user experience decisions we make. One example is how we use click maps to make decisions about the structure of a website and guide our A/B testing strategy. Most recently, this strategy helped us avoid a considerable drop in conversion rate. We were redesigning a client's organic homepage, and the designer made a decision to change how a key function of the homepage behaved. 

When we delved into the click map data, we found that the element was heavily interacted with and performing as expected. As a result, we created a third iteration of the homepage when creating a larger test to compare the versions directly: the same redesigned homepage, but faithful to how that element behaved originally. 

Results are ongoing, but the redesigned homepage where we changed this element is showing a 12.90 percent drop in conversion rate, while the redesigned homepage where we were faithful to this element is showing a 68.61 percent lift. The redesign is clearly the better approach, but we might not have known that had we not listened to the data and kept a key aspect of the homepage as it was. 
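
For reference, lift figures like these follow the standard relative-change formula. The control rate below is hypothetical, since the article reports only the outcomes:

```python
def lift(control_rate, variant_rate):
    """Relative change in conversion rate versus the control."""
    return (variant_rate - control_rate) / control_rate

control = 0.040                            # hypothetical baseline rate
changed_element = control * (1 - 0.1290)   # a 12.90 percent drop
faithful_element = control * (1 + 0.6861)  # a 68.61 percent lift

print(f"changed element: {lift(control, changed_element):+.2%}")
print(f"faithful element: {lift(control, faithful_element):+.2%}")
```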

Responses have been edited for length and clarity. Images were provided by the featured companies.
