February 11, 2026

Before we get into the ins and outs of running a benchmarking study in Dscout, it helps to level-set on what benchmarking actually is—and what it isn’t.
At its core, benchmarking is about establishing a baseline. It gives you a repeatable way to measure experience quality over time, across flows, or against alternatives, so you can move beyond one-off findings and track meaningful change. Benchmarking is most useful when you want clarity and consistency in how you evaluate performance.
Researchers use benchmarking to create durable baselines they can return to again and again. Designers use it to see whether iterations are truly improving the experience. And product managers use it to align teams around what “good” looks like, and measure progress in a way that’s easy to share and understand.
In this guide, we’ll walk through how to run a benchmarking study in Dscout—from setting up your approach to turning results into action.
Before you set up your study, you’ll want a clear understanding of what you’re trying to evaluate. There are three primary reasons to benchmark:
1. Benchmarking adds broader context to passive, internal performance metrics like CSAT. A low CSAT score tells you the “what,” but it can be hard to pull the “so what?” out of that number. Benchmarking helps you dig deeper into user feedback and form baselines against which you can measure your product’s trajectory.
2. Benchmarking shows how you compare to competitors in your industry, highlighting your strengths and weaknesses.
3. Benchmarking is growing in popularity, especially as continuous research expands across organizations. Folding benchmarking into a continuous research practice fosters a culture of ongoing improvement by setting realistic performance targets aligned with industry best practices.
Once you’ve decided to benchmark, it’s time to set up your study. While there are a number of ways you can benchmark in Dscout, I’ll take you through the general flow and highlight a couple of examples.
Select a product flow or experience to evaluate. Within it, identify the core tasks a user might navigate, and use those tasks to measure the flow’s overall success.
For example, when we ran an internal benchmarking study, we wanted to get more color on our CSAT scores. We decided to focus our efforts on the recruitment and study design flows in our usability tool. Within those flows, defining recruitment criteria and drafting a Task question were two of the core tasks we highlighted, respectively.
When selecting your participants, you have a few options for how to source them.
There are many ways to run a benchmarking study in Dscout! Teams have a lot of flexibility to uncover the insights they need.
One way to run a benchmarking study is via our usability testing tool.
Here’s how to set it up…
Note: We intentionally chose constructed tasks over capturing organic behavior to ensure data consistency and reduce participant burden.
Another great option is to invite participants to a media survey.
Here’s how to set it up…
After you’ve designed your study and recruited your participants, it’s officially launch time.
One key perk of running a benchmarking study in our usability testing tool is that Dscout offers an excellent continuous-recording feature.
As participants complete tasks, their actions are recorded throughout the process. You can see precisely where they fumbled, where they caught themselves, and, step by step, how they navigated the core tasks.
For the most part, a benchmarking study with the Media Survey tool will be like any other. Your entries will trickle in, and you can keep an eye on responses with charts and other data visualizations in the Responses tab.
If you’ve included a standardized ease of use questionnaire (like SUS) in either mission type, you’ll likely need to isolate this data and convert the raw responses into score contributions, though this is quick and simple once you’ve exported the data to a spreadsheet.
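If you want to sanity-check that conversion, the standard SUS formula is simple: each odd-numbered item contributes its response minus 1, each even-numbered item contributes 5 minus its response, and the summed contributions are multiplied by 2.5 to produce a 0–100 score. Here’s a minimal sketch in Python, assuming each participant’s ten raw 1–5 answers are exported in the standard item order (the function and variable names are illustrative, not part of any Dscout export):

```python
def sus_score(responses):
    """Convert one participant's ten 1-5 SUS answers into a 0-100 score."""
    if len(responses) != 10:
        raise ValueError("SUS expects exactly 10 item responses")
    total = 0
    for item_number, answer in enumerate(responses, start=1):
        if item_number % 2 == 1:
            total += answer - 1   # odd (positively worded) items
        else:
            total += 5 - answer   # even (negatively worded) items
    return total * 2.5            # scale the 0-40 raw sum to 0-100

# Hypothetical export: one row of raw answers per participant
participants = [
    [4, 2, 5, 1, 4, 2, 5, 2, 4, 1],
    [3, 3, 4, 2, 4, 3, 4, 2, 3, 2],
]
scores = [sus_score(row) for row in participants]
print(scores, sum(scores) / len(scores))
```

The same arithmetic works in a spreadsheet; the study-level benchmark is typically the average of the per-participant scores.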
If you’re relying on just SEQ questions in Usability, you’ll get this score automatically.
For our own study, it was exciting to go back and watch the continuously recorded responses. Users completed the 10-question closed-ended SUS survey while being recorded, and most spoke aloud as they rationalized their responses.
When I got our score, it was extremely helpful to look back at all the task recordings, note where people were struggling, and add context to how that score was achieved.
Overall, participants DID find the usability tool satisfactory, but there were some notable points of friction in core tasks, which I summarized and defined for the product team to tackle. It was great to be able to make reels or clip specific instances where someone got confused with the UI, and I could just show that directly to our team of designers to address—which they have!
Benchmarking is rarely about chasing a perfect score; it’s about learning in context and making adjustments.
Because you’re often blending structured metrics with real human behavior, a few small choices can make a big difference in how useful your results end up being.
Keep these pro-tips in mind as you benchmark to get insights that are not just measurable, but genuinely actionable.