May 20, 2021
“If only we had thought to ask that” is a common regret in the research world. Perhaps you got surprising results on a quantitative question and you’re stuck theorizing why. Maybe you generated a new qualitative insight, but have no way of knowing just how relevant it is to your user base as a whole.
It’s those moments that leave us kicking ourselves. There’s potentially a great insight in our data, but we have to leave it on the table because we just don’t have enough information!
Once this has happened enough times, research design can start to feel stressful.
We want to throw everything at the wall and hope that something sticks, for fear of regretting that we didn’t ask that one question. Or, we agonize over the wording of a new question, not knowing exactly how participants are going to connect with it.
This alone makes the case for iterative research—running smaller back-to-back research initiatives instead of over-indexing on one over-scoped study.
The stress of design or the regret of analysis dissipates when you can have multiple shots at uncovering an insight. Iterative research design gives you the freedom to be more creative with your approach. When you’re no longer wasting your “one shot” at research, you can try more “wacky” approaches, sometimes with unexpected and fantastic results.
Iterating can also give you the chance to pivot your research to where the actually interesting insights are, and abandon the threads that aren’t as fruitful. You can pilot a new question or design to see if it’s going to work, ensuring that you’re going to get what you need when you ultimately fire off your full-scale research effort.
There are quite a few ways you could use dscout to iterate on study design, depending on what you hope to accomplish with an iteration. Here are a few cases that come to mind when iterating, and what you might want to do to accomplish them:
Sometimes iteration is just about wondering! You’re not quite sure what’s interesting yet, so you ask a few questions and see what thoughts they generate. Then, you ask a few more until you strike insight gold.
A few ways to start structuring your exploration in dscout (or your qual research tool of choice):
This is a lightweight, standard, mixed-methods approach that should allow you to layer your data and triangulate insights within it. This iterative method starts quant and ends qual.
If you have a creative research idea but are unsure if it’s going to work, try launching a “beta” research design. There are a number of ways you can do this with Express or a short survey:
These examples run from small scale to large scale, and usually from part to whole, but they aren't tied to a specific method style.
If you didn’t ask a key follow-up question during your study, here are a few ways to get some additional context:
If you’re looking to dig into an interesting response to see if it has legs as an actual insight or is merely a singular opinion, turn your qualitative findings into mixed-method findings. This provides an opportunity to ground your small-scale results in large-scale frequency data to see just how prevalent they might be in your customer base.
To do this, consider the following approaches for your “follow-up” study design:
These examples start with qual and small-scale, and end as quant and large-scale.
Express is by far the most nimble way of making sequential research that dscout has to offer—so we rely on it for most iterative studies. However, if you don't have a dscout subscription, these principles can be applied to surveys or unmoderated studies on your tool of choice.
Knowing the quick-turn, low-lift nature of Express, we felt that we had a lot of freedom to throw some "weird" questions at the wall. We didn’t feel beholden to understanding whether insights were “real” or not right away; we were really looking for anything that might be interesting. There was flexibility to keep the mission very small, because data validation would come later down the line.
Some things to keep in mind as you’re designing your own iterative study:
Keep in mind, this analysis isn’t about finding insights; it’s about finding more questions.
Start by immersing yourself and having conversations with your team. The analysis doesn’t need to be very formalized at this stage. You’re looking for anything that makes you go, “That’s interesting,” or any quotes that make you think, “If this was a common sentiment, that would be very powerful.”
Try to think towards your deliverable. If you see great quotes in your data, ask yourself, “Would I want to see a graph alongside that quote?” What graph are you envisioning? That’s your new question!
Then, look at thematic open-ends and try to solidify the main themes coming through. There’s no need to tag; quantifying 30 open-ends isn’t worthwhile when you’re about to take your findings to a larger scale anyway. Focus on impressionistically choosing the top 5-10 themes you’re seeing and try to crystallize them into an easily digestible list.
Tip: you can use the dscout platform to generate word clouds, which can be a shortcut to generating picklists around scout sentiments!
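If you prefer to work outside the platform, the same idea—surfacing frequent terms from open-ends to seed a picklist—can be sketched in a few lines of Python. The responses and stopword list below are hypothetical, purely for illustration:

```python
from collections import Counter
import re

# Hypothetical open-ended responses (illustrative, not from the study)
responses = [
    "I get distracted easily and lose motivation at home",
    "Hard to stay motivated without my friends around",
    "My bedroom desk is full of distractions",
]

# Common filler words to ignore when surfacing themes
stopwords = {"i", "my", "and", "to", "at", "of", "is", "the", "without", "around", "get"}

def top_terms(texts, n=5):
    """Count word frequencies across responses, skipping stopwords."""
    words = []
    for text in texts:
        words.extend(
            w for w in re.findall(r"[a-z']+", text.lower()) if w not in stopwords
        )
    return Counter(words).most_common(n)

print(top_terms(responses))
```

The frequent terms become candidate picklist options for the quantitative follow-up, the same way a word cloud would.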
Where the first design can be freewheeling, this one needs to be reined in a bit. The key word here is follow-up: ask questions that help you better understand the insights you began to uncover in the first mission. If you start asking brand new questions, you may end up stuck in an iteration cycle that always leaves you with unanswered questions.
Another important consideration here is scale. The bigger and more quant-minded you want this second mission to be, the more you should avoid asking follow-up qual questions. These will raise the price of your mission, which can balloon quickly at larger sample sizes. Keep it lean and keep it quant: try to rely on your first mission for the qualitative color you need.
Areas to focus on:
An ideal study here is short and lean to get maximum engagement. Afterwards, feel free to open it up to a much larger audience.
Now that you have two sides of your research story, you can start putting it all together. You’ve likely developed some hunches from your qualitative run, which were (hopefully) confirmed by your new quantitative backing. This should make final analysis relatively straightforward.
Go back into your first study and identify what you now know to be the most important or common themes, tag if necessary, and drill down into those areas to find the quotes or artifacts that you need.
Presenting your iterative study findings won’t be much different from presenting a single large-scale mixed-methods survey. Use your qualitative findings to color your well-formed quantitative questions, or use your quantitative work to legitimize and bolster the insight you intuited from the smaller study.
Plus, the iteration doesn’t need to stop here! If you’ve used this process to land on a fascinating insight and you or your stakeholders must know more, you can treat this as a pilot for a multi-part mixed-methods study in the diary tool!
We wanted to know what remote learning has been like for high-schoolers and college students during the pandemic. We weren’t sure exactly what to ask a wide range of kids to get a proper pulse on the experience, so we started with a qual-focused Express mission aimed at a small sample (n=30).
Questions we asked:
After the results were back, we dug in and found a few interesting things. This led to some questions for us:
We used our second survey to fill in the blanks and validate existing data. Our sample size was much larger (n=500). We kept the survey short and inexpensive (12 questions, $1 per complete) to make it as broadly accessible as possible.
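Because incentive cost scales linearly with completes, it’s easy to sanity-check a budget before launch. A minimal sketch, using the flat per-complete rate from our example (your platform’s actual pricing may differ):

```python
def survey_cost(completes, incentive_per_complete=1.00):
    """Estimated total incentive cost, assuming a flat rate per complete."""
    return completes * incentive_per_complete

# Our n=500 follow-up at $1 per complete
print(survey_cost(500))  # 500.0
```

Running the same math at the pilot scale (n=30) shows why a small qual-first mission is cheap insurance before committing to the full sample.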
Some close-ended questions (multiple choice/single select options) we included were:
From the quant study, we learned that 76% of students feel less motivated in remote learning, and 72% feel less productive. The top drawback of remote learning is being more easily distracted.
The qual study gave us qualitative color and nuance about distraction and motivation:
"When I’m in school I have set times to work on my school work and set times for breaks and lunch, but at home I kind of just do everything whenever I feel like it and it took me off my normal schedule so I don’t get as much done."
Ali S. (She/Her/Hers) | 17 | South Bend, IN, US
"It is easy to get distracted, and I enjoy seeing my professor daily, it helps keep me accountable to my schoolwork, and staying on top of things."
Stanton T. (He/Him/His) | 20 | Cary, NC, US
"One drawback to remote education is the lack of motivation I experience. I am a student who gets straight As and has a perfect gpa, but I still struggle with having the willpower to say no to being on my phone all day instead of doing work."
Anais K. (She/Her/Hers) | 17 | Springfield, MO, US
From the quant study, we learned that 58% of students feel less happy in remote learning. Missing friends and the community of the classroom are both in the top five drawbacks of remote engagement.
From the qual study, we learned more about what loneliness means to people:
"I was less happy in a remote setting because I was no longer able to spend time with my friends. The interpersonal relationships you build with your peers are so important to me and overnight it was gone. Once the pandemic started, I never saw my friends again and I have since graduated."
Anthony G. (He/Him/His) | 22 | Winston-Salem, NC, US
"I think happiness stems from being around other people, and I’m not doing that when I am learning in a remote setting. I miss my peers!"
Cameron Q. (He/Him/His) | 18 | Overland Park, KS, US
Our survey of workspace styles revealed that the top workspace is a bedroom desk (49%) and (confirming our hunch) the second most common is students’ beds (35%).
From our qual study, we collected some cool artifacts and quotes about people’s workspaces:
Karen is a researcher at dscout. She has a master’s degree in linguistics and loves learning about how people communicate with each other. Her specialty is in gender representation in children’s media, and she’ll talk your ear off about Disney Princesses if given half the chance.