Undercover in Dscout

Josh LaMar (He/Him)
12 min read · Sep 5, 2019

I went undercover on dscout to see the experience from the user side. I found lots of survey problems and learned first-hand how to create a bad experience for your dscout participants, also known as "Scouts."

Here are the top issues I uncovered and my recommendations for creating the best experience for your users, and thus conducting better research in the first place.

Photo by ian dooley on Unsplash

When evaluating new research tools, it’s important to try them out from all angles. When I started using UserTesting.com, I signed up as a user to see what the experience was like, and I went through my own studies to make sure they worked as intended. I’d been hearing about dscout for a while, and now that I have my own research company, I decided it was time to learn more about the experience from the Scout side in order to create better studies.

So, I went undercover.

I signed up for the free researcher account with my work email and I signed up for a “Scout” account with my personal email. And then I started applying for missions… and wow, I learned a LOT about how to make good and bad experiences for your potential scouts on dscout.

It’s probably every researcher’s dream to be a participant in a study; it has certainly been mine for a while. I always try to make my studies dynamic and interesting to ensure participants have a good time, so that I can learn a lot about the topic I’m studying.

I don’t think everyone else is like me.

In this article, I’ll discuss the most common errors I came across as a user of the platform, followed by my top recommendations that UX researchers can implement right now to improve their dscout studies and make them more fun.

Mission Photo, Title, and Description Text

Your photo, title, and description text are the first things your users see about your Mission, and you have to entice them to want to be a part of it. Users have to apply to join your mission, so this is key.

When browsing the lists of missions, the first thing your users will see is the photo you select: pick a good one! It should be related to your mission thematically and it should be representative of the things that your users will be doing.

Boring or generic photo: boring Mission.

Fun photo: fun Mission.

participate.dscout.com

Your title is the next thing they will see. From the feed, they only see the Photo, Title, Reward and Number of Openings (See below for more on Reward and Openings). Your title should be clear and enticing and not too long. The goal is to get them to click on the item to learn more and apply.

Your personality as a researcher comes out through your photo selection, title, and description. I’ve seen intriguing titles that made me want to click to learn more, and boring titles that signaled studies I probably didn’t want to be a part of.

Another interesting part of this process is that it’s up to users to self-select whether a mission applies to them. I’m not sure how many of the open missions get pushed to me based on my profile settings, or whether I just see everything that’s currently open. I think there might be an initial push based on my profile. But after that, I need to be enticed.

Recommendation: Treat your mission photo, title, and description like the top part of the marketing funnel… because it is. You should be Attracting your target users to your study with your title and photo, generating Interest with your description, and then Converting them to apply to be a part of your study.

Reward vs Openings

If you do some quick math, you’ll see the budget that the researcher had for the study:

Reward (Incentive/Gratuity) x Number of Openings = Budget

I’ve seen some studies with 100 or even 150 openings in them! That’s a lot of people to go through, but in those cases, the reward was very low (around $30 for one that I saw for 150 openings). In other studies, I saw a much higher reward ($75–100) with only 30 openings.
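To make the math concrete, here is a quick sketch of the back-of-envelope calculation a Scout can do from a mission listing (the function name is mine; the figures are the examples above):

```python
# Back-of-envelope budget math from a dscout mission listing.
# Figures below are the example numbers mentioned in the article.

def mission_budget(reward_usd: int, openings: int) -> int:
    """Reward (incentive/gratuity) x number of openings = incentive budget."""
    return reward_usd * openings

# Breadth-focused study: many openings, low reward.
print(mission_budget(30, 150))  # 4500

# Depth-focused study: fewer openings, higher reward.
print(mission_budget(75, 30))   # 2250
```

Two studies can spend a comparable total while signaling very different priorities, which is exactly what Scouts read from the listing.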

From the user perspective, it’s very clear what your company priorities are: you either want depth and quality with fewer users OR you want a quant study without paying for a quant study and you want to get as many users as you can.

When I see a really low reward as an end user, all I can think is, “Will it really be worth all that time just to get $30?”

Probably not.

Unless participating in studies is your full time job, but those are exactly the people that most researchers never want in their studies.

If you make decent money already and you’re using dscout to make a little more on the side, it’s likely not going to be worth it. And then you’re alienating the people that you really want to be a part of your study in the first place.

Recommendation: Pay people well and be picky so you get only the highest-quality candidates. You’ll be happier and so will your users. The gift of qualitative research is its depth, and dscout is tailored for depth.

Fatigue

Once you’ve attracted a user to apply to be a part of your Mission, they have to fill out a survey. Remember learning about this thing called “survey fatigue”? Yeah, it’s a real thing and it definitely applies here!

I’ve gone through about 20 different applications so far, and I’ve seen the best and the worst!

One application survey was so long that I almost quit… twice! It just kept going and going and going… It was the Energizer Bunny of surveys! It got long and it got boring, and the worst part was that it was just the application to maybe be a part of the study. Even if I was selected, the reward was only $25!

Energizer.com

If the researcher wanted this much from me just for the application, I worried about how much they would ask for in the rest of the study. I went through with the survey just to see how long it would be; I think it took about 10 minutes with all the questions and typing and videos.

I wondered if the researcher was trying to get all the information they needed from the application survey alone, without having to pay for completions of the Mission. This is an interesting case of potentially evading payment for users’ time. As a researcher, you select the number of completions you want and you are charged for that number of completes.

But if your application survey is really long, you end up gathering a ton of data for free, though maybe only about people you aren’t as interested in. Users aren’t paid for the time it takes to do the application survey, and the researcher gets all that data for free. Yes, they get to be picky about who they want in their study, but when the survey got to the 20th question, I was just too tired.

I have more thoughts about the types of survey questions below.

Recommendation: Keep your application surveys short, and by short I mean 5–7 questions maximum. This isn’t a quant study and your users are not being paid to fill out your application. Respect their time and they will respect yours back.

The optimal application survey would include:

  • A few questions to weed out the folks you definitely don’t want
  • One question where they have to type a sentence or two
  • One video on an interesting topic so you can gauge whether you want that person in your study

The Mobile Platform

For the end user, the primary way of interacting with dscout is through a mobile phone app. This is good and bad. I’ve been testing it out using the app on my iPhone.

It’s wonderful because it’s easy to install the app on your phone and then you’re ready to go. You can easily show whatever it is you’re talking about as well just by taking a photo. You can record a video with your phone camera and upload it to the system. The dscout application does a great job at making it easy to respond to the survey questions and to take/upload media.

Other question types include Radio buttons and Checkboxes. Super simple.

However…

When you have a survey application that asks you to type, it means you’re typing on your mobile phone’s soft keyboard, not on your laptop. (The only time you’re on your laptop is if you’re in a live session with a researcher through the web.)

So, if you’re asking the user to describe something in text, 1–2 sentences are fine. But if you need more than that, fatigue sets in quickly as they start responding with a paragraph or two of information… Also remember that the user isn’t getting paid for this response. Everything you ask them to do, you ask them to do for free, for the chance of maybe being selected later. Asking for paragraphs of text gets tiresome quickly when typing on a phone.

One application I went through offered a great idea, which I’ll share with you here: use your phone’s speech-to-text feature to dictate your answers instead of typing. That’s brilliant! Did I know how to do that? No, actually not. Despite having worked in technology for the past 15 years, there are lots of things I don’t know how to do, and finding features on my iPhone is one of them. So, even better than saying it’s possible, tell your users how to do it.

Recommendation: Remember your users are on a mobile phone and optimize your questions for input on the phone (i.e., photos and videos). The mobile phone form factor is horrible for typing a lot of text.

Good Survey Questions

This final section is probably one of the most important ones because it’s just about creating good survey questions. By far, the most egregious errors I found taking application surveys were with the questions themselves. Here are some tips.

Ask ONE question at a time

Yup. Just one. Not two. Not three. Not 12.

When you ask users to record a video, your question shows up on the screen. But when you tap the Add Video button, you get to your camera and you start recording a video. If you ask one question (like the good example shown below), it’s not a problem. The user knows exactly what to do.

However, if you ask two or three questions, the user might forget some of them. For example, I went through one survey that asked three questions and asked me to respond to all of them in the same 60-second video. Halfway through recording, I forgot the other questions and had to stop, which was super annoying. I’d rather record a couple of videos about different questions than try to cram it all into one.

Video Capture Screen on dscout

Another bad way of asking two questions at once is a single question that asks two things at the same time. I found an example of this but didn’t get a screenshot, so imagine I asked you:

On a scale of 1 to 5 where 1 is infrequent and unimportant and 5 is frequent and important, how would you rate blah blah blah?

The problem here is that it assumes frequency and importance are always linked for users. But what if they aren’t? This should really be split into two questions. It’s the differences between the two responses that are interesting; if you find a pattern between them, that’s also interesting. But when you combine two questions into one, you can’t disentangle them later, and you’ll never know what the user really meant unless you come back and ask them again.

Radio Buttons vs. Sliding Scale

This is an interesting topic because it brings together many of the previous points. On a mobile phone, a radio button question invites users to select one and only one response. A sliding scale question is very similar; however, the two use screen real estate differently. A radio button question can ask the same thing as a 5-point Likert scale question, but the radio button implementation is much easier to use.

Example Radio Button question on dscout

Next, take a look at the sliding scale question. It took me a couple of tries to make the dial move and then select a number; it was way more difficult than it should have been. For a 5-point Likert scale, it would have been easier to just use radio buttons.

Sliding Scale Question on dscout

Closing the Loop

Good surveys have clear questions with defined scales, aren’t too long, and still get to the most important parts of what you need to know. But not every Mission is for every person, and it’s nice to know whether you’ve been automatically screened out or not.

Most of the applications I submitted didn’t have this. But a few did and it was really nice to see.

Closing the Loop in dscout

I’m sure the researchers know exactly what they’re looking for, and they’ll get it when they’re clear about it. But it’s really nice to close the loop with the Scouts you know you won’t be working with, and this message does that in a nice way.

Feature Requests

As I’ve been through the experience being a Scout, there are some things that I would have liked that could have improved the experience.

  • Indicate particular interest: I’d love to be able to indicate that I’m super interested in a particular study topic and really want to be selected. I’d love to just be able to flag for the researcher that I think I’m perfect for the Mission and I have a lot of interesting things to say about a topic. Perhaps this is just a checkbox somewhere in the application.
  • Not interested follow-up: When I’m not interested in a study, it would be good to be asked why. For example, there was one about diabetes. I don’t have diabetes, so I marked the study as “Not interested.” It would be great if a follow-up question about why could be tied to my user profile and then used to tailor future Missions to me.
  • Mission application progress: I would love to see how far I’ve progressed and how much more is coming, e.g. “Question 5 of 12” or “75% complete.” When I was on the Energizer Bunny survey, I really would have liked to know how much more was coming, so I could finish it later or not apply at all.
  • Scouts Selected: I’d really like to be able to see when scouts will be selected for a Mission… e.g. by date. Then I know I’m likely not selected if that date goes by.
  • Not selected status: I’d also like to know that my application was reviewed and that I was not selected. It’s a good way to close a loop that, so far, has only been closed when a survey response tells me.

It’s now been a week since I started using dscout. I’ve probably submitted about 20 applications and so far, I haven’t been selected for any Missions. I’m a little sad about that because I think it would be so much fun. But I guess that’s just how it goes.

What I’ve really enjoyed about this experience is learning firsthand the joys of seeing new missions in the feed, thinking I’d be perfect, and then applying. There seem to be so many different types of studies going on and they all look interesting. And the experience looks fun too.

I’ve also really appreciated how much control the researchers have in selecting participants. No one selected me, but that’s ok. I know that when I’m running my own studies, I’ll have all the power!

I hope this article has been fun to read and hopefully helped researchers think more deeply about the research experience they are providing for their users. I’d love to hear about other tips and tricks you’ve come up with when using dscout as well, so feel free to comment here or in the Facebook or LinkedIn comments.

Josh LaMar is CEO & Co-Founder at Amplinate, a product strategy thought partner committed to helping tech companies save millions by amplifying what resonates with their customers in order to build products that solve real problems.
