Let’s do a survey! Or: User Research for a Content Strategy

Sonja Radkohl
Published in Content Mines · Feb 28, 2018

The word “survey” alone makes me sweat instantly. My mind starts imagining random numbers combined with super-difficult correlations that mean nothing to me. Nevertheless, surveys never fail to teach me something important. This is a post on how to gain insights with surveys. Easily. With great impact.

I won’t lie to you: Running a survey can be complicated and takes a lot of time and work. But if you do it right, you can gain very helpful insights quite easily. I learned this the hard way, by trying and failing before eventually succeeding. With this post, I want to share my experience with you and hopefully help you avoid the mistakes I’ve made.

Looks like fun, doesn’t it?

So, here’s the situation:

Together with my colleagues — Sandrine Fackner and Benjamin Barteder — I recently conducted user research for “Open Knowledge Maps (OKMaps)”. OKMaps is an open research platform that builds visual interfaces from search results. It operates with the scientific search engines BASE and PubMed.

OKMaps wanted to know: Who are our users? And how do they feel about us?

Lesson 1: Start with What You Want to Know — But Focus!

One of our greatest challenges was keeping the survey short. We had extended discussions on what to ask and how to phrase the questions. If I had to do it again, I would start by writing down one or two goals so I could refocus later. We finally decided on four focus areas:

  1. General information about literature search
  2. Participants who know OKMaps
  3. Participants who do NOT know OKMaps
  4. Demographic Data

Lesson 2: Create a Coding Plan & Make Sense of the Data

Yeah, that one also doesn’t sound like much fun. After running the survey, we simply imported all aggregated data into an Excel spreadsheet and started to code the answers: Simple yes/no answers were coded with 1/0, single-choice questions with 1/2/3/4 … With multiple-choice questions, it got a little trickier: every answer option had to be coded as its own yes/no question.

That’s what my coding plan looked like.
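To make the coding concrete, here is a minimal sketch in Python (we actually coded in an Excel spreadsheet; the question and answer options below are made up for illustration):

```python
# Sketch of the coding plan: yes/no answers become 1/0, a single-choice
# answer becomes its 1-based option number, and a multiple-choice
# question becomes one 1/0 column per answer option.

def code_yes_no(answer):
    """Code 'yes' as 1 and anything else as 0."""
    return 1 if answer == "yes" else 0

def code_single_choice(answer, options):
    """Code a single-choice answer as its 1-based option number."""
    return options.index(answer) + 1

def code_multiple_choice(selected, options):
    """Turn a multiple-choice answer into one 1/0 column per option."""
    return {option: int(option in selected) for option in options}

# Hypothetical question: "Which search tools do you use?"
tools = ["Google Scholar", "BASE", "PubMed", "Library catalogue"]
print(code_yes_no("yes"))                              # 1
print(code_single_choice("BASE", tools))               # 2
print(code_multiple_choice({"BASE", "PubMed"}, tools))
```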

So here’s the data and the coding plan, but what next? We processed our results with “R”, an open-source software environment for statistical computing. The only catch is the import: save the coded data in .csv format so you can read it into R smoothly.

Lesson 3: Use a Frequency Analysis to Gain Meaningful Insights

The term “frequency analysis” sounds very sophisticated. In fact, there’s not much magic to it. Basically you:

  • look at all questions you asked …
  • … and count how many times an answer was chosen by participants.

Easy, right? This method can give you very interesting insights — or did you really know your users’ age range, the ratio of male to female participants, or their general preferences before?
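In code, the whole frequency analysis boils down to counting. Here is a tiny Python sketch with made-up coded answers (our actual analysis ran in R):

```python
from collections import Counter

# Hypothetical coded answers to one single-choice question
# (1 = student, 2 = fact checker, 3 = personal researcher, 4 = scientist).
answers = [1, 1, 2, 4, 1, 3, 4, 4, 1, 2]

# Count how many times each answer was chosen ...
freq = Counter(answers)

# ... and report absolute and relative frequencies.
for code, count in sorted(freq.items()):
    print(f"option {code}: {count} answers ({count / len(answers):.0%})")
```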

This method helped us identify four target groups for OKMaps: students, fact checkers, personal researchers, and scientists.

In what context do people conduct literature research?

Lesson 4: Understand Median and Mean — Data Can Be Scattered

Let’s do some basic calculations: median and mean. The median is the value in the very middle of your sorted sample. The mean sums up all the numbers in your sample and divides the total by the number of participants (the familiar average). Placed side by side, these two numbers tell you how scattered your data is.

For our survey we asked the participants about the importance of research-tool features. We found that, for example, filter options had a median of 5 (very important) and a mean of 4.3 (important). That means most people found them very important, while only a single person gave the very opposite answer: not important. That outlier pulls the mean below the median.

The diagram to the right shows it: most people find filter options important, only one person says they aren’t. That makes the mean lower than the median.
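Here is the same effect in a small Python sketch (the ratings are invented, but shaped like our filter-options question: one low outlier drags the mean below the median):

```python
from statistics import mean, median

# Hypothetical 5-point ratings for one feature
# (1 = not important ... 5 = very important); a single outlier of 1.
ratings = [5, 5, 5, 5, 4, 4, 4, 5, 5, 1]

print("median:", median(ratings))        # 5.0
print("mean:", round(mean(ratings), 1))  # 4.3
```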

Lesson 5: Likert Scaling Provides Information on General Feelings

We wanted to know if people find literature search hard or easy. To do that, we gave them five statements (you can see them in the graph below) to be answered on a scale from “strongly disagree” (1) to “strongly agree” (5). Then we summed all scores and divided by the number of statements (5) times the number of participants, using R. Overall, median and mean came out at 2.6, so we could see a tendency toward “literature search is not super-easy”.

A Likert Scale: How easy is it to do literature search?
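The Likert calculation itself is just a pooled average. Here is a sketch with invented responses (five statements, five participants; we did the real calculation in R):

```python
from statistics import mean, median

# Hypothetical Likert responses: one row per statement, one score per
# participant (1 = strongly disagree ... 5 = strongly agree).
items = [
    [2, 3, 2, 4, 2],
    [3, 2, 2, 3, 3],
    [2, 3, 4, 2, 2],
    [3, 2, 3, 3, 2],
    [2, 3, 2, 3, 4],
]

# Pool every score across all statements and participants ...
all_scores = [score for row in items for score in row]

# ... then average: total of all scores / (statements * participants).
print("mean:", round(mean(all_scores), 1))  # 2.6
print("median:", median(all_scores))        # 3
```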

Lesson 6: Close the Case with Cross-Referencing

In the end, it’s all about summarizing your key findings. To do that, we did some cross-referencing on our questions. For example, we wanted to know how many of our fact checkers, scientists, personal researchers, and students (the target groups mentioned in lesson 3) know OKMaps.

So we compared them in a simple table (see below). Our results showed that there is great potential in advertising OKMaps to non-scientific communities like the groups “personal researchers” and, most importantly, “students”. Why? We hypothesized that advertising to these groups would increase their awareness and thus their knowledge about OKMaps.
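Such a cross-referencing table can be built with a few lines of Python, too (the group/answer pairs below are invented; we built ours in a simple spreadsheet table):

```python
from collections import Counter

# Hypothetical (target group, knows OKMaps) pairs, one per participant.
responses = [
    ("student", 0), ("student", 0), ("student", 1),
    ("scientist", 1), ("scientist", 1), ("scientist", 0),
    ("fact checker", 1), ("fact checker", 0),
    ("personal researcher", 0), ("personal researcher", 0),
]

# Cross-tabulate: for every group, count who knows OKMaps and who doesn't.
table = Counter(responses)
groups = sorted({group for group, _ in responses})
print(f"{'group':<22}{'knows':>6}{'does not':>10}")
for group in groups:
    print(f"{group:<22}{table[(group, 1)]:>6}{table[(group, 0)]:>10}")
```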

Without cross-referencing, we would never have found this out. The moral of the story: Yes, surveys can get you lots of data, but you really have to connect the dots and look at it with an open mind.

Some very basic cross-referencing work

After doing all this statistical work, I was REALLY tired. I also knew that I had not been able to explore our survey to its full potential. But hey, everyone makes rookie mistakes. In the end, OKMaps gained some crucial insights about their target groups, and that was a cool outcome of all the spinning in my head :-)

round and round, round and round, round and round …

