Testing babies online over Zoom (Part 1)

Shari Liu
6 min read · Aug 6, 2020


Photo by Charles Deluvio on Unsplash

I spent six years of my PhD conducting looking time studies with babies in the lab, and the last few weeks piloting replications of those studies over Zoom. In this series of blog posts I will answer the following two questions:

  1. Is it possible to do violation-of-expectation (VOE) studies with babies in their homes over video chat? (Part 1, this post: Yes, and here’s how.)
  2. Are the data high-quality? (Part 2, next post: Yes! Let me show you.)

Some preliminaries:

  1. These blog posts will not cover recruitment (though check out Children Helping Science for a solution on this front), IRB approval (please, don’t test participants without your institution’s approval), or stimulus design. Here, we’ll focus on what to do after you have IRB approval, a set of stimuli, and babies to test.
  2. The focus here is on violation of expectation (VOE). VOE is a method where babies are familiarized or habituated to a stimulus (e.g. see many pictures of cats, one at a time), and then tested on new stimuli (e.g. new pictures of cats, or new pictures of dogs). The primary measure is looking time — how long babies choose to look at each stimulus — and this measure is thought to reflect how novel or informative babies find each test stimulus, relative to what they saw during habituation or familiarization. Although VOE is the focus here, I hope that the content below will be helpful for researchers who use preferential looking (left vs right), anticipatory looking, and other behaviors (e.g. pointing, smiling) as dependent measures.
  3. You might be wondering why I didn’t run these studies on Lookit. I do plan on moving studies to Lookit in the future, but here, I wanted to present infant-controlled trials, such that the length of the trial depends on the attention of the infant. This feature isn’t available on Lookit (yet). However, the Lookit platform is incredibly powerful, and has many benefits over the methods I cover here. Check out this tutorial to learn more.
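To make the infant-controlled idea concrete: in these studies, a trial typically ends when the baby looks away continuously for some criterion (often 2 seconds) or when a maximum trial length is reached, and the dependent measure is cumulative looking. Here is a minimal sketch of that logic, assuming gaze is coded as a stream of look/no-look samples at a fixed rate; the function name and parameters are illustrative, not jHab’s actual implementation.

```python
def run_trial(gaze_samples, dt=0.1, lookaway_limit=2.0, max_trial=60.0):
    """Compute looking time for one infant-controlled trial.

    gaze_samples: iterable of booleans (True = infant is looking),
    sampled every `dt` seconds. The trial ends when the infant looks
    away for `lookaway_limit` consecutive seconds, or when `max_trial`
    seconds have elapsed, whichever comes first.
    Returns cumulative looking time in seconds.
    """
    max_samples = round(max_trial / dt)
    lookaway_samples = round(lookaway_limit / dt)
    looking = 0   # samples spent looking
    away_run = 0  # current run of consecutive lookaway samples
    for i, is_looking in enumerate(gaze_samples, start=1):
        if is_looking:
            looking += 1
            away_run = 0  # brief lookaways reset, they don't end the trial
        else:
            away_run += 1
            if away_run >= lookaway_samples:
                break  # criterion met: end the trial
        if i >= max_samples:
            break  # hit the maximum trial length
    return looking * dt
```

For example, a baby who looks for 3 seconds and then looks away for 2 seconds would end the trial with a looking time of 3.0 seconds; a baby who never looks away would be capped at `max_trial`.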

Question 1: Is VOE over Zoom possible, and how?

The answer is “absolutely yes”. Here are my suggestions for how to run your own violation of expectation studies over Zoom.

Make the stimuli easily accessible. I put all stimuli into unlisted YouTube playlists, one playlist per study. (Tip: if you designate a video as ‘made for kids’, you don’t have to worry about ads.) Here’s an example, with an initial baby-friendly video to engage the baby during high chair and camera placement, calibration videos, attention-getting videos, and the actual stimuli. I went with this option because I did not want to deal with issues of video compatibility, and I also did not want to ask parents to download anything. (Keep in mind, though, that YouTube is not accessible worldwide. Vimeo may be a good alternative.)

Figure out Zoom workflow. This was definitely the most work-intensive part of the process, but totally worthwhile!

  • Develop a family-friendly set of slides to introduce what’s going to happen, ask for consent, and debrief, like these from Hyowon Gweon’s lab.
  • Do your stimuli look good over screen share on Zoom? If so, great — you can play the videos on your computer, and share your screen with parents. If not (this was true of my stimuli), you can send the playlist to parents and have them share screens with you during the study while you record.
  • How will you measure the infant’s attention during the study? Because my studies require that I dynamically present the stimuli based on each infant’s behavior, I can’t just code the videos after the session. In the lab I use jHab, from Amanda Woodward’s lab, and that’s what I’ve been using for online testing too. If parents will be playing the videos on their computers, you’ll need to request remote control so that you can start and stop each video.
  • What views will you need to record over Zoom? Check out all the options here. Consider how you are going to stitch different views together afterwards. On this front, one powerful video-editing tool to check out is FFmpeg.
  • Write down a step by step procedure, and follow it exactly, just like in the lab. (Here’s mine.) During piloting, be sure to revise as you figure out what works best for you.
  • Test things out along the way with labmates. Practice communicating (this is very important — oftentimes, you cannot see what the parent can see), and keep in mind that not all parents will be familiar with screen sharing, YouTube, or Zoom!
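On the stitching step above: one common approach is to place the two recorded views (e.g. the baby view and the screen-share view) side by side with FFmpeg’s hstack filter. Here is a minimal sketch that builds such a command; the file names are hypothetical, and it assumes both recordings have the same height (the actual run is left commented out so nothing executes without ffmpeg and real input files).

```python
import subprocess  # only needed if you uncomment the run() call below


def build_stitch_command(left, right, out):
    """Build an ffmpeg command placing two recordings side by side.

    Uses ffmpeg's hstack filter; assumes both inputs share a height.
    Audio is taken from the first input, if it has an audio stream.
    """
    return [
        "ffmpeg",
        "-i", left,    # e.g. the baby-view recording
        "-i", right,   # e.g. the screen-share recording
        "-filter_complex", "[0:v][1:v]hstack=inputs=2[v]",
        "-map", "[v]",   # output the stacked video stream
        "-map", "0:a?",  # keep the first input's audio, if present
        out,
    ]


cmd = build_stitch_command("baby.mp4", "screen.mp4", "stitched.mp4")
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # run only once ffmpeg and the files exist
```

If the two views differ in height, they would need to be scaled to a common height first (e.g. with ffmpeg’s scale filter) before hstack will accept them.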

Involve parents and guardians! When families come into the lab, everything is set up ahead of time: in our lab, there’s a dark and quiet room, a big screen, cameras and computers for displaying stimuli and recording the session, and basically nothing else. Testing babies at home is really different, and you’ll need parents to help you make the session as successful as possible.

  • Acknowledge that this is not lab testing. This seems obvious, but not everyone has: a laptop/computer/iPad, high-speed internet, a high chair, a quiet room at home, familiarity with Zoom, familiarity with infant studies, etc. Be sure to acknowledge parents’ effort and time, and acknowledge that this adventure is new to you, too.
  • Send parents information ahead of time about your ideal setup (e.g. a quiet room, no pets or people walking through, etc), and acknowledge that the ideal is not always possible. If parents have time and energy before the session, they may be able to work on these steps ahead of time. If not, then they will at least be familiar with the goals of setup.
  • Ask parents what they think will work best. Parents know their babies, and their homes, a lot better than you do! For example, some babies may be a little nervous about sitting in a high chair without their parents right in front of them, so those babies might be more comfortable in a lap.
  • Don’t be afraid to ask for help! If you’re not sure how to solve a particular issue (e.g. the baby is kicking the table holding the laptop, and the video feed is unstable), feel free to ask parents for suggestions. If you notice something in the middle of the experiment (e.g. the laptop looks unstable and is about to fall, the baby found a cool toy hidden in their high chair and is really interested in it), don’t hesitate to pause and check in with the parent.
  • After the study, ask for feedback. Some of the steps in my protocol came directly from parents.
  • Thank parents sincerely for their time and effort. We can’t do this work without their help, in normal or pandemic times — so be sure to acknowledge their contribution, and if possible, pay them for their time.

That seems like a lot of work. Is it worth it?

My next blog post will dig into the data from these sessions, but in short, I think that this method is absolutely worth trying.

Consider the following tidbits (more next time):

  • Babies are comfortable at home. In lab studies, some proportion of babies do not complete the study because of fussiness. In my studies, this is usually between 10 and 20 percent, depending on the age of babies and other factors. But out of the ~30 sessions I have run, only 1 session ended because the baby was upset.
  • Babies are, overall, very attentive. For one of my studies (N=60), babies in the lab look for 60s (median), and ~55s (mean) on the first familiarization trial. At home, using the same stimuli, a smaller pilot sample of babies (N=9) look for 60s (median), and ~50s (mean). Given that the babies in the pilot studies were older (i.e. on average, more active than the babies in the lab study) and that they were looking at a small screen at home (vs. a huge projector screen in the lab), this is pretty cool. I have not needed to adjust trial lengths or lookaway lengths at all so far — using the same criteria seems to be working well. Of course, this may not be true of other studies.
  • Online testing is so much easier for parents! They do not need to pack all their baby gear, drive to the lab, participate, and reverse the whole process, all for just a 5–10 minute experiment. (These Zoom testing sessions usually last 20–30 minutes.) So for studies that are able to run remotely, it is definitely worth exploring this option.

More next time. If you have questions or feedback, please feel free to get in touch at shariliu@mit.edu.
