G-Visor Prototype R1 — User Testing

For the Global Virtual Design Sprint last April (2019), I took on the role of research lead for the Internet of Things (IoT) Second Sight Initiative. I was asked to prepare, recruit participants for, and organize a user interview and testing effort to validate a prototype our team was working on.

And here’s how I went about it.

Btw, if you’re interested in participating in the next GVDS, which will be held this November 2019, check out this announcement.

P.S. To clarify, we called our smart-glass prototype G-Visor, and it has nothing to do with Google’s gVisor.

Preparing the Test

I planned to create a role-playing type of user test. I wanted the user to feel immersed in the character of the persona we were designing for. To do that, I needed to get some props together.

The first order of business was to attach a DIY spy-cam to a pair of sunglasses. I put pencil to paper and started sketching out what that might look like.

I made some annotations to this image so you could understand what each component was and how they related to one another. (1 of 3)
These were all the physical components of the spy-cam I was building. Note that none of the components pictured here were pre-built. (2 of 3)
With the general sketch done, I started affixing the components to the sunglasses with masking and Scotch tape. (3 of 3)

My plan was to build the physical prototype so that the spy-cam could provide a live video feed over a local wi-fi connection. However, it turned out that the video generated by the attached spy-cam was too small and of low quality.

I started working on the headpiece for the headset. I had the Punisher looking over my work just to make sure I wasn’t slacking off or screwing up anything. He was a very strict taskmaster! (1 of 3)
Another view of the headpiece, along with the quad-pod I was using to prop up my Android phone to test the video feed. (2 of 3)
As you can see, the cam barely captures the Gutts figure, and he’s only 32 cm (about 12 inches) tall… yet we were trying to capture something as large as an adult human’s visual field. (3 of 3)

So, the conclusion from this quick test was that the DIY spy-cam wasn’t cut out for the job at hand.
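As an aside, if you ever want to sanity-check a DIY camera feed like this before a test session, a few lines of OpenCV are enough to confirm the resolution you’re actually getting. The sketch below is only illustrative and rests on assumptions: it presumes the camera module exposes an MJPEG stream over HTTP on the local wi-fi, and the URL is hypothetical, not the one I used.

```python
# Minimal sketch: sanity-checking a DIY wi-fi spy-cam feed with OpenCV.
# Assumptions: the camera serves an MJPEG stream over HTTP on the local
# network; the address below is hypothetical. Your module may use RTSP
# or a vendor-specific endpoint instead.
import cv2

STREAM_URL = "http://192.168.1.42:8080/video"  # hypothetical local-network address

cap = cv2.VideoCapture(STREAM_URL)
if not cap.isOpened():
    raise SystemExit("Could not open the camera stream - check the wi-fi and URL.")

ok, frame = cap.read()
if ok:
    height, width = frame.shape[:2]
    print(f"Frame size: {width}x{height}")   # reveals how small the feed really is
    cv2.imshow("spy-cam feed", frame)         # eyeball the image quality
    cv2.waitKey(0)

cap.release()
cv2.destroyAllWindows()
```

A quick check like this would have flagged the tiny, low-quality frames before any tape ever touched the sunglasses.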

While I couldn’t get a proper video feed in place, we still needed the sunglass prototype to look and feel real to our users during testing and their subsequent interviews. So, I went ahead with the existing design, adding a Bluetooth headset as an input/output (I/O) device.

Adding a Bluetooth headset into the mix while also checking the video feed.

To stand in for the live video feed, I made the conscious decision to act as a living accessory for our participants. I held my smartphone, preloaded with a video conferencing application, as close to their eyes as possible. This would give the illusion of the sunglasses acting as a video camera through which different images could be viewed.

I also acted as the participants’ method of paying for something at the convenience store while holding the camera (smartphone) for them. The prototype’s AI, which would help participants make a purchase, would also be emulated by the same person holding the phone.

If everything worked, the G-Visor Glass would appear to “see” what the user would see if they were using an actual prototype to find what they were looking for, and the AI would guide the user toward making a purchase.

The best-case scenario that I was shooting for in our live user testing. *fingers crossed*

In summary, here’s everything I had arranged for our user testing sessions.

Tools:

  • G-Visor Glass mockup
  • Bluetooth headsets (1 for user, 1 for G-Visor AI)
  • Smartphone with video conf app
  • Laptop for G-Visor AI

People needed for testing:

  • A visually impaired person
  • A videographer to capture the experience from a 3rd person perspective.
  • Someone who could emulate the G-Visor AI
    (Note: I had help from a friend named Risma, who was in another office. She viewed the same video feed while communicating with the test participant as the “AI”.)
  • Someone to hold the smartphone and act as the “wallet”
    (That’s me. However, I would not be able to hear the dialog/conversation between the user and Risma.)

Participant #1: Tofi (4/30/19)

For our first session with Tofi, I kept things simple.

  • The path Tofi would take with the G-Visor Glasses (from booth to destination) would be around 15–20 steps.
  • There were 2–3 turns Tofi would need to take to get to her desired item in the convenience store.
  • We gave Tofi an objective: to buy a particular brand of detergent. We wanted something that wouldn’t break if dropped and that stayed within a manageable weight.
  • We made sure there weren’t any fragile objects near the target item (detergent), nor on the path to get to the racks where it was placed.
Here’s the 3rd person view of the experience (from Risma’s laptop)

We used Zoom for our teleconference application to facilitate communication between my smartphone and Risma. Unfortunately, we had some issues:

  • Zoom did provide Risma with the live feed, but we neglected to record it (oops).
  • The video was delayed or sometimes froze altogether, due to connectivity issues with the on-site wireless.
  • Our problematic connection also caused the video and audio to drift out of sync with one another.

After all was said and done, we had a post-interview conversation with Tofi about her experience.

Participant #2: Tio (5/3/19)

With our second participant, I wanted to offer Tio several possible tasks to choose from. To figure out which, I needed to touch base with him and talk about his previous experiences buying things from a store.

When I asked him “What do you most frequently buy at the mini-mart (convenience store)?”, he replied, “I rarely visit the minimart, so I don’t know.” I was dumbfounded and kicked myself for assuming that all visually impaired people casually go to physical stores to get what they need.

So, I needed to improvise a bit on what I had planned.

Here’s what we captured from Tio’s session on our live feed.

Since I couldn’t afford the risk of Tio engaging with fragile objects (such as glass bottles), I decided to target a smaller but safer object: a plastic-bottled soft drink or water.

This choice of target item also lengthened the path Tio would need to take; it was the longest possible path we could try in this particular convenience store. While there were alternatives on the ‘Promo Display Rack’, we skipped that option and tested our prototype on the longer, more challenging path.

We also adopted Google Duo for Tio’s session instead of Zoom. The poor performance and connectivity issues during Tofi’s session had us searching for alternatives (Skype, WhatsApp, etc.). In the end, Google Duo gave us the least delay and the best video-audio sync in our pre-session trials.

After all was said and done, here’s what Tio had to say about his experience with the prototype.

Post-Test Thoughts

Overall, I learned quite a lot from testing and experiencing the prototype with both Tofi and Tio. We captured a lot of candid and insightful feedback from them on what we did well and what we needed to improve. Just seeing their feedback on video was very meaningful for both me and the team.

If you have any questions about what you’ve read in my article, please don’t hesitate to reach out and get in touch. Feel free to include ideas and perspectives on how I can conduct better research in the future.

Thanks for reading!

Credits:

  • Thanks to Robert Skrobe for his generous effort in helping me polish this article’s English.

Recognition:

superimposed subterfuge