More Data Gathering Methods

Payel Bandyopadhyay
Nov 29, 2016


This week’s readings cover more methods of gathering data.

Hutchinson et al. [1] discuss two technology probes used to gather data:

a. MessageProbe: deployed in both a U.S. family and a Swedish family, with different results. People were somewhat hesitant about sending messages.

Figure 1: U.S. MessageProbe (left) and Swedish MessageProbe (right).

b. VideoProbe: deployed in French families.

Figure 2: VideoProbes in the French families’ homes

Promising designs: through log files, interviews, and workshops, the families identified a variety of interests, from practical to whimsical, for staying in touch with members between and within households.

The above methods of data collection were successful in three ways:

a. they helped reveal practical needs and playful desires within and between distributed families.

b. they provided real-life use scenarios to motivate discussion in interviews and workshops.

c. they introduced families to new types of technologies beyond the accustomed PC-monitor-mouse-keyboard setup, which encouraged them to consider more whimsical and creative uses of technology.

Druin [2] discusses “cooperative inquiry”. It includes three crucial aspects that reflect the HCI literature:

(1) a multidisciplinary partnership with children;

(2) field research that emphasizes understanding context, activities, and artifacts;

(3) iterative low-tech and high-tech prototyping.

These three aspects form a framework for research and design with children.

Figure 3: Image of contextual inquiry

Cooperative inquiry has been developed to support intergenerational design teams in developing new technologies for children, with children. While this approach requires time, resources, and the desire to work with children, it is a thought-provoking and rewarding experience. Cooperative inquiry can lead to exciting results in the development of new technologies and design-centered learning, and the methodology continues to evolve as the authors use its techniques over time.

Kittur et al. [3] discuss Amazon’s Mechanical Turk, a promising platform for conducting a variety of user-study tasks, ranging from surveys to rapid prototyping to quantitative performance measures. Hundreds of users can be recruited for highly interactive tasks at marginal cost within a timeframe of days or even minutes. However, special care must be taken in the design of the task, especially for user measurements that are subjective or qualitative.

Two experiments were conducted to test the utility of Mechanical Turk:

  1. The authors attempted to mirror the task given to expert admins as closely as possible.
  2. The authors tried a different method of collecting user responses in order to see whether the match to expert user responses could be improved and the number of invalid responses reduced.

In Experiment 1, the authors found only a marginal correlation of turkers’ quality ratings with expert admins, and also encountered a high proportion of suspect ratings. However, a simple redesign of the task in Experiment 2 resulted in a better match to expert ratings, a dramatic decrease in suspect responses, and an increase in time-on-task.
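The idea behind the Experiment 2 redesign can be sketched in code. The sketch below is not the paper’s actual materials: the field names (`num_images`, `num_sections`, `num_references`) and ground-truth values are hypothetical, but they illustrate the general technique of pairing a subjective rating with objectively verifiable questions, so that careless or invalid responses can be detected and filtered.

```python
def is_valid_response(response, ground_truth, max_errors=0):
    """Accept a worker's subjective rating only if their answers to
    objectively verifiable questions match the known ground truth."""
    errors = sum(
        1 for key, expected in ground_truth.items()
        if response.get(key) != expected
    )
    return errors <= max_errors

# Hypothetical ground truth for one article being rated.
truth = {"num_images": 4, "num_sections": 7, "num_references": 12}

# A careful response matches the verifiable facts; a careless one does not.
good = {"num_images": 4, "num_sections": 7, "num_references": 12, "rating": 6}
bad = {"num_images": 0, "num_sections": 1, "num_references": 0, "rating": 7}

print(is_valid_response(good, truth))  # True
print(is_valid_response(bad, truth))   # False
```

Because answering the verifiable questions requires actually engaging with the material, this kind of design makes gaming the task roughly as effortful as doing it honestly, which is consistent with the increase in time-on-task the authors observed.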

References:

[1] Hilary Hutchinson, Wendy Mackay, Bo Westerlund, Benjamin B. Bederson, Allison Druin, Catherine Plaisant, Michel Beaudouin-Lafon, Stéphane Conversy, Helen Evans, Heiko Hansen, Nicolas Roussel, and Björn Eiderbäck. Technology probes: inspiring design for and with families, Proc CHI 2003.

[2] Allison Druin. Cooperative inquiry: developing new technologies for children with children, Proc CHI 1999, 592–599.

[3] Aniket Kittur, Ed H. Chi, and Bongwon Suh. Crowdsourcing User Studies with Mechanical Turk, Proc CHI 2008, 453–456.
