More Data Gathering Methods
This week’s reading covers additional methods of gathering data.
Hutchinson et al. [1] discuss two technology probes used to gather data:
a. MessageProbe: deployed in both U.S. and Swedish families, with different results in each; participants were somewhat hesitant to send messages.
b. VideoProbe: deployed in French families.
Promising designs: through log files, interviews, and workshops, the families identified a variety of interests, from practical to whimsical, for staying in touch with members between and within households.
These methods of data collection were successful in three ways:
a. they helped reveal practical needs and playful desires within and between distributed families.
b. they provided real-life use scenarios to motivate discussion in interviews and workshops.
c. they introduced families to new types of technologies beyond the accustomed PC-monitor-mouse-keyboard setup, which encouraged them to consider more whimsical and creative uses of technology.
Druin [2] discusses “cooperative inquiry,” which includes three crucial aspects that reflect the HCI literature:
(1) a multidisciplinary partnership with children;
(2) field research that emphasizes understanding context, activities, and artifacts;
(3) iterative low-tech and high-tech prototyping.
These three aspects form a framework for research and design with children.
Cooperative inquiry has been developed to support intergenerational design teams in developing new technologies for children, with children. While this approach requires time, resources, and the desire to work with children, it is a thought-provoking and rewarding experience. Cooperative inquiry can lead to exciting results in the development of new technologies and design-centered learning, and the methodology continues to evolve as the authors use its techniques over time.
Kittur and Suh [3] discuss Amazon’s Mechanical Turk, a promising platform for conducting a variety of user-study tasks, ranging from surveys to rapid prototyping to quantitative performance measures. Hundreds of users can be recruited for highly interactive tasks at marginal cost within a timeframe of days or even minutes. However, special care must be taken in the design of the task, especially for user measurements that are subjective or qualitative.
Two experiments were conducted to test the utility of Mechanical Turk:
- The authors attempted to mirror the rating task given to expert admins as closely as possible.
- The authors tried a different method of collecting user responses in order to see whether the match to expert user responses could be improved and the number of invalid responses reduced.
In Experiment 1, the authors found only a marginal correlation between turkers’ quality ratings and those of expert admins, and also encountered a high proportion of suspect ratings. However, a simple redesign of the task in Experiment 2 resulted in a better match to expert ratings, a dramatic decrease in suspect responses, and an increase in time-on-task.
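One common safeguard against low-effort crowd responses is to pair each subjective rating with a few objectively verifiable questions about the item being rated, and discard responses that answer them incorrectly. The sketch below is a minimal, hypothetical illustration of that idea (the function names, thresholds, and data are mine, not from the paper):

```python
# Hypothetical sketch: filter crowd ratings by checking each response's
# answers to objectively verifiable questions against known gold answers.
# Responses that fail too many checks are treated as suspect and dropped.

def filter_responses(responses, gold_answers, min_correct=2):
    """Keep only responses that answer enough verifiable questions correctly."""
    kept = []
    for r in responses:
        correct = sum(
            1 for q, a in gold_answers.items() if r["checks"].get(q) == a
        )
        if correct >= min_correct:
            kept.append(r)
    return kept

def mean_rating(responses):
    """Aggregate the subjective quality ratings of the remaining responses."""
    return sum(r["rating"] for r in responses) / len(responses)

# Illustrative data: two diligent workers and one who answered at random.
gold = {"num_images": 3, "num_references": 12}
responses = [
    {"rating": 6, "checks": {"num_images": 3, "num_references": 12}},
    {"rating": 5, "checks": {"num_images": 3, "num_references": 12}},
    {"rating": 1, "checks": {"num_images": 9, "num_references": 0}},
]

valid = filter_responses(responses, gold)
print(len(valid), mean_rating(valid))  # the random guesser is dropped
```

The verifiable questions also force workers to engage with the content before rating it, which is consistent with the increase in time-on-task the authors observed after their redesign.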
References: