BeeInSight Interaction Prototype
How might one incentivize surveys and data collection in a smartphone application?
How could the data collection interaction be quick and simple in order to focus attention on the personal benefits?
BeeInSight is an application concept that attempts to make environmentalism and wildlife protection beautiful while collecting data for researchers, by embedding data collection into the application's essential interactions.
The development of this prototype began in the HCDE Charrette Studio, where we ideated the essential considerations to guide the interaction design process. The initial prompt was to consider how a user could submit data on animals and the environment to researchers elsewhere. The group and class brainstormed on specific animals and environments, specific mobile users, specific motivations, and how best to collect the data from the user. While ideating with my group, we kept circling back to supporting research that enables environmental sustainability or wildlife protection.
Environment | Animal | User | Motivation
One of the animals most at risk and most crucial to our society is the honeybee. And so, my process focused not only on designing an interaction between user and smartphone, but also between user and the honeybee's environment. In order to collect data on honeybees, the user must find the honeybees and perhaps inherit qualities of the honeybee via the phone/application.
How might I incentivize users to approach wildflowers, the honeybees' feeding posts? As this project is conceptual, I embraced limitless potential and decided the application would include a physical infrared filter for the user, in order to capture images of how a bee might see (image 1) while providing a new artistic outlet for the photographic user. Which brings me to the user typology I considered most: an individual interested in nature photography, and/or an individual interested in preventing the honeybee's extinction. The application would provide opportunities for the user to learn nature photography with an alternate lens while helping protect the environment.
The infrared filter invites the user to find and photograph flowers, among the only plants with natural infrared patterns. Upon finding and photographing a flower, the user is prompted to answer whether or not a bee was seen around the photographed flower. If so, the screen changes to prompt for two more variables: numerical data on the number of bees, and textual information about anything unusual in the scene. Upon submission, researchers can then access this data along with the photograph and geolocation to better understand where, and to which flowers, these at-risk bees are drawn.
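The submission flow above implies a simple record structure behind each entry: the photograph, its geolocation, and the conditional bee fields. As a minimal sketch only (the names `BeeObservation` and `build_observation` are hypothetical, not from the actual prototype), it might look like this:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class BeeObservation:
    """One record sent to researchers after the user photographs a flower."""
    photo_path: str                 # the infrared photograph of the flower
    latitude: float                 # geolocation captured with the photo
    longitude: float
    bee_seen: bool                  # first prompt: was a bee around the flower?
    bee_count: Optional[int] = None # second prompt, shown only if a bee was seen
    notes: Optional[str] = None     # anything unusual about the scene

def build_observation(photo_path: str, lat: float, lon: float,
                      bee_seen: bool,
                      bee_count: Optional[int] = None,
                      notes: Optional[str] = None) -> BeeObservation:
    # The follow-up fields only apply when a bee was seen, mirroring
    # how the second screen is skipped in the prototype's flow.
    if not bee_seen:
        bee_count, notes = None, None
    return BeeObservation(photo_path, lat, lon, bee_seen, bee_count, notes)
```

The conditional clearing mirrors the screen flow: when the user answers "no" to the bee prompt, the second screen never appears, so those fields stay empty in the submitted record.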
My process can be broken down quite simply: create an interaction flowchart, create graphical layouts for each step of the flowchart, import the layouts into Marvel, and, upon testing, loop through these steps for editing. I found this process beneficial in organizing the interaction and figuring out what screen elements should reveal themselves, and when. Marvel also provided an outlet to test whether or not my graphic design and action flows were legible in live interaction.
However, in critical hindsight, I found my process to be restricting in how the application presents itself. Because I drew separate screen templates to sketch within, I was locked into the assumption that the main elements of the application concept should fit within one screen dimension (no scrolling) and should be individual entities, rather than overlapping subtly as they transition into and on top of each other. Nevertheless, it was a quick and simple method of testing ideas. My next steps would be to attempt a more subtle integration of the screens into each other and enable a better flow between functions.
My background is in architecture, and my process is usually one of building physical models, which require precise measurements and significant planning to piece together, even if the model is a sketch. I found myself following similar methods when I picked up my drawing tools to sketch the screens. This enabled a clean look in some parts, but in the future I will try to be looser with my sketching and prototyping, to develop more iterations and break away from structured sketching.
I plan to use these techniques in developing further applications and interaction designs. Staying in these lo-fi interactions enables quick iteration and lets conceptual ideas for the app exist freely. Resting in the lo-fi stage allows for less commitment to any one idea and more willingness to toss ideas out or include more outlandish ones. The applications that would benefit most from this are interactions that are more experimental and in turn require an approach that easily separates the designer from their preconceptions and connotations.
However, with application ideas that are essentially derivatives of common applications, staying in this stage might not be efficient, as most of the graphical organization and screen flows have already been deeply considered and applied in many applications that can serve as references. I found that my application was very similar to Snapchat or Instagram in a few elements, and these elements hadn't changed significantly through my iterations. I could see similar situations arising if the entire application is derivative; in that case it is perhaps best to jump straight into high-fi interaction design to set the application apart!
Image 2: In-class photographer