Integrating Online and Offline Channels for Grocery Shopping Experience
The study focuses on how location intelligence and ubiquitous computing can deliver personalized experiences by integrating an operating system for the physical world with an app designed for the online experience. Data collection included two contextual interviews, surveys, and traditional interviews with two participants. By analyzing the data with an affinity diagram and sequence models, the study generated system requirements. The requirements include an indoor positioning system combined with a designed app to improve the offline shopping experience, for example through in-store shopping routes, an item-locating service, and on-sale notifications in the app. The app also provides a farming-system database, a country-of-origin database, a recommendation system, and shopping-list creation and management. Notably, since the app is combined with an indoor positioning system, it is designed for one grocery store at a time, but it is not limited to any specific grocery store.
Contextual Interview Participants
Sequence Model for Participant 01
Sequence Model for Participant 02
Two Primary Functions
Function 1: Scanning the QR code
- Find the scanning icon on the home screen
- Point the camera at the QR code
- Get the product details by scanning the QR code
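The scanning flow above reduces to a lookup: the decoded QR payload acts as a key into the store's product database, which would also back the farming-system and country-of-origin features. A minimal sketch, in which the database contents, field names, and payload format are all hypothetical placeholders:

```python
# Sketch of Function 1: map a decoded QR payload to product details.
# PRODUCT_DB and its fields are invented for illustration only.

PRODUCT_DB = {
    "GS-0001": {
        "name": "Organic Tomatoes",
        "origin": "Local Farm A",       # country-of-origin database entry
        "farming": "pesticide-free",    # farming-system database entry
        "on_sale": True,
    },
}

def lookup_product(qr_payload: str):
    """Return product details for a scanned QR payload, or None if unknown."""
    return PRODUCT_DB.get(qr_payload)

detail = lookup_product("GS-0001")
print(detail["name"])  # Organic Tomatoes
```

In practice the payload would come from the phone camera's QR decoder and the database would live on a server, but the detail screen only needs this key-to-record mapping.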
Function 2: Using AR navigation
- Find the navigation button for the recommended product
- Read the augmented reality guide and press OK
- Follow the navigation to find the recommended product
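One way the in-store routing behind Function 2 could work is to search over a coarse grid of the store floor plan, with the indoor positioning system supplying the start cell and the item map supplying the goal cell. A sketch using breadth-first search; the grid layout and coordinates are invented assumptions:

```python
from collections import deque

# 0 = walkable aisle, 1 = shelf; this layout is an invented example.
STORE_GRID = [
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
]

def shortest_route(grid, start, goal):
    """Breadth-first search returning the list of cells from start to goal."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(path + [(nr, nc)])
    return None  # goal unreachable

route = shortest_route(STORE_GRID, (0, 0), (2, 0))
print(route)  # [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0)]
```

The AR layer would then render arrows along these cells; the routing itself is independent of how the guidance is displayed.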
Summary of User Feedback
For Function 1, the user thinks the scanning icon is hard to find on the home screen. In addition, after pressing the scanning icon, there should be a tour-guide page to help them understand what to do next. Even so, the user likes this design and thinks it could help them while shopping, since no current grocery app provides such comprehensive information.
For Function 2, the user thinks the interface is intuitive and easy to use, but the “Where is it” button should stay in a constant place while the cards are being slid, so it should be moved somewhere it is not affected by the text on the cards. In addition, the user doubts the technical feasibility. Since indoor positioning at centimeter precision is hard to achieve, image recognition would have to be considered if needed; but because image recognition takes time to perform, it would hurt the user experience. If we cannot be sure this design would be faster than searching with the bare eye or asking an employee, we have to rethink the ideation process.
The research goal was clear and simple, and the design process, including the vision, the UED, and the storyboard, helped a lot, with each part serving a different function. For example, the vision is like a mind map for the designer, the storyboard is like a scenario for considering the user, and the UED is the mechanism that combines everything together. By doing so, I could develop the prototype quickly and without much hesitation. The step-by-step process in contextual design made me more confident and helped me better understand how to iterate on research data and interpret it into design. I also feel that generative research for contextual design is very helpful and rich compared with other generative research approaches. For a user-centered design process, this approach, including the vision, storyboard, and UED, is very appropriate. Some aspects I had not considered well in the UED were pointed out in the user feedback. For example, I did not think carefully about how the navigation function would work when I was building the UED, and I then received feedback on the technical feasibility of AR from a participant with an augmented reality background.
When I looked back at the navigation process in my UED, I realized how thoughtless I had been. Next time I will be more careful when building the UED in further development of the proposed application.
Regarding technical feasibility, I would like to do more research on the technologies I use. The indoor positioning system has limitations that I did not notice at the beginning of the project, so the augmented reality navigation system needs some adjustment. For example, instead of acting like a turn-by-turn navigation system, it could simply show the item's location or direction while the user holds the phone and stands still. Next time I will research the technologies I plan to use more thoroughly before designing around them.
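The adjusted idea, showing only a direction while the user stands still, could be sketched as a bearing computation from the positioning system's position estimate to the item's stored location. The coordinate frame, heading convention, and all values below are assumptions for illustration:

```python
import math

def direction_hint(user_xy, item_xy, user_heading_deg):
    """Return a coarse hint (ahead/right/behind/left) toward the item.

    user_xy, item_xy: (x, y) floor coordinates, assumed to come from the
    indoor positioning system and the store's item map respectively.
    user_heading_deg: phone heading in degrees, 0 = facing the +y axis,
    increasing clockwise (an assumed convention).
    """
    dx = item_xy[0] - user_xy[0]
    dy = item_xy[1] - user_xy[1]
    bearing = math.degrees(math.atan2(dx, dy)) % 360  # 0 = +y, clockwise
    relative = (bearing - user_heading_deg) % 360
    if relative < 45 or relative > 315:
        return "ahead"
    if relative < 135:
        return "right"
    if relative < 225:
        return "behind"
    return "left"

# User at the origin facing +y; item three meters along +x.
print(direction_hint((0.0, 0.0), (3.0, 0.0), 0.0))  # right
```

Because this only needs a coarse position and the phone's compass heading, it tolerates meter-level positioning error far better than centimeter-precision turn-by-turn navigation would.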