At Tictail, we believe that the future of e-commerce is mobile and built around video. Videos do a far better job of showcasing products than images and text descriptions, so we've spent the last three months integrating video into the core of the experience.
Using the Real Camera
When we designed this feature, it quickly became clear that a prototype would be essential for a great result. A camera interface requires thorough testing and is hard to communicate without a proper visualisation.
By using the actual camera of the iPhone to feed the prototype with live video, we were able to get very close to the real experience. This blog post covers what I learned about how we achieved this.
Choosing the Right Tool for the Job
There's clearly no shortage of prototyping tools, with new ones popping up regularly. But since I wanted a real viewfinder and the ability to record video, the options became much sparser.
What about Framer?
My first instinct was to turn to Framer, which has been my main prototyping tool for the last few years. Since it's based on code, you can usually prototype without limitations. Almost anything is possible, which I find to be perhaps Framer's biggest advantage.
But using the device camera turned out to be surprisingly tricky. Since Framer is based on web technologies, it's limited by the capabilities of the browser. Most browsers have started to roll out camera support, but in many cases the APIs are incomplete. Safari on iOS has very poor support at the moment, and since I was building a prototype for the iPhone, I needed to look for alternatives.
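To make the limitation concrete, here is a hedged sketch of what a browser-based prototype has to do to reach the camera, using the standard `navigator.mediaDevices.getUserMedia` API. The function and element names are illustrative; the first function is exactly the kind of feature check that failed on iOS Safari at the time.

```javascript
// Feature-detect camera support before trying to use it. Passing the
// navigator in as a parameter keeps the check testable; in a real page
// you would call supportsCamera(window.navigator).
function supportsCamera(nav) {
  return Boolean(
    nav &&
    nav.mediaDevices &&
    typeof nav.mediaDevices.getUserMedia === "function"
  );
}

// In a browser that does support it, wiring the rear camera into a
// <video> element as a live viewfinder looks roughly like this:
async function startViewfinder(videoElement) {
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { facingMode: "environment" }, // prefer the back camera
    audio: false,
  });
  videoElement.srcObject = stream;
  await videoElement.play();
}
```

If `supportsCamera` returns false, there is simply no viewfinder to build on, which is why a web-based tool was a dead end for this prototype.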
When looking for alternative tools that supported using the camera, Origami stood out from the (tiny) crowd. For a tool that is built primarily to be used internally at Facebook, it’s a surprisingly polished product, with great documentation. Did I mention that it’s free?
While Framer runs in the browser and is driven by code, Origami is built around the visual programming technique used in Quartz Composer. It shares many similarities with wiring schemes for circuit boards: you don't write a single line of code, but you use the same logical thinking that comes with programming.
Origami makes it very easy to use the camera. In this basic example, I've simply dropped in a Viewfinder layer and an Oval layer to act as the capture button, then added patches that connect the tap interaction on the Oval to the Capture Image action of the Camera and send the result to an Image layer.
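For comparison, here is roughly what that same tap-to-capture wiring looks like in plain web code. This is a hedged sketch, not anything Origami generates, and the element names are assumptions: grab the current viewfinder frame into a canvas and hand the result to an image element.

```javascript
// Copy the current viewfinder frame into a canvas and return it as a
// data URL -- the web analogue of wiring a tap on the capture button
// to the Camera's Capture Image action.
function captureFrame(video, canvas) {
  canvas.width = video.videoWidth;   // size the canvas to the frame
  canvas.height = video.videoHeight;
  const ctx = canvas.getContext("2d");
  ctx.drawImage(video, 0, 0);        // draw the frame currently on screen
  return canvas.toDataURL("image/png");
}

// Wiring it up: on tap, capture a frame and show it in an <img>.
// captureButton, viewfinder, canvas and photo are assumed DOM elements.
function wireCaptureButton(captureButton, viewfinder, canvas, photo) {
  captureButton.addEventListener("click", () => {
    photo.src = captureFrame(viewfinder, canvas);
  });
}
```

In Origami the equivalent graph is three patches and a couple of wires, which is a big part of why it won out for this prototype.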
Finding out how easy it was to get the camera going convinced me that Origami was the right choice for the prototype.
Working in Origami
Since this was my first time using Origami, the process involved a lot of trial and error. Like any other tool, it has its pros and cons. These are some of the things I learned while building the prototype:
- While Origami's visual programming technique works really well for simple prototypes with just a few interactions, it works against you when building something big and complicated, as illustrated in the image above. To be clear, you can structure big prototypes far better than the screenshot shows, but structuring steals precious prototyping time.
- Coming from Framer and code, I found some things simply more complicated to do with patches and nodes.
- It's not only the camera that's easy to use: other native device features like the keyboard, haptic feedback and the accelerometer are just as simple to access.
- A lot of things come for free: Origami ships with pre-built components such as navigation bars. The drawback is that specific behaviors that haven't yet been thought of or prioritized require huge hacks to accomplish.
This is a video of me using the prototype in the showroom at our office. It shows the viewfinder part of the prototype, where you can do things like switch between the back and front-facing cameras and test out different perspectives.
This video shows the step after you've recorded a video or snapped a photo, where you can add text and stickers on top of it.
Prototyping is primarily something you do to enable rapid iteration: quickly validating features, comparing different versions, finding possible UX problems, and so on. Spending too much time building the actual prototype can therefore be almost counterproductive.
In this instance, I believe it was time well spent. In fact, much of our design was done directly in Origami. Designing with the real thing in front of you, like a functioning viewfinder, is an almost magical experience. A lot of issues that would have been difficult to anticipate in static screens surfaced and were solved early in the process. Communication with the developers gets a lot easier, and no interaction is left to chance. But perhaps the biggest help is, of course, with user testing. Without real camera input, it would have been really hard to get valuable feedback.
Given our focus on video and mobile, we will continue to ship features built around the camera. This was our first stab at using the real camera for prototyping, but it certainly won’t be our last.