Last week we released a new version of Panels, our comic reader for iOS. In this version we introduced a new feature that we like to call Zoom control, which lets you zoom and pan around a page with one finger after long-pressing it.
The idea behind this feature was to improve the reading experience for users reading comics on iPhone (portrait orientation). In previous versions, to read comics in portrait mode you had to pinch the screen to zoom in and out using two hands, which made the reading experience tedious and time-consuming.
We still think that panel-by-panel navigation is the ideal solution, but it is challenging to implement. This is why we decided to prototype other ideas while we keep working on a panel-by-panel solution.
Prototyping can be a long process, especially materialising different ideas into something that you can test and feel. This is why tools like Marvel or Framer, to name a few, are so good: with nearly zero effort (and zero code) you can get your idea ready to test.
But, as developers, we like to code. To prove a hypothesis or to check how a new interaction feels, it is easier for us to build it on top of what we already have.
Solution 1: magnifying glass
Our first approach was to build a magnifying glass, similar to the built-in iOS magnifier that appears after long-pressing text.
To create a magnifying glass view on iOS you can simply instantiate a view that renders scaled (or zoomed) content from another view.
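A minimal sketch of that idea in UIKit follows. The class name, the `sourceView` property, and the `scale` default are illustrative, not Panels' actual implementation: the view simply re-renders another view's layer into its own graphics context, scaled up around the touched point.

```swift
import UIKit

// Hypothetical magnifier sketch: renders scaled content from `sourceView`.
final class MagnifierView: UIView {
    weak var sourceView: UIView?
    // The point (in `sourceView` coordinates) to magnify.
    var touchPoint: CGPoint = .zero { didSet { setNeedsDisplay() } }
    var scale: CGFloat = 2.0

    override func draw(_ rect: CGRect) {
        guard let source = sourceView,
              let context = UIGraphicsGetCurrentContext() else { return }
        // Scale up around the magnifier's center, then shift the source
        // content so `touchPoint` lands under that center.
        context.translateBy(x: bounds.midX, y: bounds.midY)
        context.scaleBy(x: scale, y: scale)
        context.translateBy(x: -touchPoint.x, y: -touchPoint.y)
        source.layer.render(in: context)
    }
}
```

In a real implementation you would update `touchPoint` from the long-press gesture's location and move the magnifier view to follow the finger.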
In our heads, this solution was pretty good, but the main problem we noticed as soon as we tested it on a real device was how uncomfortable it was. Having to drag your finger along the screen was incredibly annoying and, after reading a few pages, the user could get frustrated.
Worse still, you need to reach all four corners of the device with your finger.
Solution 2: magnifying glass + better control
Suddenly I remembered a “hidden” feature of iOS: force-touch the keyboard to move the caret.
Dragging your finger across the whole screen was a bad idea, but a small area where the user could control the magnifying glass with shorter movements would solve the problem.
Technically, this iteration was simple to implement. The long-press gesture recognizer is applied to the small blue view in the bottom-right corner. This view is XX times smaller than the page, so to calculate the position of the magnifying view we only need to take the position of the finger in the blue control view and multiply both its x and y values by XX.
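That mapping can be sketched as a tiny pure function; `ratio` here stands in for the unnamed "XX" factor, and the function name is ours:

```swift
import Foundation

// Maps a finger position inside the small control view to the
// corresponding position on the full-size page. The control view is
// `ratio` times smaller than the page, so both axes scale by `ratio`.
func pagePoint(forControlPoint p: CGPoint, ratio: CGFloat) -> CGPoint {
    CGPoint(x: p.x * ratio, y: p.y * ratio)
}
```

For example, with a control view five times smaller than the page, a touch at (20, 30) in the control corresponds to (100, 150) on the page.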
This solution was much better but, after playing with it for a while, we realised that not all speech bubbles would fit inside the magnifying glass, which made them difficult to read.
In addition, there was too much focus on the magnifying glass content, preventing the user from reading the panel as a whole (text + image). We found ourselves activating and deactivating the magnifying glass all the time. To solve this problem we thought… “let’s make the magnifying glass big… bigger…”.
Solution 3: zooming the whole image + better control
We made the magnifying glass so gigantic that it almost filled the entire screen, and realised that, instead of using a magnifying glass, we could just zoom the whole content.
This was a great improvement. The user was no longer restricted to a small rounded area: the whole page was zoomed, and it could be controlled easily from one corner with just one finger.
There were still a few problems remaining. How were we going to train our users to long-press the bottom-right corner of the screen? We couldn’t leave a red square on screen to showcase the long-press area. And what about left-handed users?
Solution 4: Final feature
The final solution was to instantiate a control view (the red view in the video below) wherever the long-press gesture began. That way, it doesn’t really matter where the user puts their finger: it will always work.
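One detail worth noting when spawning the control under the finger: near the screen edges it has to be clamped so it stays fully visible, which is also what keeps the feature usable for left-handed users. A sketch of that positioning, with illustrative names and parameters:

```swift
import Foundation

// Centers the spawned control view under the finger, clamped so it
// never leaves the screen. `controlSize` and `screen` are hypothetical
// parameters for this sketch.
func controlFrame(centeredAt touch: CGPoint,
                  controlSize: CGSize,
                  in screen: CGRect) -> CGRect {
    var origin = CGPoint(x: touch.x - controlSize.width / 2,
                         y: touch.y - controlSize.height / 2)
    // Clamp the origin so the whole control stays on screen.
    origin.x = min(max(origin.x, screen.minX), screen.maxX - controlSize.width)
    origin.y = min(max(origin.y, screen.minY), screen.maxY - controlSize.height)
    return CGRect(origin: origin, size: controlSize)
}
```

A long-press handler would compute this frame from the gesture's location when the gesture enters its began state, then drive the zoom from movements inside it.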
As developers, we can (and should) also prototype and try new ideas and interactions. We can do it very quickly (especially with Playgrounds!), it doesn’t need to be production code, and we certainly don’t need any external tools. Sometimes it is good to start coding even when you don’t know where you will end up.
For this particular case, the feature turned out to be a bit “scary”, because we were unsure how users would react. We were sceptical the first time we tried it, but it took us just a few seconds to realise it was actually helpful and felt very natural.
And soon after shipping, users started to love it too ❤️