Yesterday, March 2nd, we released OneShot, an iOS app for highlighting and tweeting screenshots of text. I thought it would be fun to share a little bit about how the app came together and talk about a few of the design decisions. I’ve also included some of the early iterations of the app, previous icons, a Framer.js prototype, and more.
It all began with a DM…
On January 14th, my friend Ian Ownbey sent me a direct message on Twitter about a freelance design project. It turns out he was working on a small iOS app with his friend Jason Goldman and they were looking for a designer to help them wrap it up. At the time, I was in the middle of a substantial iPad design project for another client, but I really wanted to work on something with Ian and Jason. Luckily, since their project was small, I was able to squeeze a week of time for them in the middle of my other project.
Ian sent me the prototype he’d put together, which had most of the functionality of the final app but in a very rough form and zero design. He also sent me Jason’s product spec for the app. A few quotes stood out to me in the spec:
“Despite building all kinds of ‘share’ buttons into both apps and sites, people prefer to share by screenshot. This is true both for private sharing, where the screenshot is sent over text, and for public sharing.”
This is so true. I screenshot tweets, articles, and all sorts of things to share via text message or back on Twitter. But it’s a very tedious process to make it look how I want it to before sharing.
“Again the experience challenge is that the oneshot card has to look good but not so stuffy that it doesn’t feel like it came from a human. We’re not trying to create illuminated manuscripts out of web pull quotes.”
The app was meant to make tweeting screenshots of articles easier, and I was the perfect user. Plus, it was interaction and UX-heavy, which is where I feel most confident in my abilities. I eagerly said yes to working on the project.
A quick note about process
Since the project began, I haven’t sent or received a single email from the founders. We’ve spent every day in a Slack room, where we post our work every few hours. I’ll make a few designs or a prototype and upload them to Slack, they post some comments, then I make changes. As someone who built a tool for giving design feedback, Mocky.com, I was surprised by how much I enjoyed this process. It’s not perfect, but it’s the best experience I’ve had yet. I wouldn’t be surprised if we start seeing more and more design, feedback, and project-management tools where the core interactions with the product happen inside Slack.
Designing the image
We decided to begin, before anything else, by focusing on the image created by the app. The design and functionality of the app would be entirely dependent upon what we put on the image. Since I only had one week to design the entire app, I knew we had to settle on a direction on the first day. So, as you’ll notice in the mockups below, I prioritized speed over polish.
These were the first mockups I posted in Slack. We agreed there was too much info on the image, the title was too large, and we preferred the title on the bottom. So I iterated further. Somewhere in the next iteration I tried the torn paper look. The first version was very ugly, but it got the point across. Usually I’m not one to make “skeuomorphic” designs, as they aren’t a particular strong point of mine, nor do I enjoy them. But we all agreed the metaphor helped here. Since the app doesn’t have any control over the text on the page or what the user includes in their crop, it’s important that the person viewing the image realizes it’s an excerpt from a larger page. Sure, it’s not perfectly elegant, but it’s effective and useful.
After another round of quick iterations, I started thinking about how we could incorporate a color picker I made two years ago called Colorplane. It makes it easy for a user to quickly grab a color without caring about RGB or HEX or HSL or any other color values. We can figure that out for them. This would also allow users to personalize their OneShot a little bit, or manually make it correspond to the colors of the page instead of us matching them automatically, which would likely break down too often to be worth it.
We decided the final images would have torn paper, the headline on the top, the source URL just below, and a background color. With this decision made, I could move on to designing the in-app flow.
The shape of the image
Before OneShot existed, I always attempted to crop images in my tweets using the official Twitter app’s “wide” crop option. This crop is at a ratio of 2:1, which displays the full image inline in Twitter’s feed. The user can still click on it to zoom, but at least there’s nothing hidden before clicking. So my first inclination was to force this ratio, as it was what I wanted.
After some discussion and seeing examples, we decided against using the wide crop format. As you can see in the image below, a close crop doesn’t show the entire image in the feed, but it is much less awkward when you zoom in to read it. We decided that the wide crop format was great for seeing the entire image inline, but the experience of zooming in was much worse for reading. Our choice was to optimize for a better reading experience vs. a better in-feed experience.
So we knew the app needed to give the user the option to do a few things when creating their OneShot:
- Choose an image/screenshot to tweet
- Crop (focus on one section, or at least remove the navbars)
- Highlight text (allow multiple highlights; iOS’s native text selection doesn’t work here because the copy/define popup would end up in the screenshot)
- Pick a background color (also doubles as the highlight color)
- Choose a source URL (ideally the app would find the exact source without fault, but that’s likely impossible with only a screenshot)
- Tweet text for adding a comment along with the image
Fortunately, only a single one of these steps is required: choosing an image. The rest are do-as-you-want, but we knew that most people who would use this sort of app would use most of them every time. Given that, my challenge was to make these options feel like only a few steps while still offering a lot of control.
Why not an iOS extension?
We’ve seen some people complain that OneShot isn’t an iOS extension, so I thought it’d be nice to explain why we didn’t go that route for the initial release (it might come later). Every time we hear this complaint it seems to be from an iOS developer or tech person. Yes, the experience would be great if you could highlight the text you want on the page, and then tap a button to open OneShot directly from the page. But the reality is that it just isn’t that simple.
Personally, not a single one of us who made the app uses extensions with any regularity, and we don’t know anybody else who does either. However, taking a screenshot is already a common, well-worn path for users young and old. People screenshot Snapchats, tweets, text messages, emails, and more. They already screenshot articles and tweet them. Screenshots are becoming a common mechanism for “linking” to something on mobile phones where, unfortunately, apps typically don’t have hyperlinks to content deep inside, or the receiving user may not have the app. It was obvious to us that the best path forward was to augment that flow instead of creating an entirely new flow with an extension.
But we also decided against it because we wanted broad support across every single app. Safari has nice support for extensions because the share-sheet icon is right there at the bottom, but Chrome has weak share sheet support and the official Twitter app’s browser doesn’t have it at all. A screenshot can be taken in any app… Messages, Twitter, Snapchat, Chrome, Safari, Mail, etc. There are no limits to the content, just what information is passed along inside. (BTW — it would be amazing if this topic kicked off a deeper discussion about what information we can add into the EXIF data of screenshots to make them more extensible by other apps.)
Conclusion: Most people don’t use iOS extensions.
They’re great in theory, weak in practice.
Cropping comes first
It was obvious to me that cropping should be the first step. Ian suggested that most people would want to highlight first, and that only some would want to crop. We discussed it a bit, and finally settled on cropping as the most important step: no highlight is effective if the crop is poorly done.
We wanted to encourage users to crop to the most important section of the post, both to create an effective image and so the output wouldn’t be too tall. If you look closely at the crop designs below and in the final app, you’ll notice that the crop defaults to removing a portion of the top and bottom when you first choose an image. The crop outline also only has handle lines on the top and bottom, to guide you toward adjusting only those edges. Ideally an article screenshot wouldn’t need its left and right sides cropped, but we still allow it. We even made a version of the app where you couldn’t crop the sides at all; it was nice, but we felt it would be frustrating the few times you needed it.
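That default crop can be sketched in a few lines. This is an illustrative sketch, not the shipped code: the 15% inset and the function name are my guesses.

```javascript
// When an image is first chosen, default to trimming part of the top and
// bottom, nudging the user toward cropping to the most important section.
// The 15% inset is an illustrative guess, not the app's actual value.
function defaultCrop(width, height, inset = 0.15) {
  const trim = Math.round(height * inset);
  return {
    x: 0,
    width: width,              // the sides are untouched by default…
    y: trim,                   // …while the top and bottom start cropped in
    height: height - 2 * trim,
  };
}
```

Because only the vertical values change, the default state itself suggests that adjusting the top and bottom is the expected gesture.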
If you look at the top row of mockups here, they all have too much UI and are over-designed, distracting from the image and the current task. The other issue we found was that the big “Next” button didn’t indicate where you were going, nor how many steps the process had. If you had to hit “Next” on each page and wait for a big transition, the app would be dead-on-arrival… too many steps and too slow to use.
Tabs for editing
The next revisions start to finally have the feel of the final app, and got us most of the way. You can see that the color is finally incorporated on every page. It felt good to have the color visible while cropping and highlighting.
We also decided to use tabs on the bottom for each of the editing actions. Instagram and other apps use this pattern, and it helped drive home that only some of these steps were necessary. It was now obvious that the steps were optional: you could skip them at any time, or jump through them out of order, and it wouldn’t be a big deal. The issue we had now was space… the tabs and the “Next” button at the bottom took up a lot of it.
You can see on the left here where the “Next” button moved and where the first iterations of the “Share” page come in. At this point it became obvious that “Editing” and “Sharing” are two distinct steps. “Editing” has three tabs for crop, highlight, and background color. Then “Sharing” has the tweet textbox and the source selection/confirmation. Theoretically, with this layout, you could choose your image, tap “Next”, then tap “Post Tweet” in about two seconds. You can post a tweet in two taps, with no steps required. But, as most users will, you can also take your time and refine things before tweeting.
Highlighting
One of the beautiful parts of the app is how Ian made it possible to highlight text on a screenshot. Technically, it’s a wonderful breakthrough. From a design and interaction perspective, though, we had to figure out a few things to make sure it worked and felt great. Here are a few decisions we made over the course of building the app:
- As soon as your finger touches the screenshot, something should become highlighted. You should never be guessing at what the computer thinks you’re highlighting, it should just show you.
- The left and right sides of the highlight block should align themselves evenly. (Better support coming for this in the next version)
- The magnifying glass should be displayed to give you a close-up of what you’re highlighting.
- The beginning and end of the highlight should have bars, or something that looks draggable, so you know you can drag it.
- When you first tap+drag to create a new highlight, it should only allow selecting whole words. However, if you edit an existing highlight, it should allow you to edit at the character-level. This allows for quick highlight creation without much precision, but then gives you the choice of precision if you need it. Plus, it feels natural.
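The last rule, word-level snapping for new highlights and character-level edits afterward, can be sketched roughly like this. The helper names are hypothetical, and the real app works on text detected in the screenshot, not a plain string:

```javascript
// Expand a raw index range outward to whole-word boundaries.
function snapToWords(text, start, end) {
  while (start > 0 && !/\s/.test(text[start - 1])) start--;
  while (end < text.length && !/\s/.test(text[end])) end++;
  return { start, end };
}

// New tap+drag selections snap to words for quick, low-precision creation;
// edits to an existing highlight keep the raw character-level range.
function adjustHighlight(text, range, isNew) {
  return isNew ? snapToWords(text, range.start, range.end) : range;
}
```

The same pair of rules gives you both behaviors from one code path: creation is forgiving, refinement is precise.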
However, a new complication came up: we wanted the ability to choose white and black, which Colorplane wasn’t set up to do. White and black were colors it very opinionatedly didn’t want you to choose. So we changed some of the math behind how saturation and brightness were calculated to allow for very light and very dark colors. Personally, I felt this made it easier to choose “ugly” colors, but it allowed for a broader spectrum, so we went with it.
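To make that concrete, here’s what a two-axis picker in the spirit of Colorplane might do. The formulas here are mine, not Colorplane’s: the x axis picks hue, the y axis picks brightness, and the saturation curve falls to zero at both extremes, which is what makes pure white and black reachable.

```javascript
// Map a touch position (x, y in 0..1) to an HSB color.
// Illustrative math, not Colorplane's shipped formulas.
function colorAt(x, y) {
  const hue = Math.round(x * 360) % 360;    // full hue wheel across the x axis
  const brightness = 1 - y;                 // top of the plane is bright
  // Saturation peaks mid-plane and drops to 0 at the top and bottom,
  // so the extremes become pure white (top) and pure black (bottom).
  const saturation = Math.sin(Math.PI * y);
  return { hue, saturation, brightness };
}
```

The trade-off the paragraph describes falls out of the curve: near the extremes you pass through pale and murky in-between colors that a more opinionated curve would have forbidden.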
Source selection and post tweet
One of the first things I noticed with Ian’s first prototype was that the source selection rarely worked for me. Actually, it didn’t even work for the first three articles I tested. I even remember telling Jason and Ian that I didn’t care if the app got it wrong, but it shouldn’t automatically choose a single source for me and act like it got it right; that would actually upset me as a user. We discussed it and decided to just show the user multiple options. The final choice was now pushed onto the user, who would be much better at it than the computer, and it gave us an easy out if we couldn’t find the source. (“We tried, but we just couldn’t find it… Sorry!”)
Yesterday, after we released the app, Neven Mrgan wrote a nice post about our decision to show multiple source options. I’m really glad he “got it” so quickly, and I thought it was the perfect counter to the people who said we created a clunky experience. If you look at the app under a microscope, it might have a lot of steps, but it doesn’t feel like a lot of steps. There’s a big difference.
Here on the left you can see a few more variations of the share page. We decided to put the headline and source URL outside of the paper tear, which gave me a bit more breathing room. The biggest thing I had trouble with here was trying to decide what options to give the user. We even had search for a while, but decided that it would be worthless. If we couldn’t find the source ourselves, you wouldn’t find it in the search. So Ian changed it to allow you to “paste” in a URL if you had one copied to your clipboard. I actually use this feature a lot, and love it. He’s a smart guy.
The home screen, where you would choose which image to post, hadn’t been given any love at this point. For some reason, I thought a grid layout would be good. Every photo app uses a grid, and I defaulted to using it too (p.s. defaulting to a design pattern everybody else uses, without thinking it through, is the worst way to design anything). We used the grid for a while during testing, but it was immediately obvious that it was the wrong solution.
But then one night, while lying in bed, it clicked for me. As a user, you are most likely going to open the app immediately after taking a screenshot. You might need to scroll back three to five images, but that would be rare. Also, since we filter for only screenshots, you will most likely choose the first image. So why not put the most recent screenshot right there in the middle, big and ready to edit? We tried it, and it worked great. Done. Next problem.
Transitions and interactions
After the UI was about 80% complete, I decided a prototype would be useful for Ian to have while coding. I’m a firm believer that defining the transitions and interactions is a crucial part of the design process, so Ian should have something as close to real as possible. Below is a screencast of the Framer.js prototype I made to demonstrate the transitions between the editing tabs. The code was sloppy, but that didn’t matter… only the animations did. Ian used this prototype to get a feel for how things should behave in the final app. He hated building most of these transitions, because they were complicated, but we all agreed they were important to make the app feel seamless and fast.
App icon and “logo”
This is the icon and logomark as they ended up in the final app and on the website. I’m pleased with how they came out, but it was the most difficult part of the project for me. Branding and icons are something I really enjoy working on, but I rarely do that kind of work for clients.
With that said, I thought it’d be interesting to show a few of the variations we tried. There were a lot of really bad icons that I cranked out at first, and I’m too embarrassed to show them here. If you look at the file names below, they start at “icon 29”, which gives you an idea of how many horrible ones I made. At some point I ended up trying the circle+bars concept, and it felt like we were onto something. So I decided to double down and make as many variations as I could to see what felt good. There wasn’t much of a process… I felt like I was in a dark cave, feeling around with my hands until something came together.
Well, I hope that was fun for you, because it was really distressing for me.
The website
After I finished everything above, it took about three weeks for Ian to put together the rest of the app and polish it. We finally submitted the app to Apple for approval, so all we needed now was a website. We had about a week until the app would be approved, but by this time I was back on my previous client’s project and my week was entirely booked. So I spent a few hours this past Saturday and Sunday putting together a website based on the screenshots we put in the App Store.
With an app like this, especially since it is so interaction-heavy, I thought it was important to actually show what the app does. Static images wouldn’t work. I made a single long screen capture of the entire flow of using the app by connecting my phone to my computer and recording with QuickTime. I trimmed the video down into five separate ~5-second clips of the main interaction on each page. Then I dropped these clips into Miro Video Converter to export each to the three formats needed for cross-browser HTML video support, and into GIFBrewery to export an animated GIF version of each clip.
Now, with HTML video, I could make the videos download and play on loop automatically (no sound), as if they were animated GIFs. This is nice because they can be higher quality at a potentially lower file size. I decided to use the animated GIFs as a fallback, since you can’t autoplay videos on mobile, and even then they end up playing in a full-screen player instead of inline on the page.
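The decision logic behind that was roughly as follows. This is a sketch with made-up file names and a naive user-agent check, not the site’s actual code:

```javascript
// Decide which media to render for a demo clip.
// Desktop gets a silent, looping, autoplaying video (like a GIF, but
// higher quality and often smaller); mobile falls back to the real GIF
// because inline autoplay isn't available there.
function demoMedia(name, userAgent) {
  const isMobile = /iPhone|iPad|Android/i.test(userAgent);
  if (isMobile) {
    return { tag: 'img', src: name + '.gif' };
  }
  return {
    tag: 'video',
    attrs: { autoplay: true, loop: true, muted: true },
    // The browser plays the first encoding it supports.
    sources: ['mp4', 'webm', 'ogv'].map(ext => name + '.' + ext),
  };
}
```

Listing the three encodings as `<source>` children is what "cross-browser HTML video support" meant in practice: each browser picks the one it can decode.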
Then, the morning of our launch, I realized that the page was maxing out my CPU after being open for only a few seconds. The videos were draining my computer. So, after all the work to export them and build a fallback, I dropped the videos and decided the GIFs would be good enough for every device. Nonetheless, I’m happy with how it all turned out. If you want to see the final design and animated GIFs, check out the OneShot website.
Thanks for reading. I know it was long, but I hope it gave you a little insight into the decisions we made and why we’re proud of this little app. It was a lot of fun to work on, and we hope you enjoy using it as much as we enjoyed making it. Bye!
Follow me on Twitter here: @DanielZarick.