Redesigning SkinIO

Maxwell Barvian
7 min read · Mar 20, 2018

SkinIO is a startup helping make skin cancer screening and skin health monitoring accessible to everyone, whether at the doctor’s office or in the comfort of home. I joined the team in July of 2016, back when it was called ECD Network (short for Early Cancer Detection). I was the first full-time designer the company brought onboard; all design work had been outsourced before then. After half a year at the company, it became clear that the design I had inherited was growing convoluted under the new concepts we had introduced. It was time for a rethink.

The challenge

When I started at SkinIO in July of 2016, our app was technical and data-driven. New business objectives meant frequently bolting on extra screens and components to access data while old patterns stayed in place. As design debt accumulated, the complexity became apparent: our most common tasks took dozens of taps and swipes across multiple screens to complete.

Pre-redesign screens in our app (my work). From left to right: patient profile screen, patient sessions screen, photo-taking screen (grouped by region)

We decided it was time for some course correction. I worked with the engineering team to become more proactive and intentional about our app’s design and less reflexive about implementing new feature requests. I started by creating a shared online post-it board that we could use to record feedback and critiques of the design; at the time most of them came from us and one dermatologist who used the app.

Our board full of self-reported design feedback

New pieces of feedback were left in the Backlog category. Every week before sprint planning I led a meeting to organize that week’s new feedback into various categories that had started to emerge. The goal was to identify common hurdles and address them effectively as a team during the next sprint, time and priorities permitting.

After a few weeks of these meetings a core pain point became apparent: our app’s design wasn’t optimized for presenting photos to users in a way that made sense with our new business model. Patients took up to 13 photos as part of a photo session (which they paid for), but once that session was submitted and processed the app reorganized the photos into two views: a “body map” (the leftmost app screenshot above) and a grid. A history of sessions was available but photos were always accessed latest-first and were disconnected from sessions after being submitted. We enumerated a few deficiencies with each photo view in an early planning meeting, and some quick team wireframes pointed to solutions. I planned a half-day design studio to finish these thoughts and come up with a solution we could move forward with as a team.
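To make that mismatch concrete, here is a minimal sketch in plain JavaScript (the data shapes are illustrative, not our actual schema) of how flattening submitted sessions into a latest-first photo list discards the session grouping that patients had paid for:

```javascript
// Hypothetical shapes: a patient pays per photo session, each holding
// up to 13 photos of different body regions.
const sessions = [
  { submittedAt: "2017-01-10", photos: [{ region: "back" }, { region: "left arm" }] },
  { submittedAt: "2017-03-02", photos: [{ region: "back" }, { region: "torso" }] },
];

// What the old app effectively did after submission: flatten everything
// into one latest-first list, losing the photo-to-session connection.
const latestFirst = sessions
  .slice()
  .sort((a, b) => b.submittedAt.localeCompare(a.submittedAt))
  .flatMap((s) => s.photos);

// The redesign's premise: keep photos grouped under the session they
// belong to, so results read like a history of paid visits.
const bySession = sessions.map((s) => ({
  submittedAt: s.submittedAt,
  photoCount: s.photos.length,
}));
```

The grouped shape is what the “report card” concept below builds on: one card per session, rather than one undifferentiated photo stream.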


Beyond the problem statement, I wanted to list clear objectives and hypotheses we could test for this project. A redesign would take many weeks of prototyping, user testing, and development, so it felt important to figure out how we would measure success before pushing out something potentially worse than our current solution. I came up with three desired outcomes for the redesign and corresponding metrics to measure their success. We refined these as a team, and I collected them, along with the problem statement, description, and sample scenarios, into a meeting handout. We discussed the handout for a few minutes at the beginning of our design studio meeting, sketched individually for 15 minutes, then spent the remaining time consolidating concepts and arriving at 2 ideas we could test individually against users. Within the last 15 minutes, however, it became clear that the differences between the two concepts were contrived and a combination of both would be best.

Finalized sketches

I spent the next day refining this concept into sketches/wireframes the team agreed on. Our plan from here was:

  1. Create a basic enough design to test ideas like color coding (green = no action required, red = attention required) and a “report card” metaphor
  2. Develop an interactive prototype as quickly as possible
  3. Test prototype against users
  4. Evaluate results (based on our developed outcomes) and refine design
  5. Hand off to engineering, assuming outcomes were being met

Initial design and prototyping

The prototype design came together within 2 days, as I was able to repurpose most of the components from our existing design. I designed in Sketch and layered components so they would play well with Framer. I chose Framer for this project because its code-based approach felt more reliable for a user-tested “dummy” app than simpler tools that produce well-transitioned chains of static screens.

Partial demo of the testing prototype

Framer was a good tool for the job. Some of the transitions were clunkier to implement than they would have been in a tool like Principle, but the state management was robust and felt sturdy enough to present confidently to users. The entire prototype came out to only a few hundred lines of code and took just a few days to implement. After extensive testing within the team, we decided it was ready to put in front of actual patients.
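The state-driven pattern that made the prototype sturdy can be sketched in a few lines of plain JavaScript. This is not Framer’s actual API (Framer prototypes at the time were written in CoffeeScript); it is just an illustration of the idea of modeling each screen element as a set of named states with explicit, validated transitions, using the green/red status coding mentioned earlier as a hypothetical example:

```javascript
// Minimal state-machine sketch: each element exposes named states and
// an animate() that only moves between known states. A real prototyping
// tool would tween visual properties toward the target state's values.
function makeStateful(initial, states) {
  let current = initial;
  return {
    get state() { return current; },
    animate(next) {
      if (!(next in states)) throw new Error(`unknown state: ${next}`);
      current = next;
      return states[next]; // target properties to animate toward
    },
  };
}

// Hypothetical "report card" element using the color coding from the
// plan above (green = no action required, red = attention required).
const reportCard = makeStateful("default", {
  default:   { color: "gray",  y: 400 },
  noAction:  { color: "green", y: 0 },
  attention: { color: "red",   y: 0 },
});

reportCard.animate("attention");
```

Because every transition goes through one guarded function, a tester tapping in an unexpected order can never drive the prototype into an undefined screen, which is what made it feel reliable in front of users.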


We were fortunate to have a representative participant pool to test this design against in person. We set up shop at a medical location, armed with our prototype, to do some guerrilla-style testing.

Objectives and script

We didn’t often have the chance to test our product with actual patients, so I tried to be as targeted and intentional as possible when collecting feedback from our testing participants. Before our testing date I wrote down broad, business-oriented objectives and design-specific, testable objectives aligned with our desired outcomes for the project. I condensed both into a testing script with tasks and subjective measures that we could run through with each patient. For each participant, we introduced ourselves and our product and asked them to complete four objective-driven tasks using the prototype. These took the form of:

3. Alright, you’re now ready to take new photos to see how your skin is doing. How would you start and submit a new photo session?
4. How would you review these results? What would you expect tapping the “Mark as Read” button would do?

Time permitting, we also asked them to answer brief subjective measures such as:

1. How difficult (1) to easy (5) was it to review results and submit a new session?
2. How difficult (1) to easy (5) was it to find information from your doctor?
3. What did you like most about the prototype? What did you like least?

5. Do you have any recommendations for improving the prototype?


Reception to the new design was positive overall. With few exceptions, patients were able to complete the tasks without asking for help. Critiques and feedback were specific and helpful: we realized one of the design metaphors wasn’t connecting the way we’d hoped, and it became clear how important it was to set proper expectations for the doctor-patient relationship within this new system. Given the nature of our testing location, patient time was very limited, so we didn’t get as many answers to the subjective measures as we would have liked. The few responses we did collect, however, rated the design as easy (5) both to review results and submit a new session and to find information from the doctor.

I spent a few hours summarizing this feedback and organizing it into categories for our team to review and think about. We discussed it later that day and came up with solutions for the shortcomings users encountered. I made quick changes to the prototype to test these refinements among ourselves. With these addressed, we spent a few hours planning the development schedule and roadmap for the new design.

High-fidelity mockups

I coordinated with the engineering team after our planning meeting to figure out which assets, screen sizes, and states they’d need to implement the new design. I took the opportunity to refine the mockups a bit more as well. I updated our color palette to be brighter and less corporate-looking, hoping it would feel friendlier to our new patient-first audience. I added a few details to distinguish certain states for the report cards, designed an iPad version, added an empty state for the home screen, and so on.

Final mockups for each screen size and state


I think we did a great job on this redesign, especially considering it was the first project we had the opportunity to test with actual patients. It was one of the first times we were proactive about our product’s design, as opposed to treating design as a checklist item while developing a new feature. I was particularly grateful for the involvement of the entire team at each critical step in the process. For a company that had been engineering-led before my arrival, this project represented an important shift in how we thought about our product and led to satisfying results for both our team and our users.
