Collaborative design methods

Kathiravan Subramanium
Published in Razorpay.Design
12 min read · Aug 27, 2021


During various stages of designing products at Razorpay, we have been conducting workshops and design activities to collaborate and generate ideas with different teams. Collaborating with different stakeholders helped us generate more ideas and gave us a chance to involve them from the very beginning of the process, which in turn helped drive product ownership across the team. At times, these sessions also helped us converge on the right solutions when we had a lot of ideas on the table.

Here are five such exercises and sessions I have organized, facilitated, and moderated so far along with the team at Razorpay. These exercises are adaptations from various design practitioners such as Jake Knapp, John Zeratsky, and many others. I hope you find this helpful.

Before we begin

I believe that the design process, in general, has to be organic and flexible: each step should give the designer what they want to achieve or understand through it in order to design a good product. Having said that, there is no one right way to conduct these exercises and workshops; these are just the ways we have done them in the past, and they gave us good value in return.

1. How Might We sessions

Stakeholders
Designers, Developers, PMs and PMMs

Duration
1hr

Conducted for
Brainstorming initial solutions for the identified user problems.

We conducted this session after the team had done extensive user research on onboarding-related challenges. The research surfaced a lot of pain points, and the designers wanted to brainstorm solutions with the team. I acted as the moderator for this session.

Preparation:

Initially, we collated all the pain points identified during the research, then listed them by the phase of onboarding they belonged to. Once that was done, we clubbed identical problems together and came down to roughly four problems per stage. We then reframed those problems as questions that induce collaborative thinking. For instance, a problem like “There was a significant drop-off in signup when users start their journey on mobile” can be put in the following format:

Q: How might we engage the users during the mobile signup process?

:: During the session ::

Presentation (5–10 mins)

An inevitable step in all brainstorming exercises is setting up the context. In our case, we prepared a one-slide presentation on the insights from the research. This gives the participants an idea of what has happened so far and gets them onboarded, so that they can empathize with the situation they are put in.

Answering the Questions (20mins)

The most important point here is that participants individually answer or add suggestions to the presented question. Since this activity needs to be free of bias, the moderator made sure it was conducted silently and that active interactions were avoided. Facilitators can help clear doubts if required.

Participant Presentation (20mins)

The participants were asked to stop adding new points to the board and then to present the three most important points they had listed. The number of points discussed was decided based on the number of participants and the time remaining. During this phase, the facilitators checked whether any similar ideas had been presented and marked them as duplicates or clubbed them into one idea.

Voting (10mins)

Before voting, the moderator rearranged the ideas so that they could not be traced back to their contributors. We provided the participants with five votes each and asked them to add their votes to any of their favorite ideas or suggestions, but not to their own. They were asked to cast all their votes within the given time.

Snaps of a similar session I moderated with Abhishek; as the participants were high in number, we divided them into smaller groups.

Outcomes

The designer, along with the other product stakeholders, then reconciled all the ideas collected and listed them by the number of votes they received. Then, depending on the criticality of the problems, we categorized the answers on an impact-versus-effort four-quadrant graph. This gave us a clear picture of which problems we should aim at solving first.
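If it helps to see the mechanics, here is a minimal sketch in Python of that bucketing step. The idea names, the 1–5 scores, and the midpoint threshold are all hypothetical; they stand in for whatever voting and scoring scheme your team uses.

```python
# A minimal sketch of impact-vs-effort bucketing.
# Idea names, scores, and the midpoint threshold are hypothetical.

ideas = [
    {"idea": "Resume signup on mobile", "impact": 4, "effort": 2},
    {"idea": "Auto-fill business details", "impact": 5, "effort": 5},
    {"idea": "Progress indicator", "impact": 2, "effort": 1},
    {"idea": "Video KYC", "impact": 2, "effort": 5},
]

def quadrant(impact: int, effort: int, midpoint: int = 3) -> str:
    """Place an idea (scored on 1-5 scales) in one of the four quadrants."""
    if impact >= midpoint and effort < midpoint:
        return "Quick win: solve first"
    if impact >= midpoint:
        return "Big bet: plan for it"
    if effort < midpoint:
        return "Fill-in: pick up when free"
    return "Money pit: deprioritize"

for item in ideas:
    print(f"{item['idea']}: {quadrant(item['impact'], item['effort'])}")
```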

2. Feature Prioritization

Stakeholders
Designers, PMs

Duration
Asynchronous exercise

Conducted for
Prioritizing the features according to user stories

We had identified a set of user personas from the research inferences. At this point, we wanted to identify the features to be built based on the identified pain points. This exercise was conducted asynchronously over a week.

Pain Points:

Once the personas were finalized, we went on to narrate their daily journey with their business (as we were working on a B2B product). This helped us note the pain points better, as we could relate to and understand the overall triggers for every action they might perform in the product.

Feature Identification:

From the journey, we picked out the places where they might use our product to achieve their goals. While doing this, we also converted their pain points into features and tagged them contextually as important, frequent, and urgent.

Prioritization:

There is a famous mantra that Ben Shneiderman came up with regarding information hierarchy, according to which any information can be positioned hierarchically in the following structure:

Overview first → Zoom and filter → Detail on demand.

Based on this approach, we created the following 2D framework, which helped us decide on the placement of the different identified features. Here is how we placed them: a feature that solves an important, urgent, and frequent pain point goes at the overview level, while the ones that are neither frequent nor urgent can be kept a level deeper in the product architecture.
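As a rough illustration (not the exact rules we used), the mapping from tags to levels can be written down like this in Python; the tag-counting cut-offs below are my own assumptions.

```python
# Sketch: map important/frequent/urgent tags to Shneiderman's three levels.
# The tag-counting cut-offs are illustrative assumptions.

LEVELS = ("Overview first", "Zoom and filter", "Detail on demand")

def placement(important: bool, frequent: bool, urgent: bool) -> str:
    tags = sum([important, frequent, urgent])
    if tags == 3:
        return LEVELS[0]  # important, urgent, and frequent: overview level
    if tags >= 1:
        return LEVELS[1]  # partially critical: one level deeper
    return LEVELS[2]      # neither frequent nor urgent: deepest level

print(placement(True, True, True))     # -> Overview first
print(placement(False, False, False))  # -> Detail on demand
```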

Information Architecture:

With this framework as a base, we went ahead and drew out the information architecture of the whole application, and we followed this method to create the first MVP version. Later, when we wanted to scale the product with more features, following the same framework made it easier for us to clearly pinpoint the sections of the application that needed changes.

3. Design Ideation

Stakeholders
Designers, PMs and Developers

Duration
1hr 20mins

Conducted for
Ideating on the design approaches

By the time we conducted this exercise, we had identified the final list of features and the architecture, and we wanted to ideate on the overall layout and navigation of the application. We went back to the designers with a set of initial ideas that we had sketched out on paper. In order to bring non-design and design peers onto the same page, we made sure we all used pen and paper. It also helps in ideating faster, without letting people get too deep into the design details.

Preparation:

We prepared a one-slide presentation that had the list of features, their criticality, and the overall architecture of the features. We had also pasted a printout of the architecture in the room where we conducted this exercise.

:: During the session ::

Presentation (5–10mins)

In this exercise, along with the presentation of all the features and the architecture, we also presented five of our ideas sketched on paper at the beginning of the session. We were afraid that doing so might bias the audience, but it actually made explaining our expectations a lot easier.

Comments (20mins)

We pasted those iterations on the board and asked the participants to post their comments on them. To make reconciliation easier, we asked them to use green notes to comment on what they liked, red for dislikes, and a neutral color for any suggestions or improvements.

Ideation (30mins)

Now that they had commented on the presented ideations, we asked them to pick up our printed wireframing sheets (each sheet had three outlined phone frames printed on it) and come up with their own iterations. The expectation was at least three iterations per participant, but as it was a pen-and-paper exercise, we received five iterations per participant on average.

Presentation and voting (30mins)

We asked the participants to put up their iterations on the wall. Though most of them were self-explanatory, we had allocated some time for them to present their ideas. We then provided the participants with vote stickers and asked them to vote not just on whole iterations but also on independent sections they liked within the iterations. For example, one of the iterations had a universal search bar, which most of the participants felt was great to have.

One important thing to ensure during these sessions: since this is for idea generation, make sure none of the individual opinions are discussed openly unless someone asks for it. This helps keep the ideas unfiltered.

Snaps of an ideation session on Razorpay mobile app navigation.

Outcomes

We then reconciled the ideas generated and started designing them as defined low-fidelity screens. These helped us discuss the possible approaches with the other product stakeholders.

4. Silent Critique

Stakeholders
Designers, PMs

Duration
1hr

Conducted for
Collecting feedback on the design iterations/wireframes.

The project we were working on fortunately had a timeline flexible enough to create ample iterations; we had created more than 50 variations of the home screen alone. We had hit a block where we were not able to finalize an approach, so we conducted this exercise with the team to collect feedback on those iterations.

When critique sessions happen in person, people often tend to discuss their comments and opinions in public (though this varies from person to person). What can happen then is that if the designer tries to defend the ideas, the whole session turns into a series of discussions and a lot of time is lost on only a couple of iterations. In order to avoid that, we came up with a silent critique.

Preparation:

Along with the one-slide presentation, we also had to boil the ideas down from 50 to around 8–10 iterations. We refined them enough to look close to high-fidelity mock-ups and created some prototypes to convey micro-interactions and transitions.

:: During the session ::

Presentation (3 mins per idea)

We started off with a one-slide presentation to recap the previous phases of the product and quickly jumped into presenting the iterations to the audience. Here we also mentioned explicitly that we were not looking for any visual feedback, as we had not worked on the visuals yet.

Comments (along with the presentation)

While we presented the iterations, we also shared the design files with the audience (Figma files in our case) and asked them to comment their feedback directly on the designs themselves. To make pinpointing easier, in some designs we had also placed the individual sections separately.

Discussion (30mins)

We were planning to go through all the comments individually after the session anyway, but to also collect some emotions around the different ideas, we opened up space for a discussion. In this phase, we asked the participants to pick one favorite iteration or feature out of all the ones presented.

Snaps of some of our silent critique sessions with the team

Outcomes

This helped us converge on the ideas generated. We were able to understand the feedback and iterate to arrive at no more than two iterations after this session.

5. User testing

Stakeholders
Designers, Users

Duration
45–60mins

Conducted for
Understanding user acceptance

We conducted this exercise after we had created some final prototypes of the product. Before passing them on to the tech team for implementation, we wanted to validate them with users. In our case, we had created two iterations with slightly different approaches, so to validate the approach we invited 7–10 users matching the identified persona for the testing.

Preparation

Prototype

We had made click-through prototypes of both iterations in Figma, trying to cover as many critical task flows as we could. We also dressed the visuals up with some brand elements so that the prototypes had a finished feel; this was just to make sure the users did not have a very different experience from our existing product suite.

Script

We created an interview script to keep the flow of questions the same across different interviewers. This helped us avoid any delta that differences in questioning could cause in the outcomes of the testing. We also added statements to make sure the user felt comfortable giving explicit feedback on the designs presented.

Tasks

From the identified user journey, we devised a set of tasks that the users were expected to complete using the prototypes presented. These tasks were situational and did not directly spell out what the users were expected to do. This helped us understand the natural task flow, which sometimes happens outside the product as well.

Metrics

We wanted to quantify the outcomes for a better understanding. Here I followed the criteria mentioned by Matej Latin in his article on Measuring and Quantifying User Experience.

Success Criteria / Completion (C1) — 0, 1, 3

For each task, we defined the success criteria to measure its completion. This is rated based on the time taken to complete the task and on whether the participant was able to complete the whole flow of the given task.

Complexity (C2) — 1, 2, 3, 4, 5

This rating marks how complex the task is. It was measured either by the number of steps the user has to take to complete the given task or by the discoverability of the feature.

Criticality (C3) — 1, 2, 3

This rating is based on the priority of the task and was derived from the feature framework we discussed earlier in this article.

:: During the session ::

Testing (30 mins)

We conducted these sessions as a combination of in-person and remote. Having a recording of these sessions (with consent, of course) helps in post-session analysis.

After understanding the user’s business nature and operations (as it was a B2B product), we presented the iterations and asked them to initially spend a couple of minutes exploring and understanding the different sections of the application. At this stage, we asked the users to be vocal about what they thought each section of the app did. We presented the tasks after this.

General Feedback (15 mins)

Once they had performed all the tasks, we asked them for overall feedback on the product, and also for three things they wished they had in the application. This last point gave us inputs on which direction to take next.

Calculating Usability score

After the session, we collected the scores as decided and calculated the usability score of a particular task with this formula:

Usability score = (Completion + Complexity) * Criticality

For example :

Task — To sign up and create an account

Completion = 1 (total of 7 steps; the user was able to reach the 4th)
Complexity = 3 (a very linear process, but the user has to add a lot of documents)
Criticality = 3 (high)

Usability Score = 12
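If you are tallying many tasks across participants, the bookkeeping is easy to script. Here is a minimal sketch in Python of the formula above, using the worked example’s values; the scale bounds in the assertions come from the criteria listed earlier.

```python
# Sketch of the usability-score calculation described above.
# Scales: Completion in {0, 1, 3}, Complexity in 1-5, Criticality in 1-3.

def usability_score(completion: int, complexity: int, criticality: int) -> int:
    """Usability score = (Completion + Complexity) * Criticality."""
    assert completion in (0, 1, 3), "Completion (C1) is rated 0, 1, or 3"
    assert 1 <= complexity <= 5, "Complexity (C2) is rated 1-5"
    assert 1 <= criticality <= 3, "Criticality (C3) is rated 1-3"
    return (completion + complexity) * criticality

# Worked example: signing up and creating an account.
print(usability_score(completion=1, complexity=3, criticality=3))  # -> 12
```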

At the scale at which we conducted the exercise, this method did not add a lot of value, as we could clearly see a pattern evolving in the ways the users approached the tasks. But it might be very useful in cases where the product involves a lot of flows and evaluating them with qualitative methods alone gets very subjective.

To conclude

As I mentioned at the beginning of this article, we adapted these exercises to our needs at different stages of the design process. There is no one right way; design is an iterative and collaborative process, so feel free to experiment with these methods. I hope this was useful. To know more about Razorpay Design, do follow us on Instagram and Twitter.

Thanks to Kshipra Sharma for the final review and for helping me shape the content.
