The Annotation Panel Story — A Product Design Case Study

Ritesh Kalia
5 min read · Mar 19, 2024


This case study reveals how we designed an Annotation Panel tailored to our Research Team’s needs, simplifying dataset management and image annotation.

For those who don’t know, an annotation panel is a tool for labeling pictures to teach a robot what things are.

Why did we need it in the first place?

Our reliance on local data storage and outsourced image annotation created critical testing risks and bottlenecks, which slowed us down as we faced several challenges:

  • Data Risk: Local storage of critical data without cloud backups posed a high risk of data loss.
  • Model Evaluation: It was difficult to measure model efficiency and real-world performance.
  • Image Annotation: Outsourced annotation led to quality issues, difficulty managing a growing database, and challenges identifying poorly annotated images.
  • Data Versioning: Inefficient tracking of dataset and model changes hampered reproducibility.
  • Organising Data: We had a vast amount of data scattered across drives, lacking proper labels. Our aim was to organise it effectively.
  • Collaboration Bottlenecks: Siloed work on local systems hindered the sharing of data, models, and insights.

My role in the team

I was responsible for designing the Model Manager Flow and worked with a lean team of product, research, and engineering in a dynamic, non-linear process. We frequently revisited the drawing board as we worked closely with the research team. While this approach may not make for a typical UX case study, it was a rewarding journey.

How did we do it?

We began with comprehensive product research, analysing other tools and industry patterns. Our goal was to design a panel that aligned with industry standards.

However, the limited number of available data annotation tools meant we relied heavily on our research team’s input and observed how the major players solved these problems.

The Biggest Challenge

The biggest design challenge was making one panel work for different users: Data Annotators, Model Managers, and Reviewers. Each group had unique tasks, so we had to create three separate workflows within the same space.

Competitive Benchmarking

We drew inspiration from various image annotation tools, both paid (like V7, Super Annotate & Labelbox) and open-source (like CVAT). This exploration allowed us to understand their functionality, identify patterns, and learn about best practices. We then tailored our own panel, making the necessary adjustments to meet our unique requirements.

Based on that, we also mapped out the combinations of steps each user (Data Annotator, Model Manager, and Reviewer) would have to take to complete their tasks.

Annotation Panel Final Flow

Note: For brevity, I’ll skip the early explorations and wireframes and focus only on the dataset creation part for now. For details, please don’t hesitate to DM me on Twitter, LinkedIn, or email.

Dataset Creation

Right from the beginning, we knew that creating a new dataset is a lengthy process and can feel overwhelming. That’s why we decided to divide it into four manageable steps. This way, you feel a sense of achievement with each step you complete. The stepper also serves as a progress tracker, making the process less daunting, lowering cognitive load, and leaving less scope for error.

Step One — Name & Data

The initial step in creating a dataset for the Model Manager involves three key decisions: naming the dataset, choosing its data type, and deciding whether to use existing models for weak annotation of the data.

Name Dataset → Select Data Type → Upload Files.
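To make this concrete, here is a minimal sketch of the choices Step One might capture, written in TypeScript. The type and field names (StepOneState, useWeakAnnotation, and so on) are illustrative assumptions, not the panel’s actual data model.

```typescript
// Hypothetical shape of the Step One form state.
// All names here are illustrative, not the real implementation.
type DataType = "image" | "video";

interface StepOneState {
  datasetName: string;        // human-readable name for the new dataset
  dataType: DataType;         // kind of media the dataset will hold
  useWeakAnnotation: boolean; // pre-label uploads with an existing model?
  files: string[];            // files selected for upload
}

const stepOne: StepOneState = {
  datasetName: "warehouse-cameras-q1",
  dataType: "image",
  useWeakAnnotation: true,
  files: ["img_0001.jpg", "img_0002.jpg"],
};
```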

Step Two — Add Data Tags

After the initial setup, it is necessary for the Model Manager to create tags. Tags are like labels that help you organize and quickly find the data you need. They also make it easier to use your data for different projects in the future!

Add / Create Datasource tags → Add / Create Event Tag → Add / Create Other Tags (if needed).
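As a rough illustration, tags could be modelled as category/value pairs so data can later be filtered across datasets. The names below (TagCategory, DataTag) are assumptions made for this sketch, not the panel’s real schema.

```typescript
// Hypothetical tag model: each tag pairs a category with a value.
type TagCategory = "datasource" | "event" | "other";

interface DataTag {
  category: TagCategory;
  value: string;
}

// Example tags a Model Manager might attach in Step Two.
const tags: DataTag[] = [
  { category: "datasource", value: "cctv-feed-3" },
  { category: "event", value: "night-shift" },
  { category: "other", value: "low-light" }, // only if needed
];
```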

Step Three — Add Classes & Description

In Step Three, add the classes for which the dataset should be annotated. We create classes to represent these categories; think of them like colourful labels. Each class gets a name, a colour, and clear directions for the annotators so everyone knows how to use it.

Add / Create Classes → Colour Code Classes → Add Dataset Description & Instructions.
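A class, as described above, boils down to a name, a colour, and instructions. Here is a hypothetical sketch of that structure; the class names and colours below are made up for illustration.

```typescript
// Hypothetical class definition: name, colour, and annotator guidance.
interface AnnotationClass {
  name: string;
  colour: string;       // hex colour used to render the label
  instructions: string; // directions shown to annotators
}

const classes: AnnotationClass[] = [
  {
    name: "forklift",
    colour: "#FF6B6B",
    instructions: "Draw a tight box around the whole vehicle, including the forks.",
  },
  {
    name: "pallet",
    colour: "#4ECDC4",
    instructions: "Label only pallets that are fully visible in the frame.",
  },
];
```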

Step Four — Add Members

Finally, add Annotators and Reviewers, and distribute the dataset among them using a percentage-based division.

Add Annotators → Add Reviewers → Divide the Dataset.
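To show what a percentage-based division might look like in practice, here is a small sketch. The member names and the rule that shares must sum to 100% are assumptions made for the example, not a documented behaviour of the panel.

```typescript
// Hypothetical assignment model: the dataset is split among annotators
// by percentage, with reviewers tracked separately.
interface Assignment {
  annotator: string;
  share: number; // percentage of the dataset given to this annotator
}

const reviewers: string[] = ["priya", "daniel"];

const assignments: Assignment[] = [
  { annotator: "alex", share: 40 },
  { annotator: "sam", share: 35 },
  { annotator: "mia", share: 25 },
];

// A percentage split only covers the whole dataset if the shares add up to 100.
const total = assignments.reduce((sum, a) => sum + a.share, 0);
if (total !== 100) {
  throw new Error(`Annotator shares must sum to 100%, got ${total}%`);
}
```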

Key Learnings

  • Crafting this tool was a new experience for me. Designing the user flow was challenging, especially when considering all potential errors and what-ifs.
  • We had a mountain of existing data to organize, with a clear goal in mind: to make data search and retrieval as simple as possible. This objective led us to develop a comprehensive tagging system that extends beyond individual datasets, right down to the image level.
  • Collaborating with the research team taught me a ton about AI model development. I picked up the process, the terminology, and gained a whole new perspective.
