Spectra: A portal for developer productivity metrics and feedback
The background
As mobile teams scale, a number of challenges come their way. Some of these are:
- A huge number of merge requests
- Lots of code reviews
- Umpteen code review comments
- Build breakages
- Performance impact in terms of memory and app size
- Different team dynamics and velocities
- Delivering feedback to developers in a timely manner
Why we needed this
- Developers need constant feedback on their tech metrics. It helps them keep track of those metrics over a period of time and self-evaluate how they have fared.
- Tech teams need to know the bottlenecks and problem areas so they can optimize processes and tools.
For example, some metrics that give meaningful feedback to developers and help them improve over time:
- The number of lint errors/warnings recorded in the code submitted by a developer
- The number of build failures caused
- The quality of commit messages
- The adherence to design quality and standards
- The size of images used
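To make one of these metrics concrete, here is a minimal sketch of a commit-message quality check. The scoring rules (subject length, capitalization, a JIRA-style ticket reference) are entirely hypothetical illustrations, not the rules Spectra actually uses:

```python
import re

# Hypothetical quality rules: a "good" commit message has a short,
# capitalized subject line and references a ticket like ABC-123.
def commit_message_score(message: str) -> int:
    subject = message.splitlines()[0] if message else ""
    score = 0
    if 0 < len(subject) <= 72:                 # subject fits on one line
        score += 1
    if subject[:1].isupper():                  # starts with a capital letter
        score += 1
    if re.search(r"\b[A-Z]+-\d+\b", message):  # references a JIRA-style ticket
        score += 1
    return score                               # 0 (poor) .. 3 (good)
```

Such a script can run as a CI pipeline stage and emit the score as one more signal.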
What we desired
We wanted an open portal where developers, or anyone else, can look at team and individual metrics. The portal would have various graphs giving feedback on how different metrics have fared over time.
How we built the portal
The data: To build the portal, we needed data on the different metrics.
The source of data: the CI system. CI pipelines are where all the information originates.
- Gitlab runners
- Jenkins
- Python scripts → Any validation scripts were written in Python.
Ex: We wanted a way to check whether a merge request had been certified by the QA, Product and Design teams. JIRA had all this information, so we wrote a Python script that contacts JIRA, checks all the above statuses and triggers certain signals.
Together, these three powered the entire system.
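A minimal sketch of such a JIRA check, assuming the standard JIRA REST issue endpoint; the sign-off field names (`qa_certified`, `product_certified`, `design_certified`) are hypothetical, since real field IDs depend on the JIRA project configuration:

```python
import json
import urllib.request

# Hypothetical names for the sign-off fields on a JIRA issue.
REQUIRED_SIGNOFFS = ("qa_certified", "product_certified", "design_certified")

def all_certified(fields: dict) -> bool:
    """True when every required team has marked the issue as Done."""
    return all(fields.get(name) == "Done" for name in REQUIRED_SIGNOFFS)

def check_issue(base_url: str, issue_key: str) -> bool:
    # Standard JIRA REST endpoint for fetching a single issue.
    url = f"{base_url}/rest/api/2/issue/{issue_key}"
    with urllib.request.urlopen(url) as resp:
        fields = json.load(resp)["fields"]
    return all_certified(fields)
```

The boolean result is what gets emitted as a signal from the pipeline.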
Examples of data sources: the CI system would know all about
- Build failures → Success or failure
- Lint results → How many errors and how many warnings
- Static code analysis tool results
- Quality of commit messages
The data storage: All the signals generated in the CI pipelines (build success or failure, the stage in which a failure happened, commit message quality, lint results, the number of tests that passed and failed) would be sent to a data store. In our case, Elastic.
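Shipping one such signal is a single HTTP call to the standard Elasticsearch document-index endpoint. In this sketch, the `ci-signals` index name and the document fields are assumptions for illustration:

```python
import datetime
import json
import urllib.request

def build_signal(pipeline: str, stage: str, status: str, **extra) -> dict:
    """Assemble one CI signal document for indexing into Elasticsearch."""
    doc = {
        "pipeline": pipeline,
        "stage": stage,
        "status": status,
        "timestamp": datetime.datetime.utcnow().isoformat(),
    }
    doc.update(extra)  # e.g. lint_errors=4, tests_failed=2
    return doc

def send_signal(es_url: str, index: str, doc: dict) -> None:
    # POST to the standard Elasticsearch document-index endpoint.
    req = urllib.request.Request(
        f"{es_url}/{index}/_doc",
        data=json.dumps(doc).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)
```

Each pipeline stage calls `send_signal` once, so every build leaves a queryable trail of documents.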
The portal/dashboard: ELK Stack
Why Elastic?
In addition to being a NoSQL data store, the ELK stack provides dashboarding in the form of Kibana. All the basic chart types, such as pies, graphs and meters, come pretty much out of the box.
Once the data is ingested into Elastic and indexed, we could easily mix and match indices and create delightful dashboards.
Conclusion:
Spectra has started proving its mettle by giving some measurable insights.
As teams scale, tracking metrics helps identify bottlenecks and optimize development processes.
Co-authored by Ayush
Designs by Abhi
Image credit : Photo by Carlos Muza on Unsplash