Discipline is the bridge between goals and accomplishment

Patrick Winters
13 min read · Sep 9, 2018


I’ve been excited to share this success story about effective team formation and process adjustment since long before the outcome was certain. I’ll admit that I had assured myself that our success was inevitable, but the result has been more positive than I had anticipated. This is a story more generally about team formation and more specifically about assuming greater engineering discipline. After pulling together as a new team, my group of frontend developers adopted a few key tools and practices that have significantly accelerated our development and made our process more consistent and predictable. Team members have recently described us as “cruising,” we’re on target to ship our first major project, we’re introducing technologies that are important to Bronto’s future, and our team’s velocity has been steadily increasing. I’ll explain why I attribute this, in large part, to one overarching theme: discipline.

Forming!

This story begins with the formation of a new Frontend team at Bronto in February of 2018. After an extended period away to welcome my third child home, I returned to Bronto with enthusiasm to join this team and tackle some of the difficulties I experienced “Building a Data-Driven Reporting User Interface.” I had already explored and investigated a number of ideas that I was keen to put into action, and I had been working on architectural plans during my absence. Bronto assembled a large team without any specific marching orders, and we immediately fell into maintenance of our legacy monolith and existing features.

Before formation, frontend developers at Bronto had been working somewhat independently or in pairs on disjoint projects. As a result, frontend development involved a number of disparate tools, technologies, and expectations. Most of the team’s practices were inherited; few had been established by or owned by any of the existing team’s members. Our first major project together aimed to bring Bronto into compliance with the EU’s GDPR privacy regulations and largely involved working within our legacy application and following historical practices and processes. We may have assembled a new group around a single stream of work, but there was little appetite to address historical process debt.

Storming!

For my part, I may have skipped directly to the storming phase of team formation. In my naive evaluation, the time was right to upend existing practices and adopt Extreme Programming! I’ll be the first to caution others against overeagerness at this phase, especially when merging into a group that has existing practices. Everyone experiences heightened sensitivity to change at this stage, and confidence can quickly turn to or be interpreted as arrogance. I took a step back and tried my best to observe more quietly. I began to think tactically about adopting, among other things, continuous integration and automated testing, and how we could steer towards collaborative development. We still debated whether to use Crucible or Gitlab for code reviews but avoided any major changes to our processes.

Within a month or two of formation, we had a roll call of 11 developers (some newly hired), and I transitioned from focusing on frontend architecture to thinking about how to wrangle it all into something we could make our own. The frontend team was expected to work together, on a single stream of work, with the unofficial goals of modernizing Bronto’s frontend and scaling frontend development for the organization.

Observations…

Feedback loops were long.

Members might easily work for a week or two without meaningful discussions about their work. This reinforced siloing that was occurring institutionally and made it difficult for developers to share knowledge and normalize practices. The starkest examples of this siloing included frontend developers being leased out to other teams for months and work being pre-assigned during sprint planning to specific developers who were considered feature experts. We needed a drastic cultural transformation to become a more collaborative squad.

The number of line changes in code reviews was… high.

Without working together closely, it was too easy to let empathy slip and submit unreasonably large code reviews. The number of line changes for a single review might be in the thousands, and everyone felt taxed both performing reviews and waiting for them to be completed. It also meant that we couldn’t meaningfully digest what was being presented, and it led to a lot of rubber stamping. I’ve personally been part of the problem, submitting giant merge requests. Although I was aware of the impact this had on others, I found that the lack of team collaboration discouraged personal discipline.

Our over-the-wall mentality led to lots of follow-ups, warm fixes, and hot fixes.

Because our release process gated deployments to integration environments, we didn’t have the ability to test and verify features immediately after development. When a developer passed a code review and the branch merged to master, items waited until the following week to get tested by our quality assurance (QA) team in an integration environment. This meant that we’d receive little feedback from QA and product owners until we had begun to work on something else. With such a large and business-critical code base, it might be irresponsible to manage releases in a less predictable way. However, it meant that our QA team and product owners weren’t able to test and approve our work when it was completed. Furthermore, it also meant that it would take equally long to correct problems that made it to production, leading to regular hot fixes. Likewise, problems found during testing would encourage a warm fix in our integration environment, an effort to avoid blocking the entire release train. We would have no choice but to move on to new features, finding ourselves pulled back into follow-ups and warm fixes on items that got thrown back over the wall. Unfortunately, developers aren’t great at juggling many things; we do our best work when we achieve flow.

We lacked build automation, which made merge conflicts and mistakes unavoidable.

A handful of our JavaScript applications were served directly from our legacy monolith. Unfortunately, the build process for this multi-purpose PHP application had grown unwieldy, and the frontend team hadn’t felt in control of it. A decision was made years ago to build some JavaScript bundles without automated tools (i.e., manually on developer laptops) and to check those bundles directly into source control. This limited the amount of collaboration that a project could support, since multiple developers might be forced to resolve conflicts on large generated bundles, and it understandably led to mistakes and regressions where we lacked integration tests and continuous integration tools to run them.

We named this problem the “Bundle Wars,” and it felt like it had raged forever.

Adjustments…

In my post “Toward Micro Frontends,” I alluded to some major architectural changes in the works for Bronto’s frontend ecosystem, but at this phase, these were no more than designs. We continued to add features within our legacy monolith; we didn’t have any avenues that would reasonably support or encourage experimentation. Adding to the confusion, Bronto pivoted and delivered a high-profile, high-impact initiative to our doorstep. Most likely for the better, this forced us to reckon with the conflict between business needs and our technical and process debt. Fortunately, we secured an agreement in April to build a proof of concept micro frontend (MFE) application to demonstrate the feasibility of this architecture for this project while it was still in its design stages. We used the opportunity to adopt and define new practices that departed from how we previously managed our software development lifecycle. In short, this provided us with an opportunity to experiment with a more disciplined process.

Continuous Integration and Gitlab

For code reviews, the frontend team had previously used Atlassian’s Crucible. Crucible cannot merge branches, nor does it surface merge conflicts, leaving developers to resolve conflicts and merge branches manually. I’ve used Github and Gitlab for years, and I felt that a PR/MR process could encourage more constructive communication and smoother integration. Most importantly, we began using Gitlab CI/CD for building MFE projects. We weren’t the first team at Bronto to shift from Atlassian’s Bamboo to Gitlab CI, but we were early adopters. Gitlab CI builds and tests every merge request, giving developers and reviewers feedback, directly on the review, about the safety of merging the changeset. Holistically, Gitlab helps ensure that our continuous integration works for us, saving us from costly merge mistakes and broken builds and enforcing discipline around tests, reviews, and merging.
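As a sketch of what this takes, a `.gitlab-ci.yml` along these lines is enough for Gitlab to build and test every merge request (the job names and npm scripts here are hypothetical, not our actual configuration):

```yaml
# Hypothetical .gitlab-ci.yml sketch — assumes an npm project with
# "test" and "build" scripts; job and stage names are illustrative.
image: node:10

stages:
  - test
  - build

test:
  stage: test
  script:
    - npm ci
    - npm test

build:
  stage: build
  script:
    - npm ci
    - npm run build
```

With a pipeline like this, Gitlab annotates each merge request with its pipeline status, and a project can require a passing pipeline before allowing the merge.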

Semantic Release and Commit Discipline

Perhaps my favorite of our tools, semantic-release promises “fully automated version management and package publishing.” By instituting conventions and inspecting commit messages, semantic-release manages package versioning and publishing. Paired with commitlint, these tools strongly encourage developers to be intentional about their changesets. Commit messages have meaning, and, therefore, so do commits. More than any other tool or practice, I believe that semantic-release has driven down the size of our merge requests and encouraged us to take smaller, planned steps. After holding a team training session where we covered how to squash and amend with git rebase, the team immediately started using these tools to present narrow, disciplined changesets for review.
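To illustrate why commit messages carry meaning here, semantic-release’s default behavior can be sketched as a function from a Conventional Commits message to a release type. This is a simplification for illustration only, not the tool’s actual implementation (the real commit-analyzer parses commits with conventional-changelog):

```javascript
// Simplified sketch of how a Conventional Commits message maps to a
// release type under semantic-release's defaults (illustrative only).
function releaseType(message) {
  const [header] = message.split("\n");
  if (/BREAKING CHANGE/.test(message) || /^\w+(\([^)]*\))?!:/.test(header)) {
    return "major"; // e.g. 2.3.1 -> 3.0.0
  }
  if (/^feat(\([^)]*\))?:/.test(header)) {
    return "minor"; // e.g. 2.3.1 -> 2.4.0
  }
  if (/^(fix|perf)(\([^)]*\))?:/.test(header)) {
    return "patch"; // e.g. 2.3.1 -> 2.3.2
  }
  return null; // chore:, docs:, etc. publish no release by default
}

console.log(releaseType("feat(reports): add CSV export")); // prints "minor"
console.log(releaseType("fix: handle empty date ranges")); // prints "patch"
```

Because every commit declares its intent this way, a merge request that mixes a feature, an unrelated fix, and a refactor becomes awkward to express, which is exactly the pressure that keeps changesets small.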

Automated Browser Testing

A key tenet of our micro frontend architecture is the belief that frontend applications should be independently developable. In practice, this means that each application uses create-react-app to provide a sandboxed web server that tests can be run against. Gitlab CI runs the application’s development server and high-level e2e tests in Docker “builder” containers using create-react-app and WebdriverIO. The benefits of this CI-based e2e testing are hard to measure, but I believe it has a positive psychological effect, increasing our confidence in each other and our process. Moreover, writing e2e tests encourages discipline around meeting acceptance criteria, adopting a user perspective, and hardening our work against accidental regressions.
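For flavor, a minimal spec of this kind might look like the following. The file path, selectors, and error copy are hypothetical, and it assumes WebdriverIO’s v4 synchronous API; it runs under the wdio test runner (which provides the global `browser`) against create-react-app’s dev server, not standalone:

```javascript
// e2e/specs/contact-form.spec.js — hypothetical spec executed by the
// wdio runner; `browser` is a global the runner provides.
const assert = require("assert");

describe("contact form", () => {
  it("rejects an empty submission", () => {
    browser.url("/contacts/new"); // served by create-react-app's dev server
    browser.click('button[type="submit"]');
    assert(browser.getText(".field-error").includes("required"));
  });
});
```

Specs like this read almost like acceptance criteria, which is part of why they nudge developers toward the user’s perspective.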

Automatic Code Formatting

With Prettier, we simply stopped debating code style. Rather than argue about line breaks and indentation, we were able to focus on more meaningful aspects of software development. With a VS Code extension and easy git hooks for automatic formatting (typicode/husky and okonet/lint-staged), we let Prettier have its way for the greater good. Although it may seem superficial at first, automatic code formatting frees developers from making trivial decisions while coding and from making banal comments while reviewing for others. Perhaps style linters impose discipline, but Prettier makes it easy!
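One common wiring for this at the time looked like the following package.json fragment (hypothetical, not our exact configuration): husky installs the git pre-commit hook, lint-staged limits formatting to staged files, and `git add` restages the rewritten files.

```json
{
  "scripts": {
    "precommit": "lint-staged"
  },
  "lint-staged": {
    "*.{js,jsx,json,css,md}": ["prettier --write", "git add"]
  },
  "devDependencies": {
    "husky": "^0.14.3",
    "lint-staged": "^7.2.0",
    "prettier": "^1.14.2"
  }
}
```

With this in place, unformatted code simply never reaches a commit, so reviews never need to mention style.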

Pair Programming and “Stooging”

The best way that I can think of to normalize a team and spread information is to encourage pair programming. Admittedly, pairing takes practice and experience, but the benefits outweigh any early discomfort. Adopting this practice took time, however. I was unsuccessful in securing equipment for a dedicated pairing station, and we had a number of team members (myself included) that enjoyed working from home. Thankfully, we found a way to pair virtually, sometimes even sitting right next to each other. Initially with Atom’s Teletype and eventually with VS Code’s Live Share, we had tools that would let us work together in groups remotely. Distance and days from home no longer affected our ability to collaborate, and we would often “stooge” (collaborative groups of three or more) on more difficult problems. At least once a week, I like to call out what I’m working on and invite others to join me. If it’s an especially tricky problem, it gives team members a chance to engage and learn without taking on the responsibility of solving the problem by themselves. Especially for less senior developers, working together like this helps to train and teach and encourages everyone’s development.

Norming!

During development of our proof of concept app, we began to feel the improvement before we had any objective indicators. We still worked through some large MRs, and our commit history had echoes of a less disciplined past. What’s more, we openly debated whether expectations were too high with e2e tests and whether we should defer them for follow up work. What we didn’t do was debate whether these practices provided value. We accepted their merits surprisingly quickly, and we worked together to find out how to best fit them in our process. Everyone began to adjust.

We held a few team workshops covering, among other things, git rebase for squashing and amending, application patterns using redux or GraphQL, and async/await and WebdriverIO for e2e testing. The learning curve of our new architecture, technology, and practices affected everyone, but these group training sessions helped to uplift and normalize the team.

My favorite anecdote from this period is that a developer, getting nowhere with a request for a team whiteboard, simply walked around until he found one and rolled it to our space from across the building. On at least a couple of occasions each week, that whiteboard has three or more developers huddled around it discussing some pattern or plan. I can’t attribute cause and effect, but that whiteboard seems to have played a big part in the increase in communication and collaboration that we’ve seen.

We also began formalizing the use of some technologies that we already had some experience with (like React) and adopting technologies and patterns that are new to us (like GraphQL). We were well into a plan that would decouple our frontend into something analogous to microservices, but we were still actively developing within the legacy monolith to support our “micro frontends.”

Performing!

We’ve accomplished something that continues to amaze me, pulling together a high-functioning team in less than six months.

In the past month or so, non-developers have begun to acknowledge our improved performance. We’ve been recognized for delivering features faster and more predictably. Now that our improvement has begun to project outwardly, we’re building trust with our colleagues and expect more empowerment to make decisions that continue the advancement.

To demonstrate our progress, I’ve visualized Gitlab merge request data for the three repositories we’ve worked in since forming: our “Legacy Monolith” PHP web application, our proof of concept React application for our new architecture (“MFE PoC”), and our most recent high-profile React project using our new architecture (“High Profile MFE Project”). I’ve bucketed data by week to show trends over time and provided an “All Projects” line to display totals or averages across all projects. We can see four major themes in the data: a move toward consistently smaller and more manageable changesets, an increase in team collaboration, smaller feedback loops, and an increase in overall productivity.

Manageable Changesets and Consistency

Gone from retrospectives is the common refrain that “merge requests are too large and painful to review!” We’ve reduced the number of changes in merge requests, leading to more meaningful and manageable reviews. This has had the added benefit of forcing us to develop and communicate incremental plans for feature implementation. We’re more empathetic towards our colleagues by applying discipline around what we present to them and when.

Average Number of Line Changes in Accepted MRs

We’re currently averaging under 200 line changes per MR! Talk about a manageable review!

Average Number of Commits in Accepted MRs

We’re averaging about 2 commits per MR. Since our commits have release semantics, this means we’re mostly keeping MRs limited to a single bug fix or feature addition. In practice, we often include a refactoring, documentation, or e2e testing commit alongside the feature or fix.

Team Collaboration

We’ve seen a steady increase in the amount of discussion and communication in Gitlab. Team members provide reasoned, useful feedback that has helped to spread patterns and make our codebase more predictable and consistent. We’re more disciplined about the size of our reviews, and it has ultimately led to deeper and more meaningful discussion.

Number of Merge Request Comments

An increasing amount of discussion in Gitlab demonstrates greater collaboration and communication.

Smaller Feedback Loops

As we’ve reduced the size of merge requests, we’ve increased their frequency. This means that the team is touching base much more often, requesting feedback from colleagues at shorter intervals and heading off design or implementation problems much earlier.

Number of Merge Requests Accepted

More merge requests means we’re requesting feedback more frequently.

Productivity

Our output has increased at least 3x since April, and accounting for the increased quality and reduced rework might put the real multiplier at 5x.

While acknowledging that merging lines of code and accepting merge requests don’t necessarily mean we’re providing business value, these numbers correlate well with increases in our feature velocity. This chart demonstrates that the primary output of our software developers, namely software, has increased.

Number of Line Changes Accepted

The number of line changes accepted each week has climbed steadily since April.

What’s Next?

Now that we feel steady, consistent, and productive, we’ve begun to ask ourselves how else we can improve our practices. At our most recent retrospective, for example, we debated the following:

  • Should we attempt continuous delivery or more frequent production deployments?
  • Can we build ephemeral Gitlab Review Apps for early feedback from QA and product owners?
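The Review Apps idea maps onto a Gitlab CI job along these lines (a sketch only; the deploy script and domain are hypothetical): each MR branch gets its own short-lived environment that QA and product owners can click through before merge.

```yaml
# Hypothetical Review Apps job for .gitlab-ci.yml: deploy each branch
# to an ephemeral environment for early QA and product feedback.
review:
  stage: deploy
  script:
    - ./scripts/deploy-review.sh "$CI_COMMIT_REF_SLUG"  # hypothetical script
  environment:
    name: review/$CI_COMMIT_REF_SLUG
    url: https://$CI_COMMIT_REF_SLUG.review.example.com
  only:
    - branches
  except:
    - master
```

Gitlab then links the environment URL directly from the merge request, which could shrink the QA feedback loop from a weekly release train down to the life of a branch.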

Adopting disciplined practices has led to more consistent and predictable behavior, which should lend support to our goal of measured and continuous improvement. This has truly been a group effort, and I’m grateful for the support and open-mindedness of my teammates. I’m excited to continue our experiments and see what more we can do. After all, we have a strong foundation to build upon!

The opinions expressed herein are my own and do not reflect those of my teammates or my employer.
