Building a Bomb-Ass Team
In college, I was super involved on-campus. One of my roles was being an advisory board member of a student-run organization. We were responsible for two projects that, respectively:
- Conducted outreach to high schools in low-income or under-resourced communities, and
- Provided services to retain college students from similar communities (e.g. first-generation, students of color, low-income, immigrants or children of immigrants).
A key responsibility of mine was staff hiring for these two projects. After all, these were well-funded projects with implications for the school’s diversity and its engagement with prospective and current students. Hiring was no easy thing; it was very rigorous.
When hiring for the project staff and director roles, one of the questions that the advisory board liked asking was: “What is your definition of a bomb-ass team?” Of course, this prompted an idyllic response from interviewees. The question largely ignores the sheer messiness of teams. It sweeps aside how much work it actually takes to build a team through trust, communication, and honesty.
But this was hands-down my favorite question. Understanding how people effectively collaborate was (and still is) important to me. During my time in college, I served as editor-in-chief of a newsmagazine, held other leadership roles, and participated in group projects for classes. I am less concerned with the office politics that inhibit a good work culture (and, in my opinion, subvert team cohesion and demonstrate poor leadership). Rather, I am more intrigued by the process of building a team that motivates, respects, and engages people. I have found that smaller teams and pairs tend to be nimble; they are the most effective at achieving objectives and executing high-quality work. But how is a bomb-ass team scalable?
The Pragmatic Programmer (referred to as PP henceforth) offers some tips and ideas about how to make teams of developers great. Some are quirky but worth checking out; others sound great, though I am unsure how to execute them. Here is the best of the best, in my humble opinion:
- To combat “scope creep” (the gradual process of adding more to your plate, especially from a client), designate a chief water tester. This person’s responsibilities include constantly checking for changes in a project’s scope, time frame, features, and environments. Essentially, this person keeps tabs and metrics on new requirements.
- Make meetings better. Personally, I dislike meetings because I’d rather be squirreling away and productively coding. Meetings should be well-facilitated and well-prepared, and any documentation should be “crisp, accurate, and consistent,” with everyone on the same page and “even… a sense of humor.”
- Create a fun brand and identity for your project. I have something of a knack for marketing #humblebrag. Even though thinking of an identity is not super important, it could build investment in the team’s product. PP suggests spending time coming up with a zany name and logo, which gives the team an identity to build on and recognizability to the world.
- Treat documentation with care. In terms of internal documentation, I’ve come to learn that comments should be used sparingly. There are other ways to document code and make things explicit without commenting (e.g. explicit, well-crafted variable names). When used, comments should discuss why something is done: its purpose and its goal. The code already demonstrates how it is done, so commenting on that is redundant.
- For organizational purposes, appoint a member as the project librarian, who is responsible for coordinating documentation and code repositories. Other team members can use this person as a point of contact when looking for something. I think having a high-level overview of a project is useful, but definitely time-consuming. (See my README here, which serves as documentation on a super small-scale level).
- Automate all the things. This is essential, but I have not worked on a project that has done it. I am new to producing makefiles, shell scripts, editor templates, and utility programs, but PP suggests designating one or more team members as the “tool builders.” Their responsibilities include constructing and deploying tools that expedite the project’s processes. They would build everything mentioned above, as well as use cron, a Unix utility that schedules a server command or script to run automatically at specified times. Cron lets teams automate anything that needs to be done but can run unattended.
- Delightfully surprise users/clients and exceed their expectations. This extra bit of effort should come in the form of a user-oriented addition to the system and should not affect the program’s core features. Some small but useful features include: keyboard shortcuts, a quick-reference guide as a supplement to the user’s manual, colorization, automated installation, or a custom splash page.
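To make the commenting advice above concrete, here is a minimal sketch; the function, values, and backstory are hypothetical illustrations, not examples from PP:

```python
# A redundant "how" comment restates what the code already shows:
#   i = i + 1  # add one to i
# A "why" comment records intent the code cannot express on its own:

def apply_discount(price: float, is_returning_customer: bool) -> float:
    # Returning customers get 10% off: a marketing decision from a past
    # retention campaign, which the code alone could never explain.
    if is_returning_customer:
        return round(price * 0.9, 2)
    return price
```

The first comment adds nothing a reader of the code doesn’t already know; the second captures the reasoning behind the magic number.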
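As a tiny illustration of the “tool builder” idea, here is the kind of small utility a team member might write and then schedule with cron to run unattended each night. Everything here (the function name, the markers counted, the file pattern) is my own assumption, not something from PP:

```python
import pathlib
import re

def count_markers(root: str) -> dict:
    """Count TODO/FIXME markers across a Python source tree --
    a small, unattended job a nightly cron entry could run."""
    counts = {"TODO": 0, "FIXME": 0}
    for path in pathlib.Path(root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        for marker in counts:
            counts[marker] += len(re.findall(marker, text))
    return counts
```

A cron entry could run a script like this every night and mail the team a summary, so nobody has to remember to check by hand.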
It comes as no surprise that testing is a critical component of project management and is part of a development team’s responsibility. PP dedicates a good amount of the last chapter to testing. Their rule of thumb is that an application should generally have more test code than production code. So far, I’ve only used unit testing, but I thought it would be helpful to define and differentiate between the different types of testing.
- Unit testing: tests a module and is foundational for other forms of testing.
- Integration testing: shows how major systems that make up the project work and interact with each other. These types of tests are often the single largest source of bugs.
- Validation and verification: testing whether the feature meets the functional requirements of the system. This should be done as soon as there is an executable user interface or prototype.
- Resource exhaustion, errors, and recovery: systems in real-world conditions face limitations involving memory, disk space, bandwidth, and resolution. Some of these environmental limitations can be addressed, but others cannot and require deciding how the system should fail.
When the system does fail, will it fail gracefully? Will it try, as best it can, to save its state and prevent loss of work?
- Performance testing: also referred to as “stress testing,” this type of testing involves scalability and may require specialized hardware or software to simulate reality.
- Usability testing: unlike all of the other tests previously mentioned, this is performed with real users within real environmental context.
- Regression testing: this refers to how we test, and I’ve encountered the term plenty of times without knowing what it means. A regression test compares the current output with previously known-good values. This ensures that bugs fixed today did not break previously working code. All of the tests mentioned above can be run as regression tests.
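A minimal sketch of the regression idea, comparing current output against recorded known-good values; `slugify()` and its “golden” outputs are hypothetical stand-ins, not an example from PP:

```python
def slugify(title: str) -> str:
    """Toy function under test: lowercase a title, join words with hyphens."""
    return "-".join(title.lower().split())

# "Golden" outputs recorded from a previously verified run. If a later code
# change alters any of these outputs, the regression test fails and flags it.
GOLDEN = {
    "Hello World": "hello-world",
    "The Pragmatic Programmer": "the-pragmatic-programmer",
}

def test_slugify_regression() -> None:
    for text, expected in GOLDEN.items():
        assert slugify(text) == expected
```

Any unit, integration, or performance test can work this way: capture outputs once they are trusted, then keep re-running the suite to catch anything a new change quietly breaks.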
And lastly, as a cherry on the top, I wanted to share a new quirky term I learned from this last chapter.
I absolutely love learning about terms for concepts that I didn’t know had names. It’s like learning that the at symbol (@) is called “asperand.” Or that there is a Japanese term (hanami) that refers to cherry blossom watching, particularly the enjoyment of the transient beauty of flowers.
Fun fact: Hungarian notation refers to encoding a variable’s type or purpose as a prefix in its name (e.g. if expecting a boolean, using `bIsGameOver?` or, if expecting a character, using `cComputerMarker`). This term was coined by Hungarian-American Charles Simonyi, who was chief architect at Microsoft. Apparently, Hungarian names are “reversed” compared to most other European names, with the family name preceding the given name.
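A quick sketch of the convention; these names are my own illustrative examples (adapted to valid Python identifiers, since `?` isn’t allowed in names), not from PP:

```python
# Hungarian notation: the prefix encodes the variable's type.
b_is_game_over = False    # b -> boolean
c_computer_marker = "X"   # c -> single character
n_move_count = 0          # n -> number (integer)
s_player_name = "Ada"     # s -> string
```

The payoff is that a reader can guess a variable’s type at a glance, though many modern style guides prefer type annotations over name prefixes.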