My First Contribution to fast-dna
Recently I created a Node.js script for a Microsoft component library called fast-dna. During this I learned a lot, including:
- The importance of writing platform-agnostic code when contributing to open source projects
- How to better approach terminal output when writing a script file
- And much more!
I’ve already blogged about this piece before, but today will be an overview of what I actually did and how. Next week I’ll focus on something new.
What I Made
Fast-dna is composed of many different npm packages, whereas most web apps are just one package. Each package has its own readme.
Referencing readme files scattered across so many folders can make things difficult. It’d be easier to store them all in one place: any process that needs to read them would then only have to traverse a single folder.
The script I wrote does just that. It takes the readme files in these packages and copies them to a separate docs folder. From there they can be referenced by fast-dna’s documentation site.
Before it’s run, the docs site only includes one option for its many packages.
The new script is run from the root folder with the npm run docs:build command. The result:
As the script runs, it will print success/error messages to the terminal:
A dry-run version of this script is also available. The npm run docs:dry-run command triggers it and outputs:
This submission was broken into two stages/pull requests, which made it easier for the project’s maintainers to review my changes.
At this stage I created a script at ./build/documentation/copy-package-readme.js that copies all the package readme.md files to the docs folder.
Writing the script that copied the files was fairly straightforward at this point. But as you‘d see in the PR itself, I struggled with cross-platform issues for a bit. Using Node’s path module remedied that.
During the review process, the person who requested the feature asked that a dry-run process be incorporated.
At this point the script worked, but the copied files wouldn’t be recognized by the separate documentation site fast-dna used to serve its doc pages. To fix that, I needed to add a unique header to the beginning of each copied readme, formatted like:
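The post doesn’t show the exact header, but as a rough illustration, static documentation sites commonly identify pages with a YAML front-matter block along these lines; the id and title values here are purely hypothetical:

```yaml
---
id: fast-components-react-base
title: FAST Components React Base
---
```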
I also had to update a JSON file that controlled the side navigation for the site.
And finally create npm commands for the regular and dry run versions of the script so they could easily be used by people.
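Wiring those commands up would look something like this in the root package.json; the script path mirrors the one mentioned earlier, but the `--dry-run` flag is my assumption about how the mode was toggled:

```json
{
  "scripts": {
    "docs:build": "node ./build/documentation/copy-package-readme.js",
    "docs:dry-run": "node ./build/documentation/copy-package-readme.js --dry-run"
  }
}
```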
I had fun working with the files and learning how to manipulate them. It was cool to see how I could read from a JSON file, parse it, manipulate/add to it, then save the file again so it could be used by some other process soon down the line.
Something I was proud of was that I wasn’t dinged by the repo’s Code Climate bot for the majority of this process. It’s not like I was writing some massive feature, but it kind of felt like a floor-is-lava situation where I just…you know….
However, it ended up happening anyway, right at the end of my contribution…
But it was for a good, approved reason, kind of.
Originally, when I wrote the dry-run feature, I avoided angering the Code Climate bot by separating all of its logic into its own function, so I wouldn’t have too many conditional statements. But then one of the project’s maintainers suggested that the regular and dry-run code should be paired together in the same places in the code.
And they were right. When a dev does something as simple as a quick fix, the dry run and regular run could fall out of sync. Coupling them together made the script easier to maintain, but at the same time slightly more difficult to read and work with. Was there a way to preserve the readability of the code without making it difficult to keep these identical processes in sync? Probably.
Earlier in this process, one of my professors mentioned that the Strategy design pattern might be a good fix for this type of problem. I wanted to try implementing it, but at that point I’d already written a lot of the dry-run’s functionality and was hesitant to rework it. I should have known about the pattern from the outset so I could have pitched it before starting any work. This PR highlighted an important gap in my knowledge: I need a better working understanding of design patterns so I can avoid this sort of issue in the future.
This contribution involved a lot more feedback than I was used to from other projects. That was a good thing, and something I was hoping for, but it meant things took longer than I’d initially expected. I couldn’t just make a submission and expect it to be merged, or tweak one or two things and have it accepted. Whatever I made would need to go through multiple iterations.
I want to write the best code possible, but proper variable names and concise functions are just the tip of the iceberg. Proper abstractions need to be used too.