Simplifying the Workflow for Processing Libraries, Tools, and Modes
Introduction
As one of the grantees of the Processing Foundation’s pr05 (“pros”) program this year, I had the joy of contributing to software that I’ve used and loved. Processing was my first exposure to creative coding, and gave me confidence on my artistic path. Participating in this program was my first time contributing to an open source project, something I had long hesitated to try, worried I didn’t have enough knowledge, experience, or time. Fast forward to this year: I now have years of experience in software engineering under my belt, time, and a project I cared about.
As part of the program, I worked on simplifying the workflow of libraries, tools, and modes. This project is all about improving developer experience. Open-source projects like Processing rely on volunteers, so removing unnecessary friction can make a big difference. Improving developer experience often involves automating what can be automated, and automation in turn reduces work, saves time, and cuts down on human error. Good developer experience also means a codebase is easy to pick up from scratch. For this work, we’ve tried to make everything intuitive and self-explanatory.
There were two main projects in my purview: simplifying the use of the library template, and simplifying the process of adding contributions. The template improves the developer experience for contributors, and the process improvement helps the Processing maintainers.
This work was done in collaboration with Katsuya Endoh, and under the mentorship of Stef Tervelde, with much support from Processing community lead and pr05 grant lead Raphaël de Courville.
Library Template: A Simple Starting Point for Contributors
The Processing library template is a GitHub repository that developers can use to create new libraries for Processing. A variety of Processing library templates already exist, each providing a specific piece of the solution, but none consolidates everything into one template built on a modern Java build tool like Gradle. In this template, we pulled together the excellent work of previous contributors, while applying our two guiding principles: “automate what can be automated,” and “make it intuitive.”
Like our predecessors, we provide Gradle “tasks” for building the library from source into a jar file, and for creating all the files necessary to release a library. A Gradle task is a program you can run to perform a unit of work in your build process, and tasks are easy to run with a double click in a modern editor like IntelliJ IDEA or VS Code. Previously, building your library required the jar files from a local Processing installation, and some earlier templates used Maven and third-party repositories to resolve Processing core. Now that Processing 4.3.1 has been released on Maven, we can resolve Processing core without digging into the file system.
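As a rough sketch of what this looks like in a Gradle build script, the fragment below resolves Processing core from a Maven repository and registers a task that bundles release files. The task name, zip contents, and the exact Maven coordinates are illustrative assumptions, not the template’s actual build file:

```kotlin
// build.gradle.kts — illustrative sketch, not the template's actual build script.
plugins {
    java
}

repositories {
    mavenCentral()
}

dependencies {
    // Resolve Processing core from Maven instead of a local install.
    // Coordinates shown are an assumption based on the 4.3.1 release.
    compileOnly("org.processing:core:4.3.1")
}

// Hypothetical task that bundles the built jar into a release archive.
// In an editor like IntelliJ IDEA, this appears in the Gradle tool
// window and can be run with a double click.
tasks.register<Zip>("packageRelease") {
    from(tasks.jar)
    archiveFileName.set("myLibrary.zip")
}
```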
Functionality that we’ve newly added includes installing the library into your local Processing environment, and a GitHub workflow that adds the required release artifacts to your GitHub releases; by default, GitHub releases only include the repository’s source files. Hosting a documentation website is a requirement for publishing a library in the Contribution Manager; however, meeting this requirement can add a lot of complexity. We promote the use of MkDocs for creating documentation websites via GitHub project pages, which is the simplest way we know to host a documentation website.
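For a sense of how little configuration MkDocs needs, here is a minimal configuration file. The site name, page names, and theme are placeholders, not what the template actually generates:

```yaml
# mkdocs.yml — minimal illustrative configuration (all values are placeholders)
site_name: My Processing Library
theme:
  name: material   # assumes the mkdocs-material theme is installed
nav:
  - Home: index.md
  - Reference: reference.md
```

With a configuration like this and a handful of Markdown files, `mkdocs build` produces a static site that GitHub project pages can serve directly.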
For further specifics on what the template does, I invite you to look at the documentation page https://processing.github.io/processing-library-template/ and the repository https://github.com/processing/processing-library-template/.
Streamlining the Contribution Workflow for Maintainers
The second part of my work was to refactor how Processing tracks contributions. Previously, this process was manual, and without going into too much detail, the best way to describe how the contributions data was stored is to say it was stored algorithmically. Some files contained state information, some values existed only as headings, and some files stored links to other data. The script that pulled all this information together also set hardcoded values when certain conditions applied. The only way to find out what the contributions data contained was to run the script and see the outcome.
To make interacting with the contributions data more intuitive, we converted what I’m calling algorithmically-stored data into a database. The contributions data itself comes from a properties file, one of the required release artifacts. This is a file of key-value pairs, with information like the library name, its authors, or the category the library belongs to. We automated both the validation of the data coming from the properties file and the addition of that data to the database. With this process, the contributor provides the URL of the properties file in a GitHub issue, and a GitHub workflow is triggered, resulting in a pull request that adds the new contribution to the database, ready for the Processing librarian to review.
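To make the validation step concrete, here is a minimal sketch of the kind of check the workflow automates: parse the key-value pairs out of a properties file and report any required keys that are missing. The parsing and the set of required keys are illustrative assumptions, not the actual implementation in processing-contributions:

```python
# Hedged sketch of automated properties-file validation.
# The required keys and parsing logic are illustrative assumptions,
# not the actual processing-contributions workflow code.

def parse_properties(text):
    """Parse simple key=value lines into a dict, skipping blanks and comments."""
    props = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()
    return props

def validate(props, required=("name", "authors", "url", "categories")):
    """Return the list of missing required keys; an empty list means valid."""
    return [key for key in required if not props.get(key)]

example = """
# library.properties (illustrative contents)
name = MyLibrary
authors = Jane Doe
url = https://example.com/mylibrary
categories = Sound
"""

print(validate(parse_properties(example)))  # → []
```

In the real workflow, a failed validation would surface in the GitHub issue instead of producing a pull request.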
This new database is a single file in YAML format. The format is a list of objects, where each object is a contribution. YAML was selected over tabular formats because tabular formats can have very long rows, with a field name far removed from its value; data stored in key-value pairs is more human-friendly.
The fields of each contribution object include all the parameters in the properties file. To keep updating values straightforward, the data in the database directly reflects the data in the properties files. However, sometimes we need to overwrite values, for example, to correct a category. To make this possible, we’ve implemented an ‘override’ field. The value of the ‘override’ field is an object, and each of its fields replaces the corresponding existing field value. For example, to overwrite the category to be ‘Sound’, set ‘override’ to {‘categories’: ‘Sound’}.
We want the database file to contain all contributions, even those that are no longer available. This means we need to store deprecated contributions, and we need to store that state. This is done with a new ‘status’ field, which can take three values: ‘VALID’ if the library is live, ‘BROKEN’ if it is temporarily unavailable, and ‘DEPRECATED’ if it is permanently unavailable.
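Putting the pieces together, a single entry in the YAML database might look like the following. The field values are placeholders, and any field names beyond those described above are illustrative:

```yaml
# One contribution object in the database (illustrative values)
- name: MyLibrary
  authors: Jane Doe
  url: https://example.com/mylibrary
  categories: Video
  status: VALID        # VALID, BROKEN, or DEPRECATED
  override:
    categories: Sound  # replaces the value read from the properties file
```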
If you’d like a closer look at the work we’ve done, see the repository: https://github.com/processing/processing-contributions/.
Final Thoughts
By applying the general principles of “automate what can be automated” and “make it intuitive,” we’ve made significant strides to simplify the workflow for adding new contributions. By doing so, we hope to encourage a new wave of Processing users to feel welcome to contribute to Processing.
Beyond the technical work, this experience was personally meaningful. While I’ve spent years honing my skills as a software developer, I’m also pursuing formal education in the arts. Processing is a programming environment where art and technology coexist with no apologies, and the Processing Foundation is a work environment where you can show your artistic flair without shame. It’s a place where creativity and diversity are celebrated, not just discussed. If you’ve ever considered applying for a Processing Foundation fellowship or grant, I wholeheartedly encourage you to apply. It’s a rare chance to combine technical growth with meaningful work in an inclusive and inspiring environment.
Acknowledgements
This work was done in collaboration with Katsuya Endoh, whom we discovered through his timely post about his own library template on the Discourse forum, and who so generously volunteered to contribute to the template. I also deeply appreciate mentor Stef Tervelde, who in our regular discussions provided his inspirational views of what might be best practice, while leaving me the freedom to make my own choices. Deep thanks as well to Processing community lead and pr05 grant lead Raphaël de Courville, who provided so much insight on context, connected me to others with insightful information, and provided rigorous review of user-facing documentation. Big thanks to Raphaël, Tsige Tafesse, and Suhyun (Sonia) Choi for the behind-the-scenes organizing of our events. I loved our events; I learned so much from our guests and from the discussions. And a special shout out to my fellow grantees, Diya Solanki, Dora Do, Miaoye Que, and Nahee Kim: it was great to work alongside you, and to share this experience.
