Making a Cardboard Piano Talk to the Internet

Whole notes on connected keys

Joel McCance
Pandera Labs
May 22, 2018 · 8 min read


In preparing for the recent Connexion IOT Conference, we decided we wanted more than just banners, flyers, and our smiling faces at the booth. We wanted something that showed off what we can do for our clients, specifically those working with IoT. But we’re not a hardware company. We don’t have any iPad-controlled coffee makers or people-counting cameras on hand. We don’t make the gadgets — we help integrate them into the wider world. How could we best illustrate our ideas and skills?

Starting from a cardboard piano one of our engineers made as a weekend project, we spent the next week cooking up a demo for our booth. Leveraging our expertise, our well-stocked toolkit, and a surprising number of pennies, the PianoT project was born.

Read on to hear from some of the engineers who made it happen.

Making the Piano

g. link:

Inspired by the Nintendo Labo, I originally made the piano in my living room. Each key of the piano is a meticulously folded index card with a tip of aluminum foil and a counterbalance of pennies. A notch in the key allows it to rest on its cardboard fulcrum and swing down to complete a circuit with aluminum foil on the bottom of the box — all basic ingredients found right at home.

The guts of this little piano are what make it Internet-friendly. Sitting just below the keys is a Particle Photon, a WiFi-connected microcontroller that runs the show. Each key is wired into a digital pin on the board and, as the key is pressed, the board distinguishes which key that is and sends this data off to the cloud!

Defining the Architecture

Mark Moon:

Now that we had a physical piano, we had to build a system capable of receiving raw key events, contextualizing them, persisting the current state of the piano, and publishing that state up to the UI. To validate our architecture decisions, we built virtual piano simulators to generate more data for delivery into the system. Our goal was a system capable of scaling to thousands of pianos.
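A virtual piano can be sketched in a few lines of Kotlin. The names and event shape below are illustrative, not the actual simulator code — the real payload format isn't shown in this article:

```kotlin
import kotlin.random.Random

// One raw key event, mirroring what a physical piano would publish.
// Field names here are assumptions for the sketch.
data class KeyEvent(val deviceId: String, val key: Int, val down: Boolean, val timestamp: Long)

// A simulated piano that emits random key-up/key-down events for load testing.
class SimulatedPiano(
    private val deviceId: String,
    private val keys: Int = 8,
    private val rng: Random = Random.Default,
) {
    private val pressed = BooleanArray(keys)

    fun nextEvent(now: Long = System.currentTimeMillis()): KeyEvent {
        val key = rng.nextInt(keys)
        pressed[key] = !pressed[key] // toggle, so ups and downs alternate per key
        return KeyEvent(deviceId, key, pressed[key], now)
    }
}
```

Spinning up a fleet of these and POSTing their events through the same ingestion path as the real piano let us generate arbitrary load against the pipeline.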

Data Flow

The physical piano generates two events: key-up and key-down. As the piano generated events, a webhook written for the Particle Photon added metadata (device ID, timestamp, etc.) to a JSON payload and POSTed it to an AWS API Gateway endpoint.

Upon entering the AWS cloud, the API Gateway was configured to proxy raw key events to a Kinesis stream. Listening to this first stream was an AWS Lambda designed to convert the raw Particle JSON structure into a canonical event model used throughout the rest of the system.
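The raw Particle payload and the canonical model aren't shown in this article, but the conversion Lambda's job can be sketched in miniature. All field names and the `"key:state"` data encoding below are invented for illustration:

```kotlin
// Hypothetical raw Particle webhook payload, already deserialized.
data class RawParticleEvent(val coreid: String, val data: String, val publishedAt: String)

// Hypothetical canonical event model used by the rest of the pipeline.
enum class KeyState { UP, DOWN }
data class CanonicalKeyEvent(val pianoId: String, val key: Int, val state: KeyState, val occurredAt: String)

// The Lambda's job in miniature: parse the raw "data" field (e.g. "3:down")
// and re-emit the event in the canonical shape.
fun convert(raw: RawParticleEvent): CanonicalKeyEvent {
    val (key, state) = raw.data.split(":")
    return CanonicalKeyEvent(
        pianoId = raw.coreid,
        key = key.toInt(),
        state = if (state == "down") KeyState.DOWN else KeyState.UP,
        occurredAt = raw.publishedAt,
    )
}
```

Keeping this translation in one place meant the rest of the system never had to know anything about Particle's payload format.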

Once converted, the canonical event was published to a second Kinesis stream which was consumed by an event synchronizer. The synchronizer had two tasks in the system: first, perform any additional data massaging prior to persisting the current state of the piano — in this case, whether a key-up or key-down event had occurred; and second, publish the contextualized data onto a Kinesis stream which was ultimately consumed by a WebSocket in the UI.
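The synchronizer's persistence step amounts to folding each event into the current state of its piano. The real system persisted to a database; a mutable map keeps this sketch self-contained, and the types are illustrative:

```kotlin
// Key states as the synchronizer tracks them.
enum class KeyState { UP, DOWN }

// A contextualized event after conversion (shape is an assumption for the sketch).
data class KeyEvent(val pianoId: String, val key: Int, val state: KeyState)

// In-memory stand-in for the synchronizer's persistence step: fold each event
// into the current state of its piano before re-publishing downstream.
class PianoStateStore {
    private val pianos = mutableMapOf<String, MutableMap<Int, KeyState>>()

    fun apply(event: KeyEvent): Map<Int, KeyState> {
        val keys = pianos.getOrPut(event.pianoId) { mutableMapOf() }
        keys[event.key] = event.state
        return keys.toMap() // snapshot published to the UI-facing stream
    }
}
```

Because each event carries its piano's ID, the store scales naturally as more pianos come online.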

Despite what seems like a large number of hops to get data from the physical piano to the front end, a single event would typically take just a few seconds to be processed.

Resiliency and Scale

From the outset of the project, our goal was to build a resilient system capable of handling an increasing amount of data over time. To ensure our system could scale, we built decoupled components, each responsible for a discrete task. In turn, each component could be scaled up or down individually depending on the current load of the system. Kinesis, acting as the plumbing between the components, ensured elasticity and resiliency while the system processed real-time key events.

Managing the Infrastructure

Joel McCance:

This architecture has a lot of moving parts, and we wanted to be able to tear down or redeploy the stack as often as we liked. Managing this by hand would clearly not be feasible. Thankfully, HashiCorp’s infrastructure-automation tool Terraform means we don’t have to.

We already had a Terraform-managed stack for internal use that exposed common resources like networking, an application load balancer, and an ECS cluster. Using the remote state feature, we could easily reuse this foundation to stand up our PianoT stack.

We continued this modular approach into the PianoT Terraform projects, keeping Terraform code co-located with the services it managed. A separate pianot-infra project consolidated shared resources, re-exporting outputs from the main panderacloud-infrastructure project as well as standing up new, PianoT-specific ones like the Kinesis streams and API Gateway. The ingest and API services then had their own Terraform projects that used remote state to pull in resources from pianot-infra. Since these projects lived in the same repositories as the source code they managed, it was easy to build and deploy new versions without bouncing between projects. We could build, update infrastructure, and deploy a given service with a single script.

Dependency graph of the PianoT Terraform projects
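As a sketch of how a service project consumes shared state — the backend settings, bucket, and output names below are invented, and the syntax follows the Terraform releases current at the time:

```hcl
# Illustrative only: bucket, key, and output names are invented for this sketch.
data "terraform_remote_state" "pianot_infra" {
  backend = "s3"

  config {
    bucket = "example-state-bucket"
    key    = "pianot-infra/terraform.tfstate"
    region = "us-east-1"
  }
}

# A service's resources can then reference shared outputs by name, e.g.:
# stream_name = "${data.terraform_remote_state.pianot_infra.canonical_event_stream}"
```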

This also helped isolate service-specific infrastructure. For example, the API service’s Terraform is responsible for standing up its database and the Kinesis stream that connects the synchronizer and the web components. These resources are tidily encapsulated inside their project-specific infrastructure. If their needs change in the future, only the specific project that uses them needs to be altered.

Back-End Implementation

Mark Moon:

Given our timeline for the project, we needed back-end technologies that would let us ramp up quickly but also scale well with increased load. The application stack we chose included:

Kotlin

Kotlin has quickly become one of our favorite languages to write applications with here at Pandera Labs. Kotlin is concise and inherently safer than Java, allowing us to write applications faster with fewer lines of code and fewer errors. Also, dare we say, Kotlin has made Java fun again.
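That conciseness is easy to see: a one-line Kotlin data class replaces the constructor, getters, `equals`, `hashCode`, and `toString` the equivalent Java class would need (the event type here is just an example):

```kotlin
// One line gives us a constructor, properties, equals/hashCode, toString, and copy().
data class KeyEvent(val pianoId: String, val key: Int, val down: Boolean)

fun main() {
    val e = KeyEvent("photon-1", key = 3, down = true)
    println(e) // KeyEvent(pianoId=photon-1, key=3, down=true)

    // copy() makes immutable updates painless.
    val released = e.copy(down = false)
    println(released.down) // false
}
```

Null safety in the type system adds the "inherently safer" half: a `String` can never be null unless declared `String?`, so whole classes of NullPointerExceptions are caught at compile time.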

Spring

Spring Boot, as the name suggests, accelerates development of Spring-based applications. Boot’s opinionated approach to building applications and auto-configuration of beans significantly reduce the need to write and maintain boilerplate code.

Even with PianoT’s modest data model, Spring Data JPA quickly fulfilled the project’s data-access needs. Leveraging our existing JPA entities, Spring Data provided paging, sorting, queries derived from method names, and declarative queries with far less code.

Stream Processing with Spring Integration and Project Reactor

Given the popularity of Amazon Web Services, it should come as no surprise that Spring Integration (SI) has an extension for AWS. SI’s asynchronous, message-driven architecture allowed us to easily begin ingesting Kinesis stream data. Another benefit of SI is its support for Reactive Streams via Project Reactor. All we had to do was configure SI to consume the stream as a reactive Flux, and off we went into reactive functional bliss.

WebSockets

Our first attempt at getting data to the UI was fast polling. However, we soon realized that even with a very short polling interval, data was being missed. To remedy this, we switched to WebSockets, which gave us a pub-sub model we could use to push contextualized key events directly to the UI.
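The pub-sub idea in miniature: subscribers register a callback once, and every published event is pushed to them as it happens, so nothing slips between polls. This toy broker illustrates the model only — it is not the actual STOMP/WebSocket wiring:

```kotlin
// A minimal in-memory broker illustrating the push model WebSockets gave us:
// subscribers receive every event as it occurs instead of polling (and missing some).
class Broker<T> {
    private val subscribers = mutableListOf<(T) -> Unit>()

    fun subscribe(handler: (T) -> Unit) {
        subscribers += handler
    }

    fun publish(event: T) = subscribers.forEach { it(event) }
}
```

With polling, any event that arrives and is superseded between two polls is simply never seen; with push delivery, the subscriber's handler fires once per event, in order.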

Visualizing the Results

Mike Rourke:

Design and Communicating with the Back End

Behind the scenes, the web app uses React and Redux, which we reach for on just about all of our projects here at Pandera Labs. Since we didn’t need a lot of bells and whistles, we opted for the Rebass component library, which is lightweight and quick to get up and running. We used STOMP.js and some middleware to integrate WebSockets with Redux. The first piano in the screenshot below (top left) represents the physical piano, so each time a key is hit on the cardboard piano, the corresponding key in the UI is filled in with color. The rest of the pianos in the list are generated by the back end, and their key presses are randomized.

App with aggregates chart and pianos

The pianos and keys are drawn with SVG, using a design provided by one of our awesome designers, Mallory Haack. Each piano and key is connected to Redux state, so a key gets filled in with color whenever a WebSocket message arrives indicating that key’s state is “down.”

A piano with some keys pressed down, with key counts shown above each key.

The aggregates chart uses D3 (hooray for SVG!). It updates by polling, once per second, an API endpoint that provides the press count for each note across all pianos. The color of each line corresponds to the color of each key on the piano.

The aggregates chart with rolling press counts (updated every second).

Issues and Challenges

As mentioned before, we tried fast polling to reflect updates to key states, but found that the browser wasn’t able to keep up — in Chrome specifically, memory usage would continually increase until the browser froze and had to be restarted. By using WebSockets and minimizing the amount of Redux processing (i.e., sorting or filtering), we were able to handle the updates without bogging down the browser.

Since press-count aggregation was handled on the back end, the most expensive operation was updating the chart’s range (using D3’s min and max functions) to ensure all of the lines stayed visible. Polling every second gave the browser time to garbage-collect, so additional pianos could be added without causing an issue.

Wrapping Up

The PianoT project was a fun little demo to build that wove together a lot of common threads for Pandera. We made use of our go-to tools to deliver quickly and reliably. We refined our techniques (such as our Terraform designs), whose benefits we’ll reap on future projects. And we had fun exploring some new things, like, you know, figuring out how to construct a WiFi-enabled piano out of household materials and a Particle Photon.

Special thanks to g, Joel, Mark, and Mike for making this demo a reality!

At Pandera Labs, we’re always exploring new ways to build products and we enjoy sharing our findings with the broader community. To reach out directly about the topic of this article or to discuss our offerings, visit us at panderalabs.com.
