Cloud-printing for Restaurants with AWS IoT Greengrass

Kyle Neideck
Published in agiledigital · 19 min read · Jun 24, 2021

At DataPOS, we handle online ordering for restaurants, and one part of that is our cloud-printing system for order receipts (“tickets”). We give the restaurant a Raspberry Pi with a receipt printer, and they set it up in their kitchen with a WiFi connection. When orders come in from the website, we tell the Raspberry Pi to print out a receipt, and that tells the staff what to cook.

Over the last month or so, I’ve been using AWS IoT Greengrass to build a new iteration of the software that we run on those Raspberry Pis, and now we’re releasing it as open-source. You can find it at PrintOS Receipt Printer Device Software on GitHub.

Photo by Daniel Bradley on Unsplash

It’s a complete, working project, but simple enough to hopefully work well as an example for other engineers working on Greengrass projects. That said, this is my first attempt at using Greengrass, so a few of the choices I made along the way may serve better as examples of what not to do. I’ll explain those in detail further along in the article.

You can also use the project to set up a cloud-printing system of your own. The only catch is that you’ll have to write the part that actually sends the receipts to the printer driver. We don’t own the IP for that part, so we’ve included a mock version of it instead.

I’ll also explain what Greengrass is, so if you’re not familiar with it already, just bear with me.

Our homegrown system

The Greengrass version is the third iteration of this project. We started with a more-or-less homegrown solution and later re-wrote it using AWS IoT. Then I took the important parts out of that version and wrapped Greengrass around them.

I’ve never even seen the first version, but at its core, the Greengrass version is still quite similar to the second. The second version did have some reasonably long scripts for commissioning and decommissioning devices, though, which I was able to get rid of thanks to Greengrass.

I fixed some bugs and made plenty of improvements along the way, but the application code performs the same basic tasks in both versions. It

  • receives print jobs from the network, over HTTP or MQTT,
  • formats them and prints the receipts, and
  • reports its successes, failures and other health information to AWS IoT and our server-side applications.

Why Greengrass?

We (read: my bosses) were mainly interested in Greengrass for its observability and monitoring features, for over-the-air updates to our software, and for making it easier to maintain our IoT devices, i.e. the Raspberry Pis.

In my view, one of Greengrass’s most important features is that you can use it to deploy updates to your IoT software over the internet, across all of your devices. It can roll out your new software gradually and roll it back if something goes wrong. And you can use the same mechanisms to manage updates to language runtimes, apply security fixes, and so on.

Greengrass is very flexible and will work with a wide range of devices and operating systems. It’s also more-or-less agnostic to the way you write and run your application code. As long as your hardware supports it, you could write your software in Haskell and run it in Docker containers. That said, AWS can’t provide SDKs for every language, of course, and not every language is supported for all Greengrass features.

I was also interested in its collection of pre-built components that handle common tasks your IoT devices might perform, like streaming data back to the cloud or running machine learning models. We didn’t need any machine learning for this project, of course, but there are enough components in the collection that we were bound to find some useful ones.

Greengrass lets your device software use AWS technologies like Lambda and Kinesis, often in the same way you would use them in code you deploy to the cloud. One of the big selling points of Greengrass is its ability to run Lambdas locally on your IoT devices.

The marketing site for Greengrass repeatedly makes the point that it will let your devices “act locally on the data they generate” and that it can “bring local compute […] capabilities to edge devices”. I didn’t get it initially — our computers can already compute — but I think what it means is that if you already have Lambdas running in the cloud to process data coming from your IoT devices, you’ll be able to run them on your devices instead, and won’t have to rewrite them. Then your devices won’t need to send as much data back to the cloud.

Our devices weren’t sending much data to the cloud (mostly receiving it), but the non-Greengrass version of our cloud-printing system did have a homegrown health-check feature that regularly reported on the application’s health. We thought we might be able to use Greengrass to replace it with something simpler and less bespoke, using standard AWS services.

Greengrass logo
Have you forgotten how to write software that isn’t built out of AWS services, but now someone wants you to make them a smart dishwasher?

Our project is simple enough that some of Greengrass’s most impressive selling points don’t matter that much to it. Greengrass can manage fleets of millions of IoT devices, for example. We’re not quite at that scale yet.

It also supports local device-to-device communication and IPC between software components to help you write application code that keeps working when it loses its connection to the cloud. It’s a cool feature, but our receipt printers don’t need to talk to each other, and if they go offline, they can already continue printing receipts that come in from the local network.

It’s worth noting that some (most?) AWS services won’t work offline, even with Greengrass. Greengrass can run your Lambdas locally on your devices, but if those Lambdas use a service like DynamoDB, they won’t be able to use a local instance or anything like that while they’re offline. You’ll still need to handle that case in your Lambda code.

What is Greengrass exactly?

If you haven’t worked with Greengrass before, I’ll need to give you an overview of what it actually is and how it works. But if you have, feel free to skip over this part. It’s pretty dry.

According to its documentation, Greengrass is an

Internet of Things (IoT) edge runtime and cloud service that helps you build, deploy and manage IoT applications on your devices.

In more concrete terms, Greengrass is made up of four main parts:

The AWS IoT Greengrass service is the “cloud service” from AWS’s description. It’s the part you interact with through the AWS Console or the AWS CLI tool. You can think of it as the back-end of Greengrass.

The AWS IoT Greengrass Core software is the “edge runtime” from the description. It’s the main part of Greengrass that runs on your IoT devices.

It’s a bit like systemd, I guess. It starts up your services (Greengrass calls them “components”), sorts out the dependencies they have on each other, and tries to restart them when they crash.

The AWS-provided Greengrass components are the ones that handle the common IoT tasks I mentioned earlier. I’ve only used a handful of them, but there are about 30 so far.

The AWS IoT Device SDK is on the list because it has the interfaces for interacting with Greengrass from application code, and because your application code will most likely make use of other parts of AWS IoT using this SDK. Our receipt project uses a feature called “Device Shadows”, for example, which lets each device store some global state in the cloud and receive external requests to change that state.
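As a rough illustration, a shadow is just a JSON document split into desired and reported state, and AWS IoT notifies the device whenever the two drift apart. The fields inside it below are made-up examples for this article, not our real shadow schema:

    {
      "state": {
        "desired":  { "maintenanceMode": true },
        "reported": { "maintenanceMode": false, "printerStatus": "ONLINE" }
      },
      "version": 42
    }

When something in the cloud sets the desired state, the device receives a delta, acts on it, and then updates the reported state to match.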

We actually didn’t use the Greengrass part of the SDK at all. It seems to be mainly used for IPC and local communication within the device group. And it’s currently only in the Java and Python versions of the SDK.

There’s also an AWS IoT Greengrass SDK, sometimes called the “AWS IoT Greengrass Core SDK” just to add to the confusion, but you only need it if you’re using version 1 of Greengrass. The latest is version 2.

Where Greengrass fits into AWS IoT

AWS IoT Greengrass is part of the AWS IoT platform-as-a-service, which is made up of Greengrass and other AWS services designed around IoT and embedded devices. I haven’t spent much time with the others yet, but AWS IoT has services for IoT device management, IoT security, IoT analytics and quite a few more.

The service central to the platform is AWS IoT Core, not to be confused with AWS IoT Greengrass Core or “core devices”, which is what Greengrass calls devices that run the AWS IoT Greengrass Core software.

AWS IoT Core provides the basic functionality that the other services use, like the MQTT broker for publish/subscribe messaging between devices and the cloud. It also provides a shared abstraction model, which includes Things — as in “internet of things” —, Thing Types, Thing Groups and Device Shadows. Not sure why they didn’t call them Thing Shadows.

For fairly obvious reasons, you’re still responsible for provisioning and managing the hardware and OS with AWS IoT and Greengrass, unlike with other cloud application platforms. For example, we use Watchdog on our IoT devices to reboot them if they lock up or if certain hardware starts to fail.

AWS provides a tool called AWS IoT Device Tester that you can use to check whether the Greengrass Core software will run properly on your device hardware, and there are detailed requirements in the Developer Guide. You can also search the AWS Partner Device Catalog for devices validated for Greengrass by their manufacturers with AWS IoT Device Tester.

The Greengrass abstractions

Greengrass extends the AWS IoT model with a few new concepts: core devices, components and deployments. The AWS Console has a section for each under the “Greengrass” heading in its navigation bar.

I already mentioned that core devices are the Things that you’ve installed the Greengrass Core software on, but that definition is a bit circular if you don’t already know what the Greengrass Core software does. According to AWS, in a typical use case, core devices act as local gateways or hubs and are each connected to many embedded devices that don’t run the Greengrass Core software, and often wouldn’t be capable of running it.

Usually, these will be microcontroller-based devices, like toasters or those smart lightbulbs, in which case AWS would recommend you use their fork of FreeRTOS. It lacks most of the features of Greengrass but will let the MCU devices communicate with local devices, including core devices, and with the cloud directly.

In our receipt-printing project, though, our devices are all core devices.

Components are “the atomic unit of management” in Greengrass. You package your application software into components and deploy them using the Greengrass service. On the devices, the Greengrass Core software downloads the components, runs them, restarts them if they crash, reports their statuses to the Greengrass service, and so on.

A component is made up of a recipe file and some artifact files. The recipe file holds the component’s metadata, such as its version number, default configuration, the command to start it, and its dependencies on other components. The artifacts can be any files, often an executable and the data files it will need.
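To make that concrete, here’s a cut-down recipe in the shape of one of ours. The component names, bucket and config values are placeholders for this article, not the real ones from our repository:

    RecipeFormatVersion: "2020-01-25"
    ComponentName: com.example.LocalPrintServer
    ComponentVersion: "1.0.0"
    ComponentDescription: Receives print jobs over HTTP from the local network.
    ComponentPublisher: Example
    ComponentConfiguration:
      DefaultConfiguration:
        port: 8083
    ComponentDependencies:
      com.example.ReceiptPrinter:
        VersionRequirement: ">=1.0.0 <2.0.0"
        DependencyType: HARD
    Manifests:
      - Platform:
          os: linux
        Lifecycle:
          Run: node {artifacts:path}/server.js {configuration:/port}
        Artifacts:
          - URI: s3://example-bucket/com.example.LocalPrintServer/1.0.0/server.js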

Components can be normal OS processes, Lambda functions, containers, or even VMs. They can be long-running, triggered by events, or run on timers. They can also be resources shared between components, such as data files, libraries, or language runtimes.

Deployments are made up of a collection of (specific versions of) components and configurations for those components. You specify that information in a YAML file and send it to the Greengrass service, or do the same through the AWS Console, and Greengrass will start rolling out your components to the core devices in your chosen Thing Group.

If you add a new Thing to that Thing Group later, it will be included in your deployment and Greengrass will automatically deploy your components to it.

You can tell Greengrass to stagger your deployment, in case something goes wrong, and what it should do if the deployment fails. Usually, you’ll want it to try to roll the components back to the previous versions.
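To give you an idea of the shape of one of those files, here’s a trimmed-down deployment along the lines of ours. The ARN, component name, versions and numbers are placeholders:

    targetArn: arn:aws:iot:ap-southeast-2:123456789012:thinggroup/ReceiptPrinters
    deploymentName: receipt-printer-components
    components:
      com.example.LocalPrintServer:
        componentVersion: "1.0.0"
        configurationUpdate:
          merge: '{"port": 8083}'
    deploymentPolicies:
      failureHandlingPolicy: ROLLBACK
    iotJobConfiguration:
      jobExecutionsRolloutConfig:
        maximumPerMinute: 10

You can hand a file like this to aws greengrassv2 create-deployment with the AWS CLI (it accepts YAML input via --cli-input-yaml), or build the equivalent in the console.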

Our new system

The main architectural change in the Greengrass version of our receipt-printing system, aside from the addition of Greengrass itself, is that I’ve split the code into three separate Greengrass components. One receives print jobs from the cloud, one receives print jobs from the local network, and the third formats and prints the receipts.

Component diagram
The component diagram for our new cloud-printing system

With the components separated, if the remote-printing component stops working, Greengrass can try to repair it without interrupting the local-printing component.

We’re still working on this internally, but we can also create dependencies between the components. That will let Greengrass know, for example, which other components to restart if one of the components crashes or has its configuration updated.

What did we get out of this?

I’d say Greengrass delivered on most of the observability and monitoring benefits we were hoping to gain from it. And it gives us the option to set up several other features in that area that we haven’t found time for yet.

We’re planning to set up Greengrass’s telemetry, which will let us graph the history of our devices’ CPU, memory and file-descriptor usage, as well as the states of the Greengrass components running on them. That should give us a better chance of catching some problems early and give us a way to look for systemic issues, like a component that works most of the time but has started to fail and recover regularly.

There’s also a Greengrass component you can use to set up custom CloudWatch metrics to track anything else you can come up with to measure on your devices.

The Greengrass section in the AWS Console gives us a high-level view of the health status of each of our devices and each of the components running on them. It also lets us see the status of our deployments and tells us which devices they’ve been applied to.

We’re now also able to access logs from our devices through AWS CloudWatch. We added the AWS-provided LogManager component to our deployments and configured it to upload the device’s new logs at regular intervals. Apart from making it easier to read and search through the logs, this also gives us access to logs from devices that have gone offline. That can be particularly helpful for us as some restaurants turn the devices off when they close.
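For reference, the configuration we merge into LogManager looks roughly like the following. The component name is a placeholder, and the exact keys vary between LogManager versions, so treat this as a sketch and check the component’s documentation for the version you deploy:

    logsUploaderConfiguration:
      systemLogsConfiguration:
        uploadToCloudWatch: "true"
      componentLogsConfiguration:
        - componentName: com.example.LocalPrintServer
          minimumLogLevel: INFO
          deleteLogFileAfterCloudUpload: "false"
    periodicUploadIntervalSec: "300"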

Deployments with Greengrass have been working well in our testing and we haven’t had any problems updating our software on the devices. We’ll also be looking into using Greengrass for general maintenance tasks, like applying OS updates, next time that comes up.

Keep in mind that our use case is relatively simple, especially when you consider the trend in IoT towards more ambitious and complex projects. Greengrass has many features that we haven’t investigated at all, like its support for streaming data and for something called a “data firehose”.

What didn’t we get out of this?

Of our initial plans for Greengrass, the one we haven’t fulfilled yet is using it to replace our homegrown system for sending back regular health checks from the application software on the devices.

For a particular device, the health information you get in the Greengrass section of the AWS Console is largely limited to telling you whether any components have crashed, in which case it tells you to “view the logs on this core device”. You probably don’t want to design all of your components so they crash when anything goes wrong, so you need to find another way to report non-fatal problems.

With Greengrass, I think the idea would be that you would monitor application-level health information in much the same way you would in a non-IoT application in AWS. I don’t have a huge amount of experience with AWS, so I’m not sure what the best practices would be, but we’re planning to figure it out for the next iteration of the project. It looks like it would be straightforward for us to create CloudWatch alarms to notify us when a device logs a severe enough error message, for example.
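We haven’t built that yet, so take this as a sketch rather than our actual setup, but the rough shape would be a metric filter on the log group LogManager uploads to, plus an alarm on that metric. The component name, log group and SNS topic here are placeholders, so check the log group names LogManager actually creates in your account:

    # Count ERROR lines in the component's CloudWatch log group.
    aws logs put-metric-filter \
      --log-group-name "/aws/greengrass/UserComponent/ap-southeast-2/com.example.LocalPrintServer" \
      --filter-name "PrintServerErrors" \
      --filter-pattern "ERROR" \
      --metric-transformations metricName=PrintServerErrors,metricNamespace=PrintOS,metricValue=1

    # Alert when any errors appear within a five-minute window.
    aws cloudwatch put-metric-alarm \
      --alarm-name "print-server-errors" \
      --namespace PrintOS --metric-name PrintServerErrors \
      --statistic Sum --period 300 --evaluation-periods 1 \
      --threshold 1 --comparison-operator GreaterThanOrEqualToThreshold \
      --alarm-actions "arn:aws:sns:ap-southeast-2:123456789012:alerts"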

It’s hard to be sure that, in the long run, the benefits of Greengrass will be worth the complexity it adds to the project. We were hoping that Greengrass would offset that to some extent by taking over some of the AWS-related tasks we had been handling in our code. But there’s little question that Greengrass has added more complexity than it’s taken away.

You could argue that Greengrass was overkill for such a simple project. It often seems to me like Greengrass was created with much larger and more complex projects in mind.

Of course, Greengrass also added a lot of functionality. Instead of comparing the Greengrass version of the project to the previous version directly, we have to imagine we had implemented that functionality ourselves or found an alternative to Greengrass.

I did spend a few hours researching alternatives, especially the fully open-source ones, but not much came of it. There are plenty of open-source platforms, monitoring tools and deployment systems for IoT, and I’m sure some of them are great. But it became apparent that building a reasonably informed argument one way or the other would have taken longer than I was willing to spend, so we decided to remain safely within the AWS ecosystem for now.

Lessons learned

I’ve been working on this project for the past month or so. Also during that time, my apartment has been infested with toxic mould. So I’ve been making a lot of mistakes.

I thought I would end the article by explaining some of them and hopefully saving you from making the same mistakes yourself.

Over the month, I would say something like two or three days in total were completely wasted. Roughly four days went into writing documentation, and about the same into reading it.

I’d say I ended up reading about 20–30 reasonably long articles in the Greengrass Developer Guide alone. The Developer Guide is comprehensive, detailed and quite clear for the most part, so this wasn’t as bad as it sounds.

I spent maybe six days writing configuration files and scripts to do things like deploy the project and glue various bits of it together. The actual application programming only took three days or so, and I’m not sure what I did with the other four days. Instead of working from home, I usually went into the office, so that I’d have witnesses in case someone asked what I had been doing this whole time. That, and because of the mould.

Mistake 1: Mixing up the versions

I started learning Greengrass by doing the “Getting Started” tutorials and then realising that I had somehow ended up on the tutorials for Greengrass version 1 rather than version 2, the latest. So I had to start again with the version 2 tutorial and try to forget what I had learned before.

It’s very easy to start reading an article or using a library or tool that was designed for Greengrass version 1. Even now, I still get caught out by this occasionally.

Version 1 and version 2 are dissimilar enough that it’s best to avoid anything written for version 1. That includes anything published before December 2020, when version 2 came out.

Mistake 2: Using the recommended project structure

The part of the Greengrass Developer Guide about how to write your components tells you to include the component’s version number in several places, including in the names of some directories and files.

We recommend that you use a separate folder for each component version’s artifacts so that you can identify which artifacts are for each component version.

[…]

Important: You must use the following format for the artifact folder path. Include the component name and version that you specify in the recipe.
artifacts/componentName/componentVersion/

This is accurate, but a bit misleading. Greengrass does expect to receive the files in that structure, but the sensible way to handle that is with scripts that rearrange your files before you hand them over to Greengrass. I thought the documentation was describing a standard structure for Greengrass projects, and figured I would understand the reason for it later on.

Eventually, I tried to come up with a release process for the project and realised that keeping a copy of every version at the head of the repository made releasing a new version very confusing (especially with multiple components in the same repository).

I looked into it a bit more and checked out some of the components AWS has open-sourced. Their projects don’t use the structure I had been using, which I assume means they must have some automated process for producing artifacts in the structure that Greengrass expects. So I reorganised our project and wrote the scripts to adapt it for Greengrass.
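In case it saves you the same detour, the structure our build script produces for Greengrass, and that the local deployment commands expect to find, looks something like this (component names and versions are placeholders):

    greengrass-build/
    ├── recipes/
    │   ├── com.example.LocalPrintServer-1.0.0.yaml
    │   └── com.example.ReceiptPrinter-1.0.0.yaml
    └── artifacts/
        ├── com.example.LocalPrintServer/
        │   └── 1.0.0/
        │       └── server.js
        └── com.example.ReceiptPrinter/
            └── 1.0.0/
                └── printer.js

The source tree itself stays unversioned; the script only copies files into this layout when it’s time to deploy.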

Mistake 3: Trust issues and --remove

At times, Greengrass can feel like you bought a cursed monkey’s paw and wished that your boss would let you work on embedded systems. Even when they run entirely locally, every Greengrass CLI command takes just long enough to tempt you into doing something else while you wait and losing your train of thought.

It can take Greengrass quite a while to re-deploy your components locally when you have new changes to test. I must not have realised that at first because I was reasonably sure that local deployments would sometimes, without any indication, have no effect, and the old versions of the components would remain in place.

Or maybe that really was happening, and it was just a bug in Greengrass. I still haven’t figured it out completely.

I suspected that Greengrass might have been expecting me to bump my components’ version numbers for every deployment, even local deployments when I just wanted to test a tiny change. Given that I had each version number in about three or four different places at the time, that seemed unreasonable.

It didn’t occur to me that the problem could have been that I just wasn’t waiting long enough, and I had no idea how to inspect the Greengrass internals to debug it. So I assumed that I was supposed to remove the old versions and then deploy them again.

In hindsight, I’m pretty sure I was wrong about that. If so, then I guess the root cause of this mistake is that I felt like the project was going too slowly, and tried to speed it up by not validating my assumptions.

But by the time I started to doubt this assumption, I had already wasted too much time working on my local deployment script, so it’s still far more complex than it probably needs to be.

It also takes about five minutes to run because it polls the Greengrass CLI after removing and re-deploying each component, in an attempt to make sure Greengrass has finished with each command before giving it the next one. I did try to speed it up by making it remove all three components in a single command, but I could only get Greengrass to deploy them all at once, not remove them all at once. I think that actually is a bug in Greengrass, or at least something missing from the documentation.
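For comparison, the simple path, without the remove step, is roughly the following. The install path is the default one and the component name is a placeholder; polling greengrass-cli component list is one way to check that the new version has reached the RUNNING state:

    sudo /greengrass/v2/bin/greengrass-cli deployment create \
      --recipeDir ./greengrass-build/recipes \
      --artifactDir ./greengrass-build/artifacts \
      --merge "com.example.LocalPrintServer=1.0.1"

    # Poll until the component shows up with the new version and a RUNNING state.
    sudo /greengrass/v2/bin/greengrass-cli component list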

It didn’t help that if you’ve deployed a component through the cloud (i.e. not locally), the Greengrass CLI commands for locally removing or re-deploying that component actually do, without any indication, have no effect. As far as I know, this is by design.

There’s also no easy way to tell whether a component was deployed locally or through the cloud. If it was the latter, you can remove it locally using the Greengrass CLI, but only if you remember to pass it an additional option, --groupId.
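If you do end up needing to remove one of those cloud-deployed components locally, the command looks something like this, with the component and thing group names as placeholders:

    sudo /greengrass/v2/bin/greengrass-cli deployment create \
      --remove "com.example.LocalPrintServer" \
      --groupId "thinggroup/ReceiptPrinters"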

Mistake 4: Cutting down on boilerplate with Lambdas

There’s a bit of boilerplate in our application code around connecting to the AWS IoT MQTT broker and accepting the MQTT messages that contain the print jobs. My boss suggested we could get rid of that boilerplate by having Greengrass trigger a Lambda on the device when each print job message came in.

After I had implemented the suggestion almost completely, I realised that it wouldn’t work with deployments that apply to more than one device. In the component configuration, you tell Greengrass which MQTT topic to subscribe to and only messages published to that topic will trigger your Lambda. We use the topic print-job/[name of the device to print on] when we send a print job to a device, but Greengrass doesn’t support any kind of variable substitution in the topic name.

We could have used a wildcard in the topic name, so every device would receive every print job. Then they could just discard the jobs intended for other devices. But presumably, that would have completely erased the scalability benefits we were expecting to get from Greengrass.

We could have set a different configuration for the component on each device, so we’d have been able to change the topic name in each configuration. But components are configured per deployment in Greengrass, so we would have needed to make a separate deployment for every device.

The AWS-provided component that runs the Lambdas is one of the closed-source parts of Greengrass, so it was a bit tricky to convince myself there really was no good way around this. Luckily, Greengrass is mostly written in Java, so it’s fairly readable with a decompiler.

In the end, I had to admit defeat and, since there was no other reason for the component to be a Lambda, I changed it back into a normal, long-running component.
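For the curious, the long-running replacement boils down to something like this with the v2 device SDK. The endpoint, certificate paths and device name are placeholders, and the real component has more error handling and passes the job on to the printing component rather than just logging it:

    const { io, iot, mqtt } = require('aws-iot-device-sdk-v2');

    async function listenForPrintJobs(deviceName) {
      // Build an mTLS connection to the AWS IoT MQTT broker with the device's certificate.
      const configBuilder = iot.AwsIotMqttConnectionConfigBuilder
        .new_mtls_builder_from_path('device.pem.crt', 'private.pem.key');
      configBuilder.with_client_id(deviceName);
      configBuilder.with_endpoint('example-ats.iot.ap-southeast-2.amazonaws.com');

      const client = new mqtt.MqttClient(new io.ClientBootstrap());
      const connection = client.new_connection(configBuilder.build());
      await connection.connect();

      // Each device subscribes only to its own topic, so it never sees other devices' jobs.
      await connection.subscribe(
        `print-job/${deviceName}`,
        mqtt.QoS.AtLeastOnce,
        (topic, payload) => {
          const job = JSON.parse(new TextDecoder().decode(payload));
          console.log(`Received print job ${job.id} on ${topic}`);
        },
      );
    }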

Mistake 5: Writing Bash scripts

A while ago, I wrote a Bash install script for a different project, and it ended up being 850 lines long. Apparently, I still haven’t learned my lesson because this project is currently very close to 50% JavaScript and 50% Bash. Little of the Bash code is specific to the project and, in hindsight, I wonder how much of it I actually needed to write.

It’s a common mistake. I thought the scripts for this project were going to be simple, so I wrote them in Bash, and then they grew. I could argue that the scripts shouldn’t need to be complex and blame this on someone else, but maybe I shouldn’t expect people to design their interfaces in a certain way just so my code can be as neat and compact as possible.

Actually, AWS has published tools to help reduce the amount of code you need for common Greengrass tasks like deploying your components. Although, I don’t think they’re mentioned anywhere in the Developer Guide.

The main one seems to be AWS Greengrass Provisioner (GGP), which

Simplifies provisioning Greengrass Cores and building Greengrass Lambda functions

The first part of that sounds like it would be useful to me. (I assume that “cores” and “core devices” are the same thing.) But when I was deciding whether or not to use GGP, I couldn’t figure out how complete its support for Greengrass version 2 was, and it seemed like GGP would take a while to learn.

I reasoned that I was already learning a few new technologies at once for this project, so why take on even more uncertainty? In the end, I wrote manual instructions for most of the provisioning process and scripts for the rest.

Who knows, maybe figuring out GGP really would have taken even longer. It’s possible that the Bash scripts were the right call and they shouldn’t be on my list of mistakes at all.

It’s possible that none of these were mistakes, and that the Greengrass engineers didn’t make any mistakes either, and maybe this is as fast as software development can possibly go.

Disclosure: Agile Digital is an AWS Consulting Partner.
