Pager Team’s Technical Stack

Michal · Published in PagerTeam · Apr 5, 2019 · 2 min read
Photo by Luke Pennystan on Unsplash

I’ve had a couple of folks reach out, curious about the technology Pager Team is built on. While I’ve posted Pager Team’s stack on StackShare, it might not be obvious how everything fits together.

Everything runs on a Kubernetes cluster managed by Google Kubernetes Engine. GKE is nice, although running the free-tier-eligible f1-micro instances is problematic: GKE will let you create a cluster with f1-micro nodes, but it fails in assorted non-obvious ways, which I’ve chalked up to insufficient compute power.

On top of Kubernetes, Pager Team uses Traefik as the ingress controller. Traefik runs as multiple pods, using an etcd cluster to share configuration and, importantly, Let’s Encrypt certificates. Traefik also seamlessly handles the ACME protocol to obtain those certificates from Let’s Encrypt in the first place, and automatically renews them when appropriate. When a request hits the Kubernetes cluster, Traefik routes it to the appropriate pod. DNS is handled by AWS Route 53.

Pager Team’s front end is based on a modified hackathon-starter. It uses ExpressJS and renders with Pug templates and SASS. The base CSS framework is Bootstrap. There are a few JavaScript libraries (ChardinJS, Morphext, FullCalendar, Prism), but the front end is relatively simple and it’s mostly just jQuery. For development ease, rev-file is used as a cache buster for the browser cache.
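To make that concrete, here’s a minimal sketch of what an ExpressJS route rendering a Pug template looks like. The route, template, and port are illustrative placeholders, not Pager Team’s actual code.

```javascript
// Minimal ExpressJS + Pug setup (illustrative only; route and template
// names are hypothetical).
const express = require('express');
const path = require('path');

const app = express();

// Tell Express to render .pug templates from the views/ directory.
app.set('view engine', 'pug');
app.set('views', path.join(__dirname, 'views'));

// Serve compiled SASS output and other static assets.
app.use(express.static(path.join(__dirname, 'public')));

// Render views/schedule.pug with a couple of template variables.
app.get('/schedule', (req, res) => {
  res.render('schedule', { title: 'On-call schedule', user: req.user });
});

app.listen(3000, () => console.log('Listening on http://localhost:3000'));
```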

The back end is NodeJS, using Sequelize as an ORM against an AWS RDS Aurora MySQL-compatible database. Notifications go out via AWS SES for email (once the SendGrid free trial ran out) and Twilio for SMS and voice. The email provider is abstracted away with Nodemailer, which makes changing email services quite easy.
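The provider abstraction roughly looks like the sketch below: the application only ever calls a send helper, so swapping SES for another provider means changing the transport configuration, not the call sites. The endpoint, credentials, and addresses are placeholders.

```javascript
// Hedged sketch of the Nodemailer abstraction; all names are placeholders.
const nodemailer = require('nodemailer');

// SES exposes an SMTP interface, so a generic SMTP transport is enough.
// Switching back to SendGrid (or anything else) would only change this block.
const transporter = nodemailer.createTransport({
  host: 'email-smtp.us-east-1.amazonaws.com', // SES SMTP endpoint (example region)
  port: 587,
  auth: {
    user: process.env.SMTP_USER,
    pass: process.env.SMTP_PASS,
  },
});

// The rest of the codebase calls this helper and never knows the provider.
async function sendNotification(to, subject, text) {
  return transporter.sendMail({
    from: 'alerts@example.com', // placeholder sender
    to,
    subject,
    text,
  });
}

sendNotification('oncall@example.com', 'You are on call', 'Rotation starts now.')
  .then((info) => console.log('Sent:', info.messageId))
  .catch(console.error);
```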

Development is trunk-based, and a Drone-powered CI/CD pipeline runs a bevy of tests (unit and functional). Functional tests run headless Cypress against the staging environment, exercising test users and rotations to make sure no bugs are introduced. Deployments are conducted with Helm, with the pipeline automatically incrementing the package version on each build and uploading it to GCR.
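A functional test in that style might look like the following sketch. The staging URL, selectors, and test-user credentials are hypothetical; the real suite exercises test users and rotations in a similar fashion.

```javascript
// Hypothetical Cypress functional test against the staging environment.
describe('On-call rotation', () => {
  it('shows the current rotation on the schedule page', () => {
    cy.visit('https://staging.example.com/login');

    // Log in as a seeded test user.
    cy.get('input[name=email]').type('test-user@example.com');
    cy.get('input[name=password]').type('not-a-real-password');
    cy.get('form').submit();

    // The schedule page should list the active rotation.
    cy.visit('https://staging.example.com/schedule');
    cy.contains('Primary rotation').should('be.visible');
  });
});
```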

Individual Kubernetes cron jobs handle notifications (every minute), on-call rotation (every minute), and scheduling (every few hours). Each cron job is monitored separately. Monitoring is an evolving story, using a combination of healthchecks.io, UptimeRobot, and Uptime.com. (The website, API, and cron jobs are all monitored.)
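The per-job monitoring pattern is roughly as sketched below: each cron job pings its own healthchecks.io check only on success, so a job that dies or hangs triggers an alert. The check UUID and the job body are placeholders.

```javascript
// Sketch of a per-minute notification cron job that pings healthchecks.io
// when it finishes successfully. The check UUID is a placeholder; each cron
// job would have its own check.
const https = require('https');

const HEALTHCHECK_URL = 'https://hc-ping.com/your-check-uuid-here';

async function sendDueNotifications() {
  // ... look up pending notifications and hand them to SES / Twilio ...
}

async function main() {
  await sendDueNotifications();

  // Only ping on success, so healthchecks.io alerts if the job fails or hangs.
  https.get(HEALTHCHECK_URL, (res) => {
    console.log(`Pinged healthchecks.io: ${res.statusCode}`);
  });
}

main().catch((err) => {
  console.error('Notification job failed:', err);
  process.exit(1); // Kubernetes records the failure; the missed ping raises an alert.
});
```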

Cloud infrastructure is treated as code: it’s managed with Terraform, so every change is versioned with a commit history.

Analytics are also a bit of a work in progress, but a combination of Google Analytics, Crisp, and CrazyEgg is currently enabled to help optimize the site and conversion.

Billing is done through Stripe, with Paywell for analysis.
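For a sense of what the Stripe side involves, here is a minimal sketch of creating a customer and subscribing them to a plan with Stripe’s Node SDK. The plan ID, card token, and email are placeholders, not Pager Team’s actual billing code.

```javascript
// Minimal sketch of a Stripe billing flow: create a customer from a card
// token, then subscribe them to a plan. All identifiers are placeholders.
const stripe = require('stripe')(process.env.STRIPE_SECRET_KEY);

async function subscribe(email, cardToken) {
  // Create a Stripe customer backed by the card the user entered.
  const customer = await stripe.customers.create({
    email,
    source: cardToken,
  });

  // Put them on a plan; Stripe handles invoicing from here.
  return stripe.subscriptions.create({
    customer: customer.id,
    items: [{ plan: 'plan_team_monthly' }], // placeholder plan ID
  });
}

subscribe('founder@example.com', 'tok_visa')
  .then((sub) => console.log('Subscribed:', sub.id))
  .catch(console.error);
```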
