A day in the life of one of my side projects

I have a lot of side projects. Ever since I started writing code I’ve done fun things on the side. It’s how I learn, it’s how I relax, and sometimes it’s how I make a little fun money.

Most recently I’ve been working on two larger side projects that have slowly been growing: Tiny Bank and Tiny Stats. Today’s story is about the latter.

Tiny Stats was a 2016 Christmas break side project I built as:

  1. An exercise in utilizing a MeteorJS job queue across various timezones.
  2. An experiment to build a dashboard-less paid service.

About a week of work and a little polish after New Year’s, and I was up on Product Hunt.

Things went decently, definitely not a raging success, but I didn’t have any wild expectations either. Made a couple hundred 💵💰, but since Tiny Stats wasn’t a recurring-cost service most of my early users fell away after a month or two. Perhaps a failed experiment on #2, but I really liked, and still like, the idea of giving folks a little more freedom with their options when it comes to money.

This brings us up to last week when two things happened on the same day. Tiny Stats choked and nearly died, and I got a major feature request.

The Error

I use Sentry for all of my error reporting and logging across my side projects. It’s not great, but their UI is pretty and for my use case it’s free, so ¯\_(ツ)_/¯

Within my codebase I’ve got a nice little watcher inside my job queue.
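Here’s a minimal sketch of the idea, assuming a Mongo-backed job collection and Sentry’s raven client from that era; the collection and field names are illustrative, not Tiny Stats’ actual schema:

```js
import { Meteor } from 'meteor/meteor';
import { Mongo } from 'meteor/mongo';
import Raven from 'raven'; // Sentry's Node client, circa 2017

const Jobs = new Mongo.Collection('jobs'); // stand-in for the job queue's collection

const STUCK_AFTER_MS = 15 * 60 * 1000; // anything "running" this long is stuck

// Every five minutes, look for jobs that started long ago and
// never finished, then yell at Sentry about them.
Meteor.setInterval(() => {
  const cutoff = new Date(Date.now() - STUCK_AFTER_MS);
  const stuck = Jobs.find({
    status: 'running',
    startedAt: { $lt: cutoff },
  }).count();

  if (stuck > 0) {
    Raven.captureMessage('Stuck jobs detected', { extra: { stuck } });
  }
}, 5 * 60 * 1000);
```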

This was the trigger. I had some stuck jobs. Maybe no big deal; it had happened before, usually some syntax issue or a missing database item. But at this point I had worked out all those bugs and the app had been sitting pretty for a month or so. The heck was going on? Why stuck jobs now?

After a little digging I found the source of the problem. A Mr. McToken 7a13d56. He was receiving ~100,000 views per week, and my little side project with its 512MB RAM DigitalOcean server couldn’t handle his weekly and monthly reports. Holding over half a million View items in memory every time I wanted to send out a report to this guy was crashing my server, along with any subsequent jobs that had the unfortunate fate of being served up after this fella.
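In effect, every one of his reports was doing something like this (a sketch with hypothetical names, but the same shape as the problem):

```js
import { Mongo } from 'meteor/mongo';

const Views = new Mongo.Collection('views'); // one document per page view

// The naive report build: materialize every raw View for a token,
// then count and group them in app memory.
function weeklyViewCountNaive(token, weekStart, weekEnd) {
  const views = Views.find({
    token,
    createdAt: { $gte: weekStart, $lt: weekEnd },
  }).fetch(); // ~500,000 documents at once for Mr. McToken. 💥

  return views.length;
}
```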

The Solution

I had a couple different options here.

  1. Restrict tokens to a limited number of views before kicking them off the service.
  2. Upgrade my database, server memory and processing power. (aka $$$)
  3. Find a way to keep the service and my costs unchanged by decreasing the load on the server at report build time.
  4. Run away to Mexico and start a band called the Regal Chalupas.

I of course opted for the latter. Err wait… the latter less one… third from the top. The last if the actual last wasn’t there.

It took a lot of brainstorming and false starts, because I’m a bit dumb and don’t have a clue what I’m doing, but I did eventually arrive at a very elegant solution of which I’m quite proud, all while not spending a single dime more.

Within the app they’re called Snapshots. A Snapshot summarizes views at a fixed resolution, which at the moment is individual hours. So let’s say you have 10,000 views per day on your site. Rather than storing those 10,000 views indefinitely and ultimately having to call up and calculate those views in real time, a snapshot job runs in the background all day long, actively summarizing views into reports. So rather than 10,000 * 7 = 70,000 Views every week, you have 24 * 7 = 168 Snapshots.
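The hourly summarizer boils down to something like this; again a sketch with made-up names rather than the production job, but it shows why memory stays flat: the counting happens inside Mongo, and only one small Snapshot per token ever comes back.

```js
import { Mongo } from 'meteor/mongo';

const Views = new Mongo.Collection('views');
const Tokens = new Mongo.Collection('tokens');
const Snapshots = new Mongo.Collection('snapshots');

// Queued once an hour: collapse the previous hour's raw Views
// into a single Snapshot per token.
function buildHourlySnapshots(hourStart) {
  const hourEnd = new Date(hourStart.getTime() + 60 * 60 * 1000);

  Tokens.find().forEach((t) => {
    const count = Views.find({
      token: t._id,
      createdAt: { $gte: hourStart, $lt: hourEnd },
    }).count(); // counted by Mongo, never fetched into app memory

    if (count > 0) {
      Snapshots.insert({ token: t._id, hour: hourStart, count });
    }
  });
}
```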

The Snapshot is of course slightly larger than a View item, but not by much.
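Roughly, the two shapes compare like this (fields and values are illustrative; the real documents carry a bit more metadata):

```js
// A raw View: one document per page view.
const view = {
  token: '7a13d56',
  path: '/pricing',
  createdAt: new Date('2017-02-01T14:03:22Z'),
};

// A Snapshot: one document per token per hour.
const snapshot = {
  token: '7a13d56',
  hour: new Date('2017-02-01T14:00:00Z'),
  count: 612, // every View in that hour, folded into one number
};
```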

As you can see, a Snapshot acts as a sort of counter for views on a per-hour basis. And really that’s all there is to it. With a few tweaks to the rest of my codebase to use Snapshots in place of Views, I was back online, and even 7a13d56 was soon happy as a clam again.
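Those tweaks mostly amounted to swapping the raw query for a cheap sum over at most 168 counters, something like this sketch (same hypothetical names as above):

```js
import { Mongo } from 'meteor/mongo';

const Snapshots = new Mongo.Collection('snapshots');

// Weekly report build, Snapshot edition: sum 24 * 7 = 168 counters
// instead of fetching ~500,000 raw View documents.
function weeklyViewCount(token, weekStart, weekEnd) {
  return Snapshots.find({
    token,
    hour: { $gte: weekStart, $lt: weekEnd },
  })
    .fetch()
    .reduce((sum, snap) => sum + snap.count, 0);
}
```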

In the future, if it becomes necessary, this model lets me optimize further: on top of the hourly resolution I can also build and maintain daily, monthly, or even yearly snapshots.

The Feature Request

Now all of this is fine and good, but it’s worth noting that without the feature request I very likely would have just let Tiny Stats die. It wasn’t doing much money-wise and there seemed to be very little interest from the community at large in the service, so I figured it was a good time to just let it go and move on. Thanks to Curtis, however, I was encouraged/inspired to restructure Tiny Stats for use by agencies or larger site collections via an API.

Once the error was resolved and the app was back up and running smoothly, adding an API and the necessary methods and pages only took a few evenings of work plus some testing.
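In Meteor an endpoint like that can be as small as a connect handler. This is a hedged sketch with a hypothetical route, header, and token check, not Tiny Stats’ actual API:

```js
import { Meteor } from 'meteor/meteor';
import { WebApp } from 'meteor/webapp';
import { Mongo } from 'meteor/mongo';
import url from 'url';

const Views = new Mongo.Collection('views');
const Tokens = new Mongo.Collection('tokens');

// Record a view for a token over plain HTTP. bindEnvironment lets
// the handler use Meteor collections safely outside a method call.
WebApp.connectHandlers.use('/api/v1/views', Meteor.bindEnvironment((req, res) => {
  const token = req.headers['x-tinystats-token'];

  if (!token || !Tokens.findOne(token)) {
    res.writeHead(401);
    res.end('invalid token');
    return;
  }

  const { query } = url.parse(req.url, true);
  Views.insert({ token, path: query.path || '/', createdAt: new Date() });

  res.writeHead(200, { 'Content-Type': 'application/json' });
  res.end(JSON.stringify({ ok: true }));
}));
```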

So there you have it, a day in the life of one of my side projects. Messy, a little bit o’ hip fire, and obviously a day for my side projects !== an actual day.

Enjoy this post? Pound the ❤️ and eat some Doritos.