UCCLI

By — Aaditya Sharma (Engineer, Platform)

UC Blogger
Urban Company – Engineering
10 min read · Dec 15, 2020


Introduction

Most contemporary operating systems (Kali, Ubuntu, etc.) come packed with tools. It can be really hard to memorise all those hefty commands for seemingly simple tasks, precisely because they are generic enough to cover a whole bundle of operations that we usually don't need in our daily routine. Taking an example from my own experience at Urban Company, to set up multiple Node.js services on my local machine, I have to:

  1. Individually find the git URLs for all those services.
  2. Change their configs so they can ‘talk’ to each other (a part of service discovery, which I won’t go into right now).
  3. Switch to the correct node version for every individual service (this also requires me to actually check which version a particular service is running on).
  4. Run npm install on all of them.
  5. Find the correct entry point file.
  6. …and finally start those services.

Whew! That's a handful. I mean, forget about having to do this multiple times every day; just thinking about the correct sequence took away 10 minutes of my life, and I'm not even sure I haven't skipped something crucial.

The Dilemma

Now imagine you missed just a single step in this havoc. Granted, some oversights, like selecting the incorrect node version, would hardly pose a challenge to debug. But let's say we missed a config change for service discovery (apparently a very frequent blunder). I bet that, on average, you'll spend at least an hour debugging what went wrong, only to bang your head against the wall at the end. Why? Because the cluster of services you just spun up would work impeccably: it would actually be ‘communicating’ with the default instance of the service you missed in your config change (usually the one deployed in the dev environment), so there might be no errors at all. But that renders the whole point of the exercise futile, since the service you actually wanted to test never got called. Good luck explaining the tardiness to your project manager, and why the whole project got delayed, or even worse, why buggy code made it to production (since it never actually got tested).

I could go on with numerous other examples that software developers deal with, all of which squander humongous amounts of time from their everyday lives, but let's jump right into a solution first.

A Solution

Have you ever pondered the difference between an OS like Kali and, let's say, Ubuntu, both being Linux-based systems? Apart from a few other idiosyncrasies, Kali comes pre-packed with a colossal library of tools (for pen-testing, security, etc.) that hackers can exploit to expedite their attacks (or ‘hacks’, as the script kiddies would say). On a side note, if you think the term hacker refers to an evil person terrorising some random village, you really need to brush up on the use of the term. But let's not digress.

An epiphany! Why don't we do what Kali, or Amazon, or so many other platforms did? Create a suite of commands under a single roof, developed specifically for our own needs. And that's exactly what we built: our very own command-line interface tool, called UCCLI (the name is derived from the ubiquitous AWSCLI, used by almost everybody who works with the AWS cloud platform). For the uninitiated, a CLI is a text-based interface (like a Linux terminal) for communicating with your operating system (an oversimplification, but quite enough for this article). And the tools that run on these interfaces are your CLI tools, just like any other command you would execute on the Linux shell, or any other OS terminal for that matter.

The famous TV show Mr. Robot, in which the protagonist uses a self-made, fictitious command, ‘astsu’, to execute complex operations in one go (have a peek at the last two lines).

Single line commands

What we have now is an immaculate, foolproof, single-line command that achieves this ‘swarm of instructions’, potential blunders and all, in one go. Going back to the Node.js service deployment example, instead of having to memorise and execute all of those bulky commands and procedures like some barbaric 90s “hacker”, we have a straightforward procedure now:
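
The exact invocation is internal to our tooling, so the line below is only a hypothetical sketch of what that single command looks like; the sub-command and flags are made up for illustration:

    $ uccli service start --name my-node-service --tail-logs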

This will take care of pulling the latest changes from your git repo, service discovery, setting up the node environment, selecting the correct node version, resolving dependencies, starting the service, and finally tailing the log files, ALL in one go! Aah, the sophistication *pops open a bottle of fine wine*.

Just think about the psychological benefits this would have on a developer's life. This was never about saving 10–15 minutes per session, although, no doubt, that's a huge bonus. It is about overcoming the reluctance developers might feel when even thinking about having to test their services through this chaotic array of procedures. With this seemingly simple automation, we'll never have that feeling of ‘God! Now I have to set up the whole thing on my local machine for testing before I send the deployment.’ Just run this single command and go nuts on the testing.

UCCLI’s help command

Backend Infrastructure

But before I go on bragging about the benefits and possibilities of uccli, or something similar, let's peek at its backend a bit and understand, in a nutshell, how we achieved this. There's really no one way to build something like this, but what we present here are some best practices that might prove beneficial for a kickstart. The rest is up to your imagination really; never limit that.

Inspiration

There are so many tools in the modern world that you will never run out of places to find inspiration for building something like this. Personally, the inspiration primarily came from awscli: a highly tested, robust, and reliable embodiment of something similar, used daily in the production environments of organisations with tremendous traffic.

AWSCLI’s help command

If you ever dissect and scrutinise the source code of AWSCLI, you'll find it uses a library called Botocore, which takes care of all the nitty-gritty stuff, plus a sort of CLI wrapper (in a vague sense) that uses that library and enables the CLI commands on any machine. The whole thing comes nicely packed as a pip-installable package, available via a single ‘pip install’ command.
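
To make that split concrete, here is a tiny example of using Botocore directly (this is plain Botocore, not awscli's code, and it assumes AWS credentials are already configured on the machine):

    import botocore.session

    # Botocore does the heavy lifting: credentials, request signing, responses.
    session = botocore.session.get_session()
    s3 = session.create_client("s3", region_name="us-east-1")

    # The awscli command 'aws s3api list-buckets' is essentially a thin
    # wrapper around a call like this one.
    for bucket in s3.list_buckets()["Buckets"]:
        print(bucket["Name"])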

And that's it; that's all we plagiarised from this heavily tested tool: its structure and the general flow of instructions. And that's how inspiration works. You don't rip off the entire thing. You study and appraise why something works as well as it does, and extract just the essence, absorbing the most quintessential qualities. Devising a project like this completely from scratch means squandering valuable time and resources, a fact I learned the hard way.

UC-Platform-Bot

Going in the same direction as described above, we developed a library called uc-platform-bot, keeping everything simple and generic. The goal of this library was to have functions and classes generic enough that they can be used by anyone importing this library, but not so generic that we lose the purpose of automating the entire flow. We won’t go too deep into the code itself, but I’ll try to explain the most significant facets of what we had in mind.

First off, we kept this library highly tuneable via the various configs provided to it. As an example, we know that our git URLs will rarely ever change, so those go in a config. Similarly, the process of cloning, pulling, pushing, or any other git operation is not going to change in the near future. So this entire flow could easily be automated in a very specific way, while still keeping things generic enough at an organisation level. Anyone who wants to clone a specific service anywhere just has to import this library and call a function with a source and a destination, and voila, it's done! The same goes for operations involving ECS, service discovery, S3, or any other AWS-related workflow.
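
To give a flavour of that ‘source and destination’ idea, here is a minimal sketch; uc-platform-bot is internal, so the names below (SERVICE_GIT_URLS, clone_service) and the URL are hypothetical, not the library's real API:

    import subprocess

    # Facts that rarely change (like git URLs) live in config, not in code.
    SERVICE_GIT_URLS = {
        "my-node-service": "git@github.com:example-org/my-node-service.git",  # hypothetical
    }

    def clone_service(name: str, destination: str) -> None:
        """Clone a service's repository into the given destination directory."""
        url = SERVICE_GIT_URLS[name]
        subprocess.run(["git", "clone", url, destination], check=True)

    # Usage: clone_service("my-node-service", "/tmp/my-node-service")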

UCCLI

This is what I mentioned earlier as ‘vaguely a wrapper’ for our library. What UCCLI essentially does is provide you with an actual interface through which you can use these functionalities effortlessly. It too comes as a nicely packed, pip-installable package, built in Python 3, and makes available a command called (spoiler alert) ‘uccli’. Now you can simply install this package, run the single command described earlier, and you'll have a nice, clean cluster of services running and talking to each other on your local system. As an additional feature, we have also provided an option to run those services in Docker containers, so they can replicate the entire dev environment running on ECS on a local Macintosh machine. How cool is that! This means that when you actually go ahead and test a service in dev or staging, you can rest assured it will work exactly the same way as it did when tested via uccli.

We used an open-source library called ‘click’ to make our commands available on the terminal, along with any necessary help docs. This also enables a simple ‘--help’ option by default for every sub-command, which can essentially serve as a readme for each specific sub-command, or for the entire tool itself.
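
As a flavour of how little glue this needs, here is a minimal, illustrative click sketch (not our actual source; the command and option names are made up). click auto-generates the ‘--help’ page for the group and for every sub-command from the docstrings and option help texts:

    import click

    @click.group()
    def cli():
        """uccli: Urban Company's internal command line tool (illustrative stub)."""

    @cli.command()
    @click.option("--name", required=True, help="Service to set up and start locally.")
    def start(name):
        """Clone, configure, install dependencies for and start a service."""
        click.echo(f"Starting {name}...")

    if __name__ == "__main__":
        cli()

Packaged with a console-script entry point, ‘uccli --help’ and ‘uccli start --help’ then work out of the box.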

Optimisations

Now that everything has been automated and codified, and no manual intervention is required, we get an opportunity to utilise the underlying operating system, and by extension the underlying hardware, very efficiently. For example, instead of sequentially spinning up several different terminal sessions and initialising the node servers one by one, we can use multithreading and multiprocessing to utilise every iota of resource available in our systems and optimise the whole process. Semaphores and signalling mechanisms can be used to make the system fail-safe. This does add a layer of complexity to the whole process, but the end result is so worth it.
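
A rough sketch of that idea, assuming hypothetical service directories and helper names, might look like the following; a semaphore throttles the heavy install phase while the services themselves start in parallel:

    import subprocess
    from concurrent.futures import ThreadPoolExecutor
    from threading import BoundedSemaphore

    SERVICES = ["service-a", "service-b", "service-c"]  # hypothetical checkouts
    install_slots = BoundedSemaphore(2)  # limit concurrent 'npm install' runs

    def start_service(path: str) -> subprocess.Popen:
        with install_slots:  # keep CPU/RAM usage under control during installs
            subprocess.run(["npm", "install"], cwd=path, check=True)
        # The long-running server is launched without blocking the worker thread.
        return subprocess.Popen(["npm", "start"], cwd=path)

    with ThreadPoolExecutor(max_workers=len(SERVICES)) as pool:
        processes = list(pool.map(start_service, SERVICES))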

Cost Savings

Let's pause for a moment to think about how to take things a step further. Nowadays, many organisations issue high-performance laptops to their developers, only to find out that those are being used to play games and watch 4K movies (although that's not the worst thing in the world). So how do we use this CLI tool to utilise all the power bestowed upon us? And above all, how do we use it to save a serious amount of money? An inkling has already been provided in the discussion above.

Docker to the rescue! We can use the power of containerisation, along with the sophisticated little automation described above, to come up with a deployment flow that replicates the dev environment on our local machines. What this means is that you can virtually shut down your entire dev environment on whatever cloud platform you're currently running it on, and perform dev testing entirely on your local machine, without any significant differences. In this flow, almost everything has been containerised and spun up on your local machine instead of a cloud server, and everything behaves exactly the same. Any other idiosyncrasy related to your deployments can be taken care of by this very same tool, if you work on it properly. In our case, that was the way service discovery takes place between our services, which was easily handled using uccli. This new single-command approach, if implemented well, will reduce testing time, save cost, result in more efficient testing, and above all keep a developer engaged, since it cuts out the time it takes to deploy your services on a cloud server.
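
As a hypothetical sketch of what ‘replicating the dev environment locally’ can boil down to: put every service container on one user-defined Docker network, relying on Docker's standard behaviour that containers on the same network can reach each other by name (the image names, ports and network name below are made up):

    import subprocess

    NETWORK = "uc-local"
    SERVICES = {"orders": 3001, "payments": 3002}  # hypothetical services and ports

    # Create the shared network; ignore the error if it already exists.
    subprocess.run(["docker", "network", "create", NETWORK], check=False)

    for name, port in SERVICES.items():
        subprocess.run([
            "docker", "run", "-d",
            "--name", name,
            "--network", NETWORK,            # enables name-based service discovery
            "-p", f"{port}:{port}",
            f"example-registry/{name}:dev",  # hypothetical image
        ], check=True)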

Future endeavours

We as software developers are a proud community, and we know deep down what a great weapon laziness can be. It forces us to think about achieving complex objectives in ways previously unimagined. In this context we're not lazy; we just want to spend our time doing something new every day and reduce the redundancy in our lives. And that's why god created these beautiful electronic gadgets (or the wise women and men of science did, I should say), and, as icing on the cake, the power to automate virtually anything nowadays. With this tool the possibilities are endless, and when I say endless, I literally mean it's bound only by your imagination. We're trying to automate everything using this tool: from raising git merge requests, to creating new users on Slack. The deployment problem is just the first one we tackled successfully, and there's loads more to come. You can even think about creating a pipeline for controlling the thermostat in the office, with a proper approval flow, and so much more. You just have to be crazy (and lazy) enough to imagine these possibilities.

And to end things with a beautiful aphorism:

A command a day, keeps the bugs away!

About the author

Aaditya Sharma likes exploring the possibilities of integrating coding into the other fields of engineering. He is a physics and maths fiend. In his free time, he can be found creating electronics projects or reading science-related books.

Sounds like fun?
If you enjoyed this blog post, please clap 👏 (as many times as you like) and follow us (@UC Blogger). Help us build a community by sharing on your favourite social networks (Twitter, LinkedIn, Facebook, etc.).

You can read up more about us on our publications —
https://medium.com/uc-design
https://medium.com/uc-engineering
https://medium.com/uc-culture

https://www.urbancompany.com/blog/humans-of-urban-company/

If you are interested in finding out about opportunities, visit us at http://careers.urbancompany.com
