The life cycle of a Ruby project from a startup perspective
Probably at least once in your life you have found yourself working on a project whose specs change constantly, sometimes within the time frame of a single day.
Nowadays that's pretty common in a startup scenario, where we're under constant pressure and usually improving the service through client feedback.
Everything usually starts by knowing what the MVP (Minimum Viable Product) is for what you want to provide/solve, and how to deliver it in an incremental way.
In some cases it is pretty easy to find projects built on what the product owner thinks, usually without metrics or constant feedback. This scenario is a nightmare: later you can find yourself with a unicorn that solves a lot of problems, except the client's real one.
And that's what a startup is about: failing, pivoting and improving the way you find out what the client really wants, and ending up providing a valuable service/product.
Once you start building, measuring and learning from what you're doing, you get a much clearer perspective on where your project should go next.
In data we trust!
After working on many startup projects I realized the importance of extracting data from what I'm building and turning it into information about what the hell the end users actually want.
As a Software Engineer, my perspective on "what should I do now that the company (startup) I'm working for doesn't allow me to improve my code" changed a little bit.
Once you start "getting along" with the business side, your understanding of the whole project changes, and sometimes it's more valuable to keep continuously delivering new features and wait a little before doing a major refactoring.
Companies have started learning the value of data and what they can extract from it to help them grow faster. Many methods are being used, such as BI, A/B testing (people should start thinking more about this powerful methodology) or, most of the time, spreadsheets fed with data imported from the application database.
Based on that information, we now have the power to explain why we should create that new weird feature that maybe doesn't make any sense to the IT team but can make our business grow really fast. Because in data we trust!
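To make the A/B testing idea concrete, here is a minimal sketch of deterministic variant assignment: hash the user id together with an experiment name so the same user always lands in the same bucket. All names here (`ab_variant`, the experiment label) are hypothetical, not from any particular library.

```ruby
require "digest"

# Deterministic A/B bucketing sketch: hashing "experiment:user_id" means a
# given user always sees the same variant, with roughly even distribution.
def ab_variant(user_id, experiment, variants = %w[control treatment])
  digest = Digest::MD5.hexdigest("#{experiment}:#{user_id}")
  variants[digest.to_i(16) % variants.size]
end

puts ab_variant(42, "new-checkout")  # same input, same variant, every time
```

In a real app you would log each assignment alongside conversion events, so the "in data we trust" part — comparing the variants — has something to query.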
Pray. Trust. Wait — The Bash Script Era
Before you get me wrong: I'm not going to say that the bash script way is a bad thing.
If you're a newbie in web development, you probably started your career at a point where a lot of things are already automated, such as your development environment or the servers you're working with.
Not so long ago, small companies (at least in Brazil) had no means to hire specialized infrastructure people to handle everything, and most of the time it was a software developer's duty. Back then even the infrastructure guys spent many hours provisioning servers whose services were prepared for our application deployments.
Each professional had their own "toolset" of bash scripts (recipes) for installing a bunch of services on a server, allowing developers to have a healthy environment for their application.
The problem is that whenever they needed a new server/machine, the time spent provisioning it was really long. And in a chaotic scenario where there's no access or no way to work with a server, everything goes bad: the time needed to restore a VM backup or set up a new server from scratch would hurt the company.
Load Balancers And Easy Deployment
After some years, companies started adopting new technologies such as Git and Capistrano for deployment, and later services such as Heroku and Elastic Beanstalk became accessible to all developers.
At this point we have powerful tools for tracking our code development cycle and an easy way to get a server where we can deploy right away. We also have a good infrastructure where, in some scenarios, we can control the resources allocated to certain applications within a given period of the day (e.g. Elastic Beanstalk).
The "bad" thing here is that when our application becomes heavy and needs to scale, everything starts getting pricey in terms of infrastructure. The more "magic" resources you need, the more money you have to spend.
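As a sketch of what that era looked like, a minimal Capistrano 3 deploy configuration might be something like the following (the application name, repository URL and paths are hypothetical):

```ruby
# config/deploy.rb — minimal Capistrano 3 sketch (hypothetical app/repo names)
lock "~> 3.17"

set :application, "myapp"
set :repo_url,    "git@example.com:acme/myapp.git"
set :deploy_to,   "/var/www/myapp"

# Keep the last 5 releases around so rollbacks are instant
set :keep_releases, 5

# Share files/dirs (config, logs, uploads) between releases via symlinks
append :linked_files, "config/database.yml"
append :linked_dirs,  "log", "tmp/pids", "public/uploads"
```

With a matching stage file listing your servers, `cap production deploy` pushes a new release over SSH and `cap production deploy:rollback` reverts to the previous one.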
Provisioning and sharing recipes
Back when developers used small bash script recipes for provisioning servers, it was not common for them to share those scripts, and they often had very different ideas about how to organize them and keep maintaining a server with them.
With the advent of server provisioning tools such as Puppet, Chef and Ansible, server automation became something everyone would start using, sharing and improving. Since those technologies follow common patterns, people started converging on the same "ideology"/DSL when writing their recipes.
Nowadays it's pretty trivial to set up a server specialized for your Ruby on Rails project. Projects such as Ansible Galaxy make our life easier.
If we need a recipe for an nginx, Passenger or PostgreSQL service we want to install on our server, you will certainly find something you can just download and start using right away.
One of the biggest benefits of using this kind of tool, compared with the old way where everybody spent hours provisioning a server, is that now we can get a complete server up in less than 4 minutes, depending on the connection and resources.
If tomorrow we need 5 more servers, we can use the same recipe. It definitely makes our life easier to have a tool that lets any developer on the team spin up a new sandbox environment with only basic knowledge.
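To illustrate, an Ansible playbook for such a Rails box could look like this (the host group and role names are hypothetical placeholders for the kind of roles you would fetch from Ansible Galaxy):

```yaml
# site.yml — hypothetical playbook for a Rails application server
- hosts: webservers
  become: yes
  roles:
    - nginx        # placeholder role names, e.g. installed via `ansible-galaxy install`
    - passenger
    - postgresql
```

Running `ansible-playbook -i inventory site.yml` against one fresh machine — or five of them — applies the same recipe every time, which is exactly the repeatability the bash-script era lacked.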
Throughout all these historical facts about web development, the need for better interaction and understanding between software developers and infrastructure specialists started to be recognized.
The way we develop software has been changing continuously: with the adoption of Agile and the Lean feedback cycle; with Continuous Integration, once we started depending more on tests to guarantee consistent functionality; with Continuous Delivery, once we started deploying our code through a triggering system such as Jenkins; and at some point with Continuous Deployment, when the company is mature enough to deliver/deploy a valuable change/feature at any time after automatic approval or manual QA, and is also capable of automatic rollbacks in case something goes wrong in production.
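As a rough sketch of that triggering idea, a declarative Jenkins pipeline could gate deployment on the tests passing (the stage commands here are hypothetical, not a prescribed setup):

```groovy
// Jenkinsfile — hypothetical Continuous Delivery pipeline sketch
pipeline {
  agent any
  stages {
    stage('Test') {
      steps { sh 'bundle exec rake test' }   // CI: no green tests, no deploy
    }
    stage('Deploy') {
      when { branch 'master' }               // only deploy the mainline
      steps { sh 'cap production deploy' }
    }
  }
}
```

The same shape works with any trigger-based system: a push runs the tests, and only a passing mainline build reaches the deploy step.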
We're moving toward the point where we're reliable enough to avoid maintenance windows, or any feeling of downtime for end users. With the advent of techniques such as Canary Releases, after the long process of development and deployment we're able to release a feature to a limited set of users and massively test it before it goes public for everybody.
I've been talking a lot about historical facts and how we've been evolving the way we develop and deploy a project.
From a Ruby perspective, you have probably seen many of these scenarios before, such as deployment with Heroku, or with Capistrano to any server through SSH access. If you have some experience with the Ruby environment and community, there's no surprise here, maybe just a good refresher.
With the advent of the creation of new technologies for OS-level virtualization such as Docker, we are now able to deploy our entire infrastructure in our continuous deployment phase.
Instead of having pre-configured servers, we now have application images from which we can build and create containers. And most of the time this happens so fast that end users can hardly detect any downtime between releases.
With those technologies, depending on how well the services in your application are abstracted, it's trivial to scale your web app from 10 to 100 instances, or, in case you make heavy use of background job workers, to scale the number of Sidekiq workers from 1 to 100 in seconds.
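As a sketch of that kind of scaling (image and service names are hypothetical), a Docker Compose file could run the web app and its Sidekiq workers from the same image:

```yaml
# docker-compose.yml — hypothetical Rails app with Sidekiq workers
services:
  web:
    image: myapp:latest
    command: bundle exec puma -C config/puma.rb
    ports: ["80:3000"]
  sidekiq:
    image: myapp:latest          # same image, different entry command
    command: bundle exec sidekiq
  redis:
    image: redis:alpine          # Sidekiq's job queue backend
```

Scaling the workers is then one command away, e.g. `docker compose up -d --scale sidekiq=100`.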