One small step for mankind👣, one giant leap for UnscrewMe 🍷 – tackling deployment challenges and getting the idea online🌐.

Goetz Buerkle
Mar 18, 2018 · 9 min read

As mentioned in a previous article, UnscrewMe runs on AWS Elastic Beanstalk. One of the benefits of using a Platform as a Service (PaaS) solution is that all configuration naturally happens as code. This means that not only the application logic, but also the infrastructure configuration is tracked in source control, in our case in GitLab.

Any PaaS will in some way or another be less flexible than a virtual machine you manage yourself. However, AWS Elastic Beanstalk allows many adjustments right out of the box, even without building customised “platforms”, which is also possible. But as we want to manage as little as possible ourselves, and let AWS do most of the work for us, our aim is to use standard configuration options wherever possible.

Customising the PaaS standard configuration using .ebextensions

AWS Elastic Beanstalk allows wide-ranging customisations via the .ebextensions directory which can be added to the root of a repository or application package. AWS Elastic Beanstalk automatically takes the content of this folder and uses it to change the default configuration.

Working with *.config files in the .ebextensions directory can be tricky, since the files are processed simply in alphanumerical order of their filenames. You therefore have to make sure that commands which should run before others live in a file whose name sorts first. We simply prepend two digits to every file name and use them for the ordering. That way, the name can be descriptive and still clearly convey the sequence of commands.
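For illustration, a directory laid out this way could look like the following (the file names here are made up, not our actual ones):

```
.ebextensions/
├── 01_packages.config
├── 02_network.config
└── 03_django.config
```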

Within the configuration files, the commands to be executed must be defined in YAML notation. This is a simple and straightforward formatting convention. Some people might not be happy that white space and indentation are significant in YAML, but coming from Python, I actually appreciate that, since it helps keep the file clear and easy to read.

*.config files for AWS Elastic Beanstalk support a number of features through different keywords, or short keys. The different functionalities are called “keys” in this context – since they are keys in the YAML file, I guess. This is all documented in detail in the AWS Elastic Beanstalk guide, so we will just briefly present the main keys we use:

  • packages to install Linux packages through various package managers
  • files to create files on the instance
  • commands to run arbitrary Linux commands – including running your own scripts, for example, which makes this perhaps the most powerful key
  • container_commands to run arbitrary commands, but in contrast to commands, this key is executed after the application code has been copied to the instance and can use application code

The differences between commands and container_commands can be confusing. Basically, commands is for general setup tasks, while container_commands is useful to run application code within the deployment environment, before the application is publicly accessible. In our case with a Django app, we rely on container_commands to execute some basic Django management commands, like database migrations.

In this context, the leader_only option also comes in handy to ensure that migrations are executed only once per deployment, and not on every instance within a multi-instance cluster.
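A minimal sketch of such a config file, with illustrative names and commands rather than our actual configuration, could look like this:

```yaml
# .ebextensions/03_django.config (illustrative)
commands:
  # Runs before the application code is unpacked
  01_upgrade_pip:
    command: "pip install --upgrade pip"

container_commands:
  # Runs after the application code has been copied to the instance
  01_migrate:
    command: "python manage.py migrate --noinput"
    leader_only: true  # run on a single instance per deployment only
  02_collectstatic:
    command: "python manage.py collectstatic --noinput"
```

Within each key, entries are again executed in alphanumerical order of their names, hence the two-digit prefixes here as well.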

Besides standard Linux and application setup tasks, AWS Elastic Beanstalk also supports the option_settings key, which is used to define the overall deployment environment, not just the compute instance. Countless configurations are possible using the option_settings key to tailor all the different components to your needs, including the network and load balancing.
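As a sketch, an option_settings section for a Python environment might define environment variables, the WSGI entry point and auto scaling limits – all paths and values below are illustrative, not our actual settings:

```yaml
option_settings:
  aws:elasticbeanstalk:application:environment:
    DJANGO_SETTINGS_MODULE: "mysite.settings.production"
  aws:elasticbeanstalk:container:python:
    WSGIPath: "mysite/wsgi.py"
  aws:autoscaling:asg:
    MinSize: "1"
    MaxSize: "2"
```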

Many of the settings that AWS supports in the different services are available to customise, but not all of them. In particular, when AWS introduces new services or features, it can take a while until the configuration can be adjusted using .ebextensions. This limitation becomes annoying at times, but it is understandable, and the feature is still very powerful while remaining relatively easy to manage.

Catching up with continuous innovation on AWS

One thing which can be a challenge is the pace of change at AWS. Luckily, at least in my experience, most changes do not need adjustments to existing deployments. However, when setting up the new AWS Elastic Beanstalk environment for UnscrewMe, I had to read up on some technical details and fiddle around with the network configuration to get it all to play together nicely. I suspect that had I not set up some things myself, independently of the Elastic Beanstalk environment, to have more control, I would not have had to worry about this.

After learning the basics about VPC (Virtual Private Cloud) route tables and the current options for auto-assignment of IP addresses, I could connect to the instance, and the instance could connect to the database. And again, while it required some effort, and you probably need basic IT knowledge to get going, the AWS documentation covered everything I needed to know to get my configuration working.

Another area I had not looked into before was AWS “profiles” for the command line interface. When working with more than one AWS account via the command line tools, which are Python packages, setting up separate access profiles is necessary. Not difficult, but another thing to set up before you can get started.
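Named profiles can be created with `aws configure --profile <name>` and end up in the shared credentials file, roughly like this (keys shortened, profile name made up):

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = AKIA...
aws_secret_access_key = ...

[unscrewme]
aws_access_key_id = AKIA...
aws_secret_access_key = ...
```

Most AWS command line tools, including the eb CLI, then accept a --profile option to choose which account to work with.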

Somewhat counterintuitively, while AWS Elastic Beanstalk does have an integration for database instances, it is probably not a feature to use in a production environment, since it ties the database lifecycle to the application environment, and for many reasons you may well want to keep your database independent of your application environment. Launching an Amazon RDS for PostgreSQL instance separately is very easy, though.

Overcoming limitations and annoyances on AWS Elastic Beanstalk

A lot of things are truly excellent with AWS Elastic Beanstalk. But when trying to actually get a first development release of UnscrewMe online, we ran into a few issues.

Verifying domains for SSL certificates in AWS Certificate Manager

One thing that really annoys me is the way domains are validated for SSL certificates. This partly depends on the registry handling your particular top-level domain, but that does not make the problem any less annoying. And you usually cannot influence the general policy of a registry.

So with .co.uk, if you register a domain privately, as I did when I started UnscrewMe, the registry does not publish an email address. While this is potentially useful for spam protection, it is less useful for domain validation with AWS Certificate Manager. Since AWS Certificate Manager cannot use an email address provided by the domain registry, it simply sends confirmation emails to a number of more or less random email addresses @yourdomain. The problem for us was that we only had a single mailbox set up, and it was not one of the AWS Certificate Manager defaults. I do not quite understand why AWS does not let the user specify an @yourdomain address to use for the verification.

As a result of this process, I could not verify the SSL certificate for our main domain, and this could not be resolved even after I had opened a support request about it.

Confusingly, Amazon SES, another AWS service used to send emails, also needs domain verification but has an entirely different process in place that lets you verify the domain via DNS records. As a result, our main domain is verified with one AWS service, but not with another. To fix this, we will have to set up and pay for a second mailbox, which I find quite unfortunate, since I would delete that mailbox again immediately after verifying the domain to minimise our running costs.
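With Amazon SES, the DNS-based verification boils down to publishing a TXT record containing a token from the SES console, roughly of this shape (domain and token are placeholders):

```
_amazonses.example.com.  TXT  "verification-token-from-the-ses-console"
```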

Getting around outdated packages in Amazon Linux and EPEL 6

Another problem is the fact that Amazon Linux is based on a pretty dated Linux distribution, which means that for some Linux packages, you can only install rather outdated releases. As we wanted to take advantage of the geographic features in GeoDjango backed by the PostGIS extension for the PostgreSQL database, we needed to make sure we could install all relevant dependencies.

Most of the required packages were provided either by AWS directly or in the “Extra Packages for Enterprise Linux” (EPEL) repository. Yet, with only EPEL 6 being compatible with Amazon Linux, we ended up with one package that we needed to install from source. Using .ebextensions, installing packages from source is no problem, but it still cost me about a day to get working, and compiling a package on a small instance can be pretty slow. At least the possibility to log into the instance directly via SSH made debugging easier.
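Building from source in .ebextensions can be sketched like this – the library, version and URL below are purely illustrative, not necessarily the package we actually compile:

```yaml
# .ebextensions/01_build_from_source.config (illustrative)
packages:
  yum:
    gcc-c++: []
    make: []

commands:
  01_build_library:
    # Skip the slow compilation if the library is already installed
    command: |
      if [ ! -f /usr/local/lib/libgeos.so ]; then
        cd /tmp
        curl -sLO http://download.osgeo.org/geos/geos-3.6.2.tar.bz2
        tar xjf geos-3.6.2.tar.bz2
        cd geos-3.6.2
        ./configure && make && make install
      fi
```

The guard at the top means the compilation only runs when the library is missing, which also explains why some deployments are fast and others are not.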

One problem remains: compiling the package from source and installing it takes about 20 minutes, so deployments can take up to 20 minutes too.

The good news is that at the end of 2017, AWS finally announced Amazon Linux 2, which will most likely solve this issue and allow us to install all dependencies directly from repositories, without any compilation. Unfortunately, AWS has not yet updated the pre-built Elastic Beanstalk platforms, so for the moment, we still have to compile the package ourselves.

Accepting varying deployment durations

The outdated Linux packages are most likely also the cause of the next annoyance: deployment times vary widely, and so far, I cannot predict how long a deployment will take.

At this early stage, without users having access to the web application, we simply reuse the instance and deploy new application versions directly. Most of the time, a new version is available after about two minutes. But sometimes, it takes more than 20 minutes. So when we start a deployment, we never know whether it will take just 85 seconds or 20 minutes, which can slow development down and cause unproductive waiting times, usually when you least want them because you just changed a tiny thing in the user interface.

Celebrating the first successful deployment

After having solved – or ignored – the issues we encountered during the deployment, our application was live and running after around 20 attempts. During the process, our environment was once wrecked so completely that the eb deploy command to deploy and update simply did not work, and we had to delete the environment and recreate it from scratch. And once, we had to rebuild the environment to get it back into a functional state.

When we reached this major milestone on 1 Dec 2017, I was sitting once again at Campus London and could not stop smiling 🙂. The application looked dreadful at this stage, yes, but the successful deployment still meant that from now on, we could test our application online, and also share it with some friends for testing and to gauge their opinion.

Focusing on the MVP

Reaching this point, where our application back end and front end proof of concept is actually working and running online, was very important – if not from a functional perspective, then from a motivational one. Being able to see what you are working on online makes it more tangible. Suddenly, it is no longer just an idea, but something you can actually point to.

Getting here took longer than we initially expected, but we are satisfied with the progress and now have a stable, scalable platform to build upon.

Moving forward, the next big challenge is to extend the proof of concept and turn it into an actual, useful and easy-to-use product with content and a nice design. While this is certainly more work than just getting a blank page online, it is also more fun 😀.


After two rather technical articles, we will turn to project management, look at our Lean Canvas, and write a bit about our Minimum Viable Product (MVP) with the goal to explain how we prioritise features to keep the scope of the MVP manageable.


(I wrote parts of the article at the cozy wine bar 28° – 50° Marylebone Lane, starting with a 2015 Bourgogne Blanc from Domaine Alain Chavy in Burgundy in France, which was just beautiful. The waitress recommended this as the white wine she personally likes most on the list by the glass, and I fully agree with her, it was like the perfect wine for that moment, not flat, not too complex, just right. I wanted to stay for a quick glass, and ended up having dinner, so after the white I had a 2016 Bardolino Classico from Guerrieri Rizzardi in Veneto in Italy – full of red fruits, raspberry, and a bit smokey. It had a depth, yet did not feel heavy and went very nicely with the rich chocolate in the tiramisu. On another evening, I enjoyed a glass of 2015 Blaufränkisch, Wohlmuth, Hochberg, Burgenland, Austria which had lovely fruit notes, a hint of smoke, and overall was a bit heavier than I usually prefer. After some great tasters at the “Discovery Theatre – Portugal, a journey through authenticity and diversity”, the only event I attended at the Decanter Spain and Portugal Fine Wine Encounter 2018, I added some details to the article over a flat white at Kaffeine.)

UnscrewMe

Find your taste in London ❤️ — Find ☑, Taste ⚡️, Enjoy. The easy wine tasting scheduler.

Goetz Buerkle

Written by

Wine 🍷 (WSET Level 3), coffee ☕️, food 🍽, words 📔, languages 🇬🇧🇸🇪🇩🇪, Python 🐍, Django 🦄 , 🖥 Vue.js, entrepreneurship 🤔, startups 🚀 — London, UK.
