Using AWS when you have server load, but no money

Staply
Mar 18, 2015


There is a lot written about preparing for "high" server load, but not that much about preparing for "medium" load. We'll tell you which server configuration we used when we finished beta and launched Staply.

It's a tricky period: there is already plenty of server load, but no money to pay for a fancy server configuration.

We had a few simple goals: Staply has to be available 99.999% of the time, the architecture should be flexible, and it all should not cost much.

Our results:

  • The service handled the load from Hacker News, Habrahabr (the biggest Russian-speaking IT community), and Product Hunt
  • The toughest load periods exceeded 400 rps and lasted about 5 hours
  • Peak of ~10 registrations per minute
  • Availability according to New Relic: 96.092% over the last 3 months and 99.991% in February

From the beginning of Staply's development we have used AWS EC2. We chose it because it lets us build our own infrastructure, and there is no need to work around hosting limitations, as there is with Heroku.

In general, Amazon's offering is quite impressive.


Our configuration:

  • A t1.micro instance as a balancer (at the moment used more like a router) running HAProxy
  • t1.small and m3.medium instances with nginx, Passenger, and Redis
  • S3 storage for files
  • An RDS db.m1.small instance with MySQL

Daily cost: ~$8.21

Attention! Regularly check the detailed invoices. Amazon bills you for lots of small things, and a fee for something you no longer use can be an unpleasant surprise.

Server Configuration

The service is written in Rails, but that doesn't matter much for this article.

At the beginning of development we had a single t1.micro instance with swap turned on, and it was enough. The AWS Free Tier trial period allowed us to lower the cost to zero.
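
Enabling swap on such a small instance is just a few shell commands. A minimal sketch, assuming a 1 GB swap file (pick a size that fits your disk):

    # create and enable a 1 GB swap file
    sudo dd if=/dev/zero of=/swapfile bs=1M count=1024
    sudo chmod 600 /swapfile
    sudo mkswap /swapfile
    sudo swapon /swapfile
    # keep it across reboots
    echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab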

However, when you launch your service it's better not to be greedy: build in the ability to grow the architecture's capacity from the beginning.

A multi-server architecture with distinct roles lets you work on each element separately, without affecting the others. That comes in handy, for example, when you need to restart one of the instances or increase its capacity.

TIP: Be sure to use a monitoring service; the free plan from New Relic is enough.
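
For a Rails app, hooking up New Relic is just a gem and a config file. A minimal sketch (the license key and app name are placeholders):

    # Gemfile
    gem 'newrelic_rpm'

    # config/newrelic.yml
    common: &default_settings
      license_key: '<your New Relic license key>'
      app_name: Staply

    production:
      <<: *default_settings
      monitor_mode: true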

We launched a second server when we were preparing to make Staply publicly available. It was an m3.medium (1 Intel Xeon CPU, 3.7 GB RAM, magnetic storage, with swappiness configured).
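
Configuring swappiness is a one-liner; the value 10 below is an assumption (the default is usually 60), the idea being to swap only under real memory pressure:

    sudo sysctl vm.swappiness=10
    echo 'vm.swappiness = 10' | sudo tee -a /etc/sysctl.conf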

When calculating the server's capacity, we took the configuration of our Rails app into account:

  • Ruby 2.1.0
  • Nginx + Passenger 4
  • Each worker process consumes ~200 MB of memory (see the sizing sketch after this list)
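
Given ~200 MB per process, a rough sizing sketch for the m3.medium looks like this (how much memory to reserve for the OS, nginx, and Redis is an assumption):

    # /etc/nginx/nginx.conf, http block
    # (3.7 GB - ~1 GB for the OS, nginx and Redis) / 200 MB per process ≈ 13, so cap a bit lower
    passenger_max_pool_size 12;
    passenger_min_instances 2;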

To conveniently route requests between the development and production servers, we use one t1.micro instance with HAProxy installed and SSL termination configured.

TIP: Using private IPs instead of public ones in the HAProxy configuration lowers the latency between servers significantly.
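
A minimal haproxy.cfg sketch of this setup; the hostnames, certificate path, and private IPs are placeholders, not our real values (SSL termination on the bind line needs HAProxy 1.5+):

    frontend https-in
        bind *:443 ssl crt /etc/ssl/private/app.pem    # SSL terminated at the balancer
        acl is_dev hdr(host) -i dev.example.com
        use_backend development if is_dev
        default_backend production

    backend production
        server app1 10.0.0.11:80 check    # private IP: traffic stays inside AWS

    backend development
        server dev1 10.0.0.12:80 check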

Don't be afraid of starting to optimize your code too early; the beginning is the best time for it. Every bottleneck removed and every millisecond saved lets you serve more clients. It also saves you money.

TIP: Pay special attention to loops in the code. In our case they held some of the biggest opportunities for optimization. By moving everything unnecessary out of the loops, we were able to drastically lower response times.
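
A contrived Rails-flavored illustration of what we mean (the Message and Team models, the Settings object, and apply_quota are made up): anything that doesn't depend on the loop variable should be computed once outside the loop, and associations should be preloaded instead of queried on every pass.

    # Before: one extra query per iteration, plus an invariant re-read on every pass
    Message.recent.each do |message|
      team = Team.find(message.team_id)                    # N extra queries
      message.apply_quota(team, Settings.max_file_size)    # recomputed each time
    end

    # After: preload the association and hoist the invariant out of the loop
    max_size = Settings.max_file_size
    Message.recent.includes(:team).each do |message|
      message.apply_quota(message.team, max_size)          # team is already in memory
    end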

To send emails we use Amazon SES. In Rails it integrates nicely with Action Mailer and offers a free quota of 10,000 messages a day.
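
In practice that means pointing Action Mailer at the SES SMTP endpoint. A sketch (the region, endpoint, and environment variable names are assumptions):

    # config/environments/production.rb
    config.action_mailer.delivery_method = :smtp
    config.action_mailer.smtp_settings = {
      address:              'email-smtp.us-east-1.amazonaws.com',
      port:                 587,
      user_name:            ENV['SES_SMTP_USERNAME'],
      password:             ENV['SES_SMTP_PASSWORD'],
      authentication:       :login,
      enable_starttls_auto: true
    }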

Files

Files are stored in S3. We don't suggest storing static assets (scripts, styles, images) there: the latency will be higher than if you serve them from the instance itself.

Latency when accessing a file:

  • S3: ~280 ms — 1.80 sec
  • CloudFront: ~60 ms — 200 ms

TIP: You can enable caching of content served from S3 by setting this header:

{'Cache-Control' => 'max-age=315576000'}
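
We won't claim this is exactly how Staply wires it up, but with Paperclip (a common Rails upload library at the time) the header is passed like this; the model, bucket, and credential handling are placeholders:

    class Attachment < ActiveRecord::Base
      has_attached_file :file,
        storage:        :s3,
        bucket:         'my-app-files',
        s3_credentials: { access_key_id:     ENV['AWS_ACCESS_KEY_ID'],
                          secret_access_key: ENV['AWS_SECRET_ACCESS_KEY'] },
        s3_headers:     { 'Cache-Control' => 'max-age=315576000' }

      do_not_validate_attachment_file_type :file   # skip content-type validation for this sketch
    end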

To lower latency, Amazon recommends CloudFront, which distributes content from S3 across regions. Traffic from CloudFront is also cheaper than traffic from S3.

Database

For the database you can use the same instance as the app server, but that will make your architecture less flexible.

For the database we use an RDS db.m1.small instance with magnetic storage. It takes care of backups and configuration for us.

Since people have been using Staply since the early beta, there is data in the database that we have to keep safe.
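
From the Rails side, RDS is just a MySQL host in database.yml. A sketch (the endpoint, database name, and credentials are placeholders):

    # config/database.yml
    production:
      adapter:  mysql2
      host:     myapp-db.abc123xyz.us-east-1.rds.amazonaws.com
      database: myapp_production
      username: <%= ENV['RDS_USERNAME'] %>
      password: <%= ENV['RDS_PASSWORD'] %>
      pool:     10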

Regions

You need to account for the geography of your potential clients and the markets where you plan to operate. You can, of course, cover the whole world with instances in one region, but the network latency will be significant.

All elements of the architecture should be located in the same region.
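
If the app talks to S3 or SES through the SDK, pin the SDK to the same region as the instances. A sketch in the aws-sdk v1 style that was common at the time (the region is just an example, and whether Staply configures the Ruby SDK this way is an assumption):

    # config/initializers/aws.rb
    AWS.config(
      region:            'eu-west-1',
      access_key_id:     ENV['AWS_ACCESS_KEY_ID'],
      secret_access_key: ENV['AWS_SECRET_ACCESS_KEY']
    )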

For example, latency from St. Petersburg, Russia, to servers in US-EAST can be up to 6 times higher than when connecting to EU-WEST.

Building a simple, modular architecture from the beginning is key. It gives you room for steady growth and a path to handling high loads.


If you find this article useful please recommend and share it so others can read it too. Thanks!



Staply

Interactive notebooks. A streamlined way to collect and keep track of everything that matters.