Choosing timeout values for external service calls isn’t easy. There’s usually black magic involved and sometimes we’re just guessing. This post details the approach we used at Bluestem to choose better values and make our systems more resilient to performance issues and outages in our backend services.

Where we were

Bluestem has transitioned from a monolithic web stack to microservices over the past two years. We use Hystrix to wrap our calls to external services, but we didn’t put much thought into tuning its parameters. We usually set timeouts in one of two ways:

  1. Copy/paste the nearest Hystrix settings from another service
  2. Set the timeouts to be super high so circuits never…
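
For reference, the timeout we kept mis-setting is just a per-command property. A minimal sketch of what setting one deliberately looks like, assuming Hystrix's standard Archaius system-property override and a made-up command key called InventoryServiceCommand:

$ java -Dhystrix.command.InventoryServiceCommand.execution.isolation.thread.timeoutInMilliseconds=2000 -jar web-app.jar

Both habits above amount to picking that number without ever looking at how the downstream service actually behaves.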


"The Pit of Success: in stark contrast to a summit, a peak, or a journey across a desert to find victory through many trials and surprises, we want our customers to simply fall into winning practices by using our platform and frameworks."
- Brad Abrams, MSDN blog

The Pit of Success is an idea that framework designers think about a lot, and the rest of us should too. Even if you’re not writing a framework for external use, your company’s codebase is a framework. Developers who come after you will mimic the patterns and practices they find in it when they add new features, and you can’t assume that everyone else knows what you know about it. …


(RIP Prince) If you’ve been following along on GitHub, you may know that DockerUI is no longer DockerUI; it’s currently ‘not-dockers-ui’ until I can come up with a better name, and I’ve taken over as the owner of the repo from Michael Crosby.

Why? Three weeks ago two security vulnerabilities were disclosed for the project (both now fixed), and it became clear that some people thought this was an official Docker project. To clear up the confusion and comply with Docker’s official brand guidelines, I will rename the project. You can join the discussion here.

It’s not going away, and I’m not going away. Once we pick a name I’ll get automated builds set up on Docker Hub again.


I started using Docker about two years ago, and in that time I’ve tried a lot of different things. These are some of the ones that weren’t very good ideas:

Different runtime images for each environment and app

Our main web app has a different configuration for each brand (3 now, soon to be 16), and we have environment configs for production, staging, test/sandbox, and simulator. This turned into a combinatorial explosion really fast. Now we create one WAR per brand and include every environment config (minus sensitive info like keys and passwords) in the artifact. …
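
One way the runtime selection can look, with the environment chosen when the container starts rather than baked into the image; the image name, tag, and APP_ENV variable here are made up for illustration, and secrets are still injected separately:

$ docker run -d -e APP_ENV=staging registry.example.com/brand-a-web:1.2.3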


We all want to work with the latest and greatest JavaScript tech. Tools like Babel, React, and ESLint make our lives as developers a lot easier, but, like all established companies, Bluestem has a lot of existing code that makes adopting these tools harder. In this post I’ll dive into Bluestem’s journey with JavaScript development: where we came from, where we’re at, where we’re headed, and some challenges that lie in our path.

Note: I use the collective “we” in this article, even though this history begins several years before I joined Bluestem.

The Beginning

Our legacy website platform went live in 2010, and some parts date back to 2002. In typical web 1.0 fashion, this original platform had very little JavaScript; most interactions were powered by form submissions and page refreshes. One notable exception is the admin interface, a full-featured heavy-client app written in 2002 (very impressive, but it relies on some legacy proprietary browser behavior and no longer works in Chrome). …


View a sample project with the ideas from this post here.

We want to use shiny new JS libraries that tend to only be available on NPM, and we want unit tests to be blazing fast.

Why it’s not easy

  • NPM is convenient, but lots of modern JavaScript projects assume Browserify or webpack is in your build pipeline.
  • Grails and asset-pipeline have a powerful plugin architecture (e.g. plugins can provide assets but your project can override them), but that means you need to use asset-pipeline to resolve files. This makes it difficult to use with tools like webpack and Browserify. …


DockerUI is a web GUI for the Docker remote API. It’s like a Swiss Army knife: very flexible, but it can be unwieldy to use. Luckily it’s an Angular app that can be modified pretty easily. I want to share what we’ve done at Bluestem to streamline our development workflow. We use Docker and DockerUI for:

  • Test environments driven by Jenkins builds. Every feature branch builds a docker image that can be spun up for testing.
  • One-off apps for prototyping (Redis, Riak, Hystrix, anything). …
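
For example, spinning up a one-off Redis to prototype against is a single command (the container name and host port below are arbitrary):

$ docker run -d --name redis-scratch -p 6379:6379 redis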


Updated Sept 22, 2015: fig is dead, long live docker-compose.

Selenium is a powerful tool for automated front-end testing, but it’s not known for its blazing-fast speed. Fortunately, Selenium provides built-in cluster support to parallelize your tests across multiple machines and speed up the test cycle. Test runners communicate with a single hub that transparently distributes tests to worker nodes for execution.

This is an example of creating a Selenium Grid cluster consisting of containers for the app, test-runner, Selenium hub, and many Selenium nodes. You may see limited performance gains because all nodes in this virtual cluster will run on the same physical Docker host. To spread nodes across multiple physical hosts you can use a Docker clustering system such as swarm. docker-compose is used to scale the number of nodes in the cluster. …
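
A sketch of what the scaling step looks like with docker-compose; the service names (hub, node, app, test-runner) are illustrative and depend on your compose file:

# start the hub, one node, and the app under test
$ docker-compose up -d hub node app

# add more nodes to parallelize the suite
$ docker-compose scale node=5

# run the tests against the grid
$ docker-compose run test-runner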


If your containers work just fine when started through the CLI but exit immediately when started through the remote API, this may be for you. A common Dockerfile pattern for keeping service containers running is to start the services and then start something that waits for input on stdin, like bash:

CMD /bin/start_things.sh && /bin/bash

This works for the CLI, but if you don’t explicitly open stdin with the remote API then bash will exit immediately and the container will stop. To open stdin, include the OpenStdin parameter in your container creation call:

POST /containers/create
{
  "OpenStdin": true,
  ...
}

Full docs
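
For example, a create call that keeps stdin open could look like this with curl; the image name is made up, and this assumes the daemon is listening on tcp://localhost:2375:

$ curl -X POST -H "Content-Type: application/json" \
    -d '{"Image": "my-service-image", "OpenStdin": true}' \
    http://localhost:2375/containers/create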


This is an example of using the remote API to create and run a container with port bindings. This is the equivalent action through the docker CLI:

$ docker run -p host_ip:host_port:container_port image_name

The CLI’s run command corresponds to multiple remote API commands. When using the remote API you need to specify ExposedPorts when creating the container and PortBindings when starting it. (see here for more details)

Specifying ExposedPorts when creating containers:

POST /containers/create
{
  "Image": image_id,
  "ExposedPorts": {
    "container_port/tcp": {}
  }
}

Specifying PortBindings when starting containers:

POST /containers/(id)/start
{
  "PortBindings": {
    "container_port/tcp": [
      {
        "HostIp": "host_ip", // Strings, not numbers here
        "HostPort": "host_port"
      }
    ]
  }
}
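
Putting the two calls together with curl, again assuming a daemon on tcp://localhost:2375 and purely illustrative values (container port 8080 bound to host port 80 on all interfaces):

# create the container and note the "Id" in the JSON response
$ curl -X POST -H "Content-Type: application/json" \
    -d '{"Image": "my-web-image", "ExposedPorts": {"8080/tcp": {}}}' \
    http://localhost:2375/containers/create

# start it, supplying the port binding
$ curl -X POST -H "Content-Type: application/json" \
    -d '{"PortBindings": {"8080/tcp": [{"HostIp": "0.0.0.0", "HostPort": "80"}]}}' \
    http://localhost:2375/containers/<container_id>/start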

About

Kevan Ahlquist

Senior Software Development Engineer @ Amazon. Trumpet player, drum corps enthusiast.
