Choosing timeout values for external service calls isn’t easy. There’s usually black magic involved and sometimes we’re just guessing. This post details the approach we used at Bluestem to choose better values and make our systems more resilient to performance issues and outages in our backend services.
Bluestem has transitioned from a monolithic web stack to microservices over the past two years. We use Hystrix to wrap our calls to external services, but we didn’t put much thought into tuning its parameters. We usually set timeouts in one of two ways:
The Pit of Success: in stark contrast to a summit, a peak, or a journey across a desert to find victory through many trials and surprises, we want our customers to simply fall into winning practices by using our platform and frameworks.
- Brad Abrams, MSDN blog
The Pit of Success is an idea that framework designers think about a lot, and the rest of us should too. Even if you’re not writing a framework for external use, your company’s codebase is a framework. Developers that come after you will mimic the patterns and practices in it when they add new features, and you can’t assume that everyone else knows what you do about it. …
(RIP Prince) If you’ve been following along on GitHub you may know that DockerUI is no longer DockerUI, it’s currently ‘not-dockers-ui’ until I can come up with a better name, and I’ve taken over as the owner of the repo from Michael Crosby.
Why? Three weeks ago two security vulnerabilities were disclosed for the project (that are now fixed) and it became clear that some people thought this was an official Docker project. To clear up the confusion and comply with Docker’s official brand guidelines I will rename the project. You can join the discussion here.
It’s not going away, I’m not going away. Once we pick a name I’ll get automated builds set up on the hub again.
I started using Docker about two years ago, and in that time I've tried a lot of different things. These are some of the ones that weren't very good ideas:
Our main web app has a different configuration for each brand (three now, soon to be 16), and we have environment configs for production, staging, test/sandbox, and simulator. This turned into a combinatorial explosion really fast. Now we create one WAR per brand and include every environment config (minus sensitive info like keys and passwords) in the artifact. …
Note: I use the company “we” in this article. This history begins several years before I joined Bluestem.
View a sample project with the ideas from this post here.
We want to use shiny new JS libraries that tend to only be available on NPM, and we want unit tests to be blazing fast.
DockerUI is a web GUI for the Docker remote API. It's like a Swiss Army knife: very flexible but can be unwieldy to use. Luckily it's an Angular app that can be modified pretty easily. I want to share what we've done at Bluestem to streamline our development workflow. We use Docker and DockerUI for:
Updated Sept 22, 2015: fig is dead, long live docker-compose.
Selenium is a powerful tool for automated front-end testing, but it's not known for blazing speed. Fortunately, Selenium provides built-in cluster support to parallelize your tests across multiple machines and speed up the test cycle. Test runners communicate with a single hub that transparently distributes tests to worker nodes for execution.
This is an example of creating a Selenium Grid cluster consisting of containers for the app, test-runner, Selenium hub, and many Selenium nodes. You may see limited performance gains because all nodes in this virtual cluster will run on the same physical Docker host. To spread nodes across multiple physical hosts you can use a Docker clustering system such as swarm. docker-compose is used to scale the number of nodes in the cluster. …
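As a sketch of the workflow described above (the service name `node` is a hypothetical name for the Selenium worker service, not necessarily what this setup's compose file uses), bringing the cluster up and scaling the worker nodes might look like:

```shell
# Start the app, test-runner, hub, and node services defined in docker-compose.yml
docker-compose up -d

# Scale the Selenium worker service out to five containers;
# 'node' is an assumed service name
docker-compose scale node=5
```

Each additional node container registers itself with the hub, which then spreads queued tests across all available workers.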
If your containers work just fine when started through the CLI but exit immediately when started through the remote API, this may be for you. A common Dockerfile pattern for keeping service containers running is to start the services and then start something that waits for input on stdin, like bash:
CMD /bin/start_things.sh && /bin/bash
This works for the CLI, but if you don’t explicitly open stdin with the remote API then bash will exit immediately and the container will stop. To open stdin, include the OpenStdin parameter in your container creation call:
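As a sketch, the request body sent to `POST /containers/create` might look like the following (the image name is a placeholder); with `OpenStdin` set, bash has an open stdin to wait on and the container keeps running:

```json
{
  "Image": "image_name",
  "OpenStdin": true
}
```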
This is an example of using the remote API to create and run a container with port bindings. This is the equivalent action through the docker CLI:
$ docker run -p host_ip:host_port:container_port image_name
The CLI’s run command corresponds to multiple remote API commands. When using the remote API you need to specify ExposedPorts when creating the container and PortBindings when starting it. (see here for more details)
Specifying ExposedPorts when creating containers:
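For example, the relevant portion of the `POST /containers/create` body might look like this (the port is a placeholder): keys take the form `"port/protocol"`, and each value is an empty object.

```json
{
  "ExposedPorts": {
    "container_port/tcp": {}
  }
}
```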
Specifying PortBindings when starting containers (note that the HostIp and HostPort values are strings, not numbers):
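A sketch of the body sent with the start call, using the same `host_ip`, `host_port`, and `container_port` placeholders as the CLI example above:

```json
{
  "PortBindings": {
    "container_port/tcp": [
      {
        "HostIp": "host_ip",
        "HostPort": "host_port"
      }
    ]
  }
}
```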