Depending on what you’re trying to build, you may find that part of it involves inspecting a Docker image from a registry, but that you can’t afford to pull it.
It turns out there’s an API that lets you do exactly that, whether you’re talking to Docker Hub or to a private registry.
The Docker Registry HTTP API is the protocol that facilitates the distribution of images to the Docker engine. It interacts with instances of the Docker registry, a service that manages information about Docker images and enables their distribution.
Today I happened to write a script to solve a specific problem that a good number of people seem to face: renaming a given Elasticsearch index. Naturally, there are documented solutions, but I couldn’t quickly find a script that would get me where I wanted: all the data from an index named `a` becoming queryable in an index named `b`, with all the properties set.
Note: the following code targets Elasticsearch 2.4.6.
Here it comes then.
There are four steps to reach our goal:
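The excerpt cuts off before the steps themselves; assuming a cluster reachable on `localhost:9200`, a plausible sketch of the sequence looks like this (only the reindex payload is built and checked locally here, the `curl` calls are left as comments to run against a real cluster):

```shell
# Request body for the copy step: move every document from `a` into `b`.
# (_reindex is available from Elasticsearch 2.3 onwards.)
cat > reindex.json <<'EOF'
{
  "source": { "index": "a" },
  "dest":   { "index": "b" }
}
EOF
python3 -m json.tool reindex.json    # sanity-check the payload locally

# Against a live cluster the sequence would be, roughly:
# curl -XGET    localhost:9200/a/_mapping                 # 1. read the mappings of `a`
# curl -XPUT    localhost:9200/b -d @mappings.json        # 2. create `b` with them
# curl -XPOST   localhost:9200/_reindex -d @reindex.json  # 3. copy the documents over
# curl -XDELETE localhost:9200/a                          # 4. drop the old index
```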
Wondering how Docker uses the `--pids-limit` property taught me a little about how cgroups work. Come with me to find out more 👍
If you run a recent version of Docker (in my case I'm on `17.06.0-ce`), you can verify that `docker run` has a `--pids-limit` option that is meant to limit the number of processes that a process can create. From the kernel documentation:
The process number controller is used to allow a cgroup hierarchy to stop any new tasks from being fork()'d or clone()'d after a certain limit is reached. Since…
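Before reaching for the flag, it's worth confirming that the running kernel exposes the pids controller at all; a quick check (the `docker` invocation is left as a comment since it needs a running daemon):

```shell
# --pids-limit only works if the kernel ships the pids controller;
# /proc/cgroups lists every cgroup controller the kernel knows about.
grep -w pids /proc/cgroups

# With a daemon around, the configured limit shows up from inside the
# container (path assumes a cgroup-v1 host):
# docker run --rm --pids-limit=16 alpine cat /sys/fs/cgroup/pids/pids.max
```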
Hey, another quick tip regarding Docker, especially the Docker for AWS offering (edge edition), which ships with `cloudstor`. That's a volume plugin that enables containers to attach volumes stored in AWS EFS, letting you benefit from both shared and durable storage (think of a network filesystem backed by AWS and wrapped in a nice interface by Docker).
Here I go through how you can visualize how the plugin puts stuff into EFS.
tl;dr: the plugin runs inside a container with a mount point that mounts EFS, where for each named…
The default Debian sources don't include the latest OpenJDK, but you can get it from `jessie-backports`. Here's a quick tip for installing it in a container.
First of all, get `debian:jessie` into your Docker daemon:
    docker pull debian:jessie
    jessie: Pulling from library/debian
    10a267c67f42: Pull complete
    Digest: sha256:476959...c758245937
    Status: Downloaded newer image for debian:jessie
Now run a container that uses `debian:jessie` and add the `jessie-backports` sources to our sources list:
    docker run -it debian:jessie /bin/bash
    ...
    echo 'deb http://deb.debian.org/debian jessie-backports main' \
That is, append the `deb http://deb ...` line to the file. `deb` indicates that the archive contains…
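Put together as a Dockerfile, the whole tip is a sketch along these lines (the `openjdk-8-jdk` package name is my assumption of what backports shipped at the time; pick the jdk/jre variant you need):

```dockerfile
FROM debian:jessie

# Add jessie-backports, which carries the newer OpenJDK packages,
# then install from it explicitly with `-t jessie-backports`.
RUN echo 'deb http://deb.debian.org/debian jessie-backports main' >> /etc/apt/sources.list \
 && apt-get update \
 && apt-get install -y -t jessie-backports openjdk-8-jdk \
 && rm -rf /var/lib/apt/lists/*
```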
Golang gives the developer a very consistent interface that can be used across SQL implementations. Some details are not super straightforward, though. Here I go through the basics of connecting to a SQL database, creating a table and listing its contents.
If you’re not very into pre-made ORMs (Object-Relational Mapping), here you can find a configuration that lets you connect to a Postgres database and write your own accessor/writer methods that read from and insert into a table.
To get started we use Go’s `database/sql` package. It is the base that provides us with a generic interface around…
With Lua support in NGINX, each request is inspectable and modifiable. To get the user and password, we access the request headers and decode one in particular: *Authorization: Basic*. This article showcases how you can achieve that.
Resulting code at https://github.com/beldpro-ci/sample-basic-to-bearer-nginx.
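The article does the decoding with Lua inside NGINX; outside of it, the operation itself is just base64 plus a split on the first colon, which a shell snippet can illustrate (the credentials below are made up):

```shell
# A made-up header value; "dXNlcjpzZWNyZXQ=" is base64("user:secret").
header='Basic dXNlcjpzZWNyZXQ='

# Drop the "Basic " prefix, decode, then split on the first colon.
creds=$(printf '%s' "${header#Basic }" | base64 -d)
user=${creds%%:*}
pass=${creds#*:}
echo "user=${user} pass=${pass}"
```

Splitting on the *first* colon matters because passwords may themselves contain colons, while usernames may not.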
A long time ago GitHub introduced a way of performing `git` operations against an authenticated endpoint by providing a token to the remote (https://github.com/blog/1270-easier-builds-and-deployments-using-git-over-https-and-oauth). It’s pretty straightforward: add the token to the URL so that `git` performs basic authentication when connecting to GitHub’s servers.
The weird thing is that you’re using `Basic` authorization for something that…
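As a sketch (the token, user, and repo below are all placeholders), the URL form and the basic-auth credential it amounts to look like this:

```shell
# GH_TOKEN, "user" and "repo" are placeholders; substitute your own.
GH_TOKEN='xxxx'
url="https://${GH_TOKEN}:x-oauth-basic@github.com/user/repo.git"
echo "$url"
# git remote set-url origin "$url"

# Over the wire this is ordinary basic auth: the base64 of token:password,
# with the token playing the role of the username.
printf '%s' "${GH_TOKEN}:x-oauth-basic" | base64
```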
Hey, just in case you want to set up a PostgreSQL database with a default user and password using Docker, here’s a very simple way to do it.
From the documentation at https://hub.docker.com/_/postgres/, we can see that the image supports a set of custom *init* scripts that let us initialize databases and users at bootstrap time. All you’ve got to do is add your script there and it’ll be run.
So, that’s what I’ve done: created a `01-filladb.sh` file under `init` and then added that to the right path (`/docker-entrypoint-initdb.d`):
    └── 01-filladb.sh

    cat ./Dockerfile
    FROM…
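A sketch of that setup, with made-up user and database names (the `docker run` line is commented out since it needs a running daemon):

```shell
# Recreate the layout: an `init` dir holding the script that the image's
# entrypoint will execute on first boot.
mkdir -p init
cat > init/01-filladb.sh <<'EOF'
#!/bin/bash
set -e
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" <<-EOSQL
    CREATE USER myuser WITH PASSWORD 'secret';
    CREATE DATABASE mydb OWNER myuser;
EOSQL
EOF
chmod +x init/01-filladb.sh

# Mount it at the path the image scans on startup:
# docker run -d -v "$PWD/init":/docker-entrypoint-initdb.d postgres
```

Scripts in `/docker-entrypoint-initdb.d` run in lexical order, hence the `01-` prefix.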
Even though common practice states that one should create containers with a single running process, it's common to see people facing the need for a multi-process approach.
A pretty good solution that I've found is S6 (https://skarnet.org/software/s6/), "a collection of utilities revolving around process supervision and management, logging, and system initialization" (from the description), which I came across after reading a blog post from Tutum (https://blog.tutum.co/2014/12/02/docker-and-s6-my-new-favorite-process-supervisor/).
My context was simple: I needed to run several processes and ensure that:
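The list is cut off in this excerpt, but whatever the exact guarantees, the shape of an s6 service is a directory holding an executable `run` script; a minimal sketch (the service name and binary are placeholders):

```shell
# The smallest unit s6 supervises is a service directory containing an
# executable `run` script; "app" and its binary are made up here.
mkdir -p services/app
cat > services/app/run <<'EOF'
#!/bin/sh
# s6 execs this script and restarts the process whenever it dies.
exec my-app --foreground
EOF
chmod +x services/app/run

# s6-svscan then watches every service directory under ./services:
# s6-svscan ./services
```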