Making packages available for other distributions
I wanted to start out this little article by talking about my love of Fedora. I have been working on Fedora since the beginning, back at Fedora 1. I introduced SELinux in Fedora 3, and thankfully Fedora has helped me and many others make SELinux a hell of a lot better over the years. For many years I ran Rawhide, the daily development release of Fedora. You really get to know an operating system when you run Rawhide: lots of crashes and fixing up the boot, but even Rawhide has improved greatly over the years. I now run Fedora Atomic Workstation to truly embrace containers, using everything from Flatpaks to running my work environment in containers with atomic run. The great thing about Fedora is that it does not compromise on its Open Source pedigree: nothing gets into the Fedora distribution that is not an Open Source project. While other Linux distributions have compromised this principle to get a slightly better user experience, Fedora has over the years forced upstreams to open up and vendors to provide open device drivers. In my house my family runs only Fedora, which brings me to one of my favorite lines at my talks:
“My house has no Windows, it is a little dark, but a lot more secure!”
Fedora is the distribution where new technologies go to ripen before we productize them for Red Hat Enterprise Linux (RHEL) systems. RHEL pays my salary, so I really, really love it. The world’s infrastructure runs on RHEL, and I truly believe RHEL and some of the work my team does has made computing more secure. Years ago I heard that every rolling piece of US military equipment runs RHEL, and that all of Wall Street runs on RHEL.
Then there is CentOS. CentOS is a way for us at Red Hat to get Open Source layered products, such as OpenStack RDO and OpenShift Origin, out to the community on an OS that moves more slowly than Fedora. Customers can play with these technologies early and give us feedback before we ship them in supported releases on RHEL.
Lately my groups have been working on some new community projects for running and building containers in a Kubernetes world.
Skopeo is one of the first new projects the container team started. A couple of years ago we added LABEL support to the upstream Docker project as well as to the OCI Image Specification. The idea with labels was to allow people to annotate container images with text describing the application, versioning information, etc. In the atomic tooling we wanted to be able to examine these labels at a container registry, to see whether a container image with a new version was available. If I was running version 1.0 of the foobar container image, I wanted to see if version 1.1 was available. The only way to do this with upstream Docker was to actually pull down the image and examine its JSON file. We opened an issue with the upstream project; it was rejected, and we were told to build a separate command to do this. The `skopeo` command was born.
Skopeo means `remote viewing` in Greek. Over time skopeo has grown into a command-line utility for various operations on container images and image repositories. You can examine remote registries, and you can also pull and push container images between different types of storage: container registries such as docker.io, local file systems, ostree, and the container storage used by CRI-O (described below). Lots of people now use skopeo for moving container images between container registries, as well as for preloading hosts that run containers.
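To make this concrete, here is a short sketch of the two most common operations; the image name and destination directory are just examples, and both commands need network access to the registry:

% skopeo inspect docker://docker.io/library/fedora:latest
% skopeo copy docker://docker.io/library/fedora:latest dir:/tmp/fedora

The first prints the image’s metadata, including its labels, without pulling any layers; the second copies the image out of the registry into a local directory.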
Skopeo has been split into two projects: github.com/containers/image, a Go library that tools like CRI-O can use to pull and push images, and skopeo itself, the CLI built on top of containers/image.
Last fall, Kubernetes came out with the Container Runtime Interface (CRI), an API standard that defines how Kubernetes interacts with container runtimes. Red Hat and OpenShift have decided to use Kubernetes as the basis for container orchestration. My team was supporting upstream Docker under Kubernetes: Kubernetes would call into the Docker daemon to manage its containers, but we were having a hard time with upstream Docker’s stability. Upstream Docker was changing all the time and was increasingly unstable under Kubernetes; each release of Docker seemed to be a major redesign and would break Kubernetes.
I directed Mrunal Patel and Antonio Murdaca, along with other great engineers on my team, to start working on a project eventually called CRI-O. CRI-O’s goal was to simplify running containers using the same underlying technology that upstream Docker uses, the OCI Runtime Specification and runc. This project would be just for running Kubernetes runtimes. CRI-O now passes all Kubernetes end-to-end tests and is being prepared for a 1.0 release. We have set up CRI-O to run the Kubernetes tests on every pull request, which means we will never accept a modification to CRI-O that breaks a fundamental function or API; it should therefore be a much more stable runtime for Kubernetes. We see CRI-O as a replacement for upstream Docker/Moby for Kubernetes workloads.
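Because CRI-O speaks the CRI directly, switching a node over is mostly a matter of pointing the kubelet at the CRI-O socket instead of the Docker daemon. A rough sketch follows; note that the exact socket path varies across CRI-O versions and packaging, so check your installation:

% kubelet --container-runtime=remote \
          --container-runtime-endpoint=unix:///var/run/crio/crio.sock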
Another effort, started in February at DevConf.CZ and led by Nalin Dahyabhai, was Buildah. Nalin was working on the github.com/containers/storage project, the container storage library used by CRI-O, and I suggested that it would be cool if we could use containers/storage to build container images. I have always hated the idea of requiring a big fat daemon to be running for anyone to build a container image, which is really nothing more than a bunch of tarballs and some JSON files describing their contents. I called the idea container-coreutils: just add a couple of commands, like a “from IMAGE” that pulls an image from a container registry and stores it in local storage, then creates a new layer on the image and mounts it onto a mount point so that other CLI tools on the host can add content to the mount point, and finally a “commit NEWIMAGE” that writes out a new image with some of the data used to populate the JSON files.
Nalin surprised me with a lightning talk at the end of DevConf.CZ demonstrating this technology. He called it Buildah, making fun of my Boston accent. Buildah has since added lots of new features, including support for the traditional Dockerfile format, and `buildah run` can run commands inside the container’s build root. It is really powerful. Buildah is now available as a 0.2 release.
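The from/mount/commit workflow described above looks roughly like this in practice; the base image, file, and image names are just examples:

% ctr=$(buildah from fedora)
% mnt=$(buildah mount "$ctr")
% cp ./myapp "$mnt"/usr/bin/myapp
% buildah umount "$ctr"
% buildah commit "$ctr" myapp-image

Everything between the mount and the commit uses ordinary host tools such as cp, dnf, or make, with no daemon involved.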
We have made these packages available in Fedora and are working on getting them into RHEL and CentOS. Other contributors are packaging them up for SUSE in RPM format. But we also need to get these projects into the hands of potential users on Debian-based distributions; we want to see them running on Debian and Ubuntu. Lokesh Mandvekar on my team has been learning how to package the projects in Debian/Ubuntu format so users can simply apt-get them. We will be contributing this packaging back to the upstream projects and would love feedback on the packages. Making our packages available for all open source distributions is a driving goal for us at Red Hat and is good for everyone in the Open Source community.
I even hear that there are some SELinux packages available for these distributions…
How to get packages
Fedora
% dnf install skopeo cri-o buildah
Ubuntu 16.04.2 LTS and 17.04
% add-apt-repository ppa:alexlarsson/flatpak
% add-apt-repository ppa:projectatomic/ppa
% apt-get update
% apt-get install cri-o buildah skopeo
Debian
Upstreaming the Ubuntu packages into Debian is being considered.
CentOS (with Virt SIG repo)
% yum install https://lsm5.fedorapeople.org/centos-release-container-1-3.el7.noarch.rpm
% yum install cri-o buildah skopeo
On openSUSE Tumbleweed
% zypper in skopeo
On openSUSE Leap
% zypper ar -f obs://Virtualization:containers obs-vc
% zypper in skopeo
CRI-O is being worked on, and Buildah is being considered.
RHEL
Coming soon; it will probably be available in RHEL 7.4.1 in the fall.
All others welcome.
We would love to see these tools available for as many systems as possible.