According to their website:
Container Summit returns to NYC in February of 2016 to continue the conversation surrounding containers in the enterprise!
There are more and more events talking about containers, but what sets Container Summit apart is a focus on sharing actual best practices from enterprise users of containers in production. Some of the most innovative enterprises like Walmart, Twitter, and Netflix have been running containers in production for years and have experience with what solution stacks work best, what pitfalls to avoid, and how to get optimal performance without breaking the bank.
We’ll also look to key players in the ecosystem to share insights on how a containerized future will change application development and operational processes.
The event is organized by a number of impressive entrepreneurs, engineers, data scientists, and evangelists who are excited about the world of containers. Granted, the entire concept of using containers in development is fairly new, and the summit's attendance reflected this. Nevertheless, it was still an impressive turnout, probably aided by the numerous prominent speakers who were invited to the summit.
The sessions convened in two separate tracks. I hadn't attended Day Zero, also known as Docker Training Day, but the first track was essentially a quick introduction to containers. The sessions were highly interactive, as expected, with a large projection screen displaying the code the presenter was deploying so that audience members could follow along with the process.
When I first arrived at the New World Stages, the excitement was very clear. I had only been to the venue once before for an Avenue Q performance, but the same enthusiasm from that evening carried over to the hall.
There were vendors from a variety of companies. Some of the more notable ones included CoreOS, Rancher, sysdig, Kismatic, Datera, InfoSiftr, and Datadog.
Apart from advertising their services and job openings, the vendors also demoed some interesting software.
Rancher’s software allows developers to run Docker applications in production. It monitors the performance of the different containers, while allowing users to control the networking, storage, and other factors of their applications.
Another was sysdig’s open-source software, which allows users to capture the system-level state and activity of their applications. The interactive UI is incredibly thorough and clean, allowing access to an immense amount of data. The amount of visibility it achieves is quite impressive.
Some of the bigger names at the summit included Joyent’s Bryan Cantrill, Nearform’s Peter Elger, Datadog’s Ilan Rabinovitch, Canonical’s Dustin Kirkland, and InfoSiftr’s Tianon Gravi.
A majority of them spoke at the final panel. As for the initial panels, the biggest discussion centered on the health of containers in the industry and where they would be heading in the next few years. There was plenty of debate about existing features and missing ones, as well as just how important the cloud was for general applications and enterprise software.
Some of the bigger complaints involved the security of using containers, especially the degree of isolation they actually provide. I'd heard the argument before when dealing with Docker and security implementations, but it was interesting to hear it from an industry perspective, from some of the engineers building software to counter these issues.
I've only worked with Docker, so its security risks were the only ones familiar to me. Still, it was clear what problems Docker faced: the daemon runs with root privileges, and because containers did not yet run in their own user namespaces, there was no user ID isolation, meaning root inside a container was effectively root on the underlying host.
There were also a lot of comparisons between the current state of containers and that of VLANs and virtual machines when they were first introduced. Just like the current security flaws of containers (of which there are many more than I have listed), the inherent flaws of VLANs cast a lot of doubt on their longevity. However, the sheer flexibility they provide developers, along with the customizability and convenience of their services, has allowed them to mature to a point where we don't even question their use.
Containers have plenty of merits of their own, after all. They make security patches much easier to deploy across all applications, simply by controlling their deployment flow, and they minimize the effort needed to validate compatibility between subsequent patches and the applications. They also make running multiple instances of applications far easier, and with systems-monitoring software, issues in the code or disturbances in the network are easily detected and dealt with.
Nevertheless, there was also a recognition of the effect containers will have on the broader ops and infosec communities. Considering how much power new technologies like Node.js have given the independent developer, the development world has become more and more fragmented. Applications that used to require entire teams are now handled at a micro level by individual programmers who can build end-to-end software, something that couldn't have existed only a few years prior.
After the lightning talks and a quick break, the panels resumed, along with an advanced track led by presenters who each handled a unique aspect of containers. One that I enjoyed was a demo on building containers in pure Bash and C, led by Docker's own Jessica Frazelle.
She started off the talk by showing how to create and write a skeleton of a C binary that would use the clone syscall to run binaries already installed in the container.
The code, I'm sure, has been posted by Docker, and Michael Crosby also has a great writeup on his blog.
In general, the demo showed how to create containers from the system level, using namespaces to limit the processes as well as the user.
The demo relied on Linux's mount, network, user, PID, UTS, and IPC namespaces. Most of the work was in passing the clone flags that create those namespaces, isolating the container's processes and users.
This was followed by a particularly deep discussion among the aforementioned high-profile engineers on the future of containers. This being the Container Summit, it was like any other closing panel: a great deal of enthusiasm, tempered by some skepticism about repeating the trajectory of past virtualization environments.
Nevertheless, the Summit was a very enjoyable and informative event. The speakers were all clearly qualified to speak on their topics, and the ideas presented by panelists and speakers alike were thought-provoking. I'm genuinely curious how far containers will go in the next few years, or whether they've already, in the words of Cantrill, peaked years ago, and we're simply riding along with the rest of the journey.
Hopefully, I’ll be back next year to follow up with the container community. Even if I’m not, I’m sure Joyent, Docker, and the rest of the world will have a lot to release in the next few months.