“Make sure it doesn’t look ugly or something,” he advises at one point. Then, later, “The aesthetics of this one are not so great. It looks like a scared lizard.” And, in a characteristically wry moment, “When you land on Mars, you want the list of what you have to worry about to be small enough that you’re not dead.” Overall, there’s a theme to Musk’s feedback: First, things have to be useful, logical and scientifically possible. Then he looks to improve efficiency on every level: What are people accepting as an industry standard when there’s room for significant improvement? From there, Musk pushes for the end product to be aesthetically beautiful, simple, cool, sleek (“He hates seams,” says one staffer) and, as Musk puts it at one point in the meeting, “awesome.” Throughout this process, there’s an additional element that very few companies indulge in: personalization. This usually involves Musk adding Easter eggs and personal references to the products, such as making the Tesla sound-system volume go to 11 (in homage to Spinal Tap) or sending a “secret payload” into space in his first Dragon launch that turned out to be a wheel of cheese (in homage to Monty Python).
Ultimately, containerization allows companies of all sizes to write better software faster. But as with any platform shift, there is a learning curve, and broad adoption is a function of ecosystem maturity. We’re just now beginning to see the emergence of best practices and standards via organizations like the Open Container Initiative and the Cloud Native Computing Foundation. The next step is for a hardened management toolchain to emerge, one that will allow developers and companies to begin building out powerful use cases. And it’s with those applications that we will start to unlock the power of container technology for the masses.
Over the last several years, we’ve seen the emergence of a new application architecture — dubbed “cloud native” — that is highly distributed, elastic and composable, with the container as the modular compute abstraction. With that, a new breed of tools has emerged to help deploy, manage and scale these applications. Cluster management, service discovery, scheduling, etc. — terms that previously were unfamiliar or, at best, reserved for the realm of high-performance computing — are now becoming part of every IT organization’s lexicon. As the pace of innovation continues at breakneck speed, a taxonomy of the elements of this new stack is helpful.