This is one of those historic periods in IT where the disruptive impact of a new technology isn’t just obvious in hindsight, but profound while it’s happening. Just as enterprises replatformed on virtual machines 10 years ago, today we’re witnessing the replatforming of the modern software stack on the container format.
Composing applications as microservices, built to run cloud-native, is no longer just for the web-scale elite (Google, Facebook, etc.); it’s becoming the standard practice for building software that scales, and the container format is the key enabler. While containers have been evolving since the early 2000s and are hardly “new,” we’ve arrived at that perfect intersection where the requirements of modern software have met a game-changing format, and the rest will be history.
What’s dizzying for those of us who are watching the container evolution, trying to anticipate the winners and losers and place our bets, is the volume of new projects introduced each week (GitHub has more than 280 “Linux Container” repository results!). Everyone watching containers knows Docker as the dominant container format, and an increasing number have heard of Kubernetes. But what are the other key technologies that will gain the critical mass of adoption and commercial support and find their place in the new container stack?
Let’s take a look at 10 strong emerging contenders to watch as this plays out:
1. Containerd

Now that Docker has contributed the container run-time, containerd, to the Linux Foundation, every commercial entity has the ability to standardize on a single run-time without being tied to a single vendor. There is a tremendous difference between the evolution of virtual machines (controlled by a single vendor: VMware) and the evolution of containers, where every major systems vendor has had an equal opportunity to commercialize and support products based on open standards. The positive for the industry and for adoption is a constant influx of tooling and innovation. The challenge for production usage is an extremely fragmented market with immature standards and support options. Once the community gets containerd stable, Docker is basically on its own, and everyone else will have the full container standard without having to rely on a single vendor. That’s a profound shift.
2. FlexVolume

FlexVolume is not just another Kubernetes volume plugin; it enables third-party vendors to add support for their storage backends to Kubernetes. All a third-party vendor has to do is provide a storage driver that implements the FlexVolume callouts. Whether FlexVolume morphs into the Container Storage Interface or some other name over time, it gives storage vendors an open plugin API that promises to work with multiple container orchestration platforms. What matters here is that there are now storage interfaces that allow the enterprise storage industry to bring solutions to containerized workloads without being tied to a specific Kubernetes release. Collaboration with other orchestration communities, including Mesos and Docker Swarm, promises to standardize container storage interfaces more broadly.
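To make the callout model concrete, here is a minimal sketch of a pod that consumes a FlexVolume-backed volume. The driver name and options below are hypothetical placeholders for whatever a vendor’s driver actually expects:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: flexvolume-demo
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    flexVolume:
      driver: "example.com/exampledriver"  # hypothetical vendor driver
      fsType: ext4
      options:
        volumeID: "vol-0123"               # driver-specific, illustrative
```

The kubelet resolves the `driver` field to an executable the vendor installs on each node and invokes it for operations such as attach, mount, and unmount; the `options` map is passed through untouched, which is what keeps the interface vendor-neutral.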
3. Container Network Interface
The Container Network Interface (CNI) allows broad network services and provisioning to be integrated with Kubernetes and Mesos. CNI is a pluggable interface for networking: new networking capabilities can be supported without modifying the orchestration stack. CNI will get richer over time by allowing users to express network and security requirements for containers, and it is an important emerging standard to watch for anyone who wants to control how containers are interconnected (a problem domain that applies to basically anyone actually running containers in production).
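In practice, a CNI network is described by a small JSON configuration that names a plugin binary and its parameters. A minimal sketch using the reference `bridge` and `host-local` IPAM plugins might look like this (the network name and subnet are illustrative):

```json
{
  "cniVersion": "0.3.1",
  "name": "demo-net",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16"
  }
}
```

The orchestrator hands this configuration to the plugin named in `type` along with the container’s network namespace; swapping in a different vendor’s plugin changes the `type` field, not the orchestration stack.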
4. Container Security

Run-time security is the main FUD against containers today (by the way, this is the same FUD that people used against hypervisors 10 years ago). Keep an eye on the Linux kernel community, because a lot of people are working on isolation and security for containers.
Every month that goes by, more people will become comfortable that containers provide sufficient security and isolation for their applications, according to the industry they are in and the types of applications they work with. It will take a while, but as comfort levels grow and container isolation improves, more people will find they can go container-native. SELinux (access control), seccomp (system call restriction), and AppArmor (security profiles) are three particular efforts to watch.
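As one concrete example of these mechanisms, a seccomp profile is just a JSON document that whitelists system calls. The fragment below is a deliberately tiny sketch in the format Docker accepts; a real profile would allow far more calls:

```json
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "syscalls": [
    {
      "names": ["read", "write", "open", "close", "exit", "exit_group"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
```

Any system call not on the list fails with an error (`SCMP_ACT_ERRNO`), dramatically shrinking the kernel attack surface a container can reach.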
5. Helm

Helm is an open source project in the Kubernetes ecosystem that makes it easier to build complex applications with Kubernetes and to control all of the configurations and files that describe an application. Helm is a package manager: a way to represent an application that consists of many parts. Modern applications are more than just containers, and Helm helps manage the range of installation, upgrade, and rollback issues in a way that more closely rivals the maturity of configuration management tools in the virtual machine world.
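A Helm package (a “chart”) is little more than metadata plus templated Kubernetes manifests with overridable defaults. A minimal sketch, with entirely hypothetical names and values:

```yaml
# Chart.yaml -- identifies the package
name: guestbook
version: 0.1.0
description: A hypothetical multi-part application packaged as a chart

---
# values.yaml -- defaults an operator can override at install time
replicaCount: 3
image:
  repository: nginx
  tag: stable
```

Templates under the chart’s `templates/` directory then reference these values (for example, `{{ .Values.replicaCount }}`), and Helm renders and applies them as one versioned release, which is what makes install, upgrade, and rollback tractable.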
6. Ksonnet

Ksonnet is a “configurable, typed templating system” that simplifies the deployment and management of Kubernetes clusters. It’s no secret that Kubernetes is complicated to deploy and manage. Ksonnet abstracts away some of that complexity, templatizes it, and makes it easier to get Kubernetes up and running. One of the core challenges for container adoption is the complexity curve of operating containerized applications across clustered environments, so Ksonnet is an interesting project to watch.
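Ksonnet builds on the Jsonnet data-templating language. The sketch below is not Ksonnet’s actual library API, just an illustration of the underlying idea: generate a Kubernetes manifest from a small set of parameters declared in one place.

```jsonnet
// Illustrative Jsonnet: parameters up front, manifest generated from them
local params = { name: "guestbook", replicas: 3 };

{
  apiVersion: "apps/v1",
  kind: "Deployment",
  metadata: { name: params.name },
  spec: { replicas: params.replicas },
}
```

Changing `params` regenerates a consistent manifest; Ksonnet layers typed, reusable component libraries on top of this mechanism.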
7. Serverless

Serverless computing is getting a lot of hype recently, with AWS Lambda perhaps the best-known serverless technology. Serverless is really just an event API concept (another abstraction) that allows you to define a behavior (code) in response to a certain event (API call). Containers are probably the best-suited technology for implementing these serverless computing events, due to their lightweight nature and rapid spin-up/spin-down capability. When an event occurs, you can spin up a container, execute, then shut it down. Some people say that you don’t need containers because you have serverless, but these are complementary technologies. It’s about usability and footprint for small, basic operations: when this event happens, run that code. It’s like a tweet for developers–instead of building and deploying a long-lived chunk of code (i.e., a blog post), you can just write a concise event handler (i.e., a tweet).
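An event handler in this model is strikingly small. The sketch below follows the general shape of an AWS Lambda Python handler; the event fields are hypothetical:

```python
# Minimal event-handler sketch in the AWS Lambda style.
# The platform spins up a container, invokes handler(), and can tear
# the container back down once the event has been served.
def handler(event, context):
    # "event" carries the trigger's payload; the fields are illustrative
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"hello, {name}"}
```

Everything long-lived (servers, scaling, scheduling) is the platform’s problem; the developer ships only the behavior.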
8. OpenTracing

Old-school applications were client-server: clients made requests to servers, which required understanding a single set of APIs. Brute-force monitoring and logging fall short when transactions traverse many microservice layers. A request can go through dozens of services before it returns, so how do you figure out what went wrong? OpenTracing tracks discrete requests as they traverse a distributed system, to help figure out where things go wrong and how to optimize application behavior. This is an important technology to watch in the context of managing and troubleshooting complex, distributed applications (of which containers and microservices are key enablers).
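The core idea can be shown without the library itself. The toy sketch below (plain Python, not the OpenTracing API) records a “span” for each hop of a request, all tagged with one shared trace ID so the whole request can be reassembled afterwards; the service names are made up:

```python
import uuid

# Conceptual sketch only -- not the actual OpenTracing API. Every hop
# records a span carrying a shared trace ID, so a request that fans out
# across services can be stitched back together for troubleshooting.

def record_span(trace_id, service_name, spans):
    spans.append((trace_id, service_name))

def inventory(trace_id, spans):
    record_span(trace_id, "inventory", spans)

def payment(trace_id, spans):
    record_span(trace_id, "payment", spans)

def checkout(trace_id, spans):
    # the entry point fans out to two downstream "services"
    record_span(trace_id, "checkout", spans)
    inventory(trace_id, spans)
    payment(trace_id, spans)

trace_id = str(uuid.uuid4())
spans = []
checkout(trace_id, spans)
# all three spans now carry the same trace ID
```

Real tracers add timing, parent/child span relationships, and cross-process context propagation, but the shared-ID mechanism above is the heart of it.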
9. SR-IOV

Single Root I/O Virtualization (SR-IOV) is an existing technology that’s gaining momentum in use cases where you want bare-metal performance for high-speed, low-latency interactions between application code and physical resources like network interfaces and data storage volumes. It allows physical I/O resources to be virtualized into virtual elements without a hypervisor. SR-IOV is a very interesting technology to watch because it allows containers to share resources without a performance penalty.
10. Build vs. Buy
Everyone is trying to figure out: is the Red Hat model still valid in a world where everyone seems to be an open source expert? The Red Hat model is to deploy open source software, but pay for support. A lot of early adopters assert that the model is dead because they will download free software and build and support it themselves. Though alluring, many users realize that open source is free like a puppy: you’ll pay for it in daily care and feeding. Many vendors are offering alternatives beyond Red Hat–witness Docker, CoreOS, Heptio, Mesosphere, and more. Still others, like Diamanti, are working with the open source community to provide fully supported appliances for container applications that leave nothing to chance. This build-vs.-buy conundrum is one of the key adoption areas to watch as the container industry matures: what level of support will enterprises require, and will they build it themselves or buy packaged solutions?
Jeff Chou is the CEO and Co-Founder of Diamanti.