There was a time when IT operations meant managing farms of physical servers. A single server handled a single application or workload. To handle peak workloads, servers were over-provisioned far beyond what the application demanded, resulting in very low CPU utilization. Any new project meant adding a new physical server, which required budget approval, purchasing, receiving into inventory, rack/power/network provisioning, operating system installation, networking setup, storage setup, dependency installation, application installation, and so on. This process was a nightmare for IT to manage, delaying projects or even making them unfeasible to attempt.
Scaling, upgrading, and migrating applications meant repeating the same process of adding new servers every time. When a machine went down, it often took a long time to get that node and its application back up and running. Recovering data from the failed node was yet another headache.
With advancements in computing power, the gap between available and utilized CPU/memory/storage kept growing. The bare-metal approach was becoming inefficient, full of issues and overhead for IT.
VMs revolutionized infrastructure deployment
- A single physical server could now handle many virtual workloads, achieving higher CPU/memory/storage utilization.
- Provisioning a new server became much easier and could be automated, reducing IT overhead.
- Administrators and even end users (in the case of self-service IT) gained the power to create or destroy servers almost instantly.
- It became easier to migrate servers across different platforms, locations or clouds.
- Scaling, upgrading, and ensuring disaster recovery became much easier, as all that is needed is to create a new virtual machine instance instead of buying a new machine.
- Physical servers, however, are still purchased in advance based on general growth patterns and often sit idle; they are not acquired “on demand”.
So where is the problem?
With virtualization of the three tiers of compute, storage, and networking, virtual machines revolutionized the way infrastructure is deployed. The VM was a disruptive technology that led to further innovation and advancement in cloud technologies. But many problems were still left unsolved.
- VMs helped with infrastructure deployment, but not with application deployment. Application deployment still required many steps: setting up and configuring the server/OS, installing dependencies, installing the application, configuring the application, setting up scripts, and starting the application. Even after all these steps, human error and platform differences made the deployment process difficult for DevOps teams.
- VMs run on hypervisors and need a full copy of the OS for each virtual instance, so each physical machine wastes a large chunk of its CPU and memory resources on the hypervisor and on running individual copies of the OS. This is sometimes referred to as the “hypervisor tax”. That’s why VMs are a poor fit for modern workloads and application architectures (e.g., scale-out applications, data-driven applications).
- VMs still take minutes to create and, like physical machines, are expected to run for a long time, which is not well suited to a dynamic environment.
Containers revolutionized application deployment
Just as VMs revolutionized infrastructure deployment, containers revolutionized the application development and deployment process, while further simplifying infrastructure deployment.
Containers for application deployment:
- Containerization made applications portable and almost independent of platform and infrastructure, which improves the application development workflow. Develop anywhere, deploy anywhere.
- With freedom from infrastructure, portability, and faster CI/CD and testing workflows, developers can focus solely on development. Reusable container images let developers rapidly build on top of existing images provided by a large community of developers.
- Containers provide the easiest way to move applications from development to production in seconds.
- As opposed to earlier, application deployment now simply means launching a container. All those complex steps are packaged into a single container image, and a developer can hand over that one image with full confidence that it will work.
- It’s much easier to scale, upgrade, and recover in seconds with containers, since container images let organizations create identical containers in seconds, guaranteed to work. For example, the capacity of a sports broadcast web server can be increased 10x on game night and brought back to its previous capacity the next day, all within seconds.
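The packaging described above can be sketched as a Dockerfile. This is a minimal illustration for a hypothetical Python web service; the base image, file names, and start command are assumptions, not a prescribed layout:

```dockerfile
# Hypothetical Dockerfile for a small Python web service.
# All of the manual steps (OS setup, dependency installation,
# application installation, startup scripts) are captured once here.
FROM python:3.11-slim

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Install the application itself
COPY . .

# Start the application, replacing hand-written startup scripts
CMD ["python", "app.py"]
```

Building this once (e.g., `docker build -t myapp .`) yields an image that launches the same way on a developer laptop or a production cluster, which is what lets a single image be handed over with confidence.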
Containers for infrastructure deployment:
- Containers are very lightweight with minimal CPU and memory footprint.
- Containers free you from the hypervisor tax: there is no need to run a hypervisor or a full OS for each instance, so the CPU/memory overhead is minimal.
- Containers eliminate the need to patch each individual OS for security and functionality updates.
- Containers are easy to create and destroy in dynamic environments, making them suitable for short-lived jobs like big data processing as well as always-on services like web servers.
- With the right tools, it’s much easier to set up networking and persistent storage, reducing the burden on IT.
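As one illustration of such tooling, a Docker Compose file can declare networking and persistent storage alongside the containers that use them. The service names, images, and volume name below are hypothetical; this is a sketch, not a recommended production setup:

```yaml
# Hypothetical docker-compose.yml: a web service and its database.
# Compose attaches both services to a shared private network
# automatically, and the named volume keeps the database data
# across container restarts and upgrades.
services:
  web:
    image: myapp:latest
    ports:
      - "8000:8000"     # expose the web service to the outside world
    depends_on:
      - db
  db:
    image: postgres:15
    volumes:
      - db-data:/var/lib/postgresql/data   # persistent storage
volumes:
  db-data:
```

A single `docker compose up` then brings up the network, the volume, and both containers, which is the kind of reduced operational burden the bullet above refers to.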
But why do I need orchestration?
Containers allow you to virtualize the workload and make your application completely portable and almost platform independent. But to utilize the power of containers to the fullest, and to manage containers at scale with flexibility, reliability, and ease, it’s important to use an orchestrator like Kubernetes. Orchestration is the heart of container deployment. A container orchestrator helps in many ways:
- Manage your cluster, which consists of physical nodes or virtual machines.
- Manage and deploy your containers across one or many clusters. The scheduler makes sure containers are placed according to the needs of the user.
- Re-configure, scale, upgrade, update, migrate your containers without disruption in your application and services.
- Ensure your applications and services stay up and running even in the event of failures.
- Enable networking among containers or to the external world through different plugins.
- Enable the usage of persistent or ephemeral volumes. These could be either local or remote.
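Several of these responsibilities can be seen in a minimal Kubernetes Deployment manifest. The names, image tag, and port below are hypothetical placeholders; this is a sketch of the idea, not a production configuration:

```yaml
# Hypothetical Deployment: Kubernetes keeps three replicas of the
# web server running, scheduling them across cluster nodes and
# restarting or rescheduling pods when a node or container fails.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3             # scale up or down by editing this field
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: myapp:1.0  # rolling upgrade: bump this tag
        ports:
        - containerPort: 8000
```

Applying this manifest (`kubectl apply -f deployment.yaml`) hands scheduling, failure recovery, scaling, and rolling upgrades to the orchestrator; the user declares the desired state, and Kubernetes maintains it.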
There are still more problems to solve
Orchestrators like Kubernetes are useful for deploying and managing containers. But being open source and covering many bases at once, they come with their own set of challenges. For example, it takes expertise, time, and resources to set up the cluster, networking, and storage when building your own Kubernetes cluster. There are many challenges for new and even existing adopters: ease of use, network setup, persistent storage, quality of service on network and storage, user management and access control, enterprise-level support, and so on. This is where a full-stack turnkey container solution like Diamanti helps IT and developers focus on development rather than operations. When selecting any turnkey solution, it’s important that it helps you overcome all the challenges discussed so far, but at the same time you should not get locked in by closed-source technologies layered over these open-source ones. You need to make sure it actually reduces your TCO and time to market. Diamanti solves most of the problems seen in the container orchestration world while letting users fully realize the power of open-source containers and Kubernetes with significantly reduced TCO and time to market.
Conclusion
Both virtual machines and containers have revolutionized the tech industry, and both have been around for a long time now. The container revolution is disrupting traditional datacenter architecture, bringing a new era of rapid DevOps growth. At the same time, for some legacy use cases, container technology can co-exist with and complement VM infrastructure. As the usage of containers grows, more organizations will see the benefits of increased IT deployment productivity and better overall server efficiency. But in order to actually realize the potential of containers, and to use them efficiently at production scale in the enterprise, it’s important to consider full-stack turnkey solutions like Diamanti.