- Research: Figure out how all the moving pieces fit together: containers, orchestration, management, ecosystem integration, monitoring, and so on. You'll have to put real engineers on the initial homework and, eventually, on the implementation.
- Vendor Engagement: Containers, just like anything else, have to run on something, somewhere. You'll need to get hold of servers and network equipment, and allocate space and power. Your friendly neighborhood VAR can help with this, and your local colo provider or facilities manager should be able to help out with the physical slotting.
- Install Equipment: The building blocks need to get physically racked and cabled up.
- OS Install: A basic operating system needs to be loaded onto the servers. For containers, you'll probably want a container-capable base OS, ideally loaded via some network booting mechanism (a rough PXE sketch follows the list). Some people also bootstrap with VMware so that they can run containers, but this is far from ideal: bootstrapping container services onto hypervisors is extremely inefficient, negating one of the key reasons for going to containers in the first place, bare metal performance.
- Configuration Management: Once you bootstrap the servers, you'll want to make sure you can bring additional nodes online without any real manual configuration work. Docker/Kubernetes services should be defined and started as part of this process, along with any other management services (see the bootstrap sketch after the list).
- Network/Storage: Out of the box, Docker networking is NATed and container storage is ephemeral. If you plan on keeping your operation small or confined to a personal device, that should be just fine. Everyone else will need to figure out how to make the storage endure and how to give containers real network interfaces (a macvlan/volume sketch appears below the list).
- Container Orchestration: As you move toward the lightweight microservices model, service and container counts go up, and so do the requirements for tracking and directing them. The old spreadsheet method or vSphere isn't going to cut it anymore, so this is where orchestrators come in (a taste of what they buy you is sketched below). The problem is… there are so many of them. Kubernetes? Docker Swarm? Mesosphere? Other? So many choices… Pros/cons?
- Network Overlay: If you're running a production environment, odds are you'll need container-to-container connectivity across hosts. Here there are various overlay options: Calico? Flannel? Weave? Pros/cons? (An install sketch follows the list.)
- Persistent Storage: How do you maintain storage resiliency when Docker, by default, doesn't do persistent storage? There are, again, third-party options (one common pattern is sketched after the list)…
- Clustering: If you're going to run containers and orchestrators in production, you'll probably want to cluster for high availability and redundancy. How will you handle the insertion or removal of cluster members (sketched below)? How will you write your CM templates and bootstrapping mechanisms to quickly deploy new nodes with the prerequisite services preloaded?
- Management: You'll probably want a UI, performance profiles, and QoS controls if you want to manage the infrastructure against SLAs that can actually be delivered upon. How does one do this within a container framework (the closest built-in answer is sketched below)? Where are Docker's and Kubernetes' operations-focused tools?
- Access Controls: For true multitenancy and the self-service features your end users will eventually demand, you'll need to control who has access to which resources (see the RBAC sketch after the list).
- Monitoring: How do you know who is using what resources? Can you monitor on a per-container basis rather than just the host volume(s) or host network interface? (A quick look at the built-in tooling follows the list.)
- Run Applications: At some point, you'll need to test whether your application will actually run within a container framework. You'll also need to do some tuning here to understand how applications fit within the resource constraints of the host device (see the tuning sketch below).
- Burn-in Testing: Will your applications function under load, in a scaled-out fashion? Are all the interconnected pieces working properly? (A crude load-test sketch appears after the list.)
- Upgrade Trials: Build cycles like this often run for weeks or months. In that time, major releases of Docker/Kubernetes/other will ship, which makes for a good opportunity to see what upgrading to a new build is like (sketched below). Fingers crossed!
- Tribal Knowledge: Once the house of cards has been built, step back and document everything that was done to build it. Praise the engineers who endured and delivered. Hope they never leave and, as a failsafe, build a monitoring job that checks their LinkedIn pages for updates.
- Support/Handoff: Train the people who will be monitoring and running this day to day, usually the systems administration team and/or service desk folks. Hopefully they can handle the occasional 3am alert, with your implementation engineers on speed dial if needed. Pray that nothing is broken in the actual open source code of your particular release. In case it does go there, it can't hurt to get the operations team reading, writing, and debugging Go, and to keep both the #docker and #kubernetes IRC channels open.
- Operationalize: Celebrate the end of a long development phase, but stay vigilant: you are now on the hook to support the platform going forward. That doesn't just mean being on call; it also means continuously staying on top of requests for the latest features as the container and orchestrator engines evolve. Think of the upgrade trials from earlier, but now outside the safety of a sandbox, with actual users and production load mixed in.
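To make a few of those steps concrete, here are some back-of-the-napkin sketches of the DIY path. Every address, name, and version in them is a placeholder, not a recommendation. First, the network-boot mechanism from the OS step: a quick-and-dirty PXE service can be stood up with dnsmasq in proxy-DHCP mode (the subnet, TFTP root, and boot file are placeholders for whatever container-capable OS you choose).

```bash
# Hypothetical PXE boot service via dnsmasq in proxy-DHCP mode;
# 192.168.1.0, /srv/tftp, and pxelinux.0 are all placeholders.
sudo dnsmasq --no-daemon \
  --dhcp-range=192.168.1.0,proxy \
  --enable-tftp --tftp-root=/srv/tftp \
  --dhcp-boot=pxelinux.0 \
  --pxe-service=x86PC,"Network Boot",pxelinux
```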
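Next, the configuration management step. Whatever CM tool you pick ultimately has to do something like the following on every new node (this assumes an apt-based distro with the Docker and Kubernetes package repos already configured):

```bash
# Minimal node bootstrap sketch; real CM tooling (Ansible/Chef/Puppet/etc.)
# would template this and enforce it idempotently.
apt-get update
apt-get install -y docker-ce kubelet kubeadm kubectl
systemctl enable --now docker kubelet
```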
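For the network/storage step, one common workaround is a macvlan network, which hands containers addresses on the parent NIC's subnet instead of NATed ones, plus a named volume that outlives any one container. The subnet, eth0, and myapp:latest are placeholders:

```bash
# Containers attached to pubnet get "real" addresses on the eth0 segment;
# the appdata volume survives container removal.
docker network create -d macvlan \
  --subnet=10.0.0.0/24 --gateway=10.0.0.1 \
  -o parent=eth0 pubnet
docker volume create appdata
docker run -d --name app --network pubnet --ip 10.0.0.50 \
  -v appdata:/var/lib/app myapp:latest
```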
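As for what an orchestrator actually buys you: you declare a desired state, and the scheduler places and tracks the containers for you. A minimal Kubernetes example (myapp:latest is again a placeholder image):

```bash
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3          # the orchestrator keeps three copies running
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:latest
EOF
```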
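Overlay networks generally install as cluster add-ons. Flannel, for instance, publishes a manifest you apply and then verify on every node (the file comes from the flannel project; the filename here is a placeholder for wherever you fetched it):

```bash
kubectl apply -f kube-flannel.yml          # manifest from the flannel project
kubectl get pods --all-namespaces -o wide  # confirm an overlay pod per node
```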
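On persistent storage, the usual Kubernetes pattern is to claim storage from whatever third-party provisioner you've bolted on, then mount the claim into pods. "fast" is a placeholder StorageClass name supplied by that provisioner:

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: appdata
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: fast   # comes from your storage provisioner
  resources:
    requests:
      storage: 10Gi
EOF
```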
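Cluster membership is its own workflow. With kubeadm as one example, insertion and removal look roughly like this; the endpoint, node name, token, and hash are placeholders your CM templates would fill in:

```bash
kubeadm init --control-plane-endpoint=10.0.0.10:6443  # first control-plane node
kubeadm token create --print-join-command             # mint a join command
kubeadm join 10.0.0.10:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>        # run on each new node
kubectl drain node-07 --ignore-daemonsets             # removal: evict workloads
kubectl delete node node-07                           # then drop the member
```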
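On management and QoS: Kubernetes' closest built-in control is per-container resource requests and limits, which also determine the pod's QoS class. Setting requests equal to limits, as below, yields the Guaranteed class:

```bash
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo
spec:
  containers:
  - name: app
    image: myapp:latest   # placeholder image
    resources:
      requests:
        cpu: 500m
        memory: 256Mi
      limits:             # requests == limits -> Guaranteed QoS class
        cpu: 500m
        memory: 256Mi
EOF
```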
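For access controls, Kubernetes RBAC can scope a team to its own namespace. A read-only sketch, with all names as placeholders:

```bash
# Grant the team-a-devs group read-only access to pods in its namespace.
kubectl create namespace team-a
kubectl create role pod-reader -n team-a \
  --verb=get,list,watch --resource=pods
kubectl create rolebinding team-a-readers -n team-a \
  --role=pod-reader --group=team-a-devs
```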
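Per-container monitoring does exist, but it's primitive next to the host-level tooling you may be used to:

```bash
docker stats --no-stream          # live CPU/memory/network/IO per container
kubectl top pod --all-namespaces  # needs a metrics pipeline (e.g.
                                  # metrics-server) installed first
```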
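When you start running real applications, tuning means capping a container's slice of the host and watching how the app behaves under pressure. The caps below are starting points to tune per workload, not recommendations:

```bash
# Hard memory cap with no swap overflow, and at most 1.5 CPU cores;
# the values and image name are placeholders.
docker run -d --name tuned-app \
  --memory=512m --memory-swap=512m \
  --cpus=1.5 \
  myapp:latest
```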
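Burn-in testing can be as crude as scaling out and hammering the front end. Here ab is ApacheBench; the URL and numbers are placeholders:

```bash
kubectl scale deployment/myapp --replicas=20   # scale out the earlier app
ab -n 100000 -c 100 http://myapp.example.com/  # sustained load against it
```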
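And the upgrade trial, sketched kubeadm-style: control plane first, then drain, upgrade, and uncordon each node in turn (versions, package strings, and node names are placeholders):

```bash
kubeadm upgrade plan                        # what can we move to?
kubeadm upgrade apply v1.28.0               # upgrade the control plane
kubectl drain node-01 --ignore-daemonsets   # evict workloads from a worker
apt-get install -y kubelet=1.28.0-00        # upgrade the node agent
systemctl restart kubelet
kubectl uncordon node-01                    # put it back in rotation
```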
Of course, there is a better way to do all this.
To really understand what it is that Diamanti does, it helps to go back in time a bit and understand why Docker is so hot right now. You see, Docker wasn't the first to the containers game. Before it, Solaris had Zones and Linux had OpenVZ and LXC. They all enjoyed modest success, but it wasn't until Docker appeared that containers really took off. Docker did something no one else had done before it: it enabled infrastructure as code via Dockerfiles, Docker Hub, and shared repos. Developers could now make changes to container images via commits and push them into common repositories, complete with tagging and import/export functions that enabled portability. Configuration management requirements were reduced, and infrastructure suddenly got a huge boost in speed, flexibility, and usability (the whole loop fits in a few lines; see the sketch below). This was a fantastic evolution of the technology for developers in particular, who wanted self-service capabilities and didn't necessarily have network interface or storage persistence needs.
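That infrastructure-as-code loop, in miniature: a Dockerfile is committed like any other source file, built into an image, tagged, and pushed to a shared registry. The image contents and registry name here are placeholders:

```bash
# A trivial image definition, versioned alongside the application code.
cat > Dockerfile <<'EOF'
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y nginx
COPY site/ /var/www/html/
CMD ["nginx", "-g", "daemon off;"]
EOF
docker build -t registry.example.com/team/web:1.0 .  # build and tag
docker push registry.example.com/team/web:1.0        # share it
```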
Fast forward to today: much of the core container and orchestrator work is still focused on the needs of developers, and the tools needed to operationalize it are still evolving. This is where Diamanti comes in, covering everything from the metal to the container orchestrator and everything in between. All the core components needed to run a containerized environment are included, fully installed, configured, and supported by Diamanti.
Piecing it all together:
- Docker and Kubernetes preloaded, preconfigured, and supported
- Layer 2 network interfaces without a network overlay
- Persistent storage
- Clustered storage via NVMe over Ethernet (NVMoE)
- QoS: Granular controls for compute, network, and storage
- Per-container reporting
- Role based access control
- Supported upgrade process