Secure Multi-Tenancy for Enterprises and Managed Service Providers

In earlier blog posts, we outlined the ability to manage multiple Kubernetes clusters that may span on-premises and public cloud environments. The need for multiple Kubernetes clusters can be driven by many factors; one is the desire to separate different projects or teams from one another. In this blog post, we'll take a closer look at the additional access controls and resource management controls in Diamanti Spektra 3.0 that help enterprises and managed service providers (MSPs) deliver secure, isolated multi-tenant environments within a single pane of glass.

Background into Kubernetes Resource Controls

Resources within a Kubernetes cluster can be divided between multiple users through namespaces. There are actually four namespaces created by default in every Kubernetes environment: default, kube-system, kube-public, and kube-node-lease. Additional namespaces can be created, but they don't mean much until resources (e.g., Pods, Services, Deployments) are associated with them. Typically, an organization will set up different namespaces for different teams, and access controls are defined to limit a user's ability to view or edit resources to only the namespaces where they have been granted access. Users interacting with one namespace do not see the content in another namespace.
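As a sketch of how this looks in plain Kubernetes (the namespace, role, and user names below are hypothetical, chosen for illustration), a namespace together with a namespace-scoped Role and RoleBinding restricts a user to resources in that one namespace:

```yaml
# Hypothetical example: create a namespace and grant user "jane"
# edit rights inside it -- and nowhere else in the cluster.
apiVersion: v1
kind: Namespace
metadata:
  name: marketing-dev
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: marketing-dev-editor
  namespace: marketing-dev        # Role is scoped to this namespace only
rules:
  - apiGroups: ["", "apps"]
    resources: ["pods", "services", "deployments"]
    verbs: ["get", "list", "watch", "create", "update", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: jane-marketing-dev
  namespace: marketing-dev
subjects:
  - kind: User
    name: jane                    # hypothetical user
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: marketing-dev-editor
  apiGroup: rbac.authorization.k8s.io
```

With a binding like this in place, `kubectl get pods --namespace marketing-dev` succeeds for jane while the same query against any other namespace is denied.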

However, this form of multi-tenancy has some key limitations when considering large enterprises or MSPs.

  • Namespaces are limited to one layer: You cannot nest namespaces within other namespaces. This means that you can create two namespaces like marketing-dev and marketing-prod, but there is no way to create a marketing namespace that is further divided into dev and prod environments. This is fine when there are only a few users or teams, but it does not scale for larger, more complex organizations.
  • Some cluster resources are shared across namespaces: Components such as the kubelet, kube-proxy, the controller manager, and cluster DNS are shared resources. This exposes tenant resources to all other tenants in the cluster or on the same worker node. While this is acceptable within certain departments and even some companies, it is unacceptable in MSP environments where clients cannot know what other clients are doing or even who those other clients may be. For this reason, the separation of users within a cluster is often referred to as “soft” multi-tenancy.
  • Namespaces are single-cluster concepts: The use of namespaces is only consistent within a single cluster, as namespaces are maintained and controlled by that cluster’s control plane. For multi-cluster scenarios, a new way of mapping and assigning resources is necessary.

Secure Isolation with Diamanti Spektra 3.0

Diamanti Spektra 3.0 resolves the above issues by introducing new concepts of projects, tenants and domains that layer on top of the basic Kubernetes resource and access controls.

Figure 1: Diamanti Spektra Global Management Console for Enterprises and MSPs


At the highest level is the Service Provider, who sets up the overall Domain. This will typically be the team responsible for provisioning and managing the physical data center and/or cloud environment, and it is also where MSPs log in. The Service Provider manages the domain cluster, which is, in some ways, a manager of Kubernetes managers. Service Providers also have the ability to create Tenants.

The Tenant structure provides an additional layer of separation above standard Kubernetes clusters. When defining Tenants in the top-level Domain, the Service Provider has two options based on whether they are in an Enterprise environment or an MSP environment. In an MSP environment, the Domain can ensure that tenants are completely unaware of one another and that the MSP itself has limited control inside each tenant. By providing this ‘hard’ multi-tenancy, an MSP can have Coke as one tenant and Pepsi as another, with each client retaining control over what happens inside its own Tenant. One additional important note: different tenants cannot share clusters, as that would reintroduce the soft multi-tenancy issue described above.

The Tenant Admin is then the head administrator of the client – e.g., Coke. The Tenant Admin decides which Kubernetes clusters to import or add to the tenant. These clusters can be Diamanti D20-based clusters or clusters running in the public cloud, starting with Microsoft Azure. The Tenant Admin is also typically responsible for cluster-level tasks such as managing storage classes and networks or setting up certificates. Tenant Admins can also set up Projects and their respective Project Admins, Project Members, and Project Viewers.
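To illustrate one of those cluster-level tasks in plain Kubernetes terms, a StorageClass is a typical resource a Tenant Admin would define once per cluster (the class name below is hypothetical, and the provisioner is a generic placeholder — a Diamanti or Azure cluster would use its own CSI provisioner):

```yaml
# Hypothetical cluster-level resource managed by a Tenant Admin.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: tenant-standard              # hypothetical class name
provisioner: kubernetes.io/no-provisioner  # placeholder; real clusters use a CSI driver
volumeBindingMode: WaitForFirstConsumer    # delay binding until a pod is scheduled
reclaimPolicy: Delete
```

Project Members deploying applications would then simply reference `tenant-standard` in their PersistentVolumeClaims without needing any cluster-level permissions themselves.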

Projects are groupings of namespaces that can span multiple clusters. To continue the Coke example, the Coke Tenant Admin can create a Project for the Marketing team. That Project can include two different clusters – one hosted on-premises and one in Azure. This would allow the Marketing team to leverage one environment for testing and development and the other for production. This new concept of a Project helps to resolve the issue of namespaces that can only exist within a single cluster.

Lastly, Project Admins can also add Project Members and Project Viewers. Project Members are able to deploy and manage applications within their project space, but cannot add additional members or viewers. Project Viewers have read-only access to the project.

User credentials can be stored locally, or users can be authenticated remotely against an existing LDAP server.

Figure 2: View of different users under a Tenant


Multi-Tenant Resource Management

In addition to the new role types and the project/tenant architecture, one of the challenges with sharing infrastructure between users is resource management. How do Tenant Admins ensure that one project doesn’t monopolize the resources in the shared clusters within that tenant?

To help prevent this scenario, an additional feature in Spektra 3.0 allows Tenant Admins to assign resource caps to each project. These caps set maximum CPU and memory thresholds for a given project, ensuring that no single project can crowd out the others, as shown in Figure 3.
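In stock Kubernetes, per-namespace caps of this kind are expressed with a ResourceQuota object; the sketch below shows the underlying mechanism (the quota name, namespace, and values are hypothetical, and Spektra applies its caps at the project level rather than per individual namespace):

```yaml
# Hypothetical quota capping CPU and memory for one namespace.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: project-caps            # hypothetical name
  namespace: marketing-dev      # one namespace within the project
spec:
  hard:
    requests.cpu: "8"           # sum of CPU requests across all pods
    requests.memory: 16Gi
    limits.cpu: "16"            # sum of CPU limits across all pods
    limits.memory: 32Gi
```

Once the quota is in place, any pod creation that would push the namespace past these totals is rejected by the API server, which is what makes the cap enforceable rather than advisory.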

Figure 3: Tenant Admins can assign maximum CPU and memory for the project


Tenant Admins can also assign/reserve a certain percentage of the total cluster capacity for the project as shown in Figure 4.

Figure 4: Tenant Admins can assign certain percentage of the total cluster capacity


When combined with the new hierarchy, Tenant Admins can truly scale out to dozens of projects spanning dozens of clusters without concerns over resource management.

In Summary

Diamanti Spektra 3.0 introduces advanced concepts for enterprises and MSPs that serve multiple users. These innovations layer on top of vanilla Kubernetes, providing additional security and isolation between tenants while still allowing developers and users to leverage their standard Kubernetes CLI scripts and commands. Domains, Tenants, and Projects extend basic Kubernetes resource controls to another level, supporting some of the most complex organizations as they broaden their adoption of Kubernetes.

Combined with Diamanti’s I/O acceleration technology and high performance hardware, Diamanti Spektra 3.0 also represents a new way for MSPs (and central IT organizations) to offer Kubernetes-as-a-Service at a reduced footprint and cost while supporting dozens of clients.


To learn more about Diamanti Spektra 3.0: