Architecting Robust Enterprise Application Network Services with NGINX and Diamanti

If you’re actively involved in architecting enterprise applications for production Kubernetes environments, or in deploying and managing the underlying container infrastructure, you know firsthand how differently containers consume IT resources, and how important it is to have an application-aware network that can adapt at the fast pace of change typical of containerized applications.

In this blog, we’ll dive into modern, application-centric load balancing architectures running on bare-metal container infrastructure. Together, these are the two main synergistic components of the application network services that enterprise organizations need.

Recently, Diamanti announced its technology partnership with NGINX, a leading developer of open-source load-balancing software. Below, we’ll look at a few common use cases for modern load balancing and application network services in Kubernetes environments with NGINX, running on the Diamanti bare-metal container platform.

The combination of these two solutions enhances not only application delivery and scalability, but also high availability, thanks to Diamanti’s newly released support for multi-zone Kubernetes clusters.

To get the most out of the content presented in this blog, it is recommended that you have a working knowledge of Kubernetes and an understanding of basic load balancing concepts.

Key Functional Requirements for Application Network Services

Network services that Kubernetes-based applications rely on need to provide the following sets of functions:

Application exposure

  • Proxy: Hide an application from the outside world
  • Routing/ingress: Route requests to different applications and offer name-based virtual hosting (see the sketch after this list)
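
As a minimal sketch of these two functions, the manifests below hide an application behind a ClusterIP Service (reachable only inside the cluster) and expose it by hostname through an Ingress resource. The names here (app1-svc, app1.example.com) are placeholders, not anything prescribed by NGINX or Diamanti.

```yaml
# A ClusterIP Service keeps the application reachable only inside the
# cluster; the Ingress then proxies outside traffic to it by hostname.
apiVersion: v1
kind: Service
metadata:
  name: app1-svc
spec:
  type: ClusterIP
  selector:
    app: app1
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app1-ingress
spec:
  rules:
  - host: app1.example.com   # name-based virtual hosting
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app1-svc
            port:
              number: 80
```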

Performance optimization

  • Caching: Cache static and dynamic content to improve performance and decrease the load on the back end
  • Load balancing: Distribute network traffic across multiple application replicas
  • High availability: Ensure application uptime in the event of a load balancer or site outage

Security and simplified application management

  • Rate limiting: Limit the traffic to an application and protect it from Distributed Denial-of-Service (DDoS) attacks
  • SSL/TLS termination: Secure applications with TLS termination at NGINX, without any need to modify the application (both are sketched below)
  • Health checks: Monitor application health and take appropriate action when a back end fails
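
As one hedged illustration, the Ingress below terminates TLS using a certificate stored in a Kubernetes Secret and caps per-client request rates. The annotation name follows the community ingress-nginx controller; NGINX Inc’s controller expresses the same policies through its own nginx.org/nginx.com annotations. The host, Service, and Secret names are placeholders.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: secure-app
  annotations:
    # Rate limiting: cap each client IP at 10 requests per second.
    # (Annotation name from the community ingress-nginx controller.)
    nginx.ingress.kubernetes.io/limit-rps: "10"
spec:
  tls:
  - hosts:
    - app.example.com
    secretName: app-tls        # Secret holding the TLS certificate and key
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app1-svc
            port:
              number: 80
```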

Support for microservices

  • Central communications point for services: Enable services hidden behind the load balancer to be aware of each other
  • Dynamic scaling and service discovery: Scale and upgrade services seamlessly, in a manner that is completely transparent to users
  • Low-latency connectivity: Provide the lowest latency path to target microservices
  • API gateway capability: Act as an API gateway for microservices-based applications (sketched after this list)
  • Inter-service caching: Cache responses exchanged between microservices
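
A minimal sketch of the API gateway pattern: a single Ingress fans requests out to different microservices by URI path. The service names and paths here are hypothetical.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-gateway
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /orders          # each path maps to its own microservice
        pathType: Prefix
        backend:
          service:
            name: orders-svc
            port:
              number: 80
      - path: /inventory
        pathType: Prefix
        backend:
          service:
            name: inventory-svc
            port:
              number: 80
```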

Diamanti’s Bare-Metal Container Platform Eliminates Major Obstacles To Building Robust Network Services For Kubernetes

Container networking in Kubernetes is challenging at best. We’ve heard from every one of our customers who attempted to build their own container infrastructure that not only is network configuration complex, but it is also extremely difficult to establish predictable performance for the applications that require it, at least without substantial overprovisioning of infrastructure. Quality-of-Service is also critical for stable multi-zone cluster operation, which is highly sensitive to fluctuations in network throughput and storage I/O.

Because the Diamanti platform is purpose-built for Kubernetes environments, it can be made operationally ready for production applications in a fraction of the time it would take to build a Kubernetes stack from legacy infrastructure. More importantly, it solves major challenges around building application network services with the following attributes and features:

Plug-and-play networking

  • Diamanti’s custom network processing allows for plug-and-play data center connectivity, and abstracts away the complexity of network constructs in Kubernetes
  • Built-in monitoring capabilities allow for clear visibility into a pod’s network usage
  • Diamanti assigns a routable IP to each pod, allowing for easy service discovery. A routable IP means that load balancers within the network are already accessible, and that no additional steps are required to expose them
  • Support for network segmentation, enabling multi-tenancy and isolation
  • Support for multiple endpoints to allow higher bandwidth and cross-segment communication

Quality-of-Service (QoS)

  • Diamanti provides Quality-of-Service (QoS) for each application. This guarantees bandwidth to load balancer pods, ensuring load balancers are not bogged down by other pods co-hosted on the same node
  • Applications behind the load balancer can also be rate limited using pod-level QoS, as sketched below
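
As an assumed illustration of how this looks in practice, the pod spec below requests a high-performance network tier through a Diamanti endpoint annotation. The annotation shape follows the pattern in Diamanti’s documentation, but the exact keys, network names, and tier values may differ by release; verify against your platform version.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-lb
  annotations:
    # Assumed shape of Diamanti's per-pod endpoint/QoS annotation;
    # check keys and tier names against your Diamanti release.
    diamanti.com/endpoint0: '{"network":"default","perfTier":"high"}'
spec:
  containers:
  - name: nginx
    image: nginx:1.17
    ports:
    - containerPort: 80
```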

Multi-zone support

  • Diamanti supports setting up a cluster across multiple zones, allowing you to distribute applications and load balancers across zones for high availability, lower-latency connectivity, and zone-specific access requirements (see the sketch below)
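
For illustration, one standard Kubernetes way to spread load balancer replicas across zones is pod anti-affinity keyed on the zone topology label, as in this hedged sketch (labels and the image tag are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress
spec:
  replicas: 2                    # one load balancer replica per zone
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: nginx-ingress
            # Force replicas into different zones; older clusters use the
            # failure-domain.beta.kubernetes.io/zone label instead.
            topologyKey: topology.kubernetes.io/zone
      containers:
      - name: nginx-ingress
        image: nginx/nginx-ingress:1.6.0   # placeholder version tag
        ports:
        - containerPort: 80
```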

NGINX Ingress Controller And Load Balancer Are Key Building Blocks For Application Network Services

Traditionally, applications and services are exposed to the external world through physical load balancers. The Kubernetes Ingress API was introduced to expose applications running on a Kubernetes cluster, and can enable software-based Layer 7 (HTTP) load balancing through a dedicated Ingress controller. A standard Kubernetes deployment does not include a native Ingress controller, so users have the option to employ any third-party Ingress controller, such as the NGINX Ingress controller, a key component of NGINX Plus.

With advanced load-balancing features using a flexible software-based deployment model, NGINX provides an agile, cost-effective means of managing the needs of microservices-based applications. The NGINX Kubernetes Ingress controller provides enterprise‑grade delivery services for Kubernetes applications, with benefits for users of both open source NGINX and NGINX Plus. With the NGINX Kubernetes Ingress controller, you get basic load balancing, SSL/TLS termination, support for URI rewrites, and upstream SSL/TLS encryption.

Reference Architectures

There are many ways to provision the NGINX load balancer on a Kubernetes cluster running on the Diamanti platform. Here, we’ll focus on two architectures that exemplify the joint value of Diamanti and NGINX.

Load Balancing And Service Discovery Across A Multi-Zone Cluster

Diamanti enables the distribution of Kubernetes cluster nodes across multiple zones. This configuration greatly enhances application high availability (HA) as it mitigates the risk of a single site outage.

In this multi-zone Diamanti cluster, the simplest approach is to deploy an NGINX Ingress controller in each zone. The benefits of this approach are as follows:

  • Multiple zones establish HA: if a load balancer at one site fails, another can take over to serve requests
  • East-west load balancing within or across zones can be enabled. Users can also define a particular zone affinity so that requests tend to stay within the same zone for low latency, or go across zones in the absence of a local pod
  • Network traffic to the individual zones’ load balancers can be distributed via an external load balancer, or via another in-cluster NGINX Ingress controller

An example of this architecture is shown in the diagram below.

In this example, the Kubernetes cluster is distributed across two zones and serves the application ‘fruit.com’. A DNS server or external load balancer can be used to distribute inbound traffic in round-robin fashion to each zone’s Ingress controller. Through defined Ingress rules, the NGINX Ingress controller will discover all of the pods backing the ‘orange’, ‘blueberry’, and ‘strawberry’ services.
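
The Ingress rules for this example might look like the sketch below. The Service names (orange-svc and so on) are assumptions, since the diagram does not name them.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fruit-ingress
spec:
  rules:
  - host: orange.fruit.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: orange-svc       # assumed Service for the 'orange' pods
            port:
              number: 80
  - host: blueberry.fruit.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: blueberry-svc    # assumed Service for the 'blueberry' pods
            port:
              number: 80
  - host: strawberry.fruit.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: strawberry-svc   # assumed Service for the 'strawberry' pods
            port:
              number: 80
```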

To minimize latency across zones, each zone’s Ingress controller can be configured to discover only the services within its own zone. In this example, however, Zone 1 does not have a local ‘strawberry’ service, so Zone 1’s Ingress controller needs to be configured to discover ‘strawberry’ services across other zones as well. Alternatively, each zone’s load balancer can be configured to detect all pods in the cluster across zones. In that case, load balancing can be executed based on the shortest possible latency, ensuring preference for pods within the local zone. However, this approach increases the risk of a load imbalance within the cluster.

Once the Ingress controller is set up, services can be accessed via hostname (orange.fruit.com, blueberry.fruit.com, or strawberry.fruit.com). The NGINX Ingress controller for each zone will load balance across all pods of the related services. Client pods running on the same cluster (such as orange.app) will access the ‘orange’ service within the local zone to avoid high network latency, while client pods (such as strawberry.app) that have no instance of their service running within the local zone will have their requests served from other zones.

Load Balancing And Service Discovery In A Fabric Mesh Architecture

Of the many possible modern load balancing approaches, the most flexible involves running a load balancer in each pod to perform client-side load balancing. This architecture is also referred to as a service mesh, or fabric model. Here, each pod has its own Ingress controller that runs as a sidecar container, so each pod is fully capable of performing its own client-side load balancing. Ingress controller sidecars can be deployed to pods manually, or can be injected automatically, as with Istio. In addition to the per-pod Ingress controllers, a cluster-level Ingress controller is required to expose the desired services to the external world. The benefits of a service mesh architecture are as follows (a simplified sidecar pod spec appears after the list):

  • Facilitates east-west load balancing and microservices architecture
  • Each pod is aware of its final destination, which helps to minimize hops and latency
  • Enables secure SSL/TLS communication between pods without modifying applications
  • Facilitates built-in health monitoring and cluster-wide visibility   
  • Facilitates HA for east-west load balancing
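
As a hedged sketch of the sidecar pattern, the pod below pairs an application container with an NGINX proxy container that handles its outbound, client-side load balancing. The image names and the ConfigMap holding the NGINX upstream configuration are hypothetical.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: orange-app
spec:
  containers:
  - name: app                       # the application container
    image: example/orange-app:1.0   # hypothetical application image
    ports:
    - containerPort: 8080
  - name: nginx-sidecar             # per-pod proxy for client-side load balancing
    image: nginx:1.17
    volumeMounts:
    - name: nginx-conf
      mountPath: /etc/nginx/conf.d  # upstream/routing config for local services
  volumes:
  - name: nginx-conf
    configMap:
      name: orange-nginx-conf       # assumed ConfigMap with upstream definitions
```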

An example of the service mesh architecture is shown below.

In the diagram, the cluster is configured with the NGINX load balancer as part of the fabric, and serves the application fruit.com. A DNS server or external load balancer can be used to point fruit.com to the Ingress controllers (NGINX load balancers) running on the cluster. According to the Ingress rules, the cluster’s Ingress controller, as well as every sidecar Ingress controller, will discover all pods for the ‘orange’, ‘blueberry’, and ‘strawberry’ services.

Once the Ingress controller fabric is set up, services can be accessed via hostname (orange.fruit.com, blueberry.fruit.com, or strawberry.fruit.com). The cluster-level NGINX Ingress controller will load balance across all pods of the related services based on hostname. Client pods running on the same cluster (orange.app, blueberry.app, and strawberry.app) can access their respective back-end pods directly, via their respective sidecar Ingress controllers.

Conclusion

Robust application network services are critical for enterprise organizations seeking to improve availability, deployment, and scalability for production Kubernetes applications. For these purposes, the choice of container infrastructure and the approach to modern load balancing are the key success factors. Together, Diamanti and NGINX mitigate the risks of downtime, manage network traffic, and optimize overall application performance.