Deploying Multiple Ingress Controllers in EKS


In previous posts, we reflected on the different scenarios that benefit from having Multiple Ingress Controllers - for K8s in general and for Runtime Fabric in particular. In this post, we want to put that into practice and install a couple of ingress controllers for traffic segmentation.

For our K8s cluster, we'll use EKS, and we will see how, relying on the versatility of AWS load balancers, we can provide one ingress endpoint for internal traffic (external to the K8s cluster but within the same network segment) and another ingress endpoint for external traffic (internet traffic). For simplicity we will use NGINX for both controllers, but in a real-world scenario the two controllers could come from different vendors if required.

Prerequisites

Before we start, we will need:

  • An EKS cluster up and running, with kubectl configured to point to it
  • Helm installed on our workstation
  • Access to the AWS Management Console, to verify the load balancers that get created
  • Permissions for the cluster to create AWS load balancers (this is what the Service of type LoadBalancer relies on)

Initial Setup

In a recent post, Explaining the Different Types of Load Balancers in AWS and Which One Should We Choose for Kubernetes, we compared the available options. The main takeaway is that the choice between ALB and NLB primarily depends on whether you need layer 7 features (ALB) or high performance at layer 4 (NLB).

Following that approach, in the scenario we're building, our NGINX ingress controllers will take care of the routing within the K8s cluster. For the traffic reaching the cluster we will use NLBs - one for the traffic coming from the internet and another one for the internal traffic (traffic within the network segment of the VPC where the EKS cluster is installed).

Network Load Balancers in AWS can work internet-facing or internally. This is what AWS calls the Scheme. When we create an NLB with the internet-facing scheme, AWS will expose the load balancer to the internet and will create a public DNS record that is available from anywhere. That public DNS record resolves to a public IP address.

On the other hand, when we specify internal as the scheme of our NLB, AWS will make the load balancer available only within the VPC where it is deployed and will create a DNS record that resolves to a private IP address of that network segment. Hence, it won't be accessible from the internet.
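A quick way to verify the type and scheme of our load balancers later on is the AWS CLI (assuming it is installed and configured for the account where the EKS cluster lives):

aws elbv2 describe-load-balancers --query 'LoadBalancers[*].[LoadBalancerName,Type,Scheme,DNSName]' --output table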



This is what we will use to segregate external and internal inbound traffic to our EKS cluster. When installing the NGINX controllers, we will customize the Service of each ingress controller. As we know, the Service will be of type LoadBalancer, which interacts with the AWS API to create a load balancer. Using annotations in the Service manifest, we will control the type of load balancer to be created (NLB) and its scheme (internet-facing or internal).
Check out this site for more information on Annotations for the load balancer.
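To make this more concrete, here is a minimal sketch of what such a Service manifest looks like - the name and selector are illustrative, and in our case Helm will render the actual Service from the values files we define below:

apiVersion: v1
kind: Service
metadata:
  name: my-ingress-controller        # illustrative name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: 'nlb'
    service.beta.kubernetes.io/aws-load-balancer-scheme: 'internal'
spec:
  type: LoadBalancer
  selector:
    app: my-ingress-controller       # illustrative selector
  ports:
    - port: 80
      targetPort: 80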



Create a dedicated namespace

  • We’ll create a dedicated namespace for each ingress controller so that the resources of both controllers are isolated
kubectl create ns ingress-nginx-internal
kubectl create ns ingress-nginx-external
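  • Verify that both namespaces exist before moving on
kubectl get ns ingress-nginx-internal ingress-nginx-external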


Add the Helm repository

  • For this example we will use the Bitnami repository. To add the repo to Helm, run the command
helm repo add bitnami https://charts.bitnami.com/bitnami
  • Verify that the new repo is in the list of repos of your Helm installation
helm repo list
  • Update the new repo
helm repo update
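  • Optionally, confirm that the chart we are about to install is available in the repo
helm search repo bitnami/nginx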


Installation

Internal Ingress controller

  • Create our custom values file. For that, we will create the nginx-internal.yaml file:
replicaCount: 3
ingress:
  enabled: true
  ingressClassName: "nginx-internal"
service:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: 'nlb'
    service.beta.kubernetes.io/aws-load-balancer-scheme: 'internal'
    service.beta.kubernetes.io/aws-load-balancer-internal: 'true'

  • Where:
    • replicaCount - We specify 3 replicas for the controller. This is just to show that we can assign more or fewer resources to each controller to match the expected volume of internal vs external traffic.
    • ingressClassName - This value specifies the ingress class of the controller and tells the internal controller to watch only ingress resources created with this class. In our RTF installation, this is what determines whether the ingress resources created with each Mule application will be managed by this controller (see the example Ingress manifest at the end of this section).
  • Notice the annotations we need to control the type of load balancer to be created:
    • aws-load-balancer-type - specifies the type of load balancer. It is important to include this annotation; without it we will get a Classic Load Balancer, which is being deprecated.
    • aws-load-balancer-scheme and aws-load-balancer-internal - With these 2 values the NLB will be created as internal and won't be publicly exposed.
  • Install
helm install nginx-internal bitnami/nginx -n ingress-nginx-internal -f nginx-internal.yaml
  • Where:
    • nginx-internal - the name of the Helm release
    • bitnami/nginx - the chart to be installed
    • ingress-nginx-internal - the namespace for the internal ingress controller
    • nginx-internal.yaml - the file with the custom values
  • Verify the service has been installed correctly
kubectl get svc -n ingress-nginx-internal

  • Go to the AWS management console and verify that the Load Balancer created is of the type Network (NLB) and the Scheme is Internal

  • Notice also that AWS has created a DNS record, but this record is not public. If we ping it from a host within the VPC, we can see that it resolves to a private IP
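To illustrate how an application would later select this controller, here is a minimal sketch of an Ingress resource that targets the internal ingress class - the application name, host, and backend service are hypothetical:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-internal-app                    # hypothetical application
spec:
  ingressClassName: nginx-internal
  rules:
    - host: myapp.internal.example.com     # hypothetical internal hostname
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-internal-app      # hypothetical backend service
                port:
                  number: 80

Because ingressClassName is nginx-internal, only the internal controller will pick this resource up, so the application will only be reachable through the internal NLB.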

External Ingress controller

  • Create our custom values file. For that, we will create the nginx-external.yaml file:
replicaCount: 2
ingress:
  enabled: true
  ingressClassName: "nginx-external"
service:
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: 'nlb'
    service.beta.kubernetes.io/aws-load-balancer-scheme: 'internet-facing'
    service.beta.kubernetes.io/aws-load-balancer-internal: 'false'

  • Where:
    • replicaCount - We specify 2 replicas for the controller, assuming we expect less traffic coming from the internet.
    • ingressClassName - This value has to be different from the one used by the internal ingress controller. This is how we differentiate which ingress controller manages internal or external ingress resources.
  • Notice the annotations we need to control the type of load balancer to be created:
    • aws-load-balancer-type - specifies the type of load balancer. As mentioned above, don't forget to include it.
    • aws-load-balancer-scheme and aws-load-balancer-internal - With these 2 values the NLB will be created as internet-facing and will be publicly exposed.
  • Install
helm install nginx-external bitnami/nginx -n ingress-nginx-external -f nginx-external.yaml
  • Where:
    • nginx-external - the name of the Helm release
    • bitnami/nginx - the chart to be installed
    • ingress-nginx-external - the namespace for the external ingress controller
    • nginx-external.yaml - the file with the custom values
  • Verify as well that we've got 2 replicas for the external ingress and 3 for the internal one
kubectl get pods -A
  • Verify that we've now got two services of type LoadBalancer - one for the internal controller and another one for the external controller
kubectl get svc -A


  • Notice that the EXTERNAL-IP of the external ingress is now a public DNS record. Ping that DNS name and you'll see that it resolves to a public IP (see the quick check after this list)
  • Go back to the AWS Management Console. Notice that:
    • We’ve got now 2 load balancers
    • The new balancer created is of type NLB and Internet-facing
    • The DNS record created for the new LB is a public DNS record
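As a quick end-to-end check - replace the placeholder below with the EXTERNAL-IP value returned by kubectl get svc - we can resolve the record and send a test request; any response from NGINX confirms that the external path works:

nslookup <external-nlb-dns-name>
curl -i http://<external-nlb-dns-name>/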


In the next post, we'll get Runtime Fabric installed on this EKS cluster and we'll see how we can deploy Mule apps and make them available externally or internally.