![EKS services with AWS Load Balancer Controller](https://static.wixstatic.com/media/981170_b1362d2a3f03470aa1c95ce1bb5f3925~mv2.jpg/v1/fill/w_980,h_432,al_c,q_85,usm_0.66_1.00_0.01,enc_avif,quality_auto/981170_b1362d2a3f03470aa1c95ce1bb5f3925~mv2.jpg)
As with any compute platform, when you deploy your applications as services or containers on EKS, those services must be reachable from the outside world, i.e. the internet. This can be done in two ways: either the cluster and the nodes where the services run are public, or the services are exposed through a public-facing load balancer. The latter is what we are going to discuss here.
For security and compliance, it is recommended that you create a private EKS cluster and nodes for production. The services must then be exposed only via a public (internet-facing) load balancer, preferably over HTTPS.
Although EKS is a managed service, it does not come bundled with such load balancers, which in my opinion it should, like other managed services such as RDS. Instead, this is left to be managed via Kubernetes resources, mostly during deployments.
How to create an Application Load Balancer (ALB) for EKS services with AWS Load Balancer Controller?
The most common approach to exposing services via a load balancer in Kubernetes is to deploy an ingress resource pointing to the respective services (IP or port). This is managed by an add-on called an ingress controller. Kubernetes supports and maintains three ingress controllers:
Nginx (most common and platform agnostic)
AWS Load Balancer Controller (for AWS)
GCE (for Google Cloud)
Since we are discussing EKS, our controller is the second one; let’s call it AWS LBC (saves me from some typing 🙂).
AWS LBC is an essential add-on for EKS; without it you might have a running cluster, but it will be of little use, as you won't be able to deploy and link your services with a load balancer. There are really two ways you can use AWS LBC to front an EKS cluster with an ALB:
Through an ingress resource
Through a Target group binding
Let’s quickly understand what these two approaches are and then I will tell you the pros and cons of both approaches and which one I finally decided to use for the 10-factor infrastructure framework.
ALB with Ingress (ALB from inside K8s):
In this method the LBC, given all the permissions it needs, calls the AWS API and creates an ALB when an ingress resource is deployed with load balancer annotations such as the following:
"alb.ingress.kubernetes.io/group.name"
"alb.ingress.kubernetes.io/load-balancer-name"
"alb.ingress.kubernetes.io/scheme" = "internet-facing"
"alb.ingress.kubernetes.io/target-type" = "instance"
The first annotation creates an ingress group that subsequent ingresses can join to share the same ALB. Otherwise, every ingress will create its own ALB, which can quickly drain your whole budget. This is called ingress sharing for load balancers.
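To make this concrete, here is a minimal sketch of an ingress carrying these annotations; the names (`my-app`, `shared-alb`, `my-app-svc`, the path) are placeholders for illustration, not from any specific deployment:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  namespace: default
  annotations:
    alb.ingress.kubernetes.io/group.name: shared-alb          # create/join the shared ingress group
    alb.ingress.kubernetes.io/load-balancer-name: eks-public-alb
    alb.ingress.kubernetes.io/scheme: internet-facing         # public ALB
    alb.ingress.kubernetes.io/target-type: instance           # register node ports as targets
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /my-app
            pathType: Prefix
            backend:
              service:
                name: my-app-svc
                port:
                  number: 80
```

Once this is applied, the LBC reconciles it into an actual ALB with a listener rule for `/my-app`.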
ALB with Target Group Binding (ALB from outside K8s):
In this method the LBC is not given any write permissions to the AWS API; instead it has only some specific read permissions so it can register the services or nodes with an externally (already) created ALB. So basically, here the ALB is created outside of K8s, and the K8s services then register themselves with the target group of that ALB using a config like the one below:
```yaml
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: tgb-name
  namespace: tgb-namespace
spec:
  targetType: instance
  targetGroupARN: ${var.target_group_arns}
  serviceRef:
    name: ${var.default_backend_service_name}
    port: ${var.default_backend_service_port}
```
However, there are certain pros and cons to both approaches:
With Ingress:
All you need to do is define an ingress with the ALB annotations, and with the right IAM permissions for the AWS API, an ALB will be created. However, there is not much fine-grained access control available to restrict ingress deployments of a specific type such as ‘load balancer’.
With this kind of wide access, you can end up deploying multiple load balancers, which in turn will cost you a lot of money.
Some cluster operators may prefer to manually manage AWS Load Balancers to support use cases like:
Preventing accidental release of key IP addresses.
Supporting load balancers where the Kubernetes cluster is one of multiple targets.
Complying with organizational requirements on provisioning load balancers, for security or cost reasons.
Although such concerns may lead you to choose Target Group Binding for ALB creation, there are some disadvantages to that approach that you must be aware of and prepared for.
With Target Group Binding:
It would be useful if the service deployment is totally under infra provisioning; it needs the arn of the target group to map to each service ip or nodeport.
This might become complicated for the users since with every service deployment they will have to deploy a target group binding mapping the backend service and port.
In addition, ingress sharing or path based routing does not seem to be possible under the target group as it directly adds either nodeport or service port.
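To illustrate the per-service overhead: every additional service needs its own binding to its own separately provisioned target group. A sketch with hypothetical names (`orders-svc`, the target group ARN is a placeholder):

```yaml
# A second service cannot reuse the first service's binding; it needs
# its own TargetGroupBinding pointing at a target group that was
# already created outside of K8s (e.g. via Terraform).
apiVersion: elbv2.k8s.aws/v1beta1
kind: TargetGroupBinding
metadata:
  name: orders-tgb
  namespace: orders
spec:
  targetType: instance
  targetGroupARN: arn:aws:elasticloadbalancing:region:account:targetgroup/orders-tg/placeholder   # pre-created ARN
  serviceRef:
    name: orders-svc      # should be a NodePort service for targetType: instance
    port: 8080
```

Multiply this by every service in the cluster and the operational overhead of the approach becomes clear.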
To Summarise:
Use the AWS LBC ingress to create the load balancer with path-based routing and an ingress group, and ask users to attach their ingresses to that group. That way users will be able to share a load balancer easily.
Whenever there is a need to create a new load balancer for EKS services, make sure you can utilize it optimally by sharing it across services with an ingress group. You can use the kubectl CLI to deploy the ingress load balancer; however, I recommend using the Terraform Kubernetes provider, since you will be provisioning an infrastructure service and it's better to use one standard mechanism as much as possible.
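For example, a second team's ingress can simply join the existing group instead of creating a new ALB; a minimal sketch, assuming a shared group named `shared-alb` and placeholder service names:

```yaml
# Joins the existing ingress group rather than creating a new ALB;
# the group.name must match the one used by the first ingress.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: team-b-app
  namespace: team-b
  annotations:
    alb.ingress.kubernetes.io/group.name: shared-alb   # same group => same ALB
spec:
  ingressClassName: alb
  rules:
    - http:
        paths:
          - path: /team-b
            pathType: Prefix
            backend:
              service:
                name: team-b-svc
                port:
                  number: 80
```

The LBC merges all ingresses in the group into one ALB's rule set, so each team only pays for a listener rule, not a whole load balancer.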
Some key points to remember for AWS LBC:
👉 LBC needs the vpc-cni add-on, which is not installed by default.
👉 Assume-role policies for both LBC and vpc-cni must be accurate.
👉 By default, EKS creates a security group which allows all traffic within the cluster and nodes.
👉 For private clusters, additional security groups must be created for the cluster as well as the nodes, to allow traffic from the ALB subnets to the nodes and to allow access to the API server from other subnets within the VPC.
If you like this article, I am sure you will find the 10-Factor Infrastructure even more useful. It compiles all these tried and tested methodologies, design patterns & best practices into a complete framework for building secure, scalable and resilient modern infrastructure.
Don’t let your best-selling product suffer due to an unstable, vulnerable & mutable infrastructure.
Thanks & Regards
Kamalika Majumder