

Jun 10, 2024 | 7 Minute Read

Provisioning Kubernetes Clusters On AWS Using Terraform And EKS

Hanush Kumar - Marketing Associate


Introduction

Efficient management and orchestration of Kubernetes clusters have become essential to ensuring the availability of modern applications. However, setting up Kubernetes clusters from scratch is laborious. Amazon Web Services (AWS) offers a robust solution for orchestrating Kubernetes clusters with its service, Amazon Elastic Kubernetes Service (EKS).

Terraform simplifies the infrastructure provisioning process. Without Terraform, developers would have to manually configure and provision infrastructure resources for Kubernetes clusters on AWS.

Manual processes are time-consuming, error-prone, and inconsistent across production, staging, and testing environments, which undermines the repeatability of deployments.

What Are Kubernetes Clusters?

Kubernetes clusters are groups of machines that run containerized applications. Every Kubernetes deployment runs on a cluster, which consists of a control plane and one or more compute machines, or nodes. The cluster continuously monitors the status of its containers, and if a container goes down, Kubernetes immediately relaunches it, making applications fault-tolerant.

What Does AWS EKS Offer?

Elastic Kubernetes Service is a managed service offered by AWS that makes Kubernetes clusters easier to run and scale. In other words, if you’re using AWS EKS, you no longer need to install, operate, and maintain your own Kubernetes control plane on AWS.

How Does Terraform Come Into The Picture?

Setting up AWS EKS involves many steps, and repeating them manually for every environment eats into developers’ time. With Terraform, you can write the configuration once and automate provisioning of the required infrastructure based on your application’s requirements.

Simply put, Terraform provisions and manages the infrastructure required to run an EKS cluster, including the VPCs, subnets, IAM roles, security groups, and EC2 instances necessary to set one up.

Here's how Terraform works with AWS EKS for infrastructure provisioning and management:

  1. VPC And Networking: Terraform specifies the networking components required for an EKS cluster, such as VPCs, subnets, route tables, internet gateways, and NAT gateways. These resources provide the EKS cluster's network infrastructure and associated components.
  2. IAM Roles And Policies: You can define IAM roles and policies required for EKS, such as roles for the EKS service and worker nodes, enabling access to AWS resources.
  3. Security Groups: Terraform creates security groups to control inbound and outbound traffic to the EKS cluster and its components, ensuring that only authorized traffic is allowed.
  4. Auto Scaling Groups: You can define auto-scaling groups for worker nodes in the EKS cluster, allowing for automatic scaling based on resource demand.
  5. Load Balancers: Terraform provisions and configures load balancers, such as Application or Network Load Balancers, to distribute traffic to the EKS cluster's applications.
  6. EKS Cluster Configuration: Terraform also defines the EKS cluster itself, including the Kubernetes version, instance types for worker nodes, and other cluster settings, as the sketch below illustrates.
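
For illustration, here is a minimal sketch of how an EKS cluster and its IAM role could be declared directly with the AWS provider. The role name, cluster name, and subnet IDs below are placeholders, not part of the walkthrough that follows:

# Hypothetical sketch: an IAM role that the EKS control plane can assume
resource "aws_iam_role" "eks_cluster" {
  name = "demo-eks-cluster-role"

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "eks.amazonaws.com" }
    }]
  })
}

# Attach the AWS-managed policy that grants the control plane its permissions
resource "aws_iam_role_policy_attachment" "eks_cluster_policy" {
  role       = aws_iam_role.eks_cluster.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
}

# The cluster itself: Kubernetes version and networking are plain arguments
resource "aws_eks_cluster" "demo" {
  name     = "demo-cluster"
  role_arn = aws_iam_role.eks_cluster.arn
  version  = "1.29"

  vpc_config {
    subnet_ids = ["subnet-aaaa1111", "subnet-bbbb2222"] # placeholder subnet IDs
  }
}

The walkthrough below uses the community terraform-aws-modules wrappers instead, which bundle these resources with sensible defaults.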

How To Create EKS Clusters Using Terraform

Follow these steps to provision Kubernetes clusters on AWS using Terraform and EKS:

  1. Install: Download and install Terraform on your local machine or CI/CD environment.
  2. Define: Write Terraform configuration files (.tf) to define Kubernetes clusters, node groups, networking, security, and other resources.
  3. Initialize: Run ‘terraform init’ to initialize the Terraform project and download provider plugins.
  4. Plan And Apply: Run ‘terraform plan’ to preview the changes and ‘terraform apply’ to provision the infrastructure on AWS.
  5. Deploy Apps: Once the Kubernetes cluster is provisioned, deploy containerized applications using tools like ‘kubectl’ or AWS CodeDeploy.

The configuration below sets up a highly available EKS cluster with managed node groups, VPC networking, and additional Kubernetes add-ons for enhanced functionality. It uses modules for the VPC and EKS, making the infrastructure reusable and modular.

Step 1: Install Terraform

Follow the installation instructions for your operating system. For example, on macOS, you can use Homebrew:

 

$ brew tap hashicorp/tap
$ brew install hashicorp/tap/terraform

On Windows, you can download the binary, extract it, and add it to your PATH.

Verify the Installation:

After installation, verify that Terraform is installed correctly by running:

 

$ terraform -v
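
You should see output similar to the following; the exact version and platform will vary with your installation:

Terraform v1.8.5
on darwin_arm64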

Step 2: Define

Terraform uses configuration files (.tf) to define the desired state of your infrastructure. For provisioning Kubernetes clusters on AWS, you will need to create several configuration files to define various resources like VPCs, EKS clusters, node groups, networking, and security settings. Here's a breakdown of the essential configuration files:

a. terraform.tf: Defines the required providers and the backend configuration.

 

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.47.0"
    }

    random = {
      source  = "hashicorp/random"
      version = "~> 3.6.1"
    }

    tls = {
      source  = "hashicorp/tls"
      version = "~> 4.0.5"
    }

    cloudinit = {
      source  = "hashicorp/cloudinit"
      version = "~> 2.3.4"
    }
  }

  backend "s3" {
    region = "us-west-2"
    bucket = "terraform-state"
    key    = "terraform.tfstate"
  }
  required_version = "~> 1.8"
}

This:

  • Specifies required providers with their versions.
  • Configures Terraform to use an S3 bucket for remote state storage.
    • `source = "hashicorp/aws"`: Indicates that the AWS provider should be downloaded from the HashiCorp registry.
    • `version = "~> 5.47.0"`: Pins the AWS provider to version 5.47.0 or any later 5.47.x patch release (e.g., 5.47.1, but not 5.48.0).
  • Sets the required Terraform version.

Fig. 1. Configured S3 Buckets for Terraform Storage

Fig. 2. Terraform Providers

b. variables.tf: Defines the input variables for your configuration.

 

variable "region" {
  description = "AWS region"
  type        = string
  default     = "us-west-2"
}

  • This file defines a variable for the AWS region with a default value of "us-west-2".
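
To target a different region without editing variables.tf, you can override the default, for example with a terraform.tfvars file (a hypothetical override is shown here) or a -var flag on the command line:

# terraform.tfvars (optional): overrides the default region
region = "us-east-1"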

c. main.tf: Contains the main configuration for provisioning the EKS cluster and associated resources.

 

# Provider configuration
provider "aws" {
  region = var.region
}

  • This specifies the AWS provider with the region set from a variable.

 

# Filter out local zones, which are not currently supported
# with managed node groups

# Data source to get availability zones
data "aws_availability_zones" "available" {
  filter {
    name   = "opt-in-status"
    values = ["opt-in-not-required"]
  }
}

  • This fetches the list of available AWS Availability Zones that do not require opt-in.

 

# Local variables
locals {
  vpc_name     = "demo-vpc-368"
  cluster_name = "demo-cluster368"
  vpc_cidr     = "10.0.0.0/16"
  azs          = slice(data.aws_availability_zones.available.names, 0, 3)
  tags = {
    ManagedBy = "terraform"
  }
}

  • This defines the local variables for the VPC name, cluster name, CIDR block for the VPC, and the first three availability zones.
  • This also defines the tags for resources.

 

# VPC module
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  name = local.vpc_name
  cidr = local.vpc_cidr

  azs             = local.azs
  private_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 4, k)]
  public_subnets  = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 48)]

  enable_nat_gateway = true
  single_nat_gateway = true

  public_subnet_tags = {
    "kubernetes.io/role/elb" = 1
  }

  private_subnet_tags = {
    "kubernetes.io/role/internal-elb" = 1
  }

  tags = local.tags
}

This:

  • Configures a VPC with public and private subnets across three availability zones.
  • Enables a single NAT gateway.
  • Tags the subnets for Kubernetes ELB and internal ELB roles.
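
If you want to see exactly which subnet ranges the cidrsubnet() expressions above produce, you can evaluate them in terraform console. With the 10.0.0.0/16 VPC CIDR, the three private subnets come out as /20s and the public subnets as /24s:

$ terraform console
> cidrsubnet("10.0.0.0/16", 4, 0)
"10.0.0.0/20"
> cidrsubnet("10.0.0.0/16", 4, 1)
"10.0.16.0/20"
> cidrsubnet("10.0.0.0/16", 8, 48)
"10.0.48.0/24"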

 

# EKS module
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "20.8.5"

  cluster_name    = local.cluster_name
  cluster_version = "1.29"

  cluster_endpoint_public_access           = true
  enable_cluster_creator_admin_permissions = true

  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets

  eks_managed_node_group_defaults = {
    ami_type = "AL2_x86_64"
  }

  eks_managed_node_groups = {
    one = {
      name = "node-group-1"

      instance_types = ["t3.small"]

      min_size     = 1
      max_size     = 3
      desired_size = 2
    }

    two = {
      name = "node-group-2"

      instance_types = ["t3.small"]

      min_size     = 1
      max_size     = 2
      desired_size = 1
    }
  }
}

This:

  • Creates an EKS cluster with a specified version and public endpoint access.
  • Sets up managed node groups with specified instance types and scaling configurations.
  • Uses the VPC and subnets created in the VPC module.

 

# EKS Blueprints add-ons module
module "eks_blueprints_addons" {
  source  = "aws-ia/eks-blueprints-addons/aws"
  version = "~> 1.0" # ensure you update this to the latest/desired version

  cluster_name      = module.eks.cluster_name
  cluster_endpoint  = module.eks.cluster_endpoint
  cluster_version   = module.eks.cluster_version
  oidc_provider_arn = module.eks.oidc_provider_arn

  eks_addons = {
    aws-ebs-csi-driver = {
      most_recent = true
    }
    coredns = {
      most_recent = true
    }
    vpc-cni = {
      most_recent = true
    }
    kube-proxy = {
      most_recent = true
    }
  }

  enable_karpenter             = true
  enable_kube_prometheus_stack = true
  enable_metrics_server        = true
  tags = {
    Environment = "dev"
  }
}

This:

  • Adds common EKS addons like EBS CSI driver, CoreDNS, VPC CNI, and kube-proxy.
  • Enables additional tools like Karpenter (an open-source autoscaler), Prometheus stack, and metrics server.
  • Tags the addons with the "dev" environment.
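
Once the cluster is up and ‘kubectl’ is configured (see Steps 3-5 below), you can verify that these add-ons are running, since the EKS add-ons appear as pods in the kube-system namespace:

$ kubectl get pods -n kube-system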

 

# Kubernetes Provider Configuration
provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    # This requires the awscli to be installed locally where Terraform is being executed
    args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name, "--region", var.region]
  }
}

  • This configures the Kubernetes provider to interact with the EKS cluster using AWS authentication.

 

# Helm Provider Configuration
provider "helm" {
  kubernetes {
    host                   = module.eks.cluster_endpoint
    cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

    exec {
      api_version = "client.authentication.k8s.io/v1beta1"
      command     = "aws"
      # This requires the awscli to be installed locally where Terraform is executed
      args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name, "--region", var.region]
    }
  }
}

  • This configures the Helm provider to manage Kubernetes applications using the Helm package manager.
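
With the Helm provider configured, charts can be installed declaratively as Terraform resources. As a sketch (the chart and release names here are illustrative, not part of this walkthrough), the ingress-nginx chart could be deployed like this:

# Hypothetical example: install the ingress-nginx chart via the Helm provider
resource "helm_release" "nginx_ingress" {
  name             = "nginx-ingress"
  repository       = "https://kubernetes.github.io/ingress-nginx"
  chart            = "ingress-nginx"
  namespace        = "ingress-nginx"
  create_namespace = true
}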

d. output.tf: Defines the output values of the configuration.

 

output "cluster_endpoint" {
  description = "Endpoint for EKS control plane"
  value       = module.eks.cluster_endpoint
}

output "cluster_security_group_id" {
  description = "Security group ids attached to the cluster control plane"
  value       = module.eks.cluster_security_group_id
}

output "region" {
  description = "AWS region"
  value       = var.region
}

output "cluster_name" {
  description = "Kubernetes Cluster Name"
  value       = module.eks.cluster_name
}

  • This defines outputs for the EKS cluster endpoint, security group ID, region, and cluster name to be easily accessible.
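
After ‘terraform apply’ completes, these values can be read back from the state. For example, with the locals defined above:

$ terraform output -raw cluster_name
demo-cluster368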

Step 3: Initialize The Project

Navigate to the directory containing your Terraform configuration files and run:

 

$ terraform init

This command will:

  • Download and install the necessary provider plugins.
  • Configure the backend specified in the terraform.tf file.
  • Prepare the working directory for other Terraform commands.

Fig. 3. Output of "terraform init"

Step 4: Plan And Apply

Run ‘terraform plan’ and then ‘terraform apply’:

  • Plan the Infrastructure:

The ‘terraform plan’ command creates an execution plan, showing the changes that will be made to your infrastructure. This step helps you understand what Terraform will do before actually making any changes.

$ terraform plan

The command will output a detailed list of actions Terraform will perform, such as creating new resources or updating existing ones. Review this output carefully to ensure it matches your expectations.

Fig. 4. Processing "terraform plan"

Fig. 5. Output of "terraform plan"

  • Apply the Configuration:

The ‘terraform apply’ command applies the changes required to reach the desired state of the configuration. Terraform will prompt you for confirmation before making any changes.

$ terraform apply

Confirm the apply action by typing yes when prompted. Terraform will then proceed to create the resources as defined in your configuration files. Depending on the complexity of your infrastructure, this process may take several minutes.

Fig. 6. Output of "terraform apply"

Step 5: Deploy Applications

Once your Kubernetes cluster is provisioned, you can deploy containerized applications using tools like kubectl or AWS CodeDeploy.

  • Configure kubectl:

Set up your ‘kubeconfig’ to access the EKS cluster. Terraform provides the cluster endpoint as an output, which you can use to configure kubectl.

 

$ aws eks --region $(terraform output -raw region) update-kubeconfig --name $(terraform output -raw cluster_name)

Fig. 7. Output of "configure kubectl"

With ‘kubectl’ configured, you can now deploy applications to your Kubernetes cluster. For example, to deploy a simple Nginx application:

 

$ kubectl create deployment nginx --image=nginx
$ kubectl expose deployment nginx --port=80 --type=LoadBalancer

Fig. 8. Output of "kubectl create deployment nginx"

Fig. 9. Output of "kubectl expose deployment nginx"

  • Monitor Your Applications:

Use ‘kubectl’ commands to monitor the status of your applications and resources. For example:

 

$ kubectl get pods
$ kubectl get services

Fig. 10. Output of "kubectl get pods"

Fig. 11. Output of "kubectl get services"

  • Automate Deployments with AWS CodeDeploy:

AWS CodeDeploy is a deployment service that automates application deployments to Amazon EC2 instances, on-premises instances, or Lambda functions. To deploy applications using AWS CodeDeploy, create an application and a deployment group, then configure your application’s deployment process using the CodeDeploy console or AWS CLI.
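
As a starting point, the application and deployment group can be created from the AWS CLI. A minimal sketch, assuming an existing CodeDeploy service role (the names and account ID are placeholders):

$ aws deploy create-application --application-name demo-app --compute-platform Server
$ aws deploy create-deployment-group \
    --application-name demo-app \
    --deployment-group-name demo-dg \
    --service-role-arn arn:aws:iam::123456789012:role/CodeDeployServiceRole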

Conclusion

Provisioning Kubernetes clusters on AWS with Terraform and EKS offers a streamlined and efficient approach to deploying containerized applications in the cloud.

Using Terraform's infrastructure-as-code capabilities and EKS's managed Kubernetes service, businesses can automate the provisioning process, improve scalability, enhance security, and accelerate time-to-market for their applications.

Our expert digital engineers empower organizations to embrace cloud-native technologies and drive innovation at scale. Schedule a meeting with them to discuss your niche business requirements.

About the Author

Shahid Tariq - Senior Software Engineer

Shahid enjoys exploring new technologies and traveling, always eager to learn, help, guide, and mentor. He loves coffee, naps, and fresh air, which rejuvenates him to face challenges. Passionate about innovation, wellness, and travel, he loves meeting new people and dreams of island destinations.



Hanush Kumar - Marketing Associate

Hanush finds joy in YouTube content on automobiles and smartphones, prefers watching thrillers, and enjoys movie directors' interviews where they give out book recommendations. His essential life values? Positivity, continuous learning, self-respect, and integrity.
