Table of contents
- Prerequisites
- Step 1: Set Up AWS CLI and Terraform
- Step 2: Create a Terraform Configuration for VPC
- Step 3: Create a Terraform Configuration for EKS Cluster
- Step 4: Create a Terraform Configuration for Security Groups
- Step 5: Create Outputs File
- Step 6: Create S3 Bucket and DynamoDB Table for the State File
- Step 7: Enable Remote Backend for the State File
- Step 8: Execute Terraform Commands
- Step 9: Configure Backend and Reapply
- Step 10: Clean Up
- Conclusion
This tutorial will guide you through creating an Amazon EKS (Elastic Kubernetes Service) cluster inside a VPC (Virtual Private Cloud) using Terraform. We'll cover all necessary configurations, from setting up AWS CLI and Terraform to implementing auto-scaling and configuring security groups. By the end of this tutorial, you'll have a fully functional EKS cluster ready for your Kubernetes workloads.
Prerequisites
Before we begin, ensure you have the following installed on your machine:
AWS CLI: Installation Guide
Terraform: Installation Guide
Follow this GitHub Repo: https://github.com/nishankkoul/EKS-VPC-Terraform
Step 1: Set Up AWS CLI and Terraform
Install AWS CLI and Terraform using the links provided above.
Configure AWS CLI with your credentials:
aws configure
Enter your AWS Access Key, Secret Access Key, region, and output format when prompted.
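For example, a typical session looks like this (the values shown are placeholders, not real credentials):
AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/EXAMPLEKEY
Default region name [None]: us-east-1
Default output format [None]: json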
Step 2: Create a Terraform Configuration for VPC
First, we'll create a Terraform configuration file for the VPC. This configuration sets up a VPC, subnets, and other necessary networking components.
provider "aws" {
region = var.aws_region
}
data "aws_availability_zones" "available" {}
resource "random_string" "suffix" {
length = 8
special = false
}
locals {
cluster_name = "eks-cluster-${random_string.suffix.result}"
}
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "3.0"
name = "eks-vpc"
cidr = var.vpc_cidr
azs = data.aws_availability_zones.available.names
private_subnets = var.private_subnets
public_subnets = var.public_subnets
enable_nat_gateway = true
single_nat_gateway = true
enable_dns_hostnames = true
enable_dns_support = true
}
provider "aws": This block configures the AWS provider with the specified region, which is defined in a variable (var.aws
_region
). This ensures that all resources are created in the desired AWS region.
data "aws_availability_zones" "available": This block retrieves the list of available availability zones in the specified region. This data source ensures the subnets are distributed across different availability zones for high availability.
resource "random_string" "suffix": This resource generates a random string of 8 alphanumeric characters. This is useful for creating unique names for resources to avoid conflicts.
locals: This block defines a local variable cluster_name
that constructs the EKS cluster name by appending the random string generated in the previous step to "eks-cluster". This ensures the cluster name is unique.
module "vpc": This block uses the terraform-aws-modules/vpc/aws
module to create a VPC. The module simplifies VPC creation by providing predefined configurations.
source: Specifies the source of the VPC module.
version: Specifies the version of the VPC module.
name: Assigns a name to the VPC.
cidr: Specifies the CIDR block for the VPC, defined in a variable (
var.vpc_cidr
).azs: Uses the list of available availability zones to distribute the subnets.
private_subnets: Defines the CIDR blocks for private subnets.
public_subnets: Defines the CIDR blocks for public subnets.
enable_nat_gateway: Enables NAT gateways for internet access from private subnets.
single_nat_gateway: Creates a single NAT gateway to reduce costs.
enable_dns_hostnames: Enables DNS hostnames in the VPC.
enable_dns_support: Enables DNS support in the VPC.
This configuration sets up the foundational networking components necessary for an EKS cluster, ensuring a robust, scalable, and secure environment.
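The configuration references several input variables (var.aws_region, var.vpc_cidr, var.private_subnets, var.public_subnets) that must be declared somewhere, typically in a variables.tf file. Here is a minimal sketch with illustrative defaults; the CIDR values are examples and can be adapted to your network plan. It also declares var.kubernetes_version and var.key_name, which the EKS configuration in Step 3 uses:
variable "aws_region" {
  description = "AWS region for all resources"
  type        = string
  default     = "us-east-1"
}

variable "vpc_cidr" {
  description = "CIDR block for the VPC"
  type        = string
  default     = "10.0.0.0/16"
}

variable "private_subnets" {
  description = "CIDR blocks for the private subnets"
  type        = list(string)
  default     = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
}

variable "public_subnets" {
  description = "CIDR blocks for the public subnets"
  type        = list(string)
  default     = ["10.0.4.0/24", "10.0.5.0/24", "10.0.6.0/24"]
}

variable "kubernetes_version" {
  description = "Kubernetes version for the EKS cluster"
  type        = string
  default     = "1.27" # illustrative; pick a version supported by EKS and your module version
}

variable "key_name" {
  description = "Name of an existing EC2 key pair for SSH access to the worker nodes"
  type        = string
}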
Step 3: Create a Terraform Configuration for EKS Cluster
Next, we'll create a Terraform configuration file for the EKS cluster.
module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "12.0"
cluster_name = local.cluster_name
cluster_version = var.kubernetes_version
subnet_ids = module.vpc.private_subnets
node_groups = {
eks_nodes = {
desired_capacity = 2
max_capacity = 6
min_capacity = 2
instance_type = "t3.medium"
key_name = var.key_name
ami_type = "AL2_x86_64"
}
}
}
This Terraform configuration sets up an Amazon EKS (Elastic Kubernetes Service) cluster using a predefined module. The module simplifies the creation and management of EKS resources, ensuring a scalable and secure Kubernetes environment. Note that the module's inputs vary between major versions; with version 12, the subnet list is passed as subnets and vpc_id is required.
module "eks": Uses the terraform-aws-modules/eks/aws module to create an EKS cluster, abstracting away the complexity of setting up the cluster and its associated resources.
source: Specifies the source of the EKS module.
version: Specifies the version of the EKS module.
cluster_name: Sets the name of the EKS cluster, using the local variable local.cluster_name for uniqueness.
cluster_version: Defines the Kubernetes version for the cluster, sourced from var.kubernetes_version.
vpc_id: The ID of the VPC created in Step 2, taken from the VPC module.
subnets: The subnets where the EKS cluster and its worker nodes are deployed, using the private subnets from the VPC module.
node_groups: Configures the managed node groups (worker nodes) for the EKS cluster.
eks_nodes: Defines the properties of the primary node group.
desired_capacity: The desired number of worker nodes.
max_capacity: The maximum number of worker nodes.
min_capacity: The minimum number of worker nodes.
instance_type: The EC2 instance type for the worker nodes (t3.medium).
key_name: The key pair name for SSH access to the nodes, sourced from var.key_name.
ami_type: The Amazon Machine Image (AMI) type for the worker nodes (AL2_x86_64, i.e. Amazon Linux 2).
This configuration ensures that the EKS cluster is deployed in a highly available and scalable manner, with nodes distributed across multiple subnets for fault tolerance.
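Once the cluster is up, you can point kubectl at it using the AWS CLI's built-in kubeconfig helper. The region and cluster name below are placeholders; substitute your own values (the cluster name comes from the cluster_name local defined in Step 2):
aws eks update-kubeconfig --region us-east-1 --name <your-cluster-name>
kubectl get nodes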
Step 4: Create a Terraform Configuration for Security Groups
Now, we need to set up security groups to manage access to the worker nodes.
resource "aws_security_group" "all_worker_mgmt" {
vpc_id = module.vpc.vpc_id
ingress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16"]
}
egress {
from_port = 0
to_port = 0
protocol = "-1"
cidr_blocks = ["0.0.0.0/0"]
}
}
This Terraform configuration defines a security group that manages network access to the worker nodes in the EKS cluster. Security groups act as virtual firewalls that control inbound and outbound traffic to AWS resources. The all_worker_mgmt group is created inside the VPC and carries an ingress rule allowing inbound traffic from the RFC 1918 private ranges (10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16) to facilitate internal communication within the cluster, plus an egress rule permitting unrestricted outbound traffic to any destination (0.0.0.0/0) so the worker nodes can reach external services as needed.
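Opening all protocols from the private ranges keeps the tutorial simple, but management access can be scoped more tightly. As a minimal sketch (the worker_ssh name here is hypothetical, not part of the repo), the following security group permits only SSH from inside the VPC CIDR:
resource "aws_security_group" "worker_ssh" {
  vpc_id = module.vpc.vpc_id

  # Allow SSH only from addresses inside the VPC
  ingress {
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = [var.vpc_cidr]
  }
}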
Step 5: Create Outputs File
We'll define outputs to capture essential information about the EKS cluster.
output "cluster_id" {
description = "EKS cluster ID"
value = module.eks.cluster_id
}
output "cluster_endpoint" {
description = "EKS cluster endpoint"
value = module.eks.cluster_endpoint
}
output "cluster_security_group_id" {
description = "EKS cluster security group IDs"
value = module.eks.cluster_security_group_id
}
output "oidc_provider_arn" {
description = "OIDC provider ARN"
value = module.eks.oidc_provider_arn
}
This Terraform configuration defines four output values for the Amazon EKS cluster. Each output exposes a specific detail of the deployed cluster that users or other Terraform configurations can consume:
cluster_id: The unique identifier of the EKS cluster, fetched from the eks module.
cluster_endpoint: The endpoint for the EKS control plane, i.e. the URL used to interact with the cluster's API server.
cluster_security_group_id: The security group ID attached to the EKS cluster control plane, used to manage network access to it.
oidc_provider_arn: The ARN of the OpenID Connect (OIDC) provider associated with the cluster, used to manage IAM roles for service accounts (IRSA).
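Once terraform apply completes, these values can be read back from the command line:
terraform output cluster_endpoint
terraform output -json
The first command prints a single output; the second dumps all outputs as JSON, which is handy for scripting. Step 7 also shows how another configuration can consume them through the remote state.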
Step 6: Create S3 Bucket and DynamoDB Table for the State File
For security and consistency, store the Terraform state file in an S3 bucket and implement state locking using DynamoDB.
resource "aws_s3_bucket" "terraform_state" {
bucket = "your-terraform-state-bucket"
}
resource "aws_dynamodb_table" "terraform_locks" {
name = "terraform-locks"
billing_mode = "PAY_PER_REQUEST"
hash_key = "LockID"
attribute {
name = "LockID"
type = "S"
}
}
This Terraform configuration sets up the necessary resources to store and manage the Terraform state file securely using AWS. It involves creating an S3 bucket for the state file and a DynamoDB table for state locking.
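Optionally, you can enable versioning on the state bucket so earlier versions of the state file remain recoverable. A minimal sketch, assuming AWS provider v4 or later (where versioning is configured as a separate resource):
resource "aws_s3_bucket_versioning" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id

  versioning_configuration {
    status = "Enabled"
  }
}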
Step 7: Enable Remote Backend for the State File
Create a backend configuration file (backend.tf):
terraform {
  backend "s3" {
    bucket         = "your-terraform-state-bucket"
    key            = "terraform/eks/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"
  }
}
This Terraform configuration block stores the state file in the S3 bucket your-terraform-state-bucket in the us-east-1 region under the key terraform/eks/terraform.tfstate, and enables state locking with the DynamoDB table terraform-locks, ensuring that only one Terraform process can modify the state at a time. This setup improves the security and consistency of managing infrastructure state in a shared environment.
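Other Terraform projects can read this state's outputs through the terraform_remote_state data source. A minimal sketch using the same bucket, key, and region as above:
data "terraform_remote_state" "eks" {
  backend = "s3"

  config = {
    bucket = "your-terraform-state-bucket"
    key    = "terraform/eks/terraform.tfstate"
    region = "us-east-1"
  }
}

# Example reference: data.terraform_remote_state.eks.outputs.cluster_endpoint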
Note: Comment out backend.tf before the first run of the project; the S3 bucket and DynamoDB table do not exist yet, so terraform init would fail otherwise.
Step 8: Execute Terraform Commands
Initialize Terraform:
terraform init
Plan the infrastructure:
terraform plan
Apply the changes:
terraform apply --auto-approve
Step 9: Configure Backend and Reapply
After the initial setup, uncomment the backend configuration, migrate the state to S3, and reapply the Terraform configuration:
Uncomment the backend configuration in backend.tf.
Re-initialize Terraform so it picks up the S3 backend and migrates the existing state:
terraform init -migrate-state
Delete the local state file and backup (terraform.tfstate and terraform.tfstate.backup) once the state is stored in S3.
Reapply the configuration:
terraform apply --auto-approve
Step 10: Clean Up
Finally, clean up the resources to avoid incurring charges:
terraform destroy
Conclusion
By following this guide, you've successfully set up an Amazon EKS cluster with a VPC using Terraform. This configuration ensures high availability, security, and scalability for your Kubernetes workloads. Keep this guide handy for future reference and modifications. Happy Terraforming!
Follow this GitHub Repo: https://github.com/nishankkoul/EKS-VPC-Terraform