Deploying an Amazon EKS (Elastic Kubernetes Service) 1.31 cluster with Terraform uses Infrastructure as Code (IaC) to automate the creation and management of the cluster and its associated resources. You typically define the EKS configuration in reusable, shareable Terraform files or modules, which gives you consistent, repeatable, and version-controlled deployments. The AWS provider can authenticate using any of the following methods:
- AWS shared credentials/configuration files
- Environment variables
- Static credentials
- EC2 instance metadata
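For example, static credentials can be exported as environment variables (placeholder values shown; avoid committing real keys anywhere):

export AWS_ACCESS_KEY_ID="<your-access-key-id>"
export AWS_SECRET_ACCESS_KEY="<your-secret-access-key>"
export AWS_DEFAULT_REGION="us-east-1"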
Log in to the AWS Console
- Open the IAM Dashboard.
- Create a user. Username: ashish
- Attach the AdministratorAccess policy.
- Create an access key and secret key.
Create an EC2 instance
- Open the EC2 Dashboard.
- Launch instance
- Name and Tags: MyTest
- Application and OS Image (AMI): Amazon Linux 2023 AMI
- Instance Type: t2.micro
- Key pair: ashish.pem
- Network Settings: VPC, subnet
- Security Group: 22 - SSH (inbound)
- Storage: minimum 8 GiB, gp3
- Click Launch instance
Log in to the EC2 instance and configure the access key and secret key.
Log in to the EC2 instance:
ssh -i "ashish.pem" ec2-user@<EC2-public-IP>
Configure the access key and secret key using the AWS CLI:
[root@ip-172-31-88-31 ~]# aws configure
AWS Access Key ID [None]: ****************4E4R
AWS Secret Access Key [None]: ****************HRJx
Default region name [None]: us-east-1
Default output format [None]:
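To confirm the credentials work, you can ask STS who you are:

[root@ip-172-31-88-31 ~]# aws sts get-caller-identity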
Install Terraform
sudo yum install -y yum-utils shadow-utils
sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/AmazonLinux/hashicorp.repo
sudo yum -y install terraform
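Verify the installation:

terraform version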
Deploying an EKS Cluster 1.31 with Terraform
Create a folder
[root@ip-172-31-6-151 ~]# mkdir eks_terraform
[root@ip-172-31-6-151 ~]# cd eks_terraform
Create the AWS provider configuration and save it as eks_terraform/provider.tf.
provider.tf
[root@ip-172-31-6-151 eks_terraform]# ls -lth
total 4.0K
-rw-r--r--. 1 root root 188 Jun 10 07:04 provider.tf
[root@ip-172-31-6-151 eks_terraform]# cat provider.tf
# Configure the AWS Provider
provider "aws" {
  region = "us-east-1"
}

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}
Notes:
- In this file you only need to specify the region where the VPC and EKS cluster will be created.
- You can also set version constraints on the AWS provider ("~> 4.0" allows any 4.x release but not 5.0) and on any other providers you use in your code.
Output:
[root@ip-172-31-6-151 eks_terraform]# terraform init
Initializing the backend...
Initializing provider plugins...
- Finding hashicorp/aws versions matching "~> 4.0"...
- Installing hashicorp/aws v4.67.0...
- Installed hashicorp/aws v4.67.0 (signed by HashiCorp)
Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
[root@ip-172-31-6-151 eks_terraform]# terraform plan
No changes. Your infrastructure matches the configuration.
Terraform has compared your real infrastructure against your configuration and found no differences, so no changes are needed.
The next step is to create a virtual private cloud in AWS using the aws_vpc resource.
There is one required argument: the size of your network. A 10.0.0.0/16 CIDR block gives you roughly 65 thousand IP addresses. For convenience, you can also tag the VPC; here we name it myvpc.
vpc.tf
# Create a VPC
resource "aws_vpc" "myvpc" {
  cidr_block = "10.0.0.0/16"

  tags = {
    Name = "myvpc"
  }
}
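EKS also relies on DNS resolution inside the VPC (for example, for worker nodes to reach the cluster endpoint). enable_dns_support is already true by default, but enable_dns_hostnames is not; a sketch of the extended resource if you want both set explicitly (the minimal version above still works for this walkthrough):

resource "aws_vpc" "myvpc" {
  cidr_block = "10.0.0.0/16"

  # DNS settings that EKS-related tooling commonly expects
  enable_dns_support   = true
  enable_dns_hostnames = true

  tags = {
    Name = "myvpc"
  }
}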
Create an Internet Gateway using Terraform
- To provide internet access for your services, the VPC needs an internet gateway. We attach it to the VPC we just created, and it will serve as the default route in the public subnets. Save it as eks_terraform/igw.tf.
igw.tf
resource "aws_internet_gateway" "myvpc-igw" {
vpc_id = aws_vpc.myvpc.id
tags = {
Name = "myvpc-igw"
}
}
Create private and public subnets.
Now we need to create four subnets.
To meet EKS requirements, we must have two public and two private subnets in different availability zones.
subnets.tf
[root@ip-172-31-6-151 eks_terraform]# cat subnets.tf
# private subnet 01
resource "aws_subnet" "private-us-east-1a" {
  vpc_id            = aws_vpc.myvpc.id
  cidr_block        = "10.0.1.0/24"
  availability_zone = "us-east-1a"

  tags = {
    Name                              = "private-us-east-1a"
    "kubernetes.io/role/internal-elb" = "1"
    "kubernetes.io/cluster/demo"      = "owned"
  }
}

# private subnet 02
resource "aws_subnet" "private-us-east-1b" {
  vpc_id            = aws_vpc.myvpc.id
  cidr_block        = "10.0.2.0/24"
  availability_zone = "us-east-1b"

  tags = {
    Name                              = "private-us-east-1b"
    "kubernetes.io/role/internal-elb" = "1"
    "kubernetes.io/cluster/demo"      = "owned"
  }
}

# public subnet 01
resource "aws_subnet" "public-us-east-1a" {
  vpc_id                  = aws_vpc.myvpc.id
  cidr_block              = "10.0.3.0/24"
  availability_zone       = "us-east-1a"
  map_public_ip_on_launch = true

  tags = {
    Name                         = "public-us-east-1a"
    "kubernetes.io/role/elb"     = "1" # instructs Kubernetes to create public load balancers in these subnets
    "kubernetes.io/cluster/demo" = "owned"
  }
}

# public subnet 02
resource "aws_subnet" "public-us-east-1b" {
  vpc_id                  = aws_vpc.myvpc.id
  cidr_block              = "10.0.4.0/24"
  availability_zone       = "us-east-1b"
  map_public_ip_on_launch = true

  tags = {
    Name                         = "public-us-east-1b"
    "kubernetes.io/role/elb"     = "1" # instructs Kubernetes to create public load balancers in these subnets
    "kubernetes.io/cluster/demo" = "owned"
  }
}
For the VPC resources above, the tags are purely for our convenience, but EKS requires specific tags on the subnets to function properly.
- For the private subnets, Name is just a display tag, while the "kubernetes.io/role/internal-elb" = "1" tag tells Kubernetes which subnets to use when creating a private (internal) load balancer.
- You also need to tag every subnet with "kubernetes.io/cluster/<cluster-name>". The examples use demo; strictly speaking this should match your EKS cluster name, which is ashish later in this guide. The value owned indicates the subnets are used only by this Kubernetes cluster.
- The availability_zone differs between the two private subnets, which is an EKS requirement.
- The cidr_block also differs per subnet: each /24 carved out of 10.0.0.0/16 holds 256 addresses (AWS reserves 5 per subnet), so 10.0.1.0/24, 10.0.2.0/24, and so on do not overlap. If you would rather compute these blocks than hard-code them, see the sketch after this list.
- The public subnets use the same two availability zones as the private subnets.
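Terraform's built-in cidrsubnet() function can derive the four /24 blocks from the VPC CIDR. A minimal sketch (these local names are illustrative and not used by the files above):

locals {
  vpc_cidr = "10.0.0.0/16"

  # cidrsubnet(prefix, newbits, netnum): adding 8 bits to a /16 yields a /24,
  # and netnum picks which /24 inside the /16.
  private_a_cidr = cidrsubnet(local.vpc_cidr, 8, 1) # "10.0.1.0/24"
  private_b_cidr = cidrsubnet(local.vpc_cidr, 8, 2) # "10.0.2.0/24"
  public_a_cidr  = cidrsubnet(local.vpc_cidr, 8, 3) # "10.0.3.0/24"
  public_b_cidr  = cidrsubnet(local.vpc_cidr, 8, 4) # "10.0.4.0/24"
}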
Create a NAT Gateway
Now it's time to create a NAT gateway. It allows services in the private subnets to reach the internet, and importantly, the NAT gateway itself must live in a public subnet, because it forwards packets to the internet through the internet gateway.
For NAT we first need to allocate an Elastic IP address; we then reference it in the aws_nat_gateway resource.
nat.tf
resource "aws_eip" "nat" {
vpc = true
tags = {
Name = "nat"
}
}
resource "aws_nat_gateway" "k8s-nat" {
allocation_id = aws_eip.nat.id
subnet_id = aws_subnet.public-us-east-1a.id
tags = {
Name = "k8s-nat"
}
depends_on = [aws_internet_gateway.myvpc-igw]
}
The important part above is subnet_id = aws_subnet.public-us-east-1a.id: the NAT gateway must be placed in a public subnet, that is, one whose default route points to the internet gateway.
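One provider-version caveat: vpc = true is valid with the ~> 4.0 provider pinned in provider.tf, but the argument was removed in the 5.x releases of the AWS provider. If you later upgrade, a sketch of the equivalent EIP definition:

resource "aws_eip" "nat" {
  # AWS provider 5.x replaces the removed `vpc = true` with this argument
  domain = "vpc"

  tags = {
    Name = "nat"
  }
}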
- By now, we have created subnets, an internet gateway, and a NAT gateway. It's time to create routing tables and associate the subnets with them.
routes.tf
# routing table
resource "aws_route_table" "private" {
  vpc_id = aws_vpc.myvpc.id

  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.k8s-nat.id
  }

  tags = {
    Name = "private"
  }
}

resource "aws_route_table" "public" {
  vpc_id = aws_vpc.myvpc.id

  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.myvpc-igw.id
  }

  tags = {
    Name = "public"
  }
}

# routing table association
resource "aws_route_table_association" "private-us-east-1a" {
  subnet_id      = aws_subnet.private-us-east-1a.id
  route_table_id = aws_route_table.private.id
}

resource "aws_route_table_association" "private-us-east-1b" {
  subnet_id      = aws_subnet.private-us-east-1b.id
  route_table_id = aws_route_table.private.id
}

resource "aws_route_table_association" "public-us-east-1a" {
  subnet_id      = aws_subnet.public-us-east-1a.id
  route_table_id = aws_route_table.public.id
}

resource "aws_route_table_association" "public-us-east-1b" {
  subnet_id      = aws_subnet.public-us-east-1b.id
  route_table_id = aws_route_table.public.id
}
The VPC configuration is complete. We have created the VPC using Terraform.
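Before planning, it can help to format and validate the configuration; both subcommands are built into Terraform, and validate should report that the configuration is valid:

[root@ip-172-31-6-151 eks_terraform]# terraform fmt
[root@ip-172-31-6-151 eks_terraform]# terraform validate
Success! The configuration is valid.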
Directory structure:
[root@ip-172-31-6-151 eks_terraform]# tree
.
├── igw.tf
├── nat.tf
├── provider.tf
├── routes.tf
├── subnets.tf
└── vpc.tf
0 directories, 6 files
Terraform plan
[root@ip-172-31-6-151 eks_terraform]# terraform plan
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# aws_eip.nat will be created
+ resource "aws_eip" "nat" {
+ allocation_id = (known after apply)
+ association_id = (known after apply)
+ carrier_ip = (known after apply)
+ customer_owned_ip = (known after apply)
+ domain = (known after apply)
+ id = (known after apply)
+ instance = (known after apply)
+ network_border_group = (known after apply)
+ network_interface = (known after apply)
+ private_dns = (known after apply)
+ private_ip = (known after apply)
+ public_dns = (known after apply)
+ public_ip = (known after apply)
+ public_ipv4_pool = (known after apply)
+ tags = {
+ "Name" = "nat"
}
+ tags_all = {
+ "Name" = "nat"
}
+ vpc = true
}
# aws_internet_gateway.myvpc-igw will be created
+ resource "aws_internet_gateway" "myvpc-igw" {
+ arn = (known after apply)
+ id = (known after apply)
+ owner_id = (known after apply)
+ tags = {
+ "Name" = "myvpc-igw"
}
+ tags_all = {
+ "Name" = "myvpc-igw"
}
+ vpc_id = (known after apply)
}
# aws_nat_gateway.k8s-nat will be created
+ resource "aws_nat_gateway" "k8s-nat" {
+ allocation_id = (known after apply)
+ association_id = (known after apply)
+ connectivity_type = "public"
+ id = (known after apply)
+ network_interface_id = (known after apply)
+ private_ip = (known after apply)
+ public_ip = (known after apply)
+ subnet_id = (known after apply)
+ tags = {
+ "Name" = "k8s-nat"
}
+ tags_all = {
+ "Name" = "k8s-nat"
}
}
# aws_route_table.private will be created
+ resource "aws_route_table" "private" {
+ arn = (known after apply)
+ id = (known after apply)
+ owner_id = (known after apply)
+ propagating_vgws = (known after apply)
+ route = [
+ {
+ cidr_block = "0.0.0.0/0"
+ nat_gateway_id = (known after apply)
# (12 unchanged attributes hidden)
},
]
+ tags = {
+ "Name" = "private"
}
+ tags_all = {
+ "Name" = "private"
}
+ vpc_id = (known after apply)
}
# aws_route_table.public will be created
+ resource "aws_route_table" "public" {
+ arn = (known after apply)
+ id = (known after apply)
+ owner_id = (known after apply)
+ propagating_vgws = (known after apply)
+ route = [
+ {
+ cidr_block = "0.0.0.0/0"
+ gateway_id = (known after apply)
# (12 unchanged attributes hidden)
},
]
+ tags = {
+ "Name" = "public"
}
+ tags_all = {
+ "Name" = "public"
}
+ vpc_id = (known after apply)
}
# aws_route_table_association.private-us-east-1a will be created
+ resource "aws_route_table_association" "private-us-east-1a" {
+ id = (known after apply)
+ route_table_id = (known after apply)
+ subnet_id = (known after apply)
}
# aws_route_table_association.private-us-east-1b will be created
+ resource "aws_route_table_association" "private-us-east-1b" {
+ id = (known after apply)
+ route_table_id = (known after apply)
+ subnet_id = (known after apply)
}
# aws_route_table_association.public-us-east-1a will be created
+ resource "aws_route_table_association" "public-us-east-1a" {
+ id = (known after apply)
+ route_table_id = (known after apply)
+ subnet_id = (known after apply)
}
# aws_route_table_association.public-us-east-1b will be created
+ resource "aws_route_table_association" "public-us-east-1b" {
+ id = (known after apply)
+ route_table_id = (known after apply)
+ subnet_id = (known after apply)
}
# aws_subnet.private-us-east-1a will be created
+ resource "aws_subnet" "private-us-east-1a" {
+ arn = (known after apply)
+ assign_ipv6_address_on_creation = false
+ availability_zone = "us-east-1a"
+ availability_zone_id = (known after apply)
+ cidr_block = "10.0.1.0/24"
+ enable_dns64 = false
+ enable_resource_name_dns_a_record_on_launch = false
+ enable_resource_name_dns_aaaa_record_on_launch = false
+ id = (known after apply)
+ ipv6_cidr_block_association_id = (known after apply)
+ ipv6_native = false
+ map_public_ip_on_launch = false
+ owner_id = (known after apply)
+ private_dns_hostname_type_on_launch = (known after apply)
+ tags = {
+ "Name" = "private-us-east-1a"
+ "kubernetes.io/cluster/demo" = "owned"
+ "kubernetes.io/role/internal-elb" = "1"
}
+ tags_all = {
+ "Name" = "private-us-east-1a"
+ "kubernetes.io/cluster/demo" = "owned"
+ "kubernetes.io/role/internal-elb" = "1"
}
+ vpc_id = (known after apply)
}
# aws_subnet.private-us-east-1b will be created
+ resource "aws_subnet" "private-us-east-1b" {
+ arn = (known after apply)
+ assign_ipv6_address_on_creation = false
+ availability_zone = "us-east-1b"
+ availability_zone_id = (known after apply)
+ cidr_block = "10.0.2.0/24"
+ enable_dns64 = false
+ enable_resource_name_dns_a_record_on_launch = false
+ enable_resource_name_dns_aaaa_record_on_launch = false
+ id = (known after apply)
+ ipv6_cidr_block_association_id = (known after apply)
+ ipv6_native = false
+ map_public_ip_on_launch = false
+ owner_id = (known after apply)
+ private_dns_hostname_type_on_launch = (known after apply)
+ tags = {
+ "Name" = "private-us-east-1b"
+ "kubernetes.io/cluster/demo" = "owned"
+ "kubernetes.io/role/internal-elb" = "1"
}
+ tags_all = {
+ "Name" = "private-us-east-1b"
+ "kubernetes.io/cluster/demo" = "owned"
+ "kubernetes.io/role/internal-elb" = "1"
}
+ vpc_id = (known after apply)
}
# aws_subnet.public-us-east-1a will be created
+ resource "aws_subnet" "public-us-east-1a" {
+ arn = (known after apply)
+ assign_ipv6_address_on_creation = false
+ availability_zone = "us-east-1a"
+ availability_zone_id = (known after apply)
+ cidr_block = "10.0.3.0/24"
+ enable_dns64 = false
+ enable_resource_name_dns_a_record_on_launch = false
+ enable_resource_name_dns_aaaa_record_on_launch = false
+ id = (known after apply)
+ ipv6_cidr_block_association_id = (known after apply)
+ ipv6_native = false
+ map_public_ip_on_launch = true
+ owner_id = (known after apply)
+ private_dns_hostname_type_on_launch = (known after apply)
+ tags = {
+ "Name" = "public-us-east-1a"
+ "kubernetes.io/cluster/demo" = "owned"
+ "kubernetes.io/role/elb" = "1"
}
+ tags_all = {
+ "Name" = "public-us-east-1a"
+ "kubernetes.io/cluster/demo" = "owned"
+ "kubernetes.io/role/elb" = "1"
}
+ vpc_id = (known after apply)
}
# aws_subnet.public-us-east-1b will be created
+ resource "aws_subnet" "public-us-east-1b" {
+ arn = (known after apply)
+ assign_ipv6_address_on_creation = false
+ availability_zone = "us-east-1b"
+ availability_zone_id = (known after apply)
+ cidr_block = "10.0.4.0/24"
+ enable_dns64 = false
+ enable_resource_name_dns_a_record_on_launch = false
+ enable_resource_name_dns_aaaa_record_on_launch = false
+ id = (known after apply)
+ ipv6_cidr_block_association_id = (known after apply)
+ ipv6_native = false
+ map_public_ip_on_launch = true
+ owner_id = (known after apply)
+ private_dns_hostname_type_on_launch = (known after apply)
+ tags = {
+ "Name" = "public-us-east-1b"
+ "kubernetes.io/cluster/demo" = "owned"
+ "kubernetes.io/role/elb" = "1"
}
+ tags_all = {
+ "Name" = "public-us-east-1b"
+ "kubernetes.io/cluster/demo" = "owned"
+ "kubernetes.io/role/elb" = "1"
}
+ vpc_id = (known after apply)
}
# aws_vpc.myvpc will be created
+ resource "aws_vpc" "myvpc" {
+ arn = (known after apply)
+ cidr_block = "10.0.0.0/16"
+ default_network_acl_id = (known after apply)
+ default_route_table_id = (known after apply)
+ default_security_group_id = (known after apply)
+ dhcp_options_id = (known after apply)
+ enable_classiclink = (known after apply)
+ enable_classiclink_dns_support = (known after apply)
+ enable_dns_hostnames = (known after apply)
+ enable_dns_support = true
+ enable_network_address_usage_metrics = (known after apply)
+ id = (known after apply)
+ instance_tenancy = "default"
+ ipv6_association_id = (known after apply)
+ ipv6_cidr_block = (known after apply)
+ ipv6_cidr_block_network_border_group = (known after apply)
+ main_route_table_id = (known after apply)
+ owner_id = (known after apply)
+ tags = {
+ "Name" = "myvpc"
}
+ tags_all = {
+ "Name" = "myvpc"
}
}
Plan: 14 to add, 0 to change, 0 to destroy.
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply" now.
Create the EKS cluster
The Kubernetes control plane is managed by the Amazon EKS service, which makes calls to other AWS services on your behalf to manage the resources you use with EKS.
Before you can create an Amazon EKS cluster, you must create an IAM role with the AmazonEKSClusterPolicy attached.
eks.tf
# IAM role for EKS
resource "aws_iam_role" "demo" {
  name = "ashish"

  tags = {
    tag-key = "ashish"
  }

  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "eks.amazonaws.com"
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
POLICY
}

# EKS policy attachment
resource "aws_iam_role_policy_attachment" "demo-AmazonEKSClusterPolicy" {
  role       = aws_iam_role.demo.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
}

# bare-minimum EKS cluster (resource label "demo", cluster name "ashish")
resource "aws_eks_cluster" "demo" {
  name     = "ashish"
  version  = "1.31"
  role_arn = aws_iam_role.demo.arn

  vpc_config {
    subnet_ids = [
      aws_subnet.private-us-east-1a.id,
      aws_subnet.private-us-east-1b.id,
      aws_subnet.public-us-east-1a.id,
      aws_subnet.public-us-east-1b.id
    ]
  }

  depends_on = [aws_iam_role_policy_attachment.demo-AmazonEKSClusterPolicy]
}
Notes:
- The depends_on in the aws_eks_cluster resource ensures the cluster is not created until the IAM role and its policy attachment are ready.
- Note that the Terraform resource label is demo while the cluster itself is named ashish; the later files reference it as aws_eks_cluster.demo. An optional variant with endpoint-access and logging settings is sketched below.
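The bare-minimum vpc_config above leaves the endpoint defaults in place (public API endpoint enabled, private endpoint disabled). If you want to tune endpoint access or enable control-plane logging, the optional arguments look roughly like this sketch (the values shown are illustrative, not requirements):

resource "aws_eks_cluster" "demo" {
  name     = "ashish"
  version  = "1.31"
  role_arn = aws_iam_role.demo.arn

  # Optional: ship control-plane logs to CloudWatch Logs
  enabled_cluster_log_types = ["api", "audit", "authenticator"]

  vpc_config {
    subnet_ids = [
      aws_subnet.private-us-east-1a.id,
      aws_subnet.private-us-east-1b.id,
      aws_subnet.public-us-east-1a.id,
      aws_subnet.public-us-east-1b.id
    ]

    # Optional: control how the Kubernetes API endpoint is reachable
    endpoint_private_access = true
    endpoint_public_access  = true
    public_access_cidrs     = ["0.0.0.0/0"]
  }

  depends_on = [aws_iam_role_policy_attachment.demo-AmazonEKSClusterPolicy]
}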
Next, we create a single node group for Kubernetes. Like the EKS cluster, it requires an IAM role of its own.
nodes.tf
[root@ip-172-31-6-151 eks_terraform]# cat nodes.tf
# role for nodegroup
resource "aws_iam_role" "nodes" {
  name = "eks-node-group-nodes"

  assume_role_policy = jsonencode({
    Statement = [{
      Action = "sts:AssumeRole"
      Effect = "Allow"
      Principal = {
        Service = "ec2.amazonaws.com"
      }
    }]
    Version = "2012-10-17"
  })
}

# IAM policy attachment to nodegroup
resource "aws_iam_role_policy_attachment" "nodes-AmazonEKSWorkerNodePolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
  role       = aws_iam_role.nodes.name
}

resource "aws_iam_role_policy_attachment" "nodes-AmazonEKS_CNI_Policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
  role       = aws_iam_role.nodes.name
}

resource "aws_iam_role_policy_attachment" "nodes-AmazonEC2ContainerRegistryReadOnly" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  role       = aws_iam_role.nodes.name
}

# aws node group
resource "aws_eks_node_group" "private-nodes" {
  cluster_name    = aws_eks_cluster.demo.name
  node_group_name = "private-nodes"
  node_role_arn   = aws_iam_role.nodes.arn

  subnet_ids = [
    aws_subnet.private-us-east-1a.id,
    aws_subnet.private-us-east-1b.id
  ]

  capacity_type  = "ON_DEMAND"
  instance_types = ["t2.medium"]

  scaling_config {
    desired_size = 1
    max_size     = 10
    min_size     = 0
  }

  update_config {
    max_unavailable = 1
  }

  labels = {
    node = "kubenode02"
  }

  # taint {
  #   key    = "team"
  #   value  = "devops"
  #   effect = "NO_SCHEDULE"
  # }

  # launch_template {
  #   name    = aws_launch_template.eks-with-disks.name
  #   version = aws_launch_template.eks-with-disks.latest_version
  # }

  depends_on = [
    aws_iam_role_policy_attachment.nodes-AmazonEKSWorkerNodePolicy,
    aws_iam_role_policy_attachment.nodes-AmazonEKS_CNI_Policy,
    aws_iam_role_policy_attachment.nodes-AmazonEC2ContainerRegistryReadOnly,
  ]
}
Notes:
- The first policy is AmazonEKSWorkerNodePolicy, which is required to allow EC2 instances to interact with the EKS cluster.
- The second policy is AmazonEKS_CNI_Policy, which is needed for Kubernetes networking configuration.
- The last one is AmazonEC2ContainerRegistryReadOnly, which allows nodes to download and run Docker images from the ECR repository.
- In the aws_eks_node_group resource, you have many options to configure the Kubernetes worker nodes.
- Here, we specify the cluster name, node group name, and IAM role, along with two private subnets.
- If you need nodes with public IPs, simply replace the private subnet IDs with public ones.
- For capacity, you can choose between on-demand and spot instances (spot instances are much cheaper but can be terminated by AWS at any time).
- When it comes to scaling, it's important to understand the scaling configuration.
- By default, EKS will not auto-scale your nodes.
- To enable auto-scaling, you need to deploy an additional component in Kubernetes called the Cluster Autoscaler.
- You can define the minimum and maximum number of nodes using the min_size and max_size attributes.
- EKS uses these settings to create an Auto Scaling Group, and then the Cluster Autoscaler adjusts the desired_size based on load.
- You can also define labels and taints for your nodes.
- Labels can be used by the Kubernetes scheduler to place pods on specific node groups using node affinity or node selectors.
- To manage application permissions within Kubernetes, you can either attach IAM policies directly to the node role, in which case all pods share the same access to AWS resources, or, better, create an OpenID Connect (OIDC) provider.
- This allows granting IAM permissions based on the service account used by each pod.
- In our case, we'll use an OIDC provider to grant permissions specifically to the service account used by the Cluster Autoscaler so it can scale our nodes.
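For context, here is roughly how that binding appears on the Kubernetes side once the role exists: the Cluster Autoscaler's service account (kube-system/cluster-autoscaler, matching the trust policy below) is annotated with the role ARN. This assumes you install the Cluster Autoscaler separately; <account-id> is a placeholder:

kubectl -n kube-system annotate serviceaccount cluster-autoscaler \
  eks.amazonaws.com/role-arn=arn:aws:iam::<account-id>:role/eks-cluster-autoscaler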
iam-oidc.tf
data "tls_certificate" "eks" {
url = aws_eks_cluster.demo.identity[0].oidc[0].issuer
}
resource "aws_iam_openid_connect_provider" "eks" {
client_id_list = ["sts.amazonaws.com"]
thumbprint_list = [data.tls_certificate.eks.certificates[0].sha1_fingerprint]
url = aws_eks_cluster.demo.identity[0].oidc[0].issuer
}
autoscaler.tf
data "tls_certificate" "eks" {
url = aws_eks_cluster.demo.identity[0].oidc[0].issuer
}
resource "aws_iam_openid_connect_provider" "eks" {
client_id_list = ["sts.amazonaws.com"]
thumbprint_list = [data.tls_certificate.eks.certificates[0].sha1_fingerprint]
url = aws_eks_cluster.demo.identity[0].oidc[0].issuer
}
[root@ip-172-31-6-151 eks_terraform]# cp ../my_eks/autoscaler.tf .
[root@ip-172-31-6-151 eks_terraform]# cat autoscaler.tf
data "aws_iam_policy_document" "eks_cluster_autoscaler_assume_role_policy" {
statement {
actions = ["sts:AssumeRoleWithWebIdentity"]
effect = "Allow"
condition {
test = "StringEquals"
variable = "${replace(aws_iam_openid_connect_provider.eks.url, "https://", "")}:sub"
values = ["system:serviceaccount:kube-system:cluster-autoscaler"]
}
principals {
identifiers = [aws_iam_openid_connect_provider.eks.arn]
type = "Federated"
}
}
}
resource "aws_iam_role" "eks_cluster_autoscaler" {
assume_role_policy = data.aws_iam_policy_document.eks_cluster_autoscaler_assume_role_policy.json
name = "eks-cluster-autoscaler"
}
resource "aws_iam_policy" "eks_cluster_autoscaler" {
name = "eks-cluster-autoscaler"
policy = jsonencode({
Statement = [{
Action = [
"autoscaling:DescribeAutoScalingGroups",
"autoscaling:DescribeAutoScalingInstances",
"autoscaling:DescribeLaunchConfigurations",
"autoscaling:DescribeTags",
"autoscaling:SetDesiredCapacity",
"autoscaling:TerminateInstanceInAutoScalingGroup",
"ec2:DescribeLaunchTemplateVersions"
]
Effect = "Allow"
Resource = "*"
}]
Version = "2012-10-17"
})
}
resource "aws_iam_role_policy_attachment" "eks_cluster_autoscaler_attach" {
role = aws_iam_role.eks_cluster_autoscaler.name
policy_arn = aws_iam_policy.eks_cluster_autoscaler.arn
}
output "eks_cluster_autoscaler_arn" {
value = aws_iam_role.eks_cluster_autoscaler.arn
}
Everything is in place; now create the infrastructure with the terraform apply command.
Directory Structure
[root@ip-172-31-6-151 eks_terraform]# tree
.
├── autoscaler.tf
├── eks.tf
├── iam-oidc.tf
├── igw.tf
├── nat.tf
├── nodes.tf
├── provider.tf
├── routes.tf
├── subnets.tf
└── vpc.tf
0 directories, 10 files
terraform plan
[root@ip-172-31-6-151 eks_terraform]# terraform plan
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
<= read (data resources)
Terraform will perform the following actions:
# data.aws_iam_policy_document.eks_cluster_autoscaler_assume_role_policy will be read during apply
# (config refers to values not yet known)
<= data "aws_iam_policy_document" "eks_cluster_autoscaler_assume_role_policy" {
+ id = (known after apply)
+ json = (known after apply)
+ statement {
+ actions = [
+ "sts:AssumeRoleWithWebIdentity",
]
+ effect = "Allow"
+ condition {
+ test = "StringEquals"
+ values = [
+ "system:serviceaccount:kube-system:cluster-autoscaler",
]
+ variable = (known after apply)
}
+ principals {
+ identifiers = [
+ (known after apply),
]
+ type = "Federated"
}
}
}
# data.tls_certificate.eks will be read during apply
# (config refers to values not yet known)
<= data "tls_certificate" "eks" {
+ certificates = (known after apply)
+ id = (known after apply)
+ url = (known after apply)
}
# aws_eip.nat will be created
+ resource "aws_eip" "nat" {
+ allocation_id = (known after apply)
+ association_id = (known after apply)
+ carrier_ip = (known after apply)
+ customer_owned_ip = (known after apply)
+ domain = (known after apply)
+ id = (known after apply)
+ instance = (known after apply)
+ network_border_group = (known after apply)
+ network_interface = (known after apply)
+ private_dns = (known after apply)
+ private_ip = (known after apply)
+ public_dns = (known after apply)
+ public_ip = (known after apply)
+ public_ipv4_pool = (known after apply)
+ tags = {
+ "Name" = "nat"
}
+ tags_all = {
+ "Name" = "nat"
}
+ vpc = true
}
# aws_eks_cluster.demo will be created
+ resource "aws_eks_cluster" "demo" {
+ arn = (known after apply)
+ certificate_authority = (known after apply)
+ cluster_id = (known after apply)
+ created_at = (known after apply)
+ endpoint = (known after apply)
+ id = (known after apply)
+ identity = (known after apply)
+ name = "ashish"
+ platform_version = (known after apply)
+ role_arn = (known after apply)
+ status = (known after apply)
+ tags_all = (known after apply)
+ version = "1.31"
+ kubernetes_network_config (known after apply)
+ vpc_config {
+ cluster_security_group_id = (known after apply)
+ endpoint_private_access = false
+ endpoint_public_access = true
+ public_access_cidrs = (known after apply)
+ subnet_ids = (known after apply)
+ vpc_id = (known after apply)
}
}
# aws_eks_node_group.private-nodes will be created
+ resource "aws_eks_node_group" "private-nodes" {
+ ami_type = (known after apply)
+ arn = (known after apply)
+ capacity_type = "ON_DEMAND"
+ cluster_name = "ashish"
+ disk_size = (known after apply)
+ id = (known after apply)
+ instance_types = [
+ "t2.medium",
]
+ labels = {
+ "node" = "kubenode02"
}
+ node_group_name = "private-nodes"
+ node_group_name_prefix = (known after apply)
+ node_role_arn = (known after apply)
+ release_version = (known after apply)
+ resources = (known after apply)
+ status = (known after apply)
+ subnet_ids = (known after apply)
+ tags_all = (known after apply)
+ version = (known after apply)
+ scaling_config {
+ desired_size = 1
+ max_size = 10
+ min_size = 0
}
+ update_config {
+ max_unavailable = 1
}
}
# aws_iam_openid_connect_provider.eks will be created
+ resource "aws_iam_openid_connect_provider" "eks" {
+ arn = (known after apply)
+ client_id_list = [
+ "sts.amazonaws.com",
]
+ id = (known after apply)
+ tags_all = (known after apply)
+ thumbprint_list = (known after apply)
+ url = (known after apply)
}
# aws_iam_policy.eks_cluster_autoscaler will be created
+ resource "aws_iam_policy" "eks_cluster_autoscaler" {
+ arn = (known after apply)
+ id = (known after apply)
+ name = "eks-cluster-autoscaler"
+ name_prefix = (known after apply)
+ path = "/"
+ policy = jsonencode(
{
+ Statement = [
+ {
+ Action = [
+ "autoscaling:DescribeAutoScalingGroups",
+ "autoscaling:DescribeAutoScalingInstances",
+ "autoscaling:DescribeLaunchConfigurations",
+ "autoscaling:DescribeTags",
+ "autoscaling:SetDesiredCapacity",
+ "autoscaling:TerminateInstanceInAutoScalingGroup",
+ "ec2:DescribeLaunchTemplateVersions",
]
+ Effect = "Allow"
+ Resource = "*"
},
]
+ Version = "2012-10-17"
}
)
+ policy_id = (known after apply)
+ tags_all = (known after apply)
}
# aws_iam_role.demo will be created
+ resource "aws_iam_role" "demo" {
+ arn = (known after apply)
+ assume_role_policy = jsonencode(
{
+ Statement = [
+ {
+ Action = "sts:AssumeRole"
+ Effect = "Allow"
+ Principal = {
+ Service = [
+ "eks.amazonaws.com",
]
}
},
]
+ Version = "2012-10-17"
}
)
+ create_date = (known after apply)
+ force_detach_policies = false
+ id = (known after apply)
+ managed_policy_arns = (known after apply)
+ max_session_duration = 3600
+ name = "ashish"
+ name_prefix = (known after apply)
+ path = "/"
+ role_last_used = (known after apply)
+ tags = {
+ "tag-key" = "ashish"
}
+ tags_all = {
+ "tag-key" = "ashish"
}
+ unique_id = (known after apply)
+ inline_policy (known after apply)
}
# aws_iam_role.eks_cluster_autoscaler will be created
+ resource "aws_iam_role" "eks_cluster_autoscaler" {
+ arn = (known after apply)
+ assume_role_policy = (known after apply)
+ create_date = (known after apply)
+ force_detach_policies = false
+ id = (known after apply)
+ managed_policy_arns = (known after apply)
+ max_session_duration = 3600
+ name = "eks-cluster-autoscaler"
+ name_prefix = (known after apply)
+ path = "/"
+ role_last_used = (known after apply)
+ tags_all = (known after apply)
+ unique_id = (known after apply)
+ inline_policy (known after apply)
}
# aws_iam_role.nodes will be created
+ resource "aws_iam_role" "nodes" {
+ arn = (known after apply)
+ assume_role_policy = jsonencode(
{
+ Statement = [
+ {
+ Action = "sts:AssumeRole"
+ Effect = "Allow"
+ Principal = {
+ Service = "ec2.amazonaws.com"
}
},
]
+ Version = "2012-10-17"
}
)
+ create_date = (known after apply)
+ force_detach_policies = false
+ id = (known after apply)
+ managed_policy_arns = (known after apply)
+ max_session_duration = 3600
+ name = "eks-node-group-nodes"
+ name_prefix = (known after apply)
+ path = "/"
+ role_last_used = (known after apply)
+ tags_all = (known after apply)
+ unique_id = (known after apply)
+ inline_policy (known after apply)
}
# aws_iam_role_policy_attachment.demo-AmazonEKSClusterPolicy will be created
+ resource "aws_iam_role_policy_attachment" "demo-AmazonEKSClusterPolicy" {
+ id = (known after apply)
+ policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
+ role = "ashish"
}
# aws_iam_role_policy_attachment.eks_cluster_autoscaler_attach will be created
+ resource "aws_iam_role_policy_attachment" "eks_cluster_autoscaler_attach" {
+ id = (known after apply)
+ policy_arn = (known after apply)
+ role = "eks-cluster-autoscaler"
}
# aws_iam_role_policy_attachment.nodes-AmazonEC2ContainerRegistryReadOnly will be created
+ resource "aws_iam_role_policy_attachment" "nodes-AmazonEC2ContainerRegistryReadOnly" {
+ id = (known after apply)
+ policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
+ role = "eks-node-group-nodes"
}
# aws_iam_role_policy_attachment.nodes-AmazonEKSWorkerNodePolicy will be created
+ resource "aws_iam_role_policy_attachment" "nodes-AmazonEKSWorkerNodePolicy" {
+ id = (known after apply)
+ policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
+ role = "eks-node-group-nodes"
}
# aws_iam_role_policy_attachment.nodes-AmazonEKS_CNI_Policy will be created
+ resource "aws_iam_role_policy_attachment" "nodes-AmazonEKS_CNI_Policy" {
+ id = (known after apply)
+ policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
+ role = "eks-node-group-nodes"
}
# aws_internet_gateway.myvpc-igw will be created
+ resource "aws_internet_gateway" "myvpc-igw" {
+ arn = (known after apply)
+ id = (known after apply)
+ owner_id = (known after apply)
+ tags = {
+ "Name" = "myvpc-igw"
}
+ tags_all = {
+ "Name" = "myvpc-igw"
}
+ vpc_id = (known after apply)
}
# aws_nat_gateway.k8s-nat will be created
+ resource "aws_nat_gateway" "k8s-nat" {
+ allocation_id = (known after apply)
+ association_id = (known after apply)
+ connectivity_type = "public"
+ id = (known after apply)
+ network_interface_id = (known after apply)
+ private_ip = (known after apply)
+ public_ip = (known after apply)
+ subnet_id = (known after apply)
+ tags = {
+ "Name" = "k8s-nat"
}
+ tags_all = {
+ "Name" = "k8s-nat"
}
}
# aws_route_table.private will be created
+ resource "aws_route_table" "private" {
+ arn = (known after apply)
+ id = (known after apply)
+ owner_id = (known after apply)
+ propagating_vgws = (known after apply)
+ route = [
+ {
+ cidr_block = "0.0.0.0/0"
+ nat_gateway_id = (known after apply)
# (12 unchanged attributes hidden)
},
]
+ tags = {
+ "Name" = "private"
}
+ tags_all = {
+ "Name" = "private"
}
+ vpc_id = (known after apply)
}
# aws_route_table.public will be created
+ resource "aws_route_table" "public" {
+ arn = (known after apply)
+ id = (known after apply)
+ owner_id = (known after apply)
+ propagating_vgws = (known after apply)
+ route = [
+ {
+ cidr_block = "0.0.0.0/0"
+ gateway_id = (known after apply)
# (12 unchanged attributes hidden)
},
]
+ tags = {
+ "Name" = "public"
}
+ tags_all = {
+ "Name" = "public"
}
+ vpc_id = (known after apply)
}
# aws_route_table_association.private-us-east-1a will be created
+ resource "aws_route_table_association" "private-us-east-1a" {
+ id = (known after apply)
+ route_table_id = (known after apply)
+ subnet_id = (known after apply)
}
# aws_route_table_association.private-us-east-1b will be created
+ resource "aws_route_table_association" "private-us-east-1b" {
+ id = (known after apply)
+ route_table_id = (known after apply)
+ subnet_id = (known after apply)
}
# aws_route_table_association.public-us-east-1a will be created
+ resource "aws_route_table_association" "public-us-east-1a" {
+ id = (known after apply)
+ route_table_id = (known after apply)
+ subnet_id = (known after apply)
}
# aws_route_table_association.public-us-east-1b will be created
+ resource "aws_route_table_association" "public-us-east-1b" {
+ id = (known after apply)
+ route_table_id = (known after apply)
+ subnet_id = (known after apply)
}
# aws_subnet.private-us-east-1a will be created
+ resource "aws_subnet" "private-us-east-1a" {
+ arn = (known after apply)
+ assign_ipv6_address_on_creation = false
+ availability_zone = "us-east-1a"
+ availability_zone_id = (known after apply)
+ cidr_block = "10.0.1.0/24"
+ enable_dns64 = false
+ enable_resource_name_dns_a_record_on_launch = false
+ enable_resource_name_dns_aaaa_record_on_launch = false
+ id = (known after apply)
+ ipv6_cidr_block_association_id = (known after apply)
+ ipv6_native = false
+ map_public_ip_on_launch = false
+ owner_id = (known after apply)
+ private_dns_hostname_type_on_launch = (known after apply)
+ tags = {
+ "Name" = "private-us-east-1a"
+ "kubernetes.io/cluster/demo" = "owned"
+ "kubernetes.io/role/internal-elb" = "1"
}
+ tags_all = {
+ "Name" = "private-us-east-1a"
+ "kubernetes.io/cluster/demo" = "owned"
+ "kubernetes.io/role/internal-elb" = "1"
}
+ vpc_id = (known after apply)
}
# aws_subnet.private-us-east-1b will be created
+ resource "aws_subnet" "private-us-east-1b" {
+ arn = (known after apply)
+ assign_ipv6_address_on_creation = false
+ availability_zone = "us-east-1b"
+ availability_zone_id = (known after apply)
+ cidr_block = "10.0.2.0/24"
+ enable_dns64 = false
+ enable_resource_name_dns_a_record_on_launch = false
+ enable_resource_name_dns_aaaa_record_on_launch = false
+ id = (known after apply)
+ ipv6_cidr_block_association_id = (known after apply)
+ ipv6_native = false
+ map_public_ip_on_launch = false
+ owner_id = (known after apply)
+ private_dns_hostname_type_on_launch = (known after apply)
+ tags = {
+ "Name" = "private-us-east-1b"
+ "kubernetes.io/cluster/demo" = "owned"
+ "kubernetes.io/role/internal-elb" = "1"
}
+ tags_all = {
+ "Name" = "private-us-east-1b"
+ "kubernetes.io/cluster/demo" = "owned"
+ "kubernetes.io/role/internal-elb" = "1"
}
+ vpc_id = (known after apply)
}
# aws_subnet.public-us-east-1a will be created
+ resource "aws_subnet" "public-us-east-1a" {
+ arn = (known after apply)
+ assign_ipv6_address_on_creation = false
+ availability_zone = "us-east-1a"
+ availability_zone_id = (known after apply)
+ cidr_block = "10.0.3.0/24"
+ enable_dns64 = false
+ enable_resource_name_dns_a_record_on_launch = false
+ enable_resource_name_dns_aaaa_record_on_launch = false
+ id = (known after apply)
+ ipv6_cidr_block_association_id = (known after apply)
+ ipv6_native = false
+ map_public_ip_on_launch = true
+ owner_id = (known after apply)
+ private_dns_hostname_type_on_launch = (known after apply)
+ tags = {
+ "Name" = "public-us-east-1a"
+ "kubernetes.io/cluster/demo" = "owned"
+ "kubernetes.io/role/elb" = "1"
}
+ tags_all = {
+ "Name" = "public-us-east-1a"
+ "kubernetes.io/cluster/demo" = "owned"
+ "kubernetes.io/role/elb" = "1"
}
+ vpc_id = (known after apply)
}
# aws_subnet.public-us-east-1b will be created
+ resource "aws_subnet" "public-us-east-1b" {
+ arn = (known after apply)
+ assign_ipv6_address_on_creation = false
+ availability_zone = "us-east-1b"
+ availability_zone_id = (known after apply)
+ cidr_block = "10.0.4.0/24"
+ enable_dns64 = false
+ enable_resource_name_dns_a_record_on_launch = false
+ enable_resource_name_dns_aaaa_record_on_launch = false
+ id = (known after apply)
+ ipv6_cidr_block_association_id = (known after apply)
+ ipv6_native = false
+ map_public_ip_on_launch = true
+ owner_id = (known after apply)
+ private_dns_hostname_type_on_launch = (known after apply)
+ tags = {
+ "Name" = "public-us-east-1b"
+ "kubernetes.io/cluster/demo" = "owned"
+ "kubernetes.io/role/elb" = "1"
}
+ tags_all = {
+ "Name" = "public-us-east-1b"
+ "kubernetes.io/cluster/demo" = "owned"
+ "kubernetes.io/role/elb" = "1"
}
+ vpc_id = (known after apply)
}
# aws_vpc.myvpc will be created
+ resource "aws_vpc" "myvpc" {
+ arn = (known after apply)
+ cidr_block = "10.0.0.0/16"
+ default_network_acl_id = (known after apply)
+ default_route_table_id = (known after apply)
+ default_security_group_id = (known after apply)
+ dhcp_options_id = (known after apply)
+ enable_classiclink = (known after apply)
+ enable_classiclink_dns_support = (known after apply)
+ enable_dns_hostnames = (known after apply)
+ enable_dns_support = true
+ enable_network_address_usage_metrics = (known after apply)
+ id = (known after apply)
+ instance_tenancy = "default"
+ ipv6_association_id = (known after apply)
+ ipv6_cidr_block = (known after apply)
+ ipv6_cidr_block_network_border_group = (known after apply)
+ main_route_table_id = (known after apply)
+ owner_id = (known after apply)
+ tags = {
+ "Name" = "myvpc"
}
+ tags_all = {
+ "Name" = "myvpc"
}
}
Plan: 26 to add, 0 to change, 0 to destroy.
Changes to Outputs:
+ eks_cluster_autoscaler_arn = (known after apply)
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Note: You didn't use the -out option to save this plan, so Terraform can't guarantee to take exactly these actions if you run "terraform apply" now.
terraform apply
[root@ip-172-31-6-151 eks_terraform]# terraform apply
Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
+ create
<= read (data resources)
Terraform will perform the following actions:
# data.aws_iam_policy_document.eks_cluster_autoscaler_assume_role_policy will be read during apply
# (config refers to values not yet known)
<= data "aws_iam_policy_document" "eks_cluster_autoscaler_assume_role_policy" {
+ id = (known after apply)
+ json = (known after apply)
+ statement {
+ actions = [
+ "sts:AssumeRoleWithWebIdentity",
]
+ effect = "Allow"
+ condition {
+ test = "StringEquals"
+ values = [
+ "system:serviceaccount:kube-system:cluster-autoscaler",
]
+ variable = (known after apply)
}
+ principals {
+ identifiers = [
+ (known after apply),
]
+ type = "Federated"
}
}
}
# data.tls_certificate.eks will be read during apply
# (config refers to values not yet known)
<= data "tls_certificate" "eks" {
+ certificates = (known after apply)
+ id = (known after apply)
+ url = (known after apply)
}
# aws_eip.nat will be created
+ resource "aws_eip" "nat" {
+ allocation_id = (known after apply)
+ association_id = (known after apply)
+ carrier_ip = (known after apply)
+ customer_owned_ip = (known after apply)
+ domain = (known after apply)
+ id = (known after apply)
+ instance = (known after apply)
+ network_border_group = (known after apply)
+ network_interface = (known after apply)
+ private_dns = (known after apply)
+ private_ip = (known after apply)
+ public_dns = (known after apply)
+ public_ip = (known after apply)
+ public_ipv4_pool = (known after apply)
+ tags = {
+ "Name" = "nat"
}
+ tags_all = {
+ "Name" = "nat"
}
+ vpc = true
}
# aws_eks_cluster.demo will be created
+ resource "aws_eks_cluster" "demo" {
+ arn = (known after apply)
+ certificate_authority = (known after apply)
+ cluster_id = (known after apply)
+ created_at = (known after apply)
+ endpoint = (known after apply)
+ id = (known after apply)
+ identity = (known after apply)
+ name = "ashish"
+ platform_version = (known after apply)
+ role_arn = (known after apply)
+ status = (known after apply)
+ tags_all = (known after apply)
+ version = "1.31"
+ kubernetes_network_config (known after apply)
+ vpc_config {
+ cluster_security_group_id = (known after apply)
+ endpoint_private_access = false
+ endpoint_public_access = true
+ public_access_cidrs = (known after apply)
+ subnet_ids = (known after apply)
+ vpc_id = (known after apply)
}
}
# aws_eks_node_group.private-nodes will be created
+ resource "aws_eks_node_group" "private-nodes" {
+ ami_type = (known after apply)
+ arn = (known after apply)
+ capacity_type = "ON_DEMAND"
+ cluster_name = "ashish"
+ disk_size = (known after apply)
+ id = (known after apply)
+ instance_types = [
+ "t2.medium",
]
+ labels = {
+ "node" = "kubenode02"
}
+ node_group_name = "private-nodes"
+ node_group_name_prefix = (known after apply)
+ node_role_arn = (known after apply)
+ release_version = (known after apply)
+ resources = (known after apply)
+ status = (known after apply)
+ subnet_ids = (known after apply)
+ tags_all = (known after apply)
+ version = (known after apply)
+ scaling_config {
+ desired_size = 1
+ max_size = 10
+ min_size = 0
}
+ update_config {
+ max_unavailable = 1
}
}
# aws_iam_openid_connect_provider.eks will be created
+ resource "aws_iam_openid_connect_provider" "eks" {
+ arn = (known after apply)
+ client_id_list = [
+ "sts.amazonaws.com",
]
+ id = (known after apply)
+ tags_all = (known after apply)
+ thumbprint_list = (known after apply)
+ url = (known after apply)
}
# aws_iam_policy.eks_cluster_autoscaler will be created
+ resource "aws_iam_policy" "eks_cluster_autoscaler" {
+ arn = (known after apply)
+ id = (known after apply)
+ name = "eks-cluster-autoscaler"
+ name_prefix = (known after apply)
+ path = "/"
+ policy = jsonencode(
{
+ Statement = [
+ {
+ Action = [
+ "autoscaling:DescribeAutoScalingGroups",
+ "autoscaling:DescribeAutoScalingInstances",
+ "autoscaling:DescribeLaunchConfigurations",
+ "autoscaling:DescribeTags",
+ "autoscaling:SetDesiredCapacity",
+ "autoscaling:TerminateInstanceInAutoScalingGroup",
+ "ec2:DescribeLaunchTemplateVersions",
]
+ Effect = "Allow"
+ Resource = "*"
},
]
+ Version = "2012-10-17"
}
)
+ policy_id = (known after apply)
+ tags_all = (known after apply)
}
# aws_iam_role.demo will be created
+ resource "aws_iam_role" "demo" {
+ arn = (known after apply)
+ assume_role_policy = jsonencode(
{
+ Statement = [
+ {
+ Action = "sts:AssumeRole"
+ Effect = "Allow"
+ Principal = {
+ Service = [
+ "eks.amazonaws.com",
]
}
},
]
+ Version = "2012-10-17"
}
)
+ create_date = (known after apply)
+ force_detach_policies = false
+ id = (known after apply)
+ managed_policy_arns = (known after apply)
+ max_session_duration = 3600
+ name = "ashish"
+ name_prefix = (known after apply)
+ path = "/"
+ role_last_used = (known after apply)
+ tags = {
+ "tag-key" = "ashish"
}
+ tags_all = {
+ "tag-key" = "ashish"
}
+ unique_id = (known after apply)
+ inline_policy (known after apply)
}
# aws_iam_role.eks_cluster_autoscaler will be created
+ resource "aws_iam_role" "eks_cluster_autoscaler" {
+ arn = (known after apply)
+ assume_role_policy = (known after apply)
+ create_date = (known after apply)
+ force_detach_policies = false
+ id = (known after apply)
+ managed_policy_arns = (known after apply)
+ max_session_duration = 3600
+ name = "eks-cluster-autoscaler"
+ name_prefix = (known after apply)
+ path = "/"
+ role_last_used = (known after apply)
+ tags_all = (known after apply)
+ unique_id = (known after apply)
+ inline_policy (known after apply)
}
# aws_iam_role.nodes will be created
+ resource "aws_iam_role" "nodes" {
+ arn = (known after apply)
+ assume_role_policy = jsonencode(
{
+ Statement = [
+ {
+ Action = "sts:AssumeRole"
+ Effect = "Allow"
+ Principal = {
+ Service = "ec2.amazonaws.com"
}
},
]
+ Version = "2012-10-17"
}
)
+ create_date = (known after apply)
+ force_detach_policies = false
+ id = (known after apply)
+ managed_policy_arns = (known after apply)
+ max_session_duration = 3600
+ name = "eks-node-group-nodes"
+ name_prefix = (known after apply)
+ path = "/"
+ role_last_used = (known after apply)
+ tags_all = (known after apply)
+ unique_id = (known after apply)
+ inline_policy (known after apply)
}
# aws_iam_role_policy_attachment.demo-AmazonEKSClusterPolicy will be created
+ resource "aws_iam_role_policy_attachment" "demo-AmazonEKSClusterPolicy" {
+ id = (known after apply)
+ policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
+ role = "ashish"
}
# aws_iam_role_policy_attachment.eks_cluster_autoscaler_attach will be created
+ resource "aws_iam_role_policy_attachment" "eks_cluster_autoscaler_attach" {
+ id = (known after apply)
+ policy_arn = (known after apply)
+ role = "eks-cluster-autoscaler"
}
# aws_iam_role_policy_attachment.nodes-AmazonEC2ContainerRegistryReadOnly will be created
+ resource "aws_iam_role_policy_attachment" "nodes-AmazonEC2ContainerRegistryReadOnly" {
+ id = (known after apply)
+ policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
+ role = "eks-node-group-nodes"
}
# aws_iam_role_policy_attachment.nodes-AmazonEKSWorkerNodePolicy will be created
+ resource "aws_iam_role_policy_attachment" "nodes-AmazonEKSWorkerNodePolicy" {
+ id = (known after apply)
+ policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
+ role = "eks-node-group-nodes"
}
# aws_iam_role_policy_attachment.nodes-AmazonEKS_CNI_Policy will be created
+ resource "aws_iam_role_policy_attachment" "nodes-AmazonEKS_CNI_Policy" {
+ id = (known after apply)
+ policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
+ role = "eks-node-group-nodes"
}
# aws_internet_gateway.myvpc-igw will be created
+ resource "aws_internet_gateway" "myvpc-igw" {
+ arn = (known after apply)
+ id = (known after apply)
+ owner_id = (known after apply)
+ tags = {
+ "Name" = "myvpc-igw"
}
+ tags_all = {
+ "Name" = "myvpc-igw"
}
+ vpc_id = (known after apply)
}
# aws_nat_gateway.k8s-nat will be created
+ resource "aws_nat_gateway" "k8s-nat" {
+ allocation_id = (known after apply)
+ association_id = (known after apply)
+ connectivity_type = "public"
+ id = (known after apply)
+ network_interface_id = (known after apply)
+ private_ip = (known after apply)
+ public_ip = (known after apply)
+ subnet_id = (known after apply)
+ tags = {
+ "Name" = "k8s-nat"
}
+ tags_all = {
+ "Name" = "k8s-nat"
}
}
# aws_route_table.private will be created
+ resource "aws_route_table" "private" {
+ arn = (known after apply)
+ id = (known after apply)
+ owner_id = (known after apply)
+ propagating_vgws = (known after apply)
+ route = [
+ {
+ cidr_block = "0.0.0.0/0"
+ nat_gateway_id = (known after apply)
# (12 unchanged attributes hidden)
},
]
+ tags = {
+ "Name" = "private"
}
+ tags_all = {
+ "Name" = "private"
}
+ vpc_id = (known after apply)
}
# aws_route_table.public will be created
+ resource "aws_route_table" "public" {
+ arn = (known after apply)
+ id = (known after apply)
+ owner_id = (known after apply)
+ propagating_vgws = (known after apply)
+ route = [
+ {
+ cidr_block = "0.0.0.0/0"
+ gateway_id = (known after apply)
# (12 unchanged attributes hidden)
},
]
+ tags = {
+ "Name" = "public"
}
+ tags_all = {
+ "Name" = "public"
}
+ vpc_id = (known after apply)
}
# aws_route_table_association.private-us-east-1a will be created
+ resource "aws_route_table_association" "private-us-east-1a" {
+ id = (known after apply)
+ route_table_id = (known after apply)
+ subnet_id = (known after apply)
}
# aws_route_table_association.private-us-east-1b will be created
+ resource "aws_route_table_association" "private-us-east-1b" {
+ id = (known after apply)
+ route_table_id = (known after apply)
+ subnet_id = (known after apply)
}
# aws_route_table_association.public-us-east-1a will be created
+ resource "aws_route_table_association" "public-us-east-1a" {
+ id = (known after apply)
+ route_table_id = (known after apply)
+ subnet_id = (known after apply)
}
# aws_route_table_association.public-us-east-1b will be created
+ resource "aws_route_table_association" "public-us-east-1b" {
+ id = (known after apply)
+ route_table_id = (known after apply)
+ subnet_id = (known after apply)
}
# aws_subnet.private-us-east-1a will be created
+ resource "aws_subnet" "private-us-east-1a" {
+ arn = (known after apply)
+ assign_ipv6_address_on_creation = false
+ availability_zone = "us-east-1a"
+ availability_zone_id = (known after apply)
+ cidr_block = "10.0.1.0/24"
+ enable_dns64 = false
+ enable_resource_name_dns_a_record_on_launch = false
+ enable_resource_name_dns_aaaa_record_on_launch = false
+ id = (known after apply)
+ ipv6_cidr_block_association_id = (known after apply)
+ ipv6_native = false
+ map_public_ip_on_launch = false
+ owner_id = (known after apply)
+ private_dns_hostname_type_on_launch = (known after apply)
+ tags = {
+ "Name" = "private-us-east-1a"
+ "kubernetes.io/cluster/demo" = "owned"
+ "kubernetes.io/role/internal-elb" = "1"
}
+ tags_all = {
+ "Name" = "private-us-east-1a"
+ "kubernetes.io/cluster/demo" = "owned"
+ "kubernetes.io/role/internal-elb" = "1"
}
+ vpc_id = (known after apply)
}
# aws_subnet.private-us-east-1b will be created
+ resource "aws_subnet" "private-us-east-1b" {
+ arn = (known after apply)
+ assign_ipv6_address_on_creation = false
+ availability_zone = "us-east-1b"
+ availability_zone_id = (known after apply)
+ cidr_block = "10.0.2.0/24"
+ enable_dns64 = false
+ enable_resource_name_dns_a_record_on_launch = false
+ enable_resource_name_dns_aaaa_record_on_launch = false
+ id = (known after apply)
+ ipv6_cidr_block_association_id = (known after apply)
+ ipv6_native = false
+ map_public_ip_on_launch = false
+ owner_id = (known after apply)
+ private_dns_hostname_type_on_launch = (known after apply)
+ tags = {
+ "Name" = "private-us-east-1b"
+ "kubernetes.io/cluster/demo" = "owned"
+ "kubernetes.io/role/internal-elb" = "1"
}
+ tags_all = {
+ "Name" = "private-us-east-1b"
+ "kubernetes.io/cluster/demo" = "owned"
+ "kubernetes.io/role/internal-elb" = "1"
}
+ vpc_id = (known after apply)
}
# aws_subnet.public-us-east-1a will be created
+ resource "aws_subnet" "public-us-east-1a" {
+ arn = (known after apply)
+ assign_ipv6_address_on_creation = false
+ availability_zone = "us-east-1a"
+ availability_zone_id = (known after apply)
+ cidr_block = "10.0.3.0/24"
+ enable_dns64 = false
+ enable_resource_name_dns_a_record_on_launch = false
+ enable_resource_name_dns_aaaa_record_on_launch = false
+ id = (known after apply)
+ ipv6_cidr_block_association_id = (known after apply)
+ ipv6_native = false
+ map_public_ip_on_launch = true
+ owner_id = (known after apply)
+ private_dns_hostname_type_on_launch = (known after apply)
+ tags = {
+ "Name" = "public-us-east-1a"
+ "kubernetes.io/cluster/demo" = "owned"
+ "kubernetes.io/role/elb" = "1"
}
+ tags_all = {
+ "Name" = "public-us-east-1a"
+ "kubernetes.io/cluster/demo" = "owned"
+ "kubernetes.io/role/elb" = "1"
}
+ vpc_id = (known after apply)
}
# aws_subnet.public-us-east-1b will be created
+ resource "aws_subnet" "public-us-east-1b" {
+ arn = (known after apply)
+ assign_ipv6_address_on_creation = false
+ availability_zone = "us-east-1b"
+ availability_zone_id = (known after apply)
+ cidr_block = "10.0.4.0/24"
+ enable_dns64 = false
+ enable_resource_name_dns_a_record_on_launch = false
+ enable_resource_name_dns_aaaa_record_on_launch = false
+ id = (known after apply)
+ ipv6_cidr_block_association_id = (known after apply)
+ ipv6_native = false
+ map_public_ip_on_launch = true
+ owner_id = (known after apply)
+ private_dns_hostname_type_on_launch = (known after apply)
+ tags = {
+ "Name" = "public-us-east-1b"
+ "kubernetes.io/cluster/demo" = "owned"
+ "kubernetes.io/role/elb" = "1"
}
+ tags_all = {
+ "Name" = "public-us-east-1b"
+ "kubernetes.io/cluster/demo" = "owned"
+ "kubernetes.io/role/elb" = "1"
}
+ vpc_id = (known after apply)
}
# aws_vpc.myvpc will be created
+ resource "aws_vpc" "myvpc" {
+ arn = (known after apply)
+ cidr_block = "10.0.0.0/16"
+ default_network_acl_id = (known after apply)
+ default_route_table_id = (known after apply)
+ default_security_group_id = (known after apply)
+ dhcp_options_id = (known after apply)
+ enable_classiclink = (known after apply)
+ enable_classiclink_dns_support = (known after apply)
+ enable_dns_hostnames = (known after apply)
+ enable_dns_support = true
+ enable_network_address_usage_metrics = (known after apply)
+ id = (known after apply)
+ instance_tenancy = "default"
+ ipv6_association_id = (known after apply)
+ ipv6_cidr_block = (known after apply)
+ ipv6_cidr_block_network_border_group = (known after apply)
+ main_route_table_id = (known after apply)
+ owner_id = (known after apply)
+ tags = {
+ "Name" = "myvpc"
}
+ tags_all = {
+ "Name" = "myvpc"
}
}
Plan: 26 to add, 0 to change, 0 to destroy.
Changes to Outputs:
+ eks_cluster_autoscaler_arn = (known after apply)
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: yes
aws_eip.nat: Creating...
aws_vpc.myvpc: Creating...
aws_iam_policy.eks_cluster_autoscaler: Creating...
aws_iam_role.demo: Creating...
aws_iam_role.nodes: Creating...
aws_iam_policy.eks_cluster_autoscaler: Creation complete after 0s [id=arn:aws:iam::256050093938:policy/eks-cluster-autoscaler]
aws_iam_role.nodes: Creation complete after 0s [id=eks-node-group-nodes]
aws_iam_role_policy_attachment.nodes-AmazonEC2ContainerRegistryReadOnly: Creating...
aws_iam_role_policy_attachment.nodes-AmazonEKS_CNI_Policy: Creating...
aws_iam_role_policy_attachment.nodes-AmazonEKSWorkerNodePolicy: Creating...
aws_iam_role.demo: Creation complete after 0s [id=ashish]
aws_iam_role_policy_attachment.demo-AmazonEKSClusterPolicy: Creating...
aws_iam_role_policy_attachment.nodes-AmazonEKS_CNI_Policy: Creation complete after 0s [id=eks-node-group-nodes-20250610082751337600000001]
aws_eip.nat: Creation complete after 1s [id=eipalloc-0eea3bf78b492fbfd]
aws_iam_role_policy_attachment.nodes-AmazonEKSWorkerNodePolicy: Creation complete after 1s [id=eks-node-group-nodes-20250610082751375100000002]
aws_iam_role_policy_attachment.demo-AmazonEKSClusterPolicy: Creation complete after 1s [id=ashish-20250610082751453900000003]
aws_iam_role_policy_attachment.nodes-AmazonEC2ContainerRegistryReadOnly: Creation complete after 1s [id=eks-node-group-nodes-20250610082751582900000004]
aws_vpc.myvpc: Creation complete after 2s [id=vpc-0ba03a84ccfd83d30]
aws_subnet.private-us-east-1a: Creating...
aws_subnet.public-us-east-1a: Creating...
aws_internet_gateway.myvpc-igw: Creating...
aws_subnet.public-us-east-1b: Creating...
aws_subnet.private-us-east-1b: Creating...
aws_internet_gateway.myvpc-igw: Creation complete after 0s [id=igw-00d4e76abce23a7bd]
aws_route_table.public: Creating...
aws_subnet.private-us-east-1a: Creation complete after 0s [id=subnet-0a24ca86181eef50c]
aws_subnet.private-us-east-1b: Creation complete after 1s [id=subnet-040a887feb7b2af36]
aws_route_table.public: Creation complete after 2s [id=rtb-0f1236cd61c6b3915]
aws_subnet.public-us-east-1a: Still creating... [00m10s elapsed]
aws_subnet.public-us-east-1b: Still creating... [00m10s elapsed]
aws_subnet.public-us-east-1b: Creation complete after 11s [id=subnet-0ff0bdd792e4a95cb]
aws_route_table_association.public-us-east-1b: Creating...
aws_route_table_association.public-us-east-1b: Creation complete after 1s [id=rtbassoc-0e840b6fa06c1c731]
aws_subnet.public-us-east-1a: Creation complete after 12s [id=subnet-00a93e051039f58ee]
aws_route_table_association.public-us-east-1a: Creating...
aws_nat_gateway.k8s-nat: Creating...
aws_eks_cluster.demo: Creating...
aws_route_table_association.public-us-east-1a: Creation complete after 1s [id=rtbassoc-0cd739f5e147182e9]
aws_nat_gateway.k8s-nat: Still creating... [00m10s elapsed]
aws_eks_cluster.demo: Still creating... [00m10s elapsed]
aws_nat_gateway.k8s-nat: Still creating... [00m20s elapsed]
aws_eks_cluster.demo: Still creating... [00m20s elapsed]
aws_nat_gateway.k8s-nat: Still creating... [00m30s elapsed]
aws_eks_cluster.demo: Still creating... [00m30s elapsed]
aws_nat_gateway.k8s-nat: Still creating... [00m40s elapsed]
aws_eks_cluster.demo: Still creating... [00m40s elapsed]
aws_eks_cluster.demo: Still creating... [00m50s elapsed]
aws_nat_gateway.k8s-nat: Still creating... [00m50s elapsed]
aws_eks_cluster.demo: Still creating... [01m00s elapsed]
aws_nat_gateway.k8s-nat: Still creating... [01m00s elapsed]
aws_nat_gateway.k8s-nat: Still creating... [01m10s elapsed]
aws_eks_cluster.demo: Still creating... [01m10s elapsed]
aws_eks_cluster.demo: Still creating... [01m20s elapsed]
aws_nat_gateway.k8s-nat: Still creating... [01m20s elapsed]
aws_nat_gateway.k8s-nat: Still creating... [01m30s elapsed]
aws_eks_cluster.demo: Still creating... [01m30s elapsed]
aws_nat_gateway.k8s-nat: Still creating... [01m40s elapsed]
aws_eks_cluster.demo: Still creating... [01m40s elapsed]
aws_nat_gateway.k8s-nat: Creation complete after 1m45s [id=nat-01f8aa45e5dd791b0]
aws_route_table.private: Creating...
aws_route_table.private: Creation complete after 1s [id=rtb-0541bac1f61ca686b]
aws_route_table_association.private-us-east-1a: Creating...
aws_route_table_association.private-us-east-1b: Creating...
aws_route_table_association.private-us-east-1b: Creation complete after 1s [id=rtbassoc-05822d5b565210dcc]
aws_eks_cluster.demo: Still creating... [01m50s elapsed]
aws_route_table_association.private-us-east-1a: Still creating... [00m10s elapsed]
aws_eks_cluster.demo: Still creating... [02m00s elapsed]
aws_route_table_association.private-us-east-1a: Creation complete after 14s [id=rtbassoc-035354c28db8e553c]
aws_eks_cluster.demo: Still creating... [02m10s elapsed]
aws_eks_cluster.demo: Still creating... [02m20s elapsed]
aws_eks_cluster.demo: Still creating... [04m00s elapsed]
aws_eks_cluster.demo: Still creating... [06m50s elapsed]
aws_eks_cluster.demo: Creation complete after 6m54s [id=ashish]
data.tls_certificate.eks: Reading...
aws_eks_node_group.private-nodes: Creating...
data.tls_certificate.eks: Read complete after 0s [id=922877a0975ad078a65b8ff11ebc47b8311945c7]
aws_iam_openid_connect_provider.eks: Creating...
aws_iam_openid_connect_provider.eks: Creation complete after 1s [id=arn:aws:iam::256050093938:oidc-provider/oidc.eks.us-east-1.amazonaws.com/id/E07152AC5B9A7239FB346A9681C1994E]
data.aws_iam_policy_document.eks_cluster_autoscaler_assume_role_policy: Reading...
data.aws_iam_policy_document.eks_cluster_autoscaler_assume_role_policy: Read complete after 0s [id=119306707]
aws_iam_role.eks_cluster_autoscaler: Creating...
aws_iam_role.eks_cluster_autoscaler: Creation complete after 0s [id=eks-cluster-autoscaler]
aws_iam_role_policy_attachment.eks_cluster_autoscaler_attach: Creating...
aws_iam_role_policy_attachment.eks_cluster_autoscaler_attach: Creation complete after 0s [id=eks-cluster-autoscaler-20250610083500259000000007]
aws_eks_node_group.private-nodes: Still creating... [00m10s elapsed]
aws_eks_node_group.private-nodes: Still creating... [00m20s elapsed]
aws_eks_node_group.private-nodes: Still creating... [00m30s elapsed]
aws_eks_node_group.private-nodes: Still creating... [00m40s elapsed]
aws_eks_node_group.private-nodes: Still creating... [00m50s elapsed]
aws_eks_node_group.private-nodes: Still creating... [01m00s elapsed]
aws_eks_node_group.private-nodes: Still creating... [01m10s elapsed]
aws_eks_node_group.private-nodes: Still creating... [01m20s elapsed]
aws_eks_node_group.private-nodes: Still creating... [01m30s elapsed]
aws_eks_node_group.private-nodes: Still creating... [01m40s elapsed]
aws_eks_node_group.private-nodes: Still creating... [01m50s elapsed]
aws_eks_node_group.private-nodes: Creation complete after 1m57s [id=ashish:private-nodes]
Apply complete! Resources: 26 added, 0 changed, 0 destroyed.
Outputs:
eks_cluster_autoscaler_arn = "arn:aws:iam::256050093938:role/eks-cluster-autoscaler"
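The role ARN in this output is needed later, when annotating the Cluster Autoscaler's service account. If you need it again after the apply, you can print it at any time from inside the eks_terraform directory:
terraform output -raw eks_cluster_autoscaler_arn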
Now update the kubeconfig file on your system with the following command:
aws eks --region us-east-1 update-kubeconfig --name ashish
Output:
[ec2-user@ip-172-31-6-151 ~]$ aws eks --region us-east-1 update-kubeconfig --name ashish
Added new context arn:aws:eks:us-east-1:256050093938:cluster/ashish to /home/ec2-user/.kube/config
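As an optional sanity check, you can confirm that kubectl now points at the new cluster:
kubectl config current-context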
To verify the EKS cluster, run the following command:
[ec2-user@ip-172-31-6-151 ~]$ kubectl get po -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system aws-node-jppdr 2/2 Running 0 6m53s
kube-system coredns-789f8477df-lgzbp 1/1 Running 0 8m43s
kube-system coredns-789f8477df-lw56r 1/1 Running 0 8m43s
kube-system kube-proxy-mmvfk 1/1 Running 0 6m53s
Next, verify the cluster services:
kubectl get svc
[ec2-user@ip-172-31-6-151 ~]$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 172.20.0.1 <none> 443/TCP 11m
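Before installing the autoscaler, it is also worth confirming that the worker nodes from the private node group have joined the cluster (an optional check):
kubectl get nodes -o wide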
Create a cluster-autoscaler.yaml file and set the ARN of the IAM role created by Terraform in the ServiceAccount annotation (one quick way to fill it in is shown after the manifest):
cluster-autoscaler.yaml
[ec2-user@ip-172-31-6-151 ~]$ cat cluster-autoscaler.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cluster-autoscaler
  namespace: kube-system
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::<your-account-id>:role/eks-cluster-autoscaler
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-autoscaler
rules:
  - apiGroups: [""]
    resources: ["events", "endpoints"]
    verbs: ["create", "patch"]
  - apiGroups: [""]
    resources: ["pods/eviction"]
    verbs: ["create"]
  - apiGroups: [""]
    resources: ["pods/status"]
    verbs: ["update"]
  - apiGroups: [""]
    resources: ["endpoints"]
    resourceNames: ["cluster-autoscaler"]
    verbs: ["get", "update"]
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["watch", "list", "get", "update"]
  - apiGroups: [""]
    resources: ["namespaces", "pods", "services", "replicationcontrollers", "persistentvolumeclaims", "persistentvolumes"]
    verbs: ["watch", "list", "get"]
  - apiGroups: ["extensions"]
    resources: ["replicasets", "daemonsets"]
    verbs: ["watch", "list", "get"]
  - apiGroups: ["policy"]
    resources: ["poddisruptionbudgets"]
    verbs: ["watch", "list"]
  - apiGroups: ["apps"]
    resources: ["statefulsets", "replicasets", "daemonsets"]
    verbs: ["watch", "list", "get"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses", "csinodes", "csidrivers", "csistoragecapacities"]
    verbs: ["watch", "list", "get"]
  - apiGroups: ["batch", "extensions"]
    resources: ["jobs"]
    verbs: ["get", "list", "watch", "patch"]
  - apiGroups: ["coordination.k8s.io"]
    resources: ["leases"]
    verbs: ["create"]
  - apiGroups: ["coordination.k8s.io"]
    resourceNames: ["cluster-autoscaler"]
    resources: ["leases"]
    verbs: ["get", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: cluster-autoscaler
  namespace: kube-system
rules:
  - apiGroups: [""]
    resources: ["configmaps"]
    verbs: ["create", "list", "watch"]
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["cluster-autoscaler-status", "cluster-autoscaler-priority-expander"]
    verbs: ["delete", "get", "update", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-autoscaler
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-autoscaler
subjects:
  - kind: ServiceAccount
    name: cluster-autoscaler
    namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cluster-autoscaler
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cluster-autoscaler
subjects:
  - kind: ServiceAccount
    name: cluster-autoscaler
    namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cluster-autoscaler
  namespace: kube-system
  labels:
    app: cluster-autoscaler
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cluster-autoscaler
  template:
    metadata:
      labels:
        app: cluster-autoscaler
    spec:
      serviceAccountName: cluster-autoscaler
      containers:
        # the autoscaler minor version should track your Kubernetes version (v1.31.x for EKS 1.31)
        - image: registry.k8s.io/autoscaling/cluster-autoscaler:v1.31.0
          name: cluster-autoscaler
          resources:
            limits:
              cpu: 100m
              memory: 600Mi
            requests:
              cpu: 100m
              memory: 600Mi
          # https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/cloudprovider/aws/README.md
          command:
            - ./cluster-autoscaler
            - --v=4
            - --stderrthreshold=info
            - --cloud-provider=aws
            - --skip-nodes-with-local-storage=false
            - --expander=least-waste
            - --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/ashish # replace "ashish" with your cluster name
            - --balance-similar-node-groups
            - --skip-nodes-with-system-pods=false
          volumeMounts:
            - name: ssl-certs
              mountPath: /etc/ssl/certs/ca-certificates.crt
              readOnly: true
          imagePullPolicy: "Always"
      volumes:
        - name: ssl-certs
          hostPath:
            path: "/etc/ssl/certs/ca-bundle.crt"
Verify the infrastructure
Output:
[ec2-user@ip-172-31-6-151 ~]$ kubectl apply -f cluster-autoscaler.yaml
serviceaccount/cluster-autoscaler created
clusterrole.rbac.authorization.k8s.io/cluster-autoscaler created
role.rbac.authorization.k8s.io/cluster-autoscaler created
clusterrolebinding.rbac.authorization.k8s.io/cluster-autoscaler created
rolebinding.rbac.authorization.k8s.io/cluster-autoscaler created
deployment.apps/cluster-autoscaler created
[ec2-user@ip-172-31-6-151 ~]$ kubectl get po -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system aws-node-jppdr 2/2 Running 0 15m
kube-system cluster-autoscaler-6748b474f6-hc8n4 1/1 Running 0 7s
kube-system coredns-789f8477df-lgzbp 1/1 Running 0 17m
kube-system coredns-789f8477df-lw56r 1/1 Running 0 17m
kube-system kube-proxy-mmvfk 1/1 Running 0 15m
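To confirm that the autoscaler started cleanly and discovered the node group, you can tail its logs (an optional check):
kubectl -n kube-system logs -f deployment/cluster-autoscaler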
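If you want to see the autoscaler react, a quick experiment (not part of the original walkthrough; whether a new node is actually added depends on the max size configured for the node group) is to over-schedule a throwaway deployment and watch the node count:
kubectl create deployment nginx-scale-test --image=nginx   # throwaway workload (hypothetical name)
kubectl scale deployment nginx-scale-test --replicas=20    # request more pods than the current nodes can hold
kubectl get nodes -w                                       # watch for a new node joining
kubectl delete deployment nginx-scale-test                 # clean up afterwards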
Destroy the infrastructure
When you no longer need the EKS cluster, tear down all the resources with:
terraform destroy
aws_vpc.myvpc: Refreshing state... [id=vpc-0ba03a84ccfd83d30]
aws_iam_role.nodes: Refreshing state... [id=eks-node-group-nodes]
aws_iam_role.demo: Refreshing state... [id=ashish]
aws_eip.nat: Refreshing state... [id=eipalloc-0eea3bf78b492fbfd]
aws_iam_policy.eks_cluster_autoscaler: Refreshing state... [id=arn:aws:iam::256050093938:policy/eks-cluster-autoscaler]
aws_iam_role_policy_attachment.demo-AmazonEKSClusterPolicy: Refreshing state... [id=ashish-20250610082751453900000003]
aws_iam_role_policy_attachment.nodes-AmazonEKSWorkerNodePolicy: Refreshing state... [id=eks-node-group-nodes-20250610082751375100000002]
aws_iam_role_policy_attachment.nodes-AmazonEKS_CNI_Policy: Refreshing state... [id=eks-node-group-nodes-20250610082751337600000001]
aws_iam_role_policy_attachment.nodes-AmazonEC2ContainerRegistryReadOnly: Refreshing state... [id=eks-node-group-nodes-20250610082751582900000004]
aws_subnet.private-us-east-1a: Refreshing state... [id=subnet-0a24ca86181eef50c]
aws_subnet.public-us-east-1a: Refreshing state... [id=subnet-00a93e051039f58ee]
aws_internet_gateway.myvpc-igw: Refreshing state... [id=igw-00d4e76abce23a7bd]
aws_subnet.public-us-east-1b: Refreshing state... [id=subnet-0ff0bdd792e4a95cb]
aws_subnet.private-us-east-1b: Refreshing state... [id=subnet-040a887feb7b2af36]
aws_route_table.public: Refreshing state... [id=rtb-0f1236cd61c6b3915]
aws_nat_gateway.k8s-nat: Refreshing state... [id=nat-01f8aa45e5dd791b0]
aws_eks_cluster.demo: Refreshing state... [id=ashish]
.......
........
Plan: 0 to add, 0 to change, 26 to destroy.
Changes to Outputs:
- eks_cluster_autoscaler_arn = "arn:aws:iam::256050093938:role/eks-cluster-autoscaler" -> null
Do you really want to destroy all resources?
Terraform will destroy all your managed infrastructure, as shown above.
There is no undo. Only 'yes' will be accepted to confirm.
Enter a value: yes
aws_iam_role_policy_attachment.eks_cluster_autoscaler_attach: Destroying... [id=eks-cluster-autoscaler-20250610083500259000000007]
aws_route_table_association.public-us-east-1b: Destroying... [id=rtbassoc-0e840b6fa06c1c731]
aws_eks_node_group.private-nodes: Destroying... [id=ashish:private-nodes]
aws_route_table_association.private-us-east-1b: Destroying... [id=rtbassoc-05822d5b565210dcc]
aws_route_table_association.private-us-east-1a: Destroying... [id=rtbassoc-035354c28db8e553c]
aws_route_table_association.public-us-east-1a: Destroying... [id=rtbassoc-0cd739f5e147182e9]
aws_iam_role_policy_attachment.eks_cluster_autoscaler_attach: Destruction complete after 0s
aws_iam_policy.eks_cluster_autoscaler: Destroying... [id=arn:aws:iam::256050093938:policy/eks-cluster-autoscaler]
aws_iam_role.eks_cluster_autoscaler: Destroying... [id=eks-cluster-autoscaler]
aws_route_table_association.private-us-east-1a: Destruction complete after 0s
aws_route_table_association.public-us-east-1a: Destruction complete after 0s
aws_eks_node_group.private-nodes: Still destroying... [id=ashish:private-nodes, 01m40s elapsed]
aws_eks_node_group.private-nodes: Still destroying... [id=ashish:private-nodes, 01m50s elapsed]
aws_eks_node_group.private-nodes: Still destroying... [id=ashish:private-nodes, 02m00s elapsed]
aws_eks_node_group.private-nodes: Still destroying... [id=ashish:private-nodes, 02m10s elapsed]
aws_subnet.private-us-east-1a: Destruction complete after 1s
aws_vpc.myvpc: Destroying... [id=vpc-078475545ade76529]
aws_vpc.myvpc: Destruction complete after 1s
Destroy complete! Resources: 26 destroyed.
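Tip: both terraform apply and terraform destroy accept the -auto-approve flag to skip the interactive confirmation, which is useful in CI pipelines:
terraform destroy -auto-approve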
Conclusion
In this step-by-step guide, you’ve learned how to efficiently deploy an Amazon EKS 1.31 cluster using Terraform, implement IAM Roles for Service Accounts (IRSA) for secure and fine-grained permission control, and configure the Cluster Autoscaler to automatically adjust your cluster size based on real-time demand.
By combining these powerful tools, you now have:
- A production-ready, scalable Kubernetes environment
- Infrastructure defined as code for repeatability and version control
- Secure workload access to AWS services through IRSA
- Automated scaling to optimize cost and performance
This setup lays a strong foundation for running resilient and efficient containerized applications on AWS. Going forward, you can extend this architecture with monitoring, CI/CD pipelines, and additional security policies tailored to your workloads.