EKS - Creating a NodeGroup


1. Create a Launch Template

1. Check the EKS cluster information needed to build the UserData

Use the aws eks describe-cluster command to retrieve the required information about the EKS cluster.

  • Check the cluster certificate authority
aws eks describe-cluster \
--query "cluster.certificateAuthority.data" \
--output text \
--name [cluster-name] \
--region [region] \
--profile [aws-profile]
  • Result
LS0tLS1CRUd....RS0tLS0tCg==
  • Check the cluster endpoint
aws eks describe-cluster \
--query "cluster.endpoint" \
--output text \
--name my-cluster \
--region region-code
https://3224AD3EC8CEA69EDAE9C57D9792EBB2.yl4.ap-northeast-2.eks.amazonaws.com
  • Check the service IPv4 CIDR
aws eks describe-cluster \
--query "cluster.kubernetesNetworkConfig.serviceIpv4Cidr" \
--output text \
--name my-cluster \
--region region-code
172.20.0.0/16
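Note that the service CIDR itself is not what bootstrap.sh wants for DNS: its --dns-cluster-ip flag takes the cluster DNS service address, which EKS conventionally places at the .10 address of the service range. A minimal shell sketch of that derivation (variable names are illustrative):

```shell
# Derive the --dns-cluster-ip value from the service CIDR returned above.
# EKS places the cluster DNS (CoreDNS) service at the .10 address of the
# service IPv4 range, e.g. 172.20.0.0/16 -> 172.20.0.10.
SERVICE_CIDR="172.20.0.0/16"
DNS_CLUSTER_IP="$(echo "$SERVICE_CIDR" | cut -d/ -f1 | sed 's/\.0$/.10/')"
echo "$DNS_CLUSTER_IP"    # -> 172.20.0.10
```

For a 10.100.0.0/16 service CIDR the same derivation yields 10.100.0.10.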

2. Create the UserData

MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="==MYBOUNDARY=="

--==MYBOUNDARY==
Content-Type: text/x-shellscript; charset="us-ascii"

#!/bin/bash
set -ex
/etc/eks/bootstrap.sh my-cluster \
--b64-cluster-ca [cluster certificate authority] \
--apiserver-endpoint [cluster API server endpoint] \
--dns-cluster-ip [cluster DNS IP, the .10 address of the service CIDR] \
--container-runtime containerd \
--kubelet-extra-args '--max-pods=[max pod count]' \
--use-max-pods false

--==MYBOUNDARY==--

3. Encode the UserData in Base64

TUlNRS1WZXJzaW...VJZPT0tLQ==
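The encoding step can be done with the base64 utility; EC2 expects a single unwrapped Base64 string. A sketch assuming the MIME document above was saved as userdata.mime (the file names and stand-in content are illustrative):

```shell
# Stand-in user-data file; in practice this holds the MIME document above.
printf 'MIME-Version: 1.0' > userdata.mime

# Encode without line wrapping; tr -d '\n' keeps the output a single line
# on both GNU coreutils and BSD/macOS versions of base64.
base64 < userdata.mime | tr -d '\n' > userdata.b64

# Round-trip check: decoding must reproduce the original content.
base64 -d < userdata.b64
```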

4. The completed Launch Template data

{
  "BlockDeviceMappings": [
    {
      "DeviceName": "/dev/xvda",
      "Ebs": {
        "Encrypted": true,
        "DeleteOnTermination": true,
        "SnapshotId": "snap-0f505c7c316673bd3",
        "VolumeSize": 30,
        "VolumeType": "gp3"
      }
    }
  ],
  "ImageId": "ami-07d1f1e1f9eaaf855", // Amazon Machine Image
  "InstanceType": "t3.medium", // instance type
  "KeyName": "Key-Pair-Name", // EC2 key pair name
  "UserData": "TUlNRS1WZXJzaW...VJZPT0tLQ==", // [Base64-encoded UserData]
  "SecurityGroupIds": ["sg-0b59ab7daf120b3aa"]
}

Note that the // comments above are annotations only; JSON does not allow comments, so remove them before passing the file to the AWS CLI.
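Because JSON does not allow the // annotations shown above, it can help to sanity-check the cleaned data file before handing it to the AWS CLI. A minimal sketch with an abbreviated, comment-free file (file name illustrative):

```shell
# A comment-free, abbreviated version of the launch template data.
cat > launch-template-data.json <<'EOF'
{
  "ImageId": "ami-07d1f1e1f9eaaf855",
  "InstanceType": "t3.medium",
  "KeyName": "Key-Pair-Name",
  "SecurityGroupIds": ["sg-0b59ab7daf120b3aa"]
}
EOF

# json.tool exits non-zero on malformed JSON (e.g. leftover // comments).
python3 -m json.tool launch-template-data.json > /dev/null && echo "valid JSON"
```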

5. Create the Launch Template using the AWS CLI

aws ec2 create-launch-template \
--launch-template-name skcc-dev-floot-template-eks-cicd \
--version-description version1 \
--launch-template-data file:///Users/dongwoo-yang/vitality/devops/devops/eks-cicd/launch-template-data-cicd.json

6. Result

  • When the Launch Template is created successfully
{
  "LaunchTemplate": {
    "LaunchTemplateId": "lt-0a34f65e6a5f56f26",
    "LaunchTemplateName": "skcc-dev-floot-template-eks-cicd",
    "CreateTime": "2023-03-05T04:35:54+00:00",
    "DefaultVersionNumber": 1,
    "LatestVersionNumber": 1
  }
}
  • When there is a problem creating the Launch Template
{
  "LaunchTemplate": {
    "LaunchTemplateId": "lt-0f71298159e2cb0b9",
    "LaunchTemplateName": "skcc-dev-floot-template-eks-cicd",
    "CreateTime": "2023-03-05T04:24:31+00:00",
    "DefaultVersionNumber": 1,
    "LatestVersionNumber": 1
  },
  "Warning": {
    "Errors": [
      {
        "Code": "InvalidSecurityGroupID.NotFound",
        "Message": "The security group 'sg-044a9f8e5e9ad6aea' does not exist"
      }
    ]
  }
}
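As the second case shows, create-launch-template can return success while still reporting validation problems in a Warning block, so it is worth checking for it programmatically. A sketch against a locally saved copy of the response (file name illustrative):

```shell
# Save a sample create-launch-template response like the one above (abbreviated).
cat > create-response.json <<'EOF'
{
  "LaunchTemplate": { "LaunchTemplateId": "lt-0f71298159e2cb0b9" },
  "Warning": { "Errors": [ { "Code": "InvalidSecurityGroupID.NotFound" } ] }
}
EOF

# Print every error code carried in the Warning block, if any.
python3 - <<'EOF'
import json
resp = json.load(open("create-response.json"))
for err in resp.get("Warning", {}).get("Errors", []):
    print("launch template warning:", err["Code"])
EOF
```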

7. Delete the Launch Template if it was created incorrectly

aws ec2 delete-launch-template --launch-template-id lt-0f71298159e2cb0b9
  • Deletion return value
{
  "LaunchTemplate": {
    "LaunchTemplateId": "lt-032f99015c21e4006",
    "LaunchTemplateName": "skcc-dev-floot-template-eks-cicd",
    "CreateTime": "2023-03-05T04:30:59+00:00",
    "DefaultVersionNumber": 1,
    "LatestVersionNumber": 1
  }
}

2. Create the Nodegroup

1. Write the config file for creating the nodegroup

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: skcc-dev-floot-eks
  region: ap-northeast-2
vpc:
  id: "vpc-0b743e88067ce8f6f"
  subnets:
    private:
      ap-northeast-2a:
        id: "subnet-0f80a51bc0046c935"
      ap-northeast-2c:
        id: "subnet-0acaea413f9bd4192"
  securityGroup: "sg-0b59ab7daf120b3aa" # this is the ControlPlaneSecurityGroup
managedNodeGroups:
  - name: skcc-dev-floot-nodegroup-cicd
    launchTemplate:
      id: lt-0a34f65e6a5f56f26
      version: "1"
    labels: { nodegroup-role: cicd }
    availabilityZones: ["ap-northeast-2a", "ap-northeast-2c"]
    desiredCapacity: 1
    # instanceType: t3.medium
    iam:
      instanceRoleARN: <instance role ARN>
    privateNetworking: true
    tags:
      "cz-stage": "dev"
      "cz-owner": "양동우"
      "cz-appl": "cicd"
      "cz-project": "floot"
      "cz-org": "Vitality사업팀"

2. Create the NodeGroup using eksctl

eksctl create nodegroup -f ./eks-nodegroup-cicd.yaml
  • Nodegroup creation log
2023-03-05 13:38:34 [ℹ]  will use version 1.24 for new nodegroup(s) based on control plane version
2023-03-05 13:38:34 [!] no eksctl-managed CloudFormation stacks found for "skcc-dev-floot-eks", will attempt to create nodegroup(s) on non eksctl-managed cluster
2023-03-05 13:38:35 [ℹ] nodegroup "skcc-dev-floot-nodegroup-cicd" will use "" [AmazonLinux2/1.24]
2023-03-05 13:38:35 [ℹ] 4 existing nodegroup(s) (skcc-dev-floot-nodegroup-logging,skcc-dev-floot-nodegroup-monitoring,skcc-dev-floot-nodegroup-monitoring2,skcc-uat-floot-nodegroup-monitoring) will be excluded
2023-03-05 13:38:35 [ℹ] 1 nodegroup (skcc-dev-floot-nodegroup-cicd) was included (based on the include/exclude rules)
2023-03-05 13:38:35 [ℹ] will create a CloudFormation stack for each of 1 managed nodegroups in cluster "skcc-dev-floot-eks"
2023-03-05 13:38:35 [ℹ] 1 task: { 1 task: { 1 task: { create managed nodegroup "skcc-dev-floot-nodegroup-cicd" } } }
2023-03-05 13:38:35 [ℹ] building managed nodegroup stack "eksctl-skcc-dev-floot-eks-nodegroup-skcc-dev-floot-nodegroup-cicd"
2023-03-05 13:38:35 [ℹ] deploying stack "eksctl-skcc-dev-floot-eks-nodegroup-skcc-dev-floot-nodegroup-cicd"
2023-03-05 13:38:36 [ℹ] waiting for CloudFormation stack "eksctl-skcc-dev-floot-eks-nodegroup-skcc-dev-floot-nodegroup-cicd"
2023-03-05 13:39:06 [ℹ] waiting for CloudFormation stack "eksctl-skcc-dev-floot-eks-nodegroup-skcc-dev-floot-nodegroup-cicd"
2023-03-05 13:39:44 [ℹ] waiting for CloudFormation stack "eksctl-skcc-dev-floot-eks-nodegroup-skcc-dev-floot-nodegroup-cicd"
2023-03-05 13:41:32 [ℹ] waiting for CloudFormation stack "eksctl-skcc-dev-floot-eks-nodegroup-skcc-dev-floot-nodegroup-cicd"
2023-03-05 13:41:32 [ℹ] no tasks
2023-03-05 13:41:32 [✔] created 0 nodegroup(s) in cluster "skcc-dev-floot-eks"
2023-03-05 13:41:32 [ℹ] nodegroup "skcc-dev-floot-nodegroup-cicd" has 1 node(s)
2023-03-05 13:41:32 [ℹ] node "ip-10-180-19-19.ap-northeast-2.compute.internal" is ready
2023-03-05 13:41:32 [ℹ] waiting for at least 1 node(s) to become ready in "skcc-dev-floot-nodegroup-cicd"
2023-03-05 13:41:32 [ℹ] nodegroup "skcc-dev-floot-nodegroup-cicd" has 1 node(s)
2023-03-05 13:41:32 [ℹ] node "ip-10-180-19-19.ap-northeast-2.compute.internal" is ready
2023-03-05 13:41:32 [✔] created 1 managed nodegroup(s) in cluster "skcc-dev-floot-eks"
2023-03-05 13:41:33 [ℹ] checking security group configuration for all nodegroups
2023-03-05 13:41:33 [ℹ] all nodegroups have up-to-date cloudformation templates

3. Delete the nodegroup when necessary

eksctl delete nodegroup --cluster=skcc-dev-floot-eks --name=skcc-dev-floot-nodegroup-cicd
  • Nodegroup deletion log
2023-03-03 23:10:16 [ℹ]  1 nodegroup (skcc-dev-floot-nodegroup-cicd) was included (based on the include/exclude rules)
2023-03-03 23:10:16 [ℹ] will drain 1 nodegroup(s) in cluster "skcc-dev-floot-eks"
2023-03-03 23:10:16 [ℹ] starting parallel draining, max in-flight of 1
2023-03-03 23:10:17 [ℹ] cordon node "ip-10-180-18-62.ap-northeast-2.compute.internal"
2023-03-03 23:10:17 [✔] drained all nodes: [ip-10-180-18-62.ap-northeast-2.compute.internal]
2023-03-03 23:10:17 [ℹ] will delete 1 nodegroups from cluster "skcc-dev-floot-eks"
2023-03-03 23:10:17 [ℹ] 1 task: { 1 task: { delete nodegroup "skcc-dev-floot-nodegroup-cicd" [async] } }
2023-03-03 23:10:17 [ℹ] will delete stack "eksctl-skcc-dev-floot-eks-nodegroup-skcc-dev-floot-nodegroup-cicd"
2023-03-03 23:10:17 [ℹ] will delete 0 nodegroups from auth ConfigMap in cluster "skcc-dev-floot-eks"
2023-03-03 23:10:17 [✔] deleted 1 nodegroup(s) from cluster "skcc-dev-floot-eks"

Checking the instance type used by an EKS nodegroup

aws eks describe-nodegroup \
--cluster-name <cluster name> \
--nodegroup-name skcc-uat-chat-nodegroup-monitoring \
--query 'nodegroup.launchTemplate.instanceType'