All of this hands-on work is carried out in the Northern Virginia (us-east-1) region.
Open "Cloud9" and create it with the following requirements.
Step 1 Name environment
Name: Specify any name.
Step 2 Configure settings
The basic settings are as follows; everything is left at its default.
Environment type: Create a new EC2 instance for environment (direct access)
Instance type: t2.micro (1 GiB RAM + 1 vCPU)
Platform: Amazon Linux
Network (VPC): Specify any VPC and a public subnet in it.
Step 3 Review
Click the "Create environment" button.
Create an IAM role with the following IAM policy and attach it to the Cloud9 EC2 instance.
AdministratorAccess
Cloud9 can automatically issue temporary credentials for your IAM user, but these temporary credentials are restricted for certain actions (such as IAM), so disable them so that the IAM role you attached to the EC2 instance is used instead.
Open the gear-shaped icon in the upper right, open the AWS Settings menu, and disable AWS managed temporary credentials.
leomaro7:~/environment $ rm -vf ${HOME}/.aws/credentials
leomaro7:~/environment $ aws --version
aws-cli/1.18.162 Python/3.6.12 Linux/4.14.193-113.317.amzn1.x86_64 botocore/1.19.2
leomaro7:~/environment $ AWS_REGION="us-east-1"
leomaro7:~/environment $ aws configure set default.region ${AWS_REGION}
leomaro7:~/environment $ aws configure get default.region
us-east-1
leomaro7:~/environment $ aws sts get-caller-identity
{
"UserId": "",
"Account": "",
"Arn": ""
}
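With the managed temporary credentials disabled and the IAM role attached to the instance, the Arn above should be an assumed-role ARN for that role rather than an IAM user. A quick sketch of how to confirm this:
# The Arn should reference the EC2 instance role, roughly:
#   arn:aws:sts::<account-id>:assumed-role/<your-role-name>/<instance-id>
aws sts get-caller-identity --query Arn --output text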
eksctl: The command used to create the Kubernetes cluster itself
leomaro7:~/environment $ curl -L "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
leomaro7:~/environment $ sudo mv /tmp/eksctl /usr/local/bin
leomaro7:~/environment $ eksctl version
0.30.0
eksctl - The official CLI for Amazon EKS
kubectl: The command used to operate the created Kubernetes cluster
leomaro7:~/environment $ sudo curl -L -o /usr/local/bin/kubectl https://amazon-eks.s3.us-west-2.amazonaws.com/1.17.7/2020-07-08/bin/linux/amd64/kubectl
leomaro7:~/environment $ sudo chmod +x /usr/local/bin/kubectl
leomaro7:~/environment $ kubectl version --short --client
Client Version: v1.17.7-eks-bffbac
AWS_REGION=$(aws configure get default.region)
eksctl create cluster \
--name=ekshandson \
--version 1.17 \
--nodes=3 --managed \
--region ${AWS_REGION} --zones ${AWS_REGION}a,${AWS_REGION}c
- In addition to passing options as command-line arguments, you can describe the settings in YAML and pass that YAML file as an argument (a minimal sketch follows this list).
- You can also create the cluster in an existing VPC. This time, a new VPC is created.
- eksctl uses CloudFormation to create the AWS resources: the VPC, the EKS cluster (control plane), the worker node Auto Scaling group, and so on. (It is worth actually opening CloudFormation and checking the created resources.)
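For reference, a minimal sketch of the equivalent settings written as an eksctl config file (the file name cluster.yaml and the node group name managed-ng-1 are arbitrary; check the schema against the eksctl documentation for your version):
cat <<EOF > cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: ekshandson
  region: ${AWS_REGION}
  version: "1.17"
availabilityZones:
  - ${AWS_REGION}a
  - ${AWS_REGION}c
managedNodeGroups:
  - name: managed-ng-1
    desiredCapacity: 3
EOF
eksctl create cluster -f cluster.yaml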
If cluster creation fails, common causes include:
- Insufficient resources in an Availability Zone
- The AWS CLI version is old
- The IAM role is not assigned
jq: A convenient command for processing JSON data
bash-completion: Provides command completion in the bash shell
sudo yum -y install jq bash-completion
sudo curl -L -o /etc/bash_completion.d/docker https://raw.githubusercontent.com/docker/cli/master/contrib/completion/bash/docker
sudo curl -L -o /usr/local/bin/docker-compose "https://github.com/docker/compose/releases/download/1.26.2/docker-compose-$(uname -s)-$(uname -m)"
sudo chmod +x /usr/local/bin/docker-compose
sudo curl -L -o /etc/bash_completion.d/docker-compose https://raw.githubusercontent.com/docker/compose/1.26.2/contrib/completion/bash/docker-compose
kubectl completion bash > kubectl_completion
sudo mv kubectl_completion /etc/bash_completion.d/kubectl
eksctl completion bash > eksctl_completion
sudo mv eksctl_completion /etc/bash_completion.d/eksctl
cat <<"EOT" >> ${HOME}/.bash_profile
alias k="kubectl"
complete -o default -F __start_kubectl k
EOT
### kube-ps1
kube-ps1: Displays the current kubectl context and Namespace in the shell prompt.
git clone https://github.com/jonmosco/kube-ps1.git ~/.kube-ps1
cat <<"EOT" >> ~/.bash_profile
source ~/.kube-ps1/kube-ps1.sh
function get_cluster_short() {
echo "$1" | cut -d . -f1
}
KUBE_PS1_CLUSTER_FUNCTION=get_cluster_short
KUBE_PS1_SUFFIX=') '
PS1='$(kube_ps1)'$PS1
EOT
### kubectx / kubens
kubectx / kubens: Make it easy to switch the kubectl context and Namespace.
git clone https://github.com/ahmetb/kubectx.git ~/.kubectx
sudo ln -sf ~/.kubectx/completion/kubens.bash /etc/bash_completion.d/kubens
sudo ln -sf ~/.kubectx/completion/kubectx.bash /etc/bash_completion.d/kubectx
cat <<"EOT" >> ~/.bash_profile
export PATH=~/.kubectx:$PATH
EOT
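After reopening the terminal so that the PATH addition takes effect, typical usage looks like this:
kubectx              # list kubectl contexts
kubens               # list Namespaces in the current context
kubens kube-system   # switch the default Namespace to kube-system
kubens -             # switch back to the previous Namespace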
### stern
stern: Tails container logs (across multiple pods at once).
sudo curl -L -o /usr/local/bin/stern https://github.com/wercker/stern/releases/download/1.11.0/stern_linux_amd64
sudo chmod +x /usr/local/bin/stern
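A usage sketch for later, once the cluster and the application are running (the pod query is a regular expression matched against pod names; the options shown are just examples):
stern frontend -n frontend                 # tail logs from all pods matching "frontend"
stern aws-node -n kube-system --since 5m   # only show lines from the last 5 minutes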
Close the current terminal tab and open a new one so that the settings added to ~/.bash_profile take effect.
Show the current cluster and basic information about it.
leomaro7:~/environment $ eksctl get cluster
NAME REGION
ekshandson us-east-1
leomaro7:~/environment $ kubectl cluster-info
Kubernetes master is running at https://25FF2316ECD9ED0E8D621ED7DCFD6263.gr7.us-east-1.eks.amazonaws.com
CoreDNS is running at https://25FF2316ECD9ED0E8D621ED7DCFD6263.gr7.us-east-1.eks.amazonaws.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Check the nodes that belong to the cluster, the capacity of the nodes, and the running pods.
leomaro7:~/environment $ kubectl get node
NAME STATUS ROLES AGE VERSION
ip-192-168-4-184.ec2.internal Ready <none> 13m v1.17.11-eks-cfdc40
ip-192-168-56-48.ec2.internal Ready <none> 13m v1.17.11-eks-cfdc40
ip-192-168-6-63.ec2.internal Ready <none> 13m v1.17.11-eks-cfdc40
leomaro7:~/environment $ kubectl describe node ip-192-168-4-184.ec2.internal
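The describe output is fairly long; to pull out just the capacity-related sections, something like the following works (the -A line count is approximate):
kubectl describe node ip-192-168-4-184.ec2.internal | grep -A 8 -E "^(Capacity|Allocatable):"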
Namespace: A grouping of Kubernetes resources such as Pods and Services.
Check the Namespaces.
leomaro7:~/environment $ kubectl get namespace
NAME STATUS AGE
default Active 24m
kube-node-lease Active 24m
kube-public Active 24m
kube-system Active 24m
Pod: The smallest unit of deployment in Kubernetes, with one or more containers running in a pod.
Below are the pods in the default Namespace (of course, there are none yet).
leomaro7:~/environment $ kubectl get pod -n default
No resources found in default namespace.
Change the default Namespace to kube-system using kubens.
leomaro7:~/environment $ kubens kube-system
Context "[email protected]" modified.
Active namespace is "kube-system".
In kube-system, pods like the following are running.
leomaro7:~/environment $ kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
aws-node-2p7s2 1/1 Running 0 26m
aws-node-cmmkc 1/1 Running 0 26m
aws-node-vbp6f 1/1 Running 0 26m
coredns-75b44cb5b4-cktx9 1/1 Running 0 31m
coredns-75b44cb5b4-lq58q 1/1 Running 0 31m
kube-proxy-c6td9 1/1 Running 0 26m
kube-proxy-jxwc6 1/1 Running 0 26m
kube-proxy-z454c 1/1 Running 0 26m
Use the -A option to get information for all Namespaces.
leomaro7:~/environment $ kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system aws-node-2p7s2 1/1 Running 0 27m
kube-system aws-node-cmmkc 1/1 Running 0 27m
kube-system aws-node-vbp6f 1/1 Running 0 27m
kube-system coredns-75b44cb5b4-cktx9 1/1 Running 0 32m
kube-system coredns-75b44cb5b4-lq58q 1/1 Running 0 32m
kube-system kube-proxy-c6td9 1/1 Running 0 27m
kube-system kube-proxy-jxwc6 1/1 Running 0 27m
kube-system kube-proxy-z454c 1/1 Running 0 27m
Create the DynamoDB table (messages) that the sample application's backend will use.
aws dynamodb create-table --table-name 'messages' \
--attribute-definitions '[{"AttributeName":"uuid","AttributeType": "S"}]' \
--key-schema '[{"AttributeName":"uuid","KeyType": "HASH"}]' \
--provisioned-throughput '{"ReadCapacityUnits": 1,"WriteCapacityUnits": 1}'
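To confirm the table has finished creating before moving on, these standard AWS CLI checks can be used:
aws dynamodb wait table-exists --table-name messages
aws dynamodb describe-table --table-name messages \
  --query 'Table.TableStatus' --output text   # should print ACTIVE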
If you have changed directories, move back to the ~/environment/ directory.
cd ~/environment/
Download the sample application and unzip it.
wget https://eks-for-aws-summit-online.workshop.aws/sample-app.zip
unzip sample-app.zip
Build with docker-compose.
cd sample-app
docker-compose build
Confirm that it was built.
docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
backend latest 412ec271d5e7 11 seconds ago 107MB
frontend latest fa4eba7cd29c 20 seconds ago 57.7MB
python 3-alpine dc68588b1801 6 days ago 44.3MB
Create an ECR repository.
aws ecr create-repository --repository-name frontend
aws ecr create-repository --repository-name backend
Get the URL of the repository and store it in a variable.
frontend_repo=$(aws ecr describe-repositories --repository-names frontend --query 'repositories[0].repositoryUri' --output text)
backend_repo=$(aws ecr describe-repositories --repository-names backend --query 'repositories[0].repositoryUri' --output text)
Tag the images you just built with the ECR repository URIs (aliases pointing at the same image IDs).
docker tag frontend:latest ${frontend_repo}:latest
docker tag backend:latest ${backend_repo}:latest
Check the tagged images.
docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
4.dkr.ecr.us-east-1.amazonaws.com/backend latest 412ec271d5e7 2 minutes ago 107MB
4.dkr.ecr.us-east-1.amazonaws.com/frontend latest fa4eba7cd29c 2 minutes ago 57.7MB
Log in to ECR.
ACCOUNT_ID=$(aws sts get-caller-identity --output text --query Account)
AWS_REGION=$(aws configure get default.region)
aws ecr get-login-password | docker login --username AWS --password-stdin https://${ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com
Push the image to ECR.
docker push ${frontend_repo}:latest
docker push ${backend_repo}:latest
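To double-check that the push succeeded, list the images stored in each repository:
aws ecr list-images --repository-name frontend
aws ecr list-images --repository-name backend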
Creating a working directory.
mkdir -p ~/environment/manifests/
cd ~/environment/manifests/
Create a Namespace for Application 1, and also change the default Namespace to it.
kubectl create namespace frontend
kubens frontend
Creating a Deployment
frontend_repo=$(aws ecr describe-repositories --repository-names frontend --query 'repositories[0].repositoryUri' --output text)
cat <<EOF > frontend-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  selector:
    matchLabels:
      app: frontend
  replicas: 2
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: ${frontend_repo}:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 5000
        env:
        - name: BACKEND_URL
          value: http://backend.backend:5000/messages
EOF
kubectl apply -f frontend-deployment.yaml -n frontend
Check the created deployment.
kubectl get deployment -n frontend
NAME READY UP-TO-DATE AVAILABLE AGE
frontend 2/2 2 2 13s
Check the pod.
kubectl get pod -n frontend
NAME READY STATUS RESTARTS AGE
frontend-84ccd456fb-l6kjl 1/1 Running 0 53s
frontend-84ccd456fb-wdhwr 1/1 Running 0 53s
Creating a Service.
Service: Provides name resolution and load balancing for accessing the pods launched by a Deployment.
cat <<EOF > frontend-service-lb.yaml
apiVersion: v1
kind: Service
metadata:
  name: frontend
spec:
  type: LoadBalancer
  selector:
    app: frontend
  ports:
  - protocol: TCP
    port: 80
    targetPort: 5000
EOF
kubectl apply -f frontend-service-lb.yaml -n frontend
Check the created Service.
kubectl get service -n frontend
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
frontend LoadBalancer 10.100.219.241 dd7c90ab25e44a939b065e566aa5432-1872056256.us-east-1.elb.amazonaws.com 80:30374/TCP 10s
Access the EXTERNAL-IP in a browser and check that the page is displayed. (It takes a few minutes for the load balancer's DNS name to become resolvable.)
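If you prefer checking from the terminal, here is a rough sketch that reads the load balancer hostname from the Service and requests it with curl (it will fail until the ELB is provisioned and its DNS name propagates):
frontend_lb=$(kubectl get service frontend -n frontend \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
curl -s -o /dev/null -w "%{http_code}\n" http://${frontend_lb}/   # 200 once ready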
Create a Namespace for Application 2, and also change the default Namespace to it.
kubectl create namespace backend
kubens backend
Creating a Deployment.
AWS_REGION=$(aws configure get default.region)
backend_repo=$(aws ecr describe-repositories --repository-names backend --query 'repositories[0].repositoryUri' --output text)
cat <<EOF > backend-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend
spec:
  selector:
    matchLabels:
      app: backend
  replicas: 2
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: backend
        image: ${backend_repo}:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 5000
        env:
        - name: AWS_DEFAULT_REGION
          value: ${AWS_REGION}
        - name: DYNAMODB_TABLE_NAME
          value: messages
EOF
kubectl apply -f backend-deployment.yaml -n backend
Check the pod.
kubectl get pod -n backend
NAME READY STATUS RESTARTS AGE
backend-7544ddcd98-7lxcx 1/1 Running 0 12s
backend-7544ddcd98-bn5jq 1/1 Running 0 12s
Creating a Service.
cat <<EOF > backend-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  type: ClusterIP
  selector:
    app: backend
  ports:
  - protocol: TCP
    port: 5000
    targetPort: 5000
EOF
kubectl apply -f backend-service.yaml -n backend
Check the created Service.
kubectl get service -n backend
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
backend ClusterIP 10.100.60.141 <none> 5000/TCP 13s
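The backend Service is ClusterIP, so it is reachable only from inside the cluster, via the <service>.<namespace> name (backend.backend) that the frontend uses. If you want to verify that name resolution and the API path work, a throwaway pod can be used as a sketch (the curlimages/curl image is just an example):
kubectl run curltest --rm -it --restart=Never -n frontend \
  --image=curlimages/curl -- curl -s http://backend.backend:5000/messages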
Try accessing the frontend's EXTERNAL-IP again.
# 5. IAM ROLES FOR SERVICE ACCOUNTS
Use an EKS feature called IAM Roles for Service Accounts (IRSA) to grant an IAM role to Application 2's pods so that they can access DynamoDB.
Create an OIDC identity provider and associate it with your cluster.
eksctl utils associate-iam-oidc-provider \
--cluster ekshandson \
--approve
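You can confirm that the cluster has an OIDC issuer and that the provider was created, for example:
aws eks describe-cluster --name ekshandson \
  --query "cluster.identity.oidc.issuer" --output text
aws iam list-open-id-connect-providers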
Create an IAM policy that allows full access to the DynamoDB messages table.
cat <<EOF > dynamodb-messages-fullaccess-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListAndDescribe",
      "Effect": "Allow",
      "Action": [
        "dynamodb:List*",
        "dynamodb:DescribeReservedCapacity*",
        "dynamodb:DescribeLimits",
        "dynamodb:DescribeTimeToLive"
      ],
      "Resource": "*"
    },
    {
      "Sid": "SpecificTable",
      "Effect": "Allow",
      "Action": [
        "dynamodb:BatchGet*",
        "dynamodb:DescribeStream",
        "dynamodb:DescribeTable",
        "dynamodb:Get*",
        "dynamodb:Query",
        "dynamodb:Scan",
        "dynamodb:BatchWrite*",
        "dynamodb:CreateTable",
        "dynamodb:Delete*",
        "dynamodb:Update*",
        "dynamodb:PutItem"
      ],
      "Resource": "arn:aws:dynamodb:*:*:table/messages"
    }
  ]
}
EOF
aws iam create-policy \
--policy-name dynamodb-messages-fullaccess \
--policy-document file://dynamodb-messages-fullaccess-policy.json
Create an IAM role and associate it with a ServiceAccount that Application 2 will use when it runs.
ACCOUNT_ID=$(aws sts get-caller-identity --output text --query Account)
eksctl create iamserviceaccount \
--name dynamodb-messages-fullaccess \
--namespace backend \
--cluster ekshandson \
--attach-policy-arn arn:aws:iam::${ACCOUNT_ID}:policy/dynamodb-messages-fullaccess \
--override-existing-serviceaccounts \
--approve
Check the created ServiceAccount.
kubectl get serviceaccount -n backend
NAME                           SECRETS   AGE
default                        1         24m
dynamodb-messages-fullaccess   1         35s
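eksctl attaches the created IAM role to the ServiceAccount as an annotation; describe shows it:
kubectl describe serviceaccount dynamodb-messages-fullaccess -n backend
# Look for an annotation of the form:
#   eks.amazonaws.com/role-arn: arn:aws:iam::<account-id>:role/...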
Modify the Deployment definition of Application 2 and run the pod with the created ServiceAccount.
Open backend-deployment.yaml in Cloud9 by double-clicking, add the serviceAccountName specification as follows, and save.
    spec:
+     serviceAccountName: dynamodb-messages-fullaccess
      containers:
kubectl apply -f backend-deployment.yaml -n backend
New pods will be rolled out automatically, so check them.
kubectl get pod -n backend
NAME READY STATUS RESTARTS AGE
backend-647595dd78-jjmml 1/1 Running 0 70s
backend-647595dd78-w7f6t 1/1 Running 0 72s
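To make sure the new pods are actually running with the ServiceAccount (and therefore the IAM role), a quick check:
kubectl get pod -n backend \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.serviceAccountName}{"\n"}{end}'
# Each pod should show dynamodb-messages-fullaccess in the second column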
If you access the frontend's EXTERNAL-IP again, the page should now be displayed correctly.