I am trying to host an application in Amazon Elastic Kubernetes Service (EKS). I configured the EKS cluster from the AWS Console as an IAM user (user1), set up the node group, added a node to the cluster, and everything is working fine.
To connect to the cluster, I spun up an EC2 instance (CentOS 7) and configured the following (a rough sketch of the setup commands follows the list):
1. Installed Docker, kubeadm, kubelet, and kubectl.
2. Installed and configured AWS CLI v2.
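For completeness, this is roughly what I ran to set up the instance (reconstructed from memory, so the exact repo URLs and package versions may differ from what you would use today):

```sh
# Docker from the CentOS 7 extras repo
sudo yum install -y docker
sudo systemctl enable --now docker

# Kubernetes yum repo, then kubeadm/kubelet/kubectl
cat <<'EOF' | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
sudo yum install -y kubeadm kubelet kubectl

# AWS CLI v2 installer
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o awscliv2.zip
unzip awscliv2.zip && sudo ./aws/install
```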
I used user1's AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to configure the AWS CLI from within the EC2 instance so that I could connect to the cluster using kubectl.
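The CLI was configured roughly like this (keys redacted):

```sh
aws configure
# AWS Access Key ID [None]:     <user1 access key>
# AWS Secret Access Key [None]: <user1 secret key>
# Default region name [None]:   ap-south-1
# Default output format [None]: json
```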
I ran the commands below to connect to the cluster as user1 (both succeeded; sample output follows the list):
1. aws sts get-caller-identity
2. aws eks update-kubeconfig --name trojanwall --region ap-south-1
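As user1, aws sts get-caller-identity returns something like this (IDs redacted), confirming the CLI is using the right identity:

```json
{
    "UserId": "AIDAXXXXXXXXXXXXXXXXX",
    "Account": "XXXXXXXXXXXX",
    "Arn": "arn:aws:iam::XXXXXXXXXXXX:user/user1"
}
```

and update-kubeconfig reports that it added/updated the context for arn:aws:eks:ap-south-1:XXXXXXXXXXXX:cluster/trojanwall in ~/.kube/config.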
As user1, I am able to perform every operation on the EKS cluster.
However, I have now created a new IAM user named 'user2' and replaced the configured AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY with user2's. I went through the same steps, but when I run 'kubectl get pods', I get the following error:
error: You must be logged in to the server (Unauthorized)
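To be explicit, the sequence for user2 was:

```sh
aws configure                 # re-entered user2's AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
aws sts get-caller-identity   # reports arn:aws:iam::XXXXXXXXXXXX:user/user2, as expected
aws eks update-kubeconfig --name trojanwall --region ap-south-1   # succeeds
kubectl get pods              # error: You must be logged in to the server (Unauthorized)
```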
Here is the result of running kubectl describe configmap -n kube-system aws-auth as user1:
Name:         aws-auth
Namespace:    kube-system
Labels:       <none>
Annotations:  <none>

Data
====
mapRoles:
----
- groups:
  - system:bootstrappers
  - system:nodes
  rolearn: arn:aws:iam::XXXXXXXXXXXX:role/AWS-EC2-Role
  username: system:node:{{EC2PrivateDNSName}}

BinaryData
====

Events:  <none>
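I notice there is no mapUsers section at all, so my guess is that user2 is simply not mapped to any Kubernetes identity yet. Would adding an entry like the one below (the username and group here are just my guess; I haven't tried it) via kubectl edit configmap aws-auth -n kube-system be the right fix?

```yaml
mapUsers: |
  - userarn: arn:aws:iam::XXXXXXXXXXXX:user/user2
    username: user2
    groups:
      - system:masters
```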
Does anyone know how to resolve this?