Common errors when setting up EKS for the first time

- Error “The connection to the server localhost:8080 was refused”:

The export in .bash_profile is not done correctly: the path exported has to match the name of the config file under the .kube directory.

Execute kubectl config view to check whether kubectl is actually fetching the config.
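For instance, with a config file named config-ferpablocluster (the name used later in this post), the export and a quick sanity check look roughly like this; adjust the file name to your own:

```shell
# The path appended to KUBECONFIG must match an existing file under ~/.kube.
# config-ferpablocluster is the file name used in this post; adjust to yours.
export KUBECONFIG=$KUBECONFIG:$HOME/.kube/config-ferpablocluster
echo "$KUBECONFIG"

# Then verify kubectl is actually picking it up:
#   ls ~/.kube                # the file name must match the export
#   kubectl config view       # should print your cluster, not an empty config
```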

- Errors “You must be logged in to the server” or “the server doesn’t have a resource type”:

$ kubectl get all
error: the server doesn't have a resource type "cronjobs"
$ kubectl get nodes
error: You must be logged in to the server (Unauthorized)

Check the following:

- The IAM user access key in ~/.aws/credentials on the workstation where you execute kubectl commands must be the same one you see when you describe that IAM user, and it must be an active access key. I stumbled on this, and so have many others.

- Check that the cluster name in the kubectl config file is the same as in EKS.

- Check that the API endpoint is set in server: and that certificate-authority-data: is populated.

- Verify you have the latest AWS CLI version.
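To compare the access key on your workstation with what IAM reports, a small helper can pull the key out of the credentials file. This is a sketch: the sample file, key, and user name below are placeholders, not values from this post.

```shell
# extract_key FILE: print the aws_access_key_id found in an AWS credentials file
extract_key() {
  awk -F ' *= *' '/aws_access_key_id/ { print $2; exit }' "$1"
}

# Demo on a sample file; on your workstation run it against ~/.aws/credentials.
cat > /tmp/credentials.sample <<'EOF'
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
EOF
extract_key /tmp/credentials.sample   # -> AKIAIOSFODNN7EXAMPLE

# Compare the result with what AWS reports (user name is a placeholder):
#   aws iam list-access-keys --user-name my-eks-user   # Status must be Active
#   aws sts get-caller-identity                        # identity kubectl will use
#   aws --version                                      # check the CLI is current
```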

On the other hand, ~/.kube/http-cache contains the most recent API invocations, where you can check the API response codes to get more accurate insight.

$ cat ~/.bash_profile
export PATH=$HOME/bin:$PATH
export KUBECONFIG=$KUBECONFIG:$HOME/.kube/config-ferpablocluster

$ ls ~/.kube/
cache/ config-ferpablocluster http-cache/

$ cat ~/.kube/config-ferpablocluster
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3FoTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFNE1EWXdOakUxTWpFME0xb1hEVEk0TURZd016RTFNakUwTTFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTExiCmdRcUpTOU5KdzltSng3MThBQjB6NTIyMTlVd0ZmdGR5dlAreU5wL0RqVFZrUHVrOUZja2VWWVNINE9CajNyN3UKQTg0anpyeE5VMFZiMnZXNDM3b1Urb3FLczQ3Wnh1YnAvTkN3c0ZYTEZtUHR0c2dYYXdoZ2JFUUd1clkyY1VIdwp3ZDU0aEFKUU0yQWR4cVVIaUxac2RIalc5cmlXRzdGSVptQXVkRnZ4Umsvc3Y0a3pScGdKOUVGb3BmdGY0c0RlClRqWmJtTXNodHpxOGduL1c3S0xYaDMwSHM2NEtlMFpGSTNuV1p6Ry8xQ1Aza3AxekVTN3pqNXFJam43VFJpZ1MKS3owNWNnYkw5dCt1aVpYSDFnN3M3Nk9nSTlpWjFYODk3bVo4a2ZJVkFXWkdibEdpYTBFVnFPRm1GTmZGUGJtWAowa0Uya2tOc0tOV1NOaTVuU3VzQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFHMmFoekJPdUhCSkl6aXdhb1J3M1FKWDlwcksKdzBxOGtweEp6eUxRTjhWNWJhc2Z1MkNOcS9pWnBFSlNZNU16K2ZaQlRVMGxaRzRjTGV3TGZkOStqWU1TNEY3cApONWd2VXhvQ1Y0dGlNa3RDa1JLRzN0TkVGQUhJN1NjY2ZTZ0NLejZlcEJFWTdaWlp5VnVFRXl1Vkd0VldtSGMzCittemY4ZjdHdmRuTkhBMFBFbWxZMVVzQ2dLTXFyNVEvS04yaE0wU21tcW5kQm94M1R2S08ybERwd0ZHTUpsVmMKc2UxUTZndi9pU2pOMTF0eWU4WG9KeWdCMVFRQThIZGZYK3hpL2pCejBiMU9PQnpnSis2SmJJUWdZeWpDUWJ3YgpSTFFLdkxpREI5aUlqMjhTNzNyRmhsaG9LS0JyYmljOEUvcmFqakRLR250YUZlTmV4cFlMSFdqU2tZWT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://3648988C03150EFCA3D30250B12D3B.yl4.us-east-1.eks.amazonaws.com
  name: ferpablocluster
contexts:
- context:
    cluster: ferpablocluster
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - ferpablocluster
      command: heptio-authenticator-aws
      env: null

It’s very important to understand that you can only use the -r flag in the kubectl config file if you have created the EKS cluster with a role. Bear in mind that if you created the EKS cluster with an IAM user, the -r flag won’t work; for that use case you will need to create a ConfigMap.
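For the IAM-user case, the aws-auth ConfigMap can map the user into the cluster via mapUsers. A minimal sketch, in which the account ID, user name, and group are hypothetical placeholders:

```yaml
# aws-auth ConfigMap sketch: grant an IAM user access to the cluster.
# The ARN, username, and group are placeholders, not values from this post.
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapUsers: |
    - userarn: arn:aws:iam::111122223333:user/my-eks-user
      username: my-eks-user
      groups:
        - system:masters
```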

- Verify your kubectl version is 1.10 or later; previous versions didn’t support client authentication plugins.

- Worker Nodes -> No Resources found

Double-check that the cluster name passed to the worker nodes is the same, and that you have executed kubectl apply -f aws-auth-cm.yaml. You only need to modify the role in this yaml file, using the NodeInstanceRole value found under the Outputs tab of the worker nodes stack.
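The steps above can be sketched as follows. The ConfigMap body matches the stock template from the EKS getting-started guide, but the role ARN here is a placeholder; the real value comes from your worker-node stack’s Outputs tab:

```shell
# Write a minimal aws-auth-cm.yaml with the stock placeholder, then substitute
# the NodeInstanceRole ARN from the worker-node CloudFormation stack Outputs.
cat > aws-auth-cm.yaml <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of instance role (not instance profile)>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
EOF

ROLE_ARN="arn:aws:iam::111122223333:role/eks-worker-NodeInstanceRole"  # placeholder
sed "s|<ARN of instance role (not instance profile)>|$ROLE_ARN|" aws-auth-cm.yaml \
  > aws-auth-cm.tmp && mv aws-auth-cm.tmp aws-auth-cm.yaml

# Then apply it and watch the nodes register:
#   kubectl apply -f aws-auth-cm.yaml
#   kubectl get nodes
```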