Pablo Perez
Common errors when setting up EKS for the first time

- Error: connection refused on localhost:8080

The export in .bash_profile is not done correctly: the path exported in KUBECONFIG has to match the name of the config file under the .kube directory.

Execute kubectl config view to check whether kubectl is actually fetching the config.
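A quick way to sanity-check this point (the check_kubeconfig helper and the sample path below are illustrative, not part of the original setup):

```shell
# Minimal sketch: the file named in KUBECONFIG must actually exist, otherwise
# kubectl falls back to its default of localhost:8080 and the connection is
# refused. check_kubeconfig is a hypothetical helper for this post.
check_kubeconfig() {
  if [ -f "$1" ]; then
    echo "ok: $1"
  else
    echo "missing: $1 (kubectl will fall back to localhost:8080)"
  fi
}

check_kubeconfig "/nonexistent/.kube/config-ferpablocluster"
```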

- Error: you must be logged in to the server / the server doesn't have a resource type

$ kubectl get all
error: the server doesn't have a resource type "cronjobs"
$ kubectl get nodes
error: You must be logged in to the server (Unauthorized)

check the following:

- The IAM user access key in ~/.aws/credentials on the workstation where you execute kubectl commands must be the same one you see when you describe the IAM user, and it must be an active access key. I stumbled on this, and many others did too.

- Check that the name of the cluster in the kubectl config file is the same as in EKS

- Check that the API endpoint is set for server: and that certificate-authority-data: is set

- Verify you have the latest AWS CLI version
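The access-key check above can be scripted roughly like this. The credentials content and the AKIAEXAMPLEKEY123456 value are fake placeholders; the real comparison is against the IAM console or aws iam list-access-keys, where the key must show as Active:

```shell
# Sketch: pull the access key id out of a credentials file and compare it
# by eye with what AWS reports for the IAM user. A throwaway file with fake
# content stands in for ~/.aws/credentials here.
creds=$(mktemp)
cat > "$creds" <<'EOF'
[default]
aws_access_key_id = AKIAEXAMPLEKEY123456
aws_secret_access_key = fake-secret
EOF

local_key=$(awk -F' *= *' '/aws_access_key_id/ {print $2}' "$creds")
echo "local access key: $local_key"
# Compare against:  aws iam list-access-keys --user-name <user>
rm -f "$creds"
```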

Additionally, ~/.kube/http-cache contains the most recent API invocations, where you can check the API response codes to get a more accurate insight.
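The cache layout varies by kubectl version, so as an alternative (this flag is not mentioned in the original post) kubectl's verbosity flag prints each request URL and response code directly:

```shell
# Inspect the on-disk cache if it has been populated; otherwise say so.
# A more direct way to see API response codes, against a live cluster, is:
#   kubectl get nodes -v=8
ls ~/.kube/http-cache 2>/dev/null || echo "no http-cache populated yet"
```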

$ cat ~/.bash_profile
export PATH=$HOME/bin:$PATH
export KUBECONFIG=$KUBECONFIG:$HOME/.kube/config-ferpablocluster

$ ls ~/.kube/
cache/  config-ferpablocluster  http-cache/
$ cat ~/.kube/config-ferpablocluster
apiVersion: v1
clusters:
- cluster:
    server: <API server endpoint>
    certificate-authority-data: <certificate data>
  name: ferpablocluster
contexts:
- context:
    cluster: ferpablocluster
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: heptio-authenticator-aws
      args:
        - "token"
        - "-i"
        - "ferpablocluster"
      env: null

It's very important to understand that you can only use the -r flag in the kubectl config file if you created the EKS cluster with an IAM role. Bear in mind that if you created the EKS cluster with an IAM user, the -r flag won't work; for that use case you will need to create a ConfigMap.
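For the IAM-user case, the mapping goes into the aws-auth ConfigMap via a mapUsers entry; a sketch, where the ARN, account id, and username are placeholders to replace with your own:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapUsers: |
    - userarn: arn:aws:iam::111122223333:user/my-user
      username: my-user
      groups:
        - system:masters
```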

- Verify the kubectl version is >= 1.10; previous versions didn't support client authentication plugins.

- Worker Nodes -> No Resources found

Double-check that the cluster name passed to the worker nodes is the same, and that you have executed kubectl apply -f aws-auth-cm.yaml. The only thing you need to modify in this YAML file is the role, using the NodeInstanceRole value found under the Outputs tab of the worker nodes stack.
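For reference, the relevant part of aws-auth-cm.yaml looks roughly like this; only the rolearn value needs editing, and the placeholder below stands in for your stack's NodeInstanceRole output:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <NodeInstanceRole ARN from the worker stack's Outputs tab>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
```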



