Thanks Nilesh for your quick and positive response. I am struggling to connect to the pod under the deployment; it seems the pod is not coming up. The describe output is below.
===================================(precise-mystery-212509)$ kubectl describe deployment nfs-busybox
Name: nfs-busybox
Namespace: default
CreationTimestamp: Tue, 04 Sep 2018 14:43:43 +0530
Labels: name=nfs-busybox
Annotations: deployment.kubernetes.io/revision=1
kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"extensions/v1beta1","kind":"Deployment","metadata":{"annotations":{},"name":"nfs-busybox","namespace":"default"},"spec":{"replicas":1,"s…
Selector: name=nfs-busybox
Replicas: 1 desired | 1 updated | 1 total | 0 available | 1 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 1 max unavailable, 1 max surge
Pod Template:
Labels: name=nfs-busybox
Containers:
busybox:
Image: busybox
Port: <none>
Environment: <none>
Mounts:
/mnt from my-pvc-nfs (rw)
Volumes:
my-pvc-nfs:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: nfs
ReadOnly: false
Conditions:
Type Status Reason
----  ------  ------
Available True MinimumReplicasAvailable
OldReplicaSets: <none>
NewReplicaSet: nfs-busybox-7b77c64c47 (1/1 replicas created)
Events:
Type Reason Age From Message
----  ------  ---  ----  -------
Normal ScalingReplicaSet 3m deployment-controller Scaled up replica set nfs-busybox-7b77c64c47 to 1
=======================================
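(For reference, the pod for this deployment can be listed with its label selector; this is just an illustrative command matching the selector shown above, not copied from my session:

$ kubectl get pods -l name=nfs-busybox

Its describe output is below.)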
(precise-mystery-212509)$ kubectl describe pod nfs-busybox-7b77c64c47-4sgjn
Name: nfs-busybox-7b77c64c47-4sgjn
Namespace: default
Node: gke-cluster-1-default-pool-75d0d686-f9pm/10.128.0.2
Start Time: Tue, 04 Sep 2018 14:43:43 +0530
Labels: name=nfs-busybox
pod-template-hash=3633720703
Annotations: kubernetes.io/limit-ranger=LimitRanger plugin set: cpu request for container busybox
Status: Running
IP: 10.8.2.11
Controlled By: ReplicaSet/nfs-busybox-7b77c64c47
Containers:
busybox:
Container ID: docker://c21500329f731006db10740e2ee3193f44b34a16e7b6e04d361e9a9a2962ad6a
Image: busybox
Image ID: docker-pullable://busybox@sha256:5e8e0509e829bb8f990249135a36e81a3ecbe94294e7a185cc14616e5fad96bd
Port: <none>
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Tue, 04 Sep 2018 14:49:30 +0530
Finished: Tue, 04 Sep 2018 14:49:30 +0530
Ready: False
Restart Count: 6
Requests:
cpu: 100m
Environment: <none>
Mounts:
/mnt from my-pvc-nfs (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-ttq4h (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
my-pvc-nfs:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: nfs
ReadOnly: false
default-token-ttq4h:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-ttq4h
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
----  ------  ---  ----  -------
Normal Scheduled 6m default-scheduler Successfully assigned nfs-busybox-7b77c64c47-4sgjn to gke-cluster-1-default-pool-75d0d686-f9pm
Normal SuccessfulMountVolume 6m kubelet, gke-cluster-1-default-pool-75d0d686-f9pm MountVolume.SetUp succeeded for volume "default-token-ttq4h"
Normal SuccessfulMountVolume 6m kubelet, gke-cluster-1-default-pool-75d0d686-f9pm MountVolume.SetUp succeeded for volume "nfs"
Normal Pulling 5m (x4 over 6m) kubelet, gke-cluster-1-default-pool-75d0d686-f9pm pulling image "busybox"
Normal Pulled 5m (x4 over 6m) kubelet, gke-cluster-1-default-pool-75d0d686-f9pm Successfully pulled image "busybox"
Normal Created 5m (x4 over 6m) kubelet, gke-cluster-1-default-pool-75d0d686-f9pm Created container
Normal Started 5m (x4 over 6m) kubelet, gke-cluster-1-default-pool-75d0d686-f9pm Started container
Warning BackOff 1m (x23 over 6m) kubelet, gke-cluster-1-default-pool-75d0d686-f9pm Back-off restarting failed container
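From the events, the container exits almost immediately with code 0 (Last State: Completed) and then goes into CrashLoopBackOff, so I suspect busybox is simply running its default shell and exiting because nothing keeps it alive. As a rough sketch (not my actual manifest, which is truncated in the annotation above), I would try giving the container a long-running command in the pod template, something like:

containers:
- name: busybox
  image: busybox
  # hypothetical long-running command so the container does not exit with code 0
  command:
  - sh
  - -c
  - while true; do sleep 3600; done
  volumeMounts:
  - name: my-pvc-nfs
    mountPath: /mnt

I will also check kubectl logs nfs-busybox-7b77c64c47-4sgjn --previous to see whether the container printed anything before it exited.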