Ben Sooraj

Kubernetes By Example (GKE)

kubernetes · 8 min read

Note: This is a journal of my learnings as I walk through the entire Kubernetes By Example exercises on Google Kubernetes Engine. I had originally written this up on GitHub.

Contents

  1. Check config details
  2. Spin-up a k8s cluster (GKE)
  3. Pods
  4. Labels
  5. Deployments
  6. Services
  7. Service Discovery
  8. Port Forward
  9. Health Checks
  10. Environment Variables
  11. Namespaces
  12. Volumes
  13. Secrets
  14. Logging
  15. Jobs
  16. StatefulSet
  17. Init Containers
  18. Nodes
  19. API Server access

Check config details

# List the active accounts:
$ gcloud auth list
Credentialed Accounts
ACTIVE ACCOUNT
* xxyyzz@gmail.com

To set the active account, run:
$ gcloud config set account `ACCOUNT`

# Checkout the project we are currently in
$ gcloud config list project
[core]
project = kubernetes-practice-219913

Your active configuration is: [default]

# List the default/current config values (I wanted the zone and region details):
$ gcloud config configurations list
NAME IS_ACTIVE ACCOUNT PROJECT DEFAULT_ZONE DEFAULT_REGION
default True xxyyzz@gmail.com kubernetes-practice-219913 asia-south1-a asia-south1

[GO TO TOP]

Spin-up a k8s cluster (GKE)

# Create a 3-node cluster and set kubectl context
$ gcloud container clusters create k8s-by-example --num-nodes=3

# Creating cluster k8s-by-example in asia-south1-a... Cluster is being health-checked (master is healthy)...done.
# Created [https://container.googleapis.com/v1/projects/kubernetes-practice-219913/zones/asia-south1-a/clusters/k8s-by-example].
# To inspect the contents of your cluster, go to: https://console.cloud.google.com/kubernetes/workload_/gcloud/asia-south1-a/k8s-by-example?project=kubernetes-practice-219913

kubeconfig entry generated for k8s-by-example.
NAME LOCATION MASTER_VERSION MASTER_IP MACHINE_TYPE NODE_VERSION NUM_NODES STATUS
k8s-by-example asia-south1-a 1.11.7-gke.4 35.200.190.186 n1-standard-1 1.11.7-gke.4 3 RUNNING

Creating a GKE cluster using gcloud automatically adds an entry to the kubeconfig file and also sets the current context for kubectl.

[GO TO TOP]

Pods

A pod is a collection of containers sharing a network and mount namespace and is the basic unit of deployment in Kubernetes. All containers in a pod are scheduled on the same node.

A dry run, kubectl run sise --image=mhausenblas/simpleservice:0.5.0 --port=9876 --dry-run=true -o yaml, gives the following YAML output:

apiVersion: apps/v1beta1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    run: sise
  name: sise
spec:
  replicas: 1
  selector:
    matchLabels:
      run: sise
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: sise
    spec:
      containers:
      - image: mhausenblas/simpleservice:0.5.0
        name: sise
        ports:
        - containerPort: 9876
        resources: {}
status: {}

Let's run the pod using the image mhausenblas/simpleservice:0.5.0:

$ kubectl run sise --image=mhausenblas/simpleservice:0.5.0 --port=9876
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
deployment.apps/sise created

# List out the pod
$ kubectl get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
sise-bf8d99689-qgkkk 1/1 Running 0 43s 10.12.1.6 gke-k8s-by-example-default-pool-f7f7edae-09cs <none>

# Grab the IP address
$ kubectl describe pods sise-bf8d99689-qgkkk | grep IP
IP: 10.12.1.6

# Get inside the pod and access the API using the IP address.
# This is accessible from the cluster as well
$ kubectl exec -it sise-bf8d99689-qgkkk sh
> curl localhost:9876/info
{"host": "localhost:9876", "version": "0.5.0", "from": "127.0.0.1"}

> curl 10.12.1.6:9876/info
{"host": "10.12.1.6:9876", "version": "0.5.0", "from": "10.12.1.6"}

# List the deployments
$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
sise 1 1 1 1 20m

# And delete it
$ kubectl delete deployments sise
deployment.extensions "sise" deleted
Using a configuration file
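I haven't reproduced pod/pod.yaml in this post; a minimal sketch consistent with the output below (a sise container plus a long-running centos:7 shell container in the same pod) would be:

apiVersion: v1
kind: Pod
metadata:
  name: twocontainers
spec:
  containers:
  - name: sise
    image: mhausenblas/simpleservice:0.5.0
    ports:
    - containerPort: 9876
  - name: shell          # the container we exec into below
    image: centos:7
    command:
      - "bin/bash"
      - "-c"
      - "sleep 10000"    # keep the shell container alive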
# Apply a configuration to a resource by filename or stdin. The resource name must be specified. This resource will be created if it doesn't exist yet. JSON and YAML formats are accepted.
$ kubectl apply -f pod/pod.yaml
pod/twocontainers created

# List the pods
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
twocontainers 2/2 Running 0 1m 10.12.1.7 gke-k8s-by-example-default-pool-f7f7edae-09cs <none>

# Get inside the container named 'shell' within the pod named 'twocontainers'
$ kubectl exec -it twocontainers -c shell -- bash
[root@twocontainers /]# curl localhost:9876/info
{"host": "localhost:9876", "version": "0.5.0", "from": "127.0.0.1"}

[root@twocontainers /]# curl 10.12.1.7:9876/info
{"host": "10.12.1.7:9876", "version": "0.5.0", "from": "10.12.1.7"}

# Clean up
$ kubectl delete pods twocontainers
pod "twocontainers" deleted
Creating pods with resource limits

Set the cpu and memory limits at spec.containers[].resources.limits.cpu and spec.containers[].resources.limits.memory respectively.

Basics/04-Kubernetes-By-Example/pod/constraint-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: containers-constraint
spec:
  containers:
  - name: sise
    image: mhausenblas/simpleservice:0.5.0
    ports:
    - containerPort: 9876
    resources:
      limits:
        memory: "64Mi"
        cpu: "500m"

Create the pod with the resource limits set above

# Create the pod
$ kubectl apply -f pod/constraint-pod.yaml

# List the pods
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
containers-constraint 1/1 Running 0 14m

# Clean up
$ kubectl delete pods containers-constraint
pod "containers-constraint" deleted

[GO TO TOP]

Labels

Labels are the mechanism you use to organize Kubernetes objects. A label is a key-value pair with certain restrictions concerning length and allowed values but without any pre-defined meaning.
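labels/labels-1.yaml isn't shown in the post; a sketch matching the label output below would look like this (the image tag is exactly where I made the mistake described next):

apiVersion: v1
kind: Pod
metadata:
  name: labelex
  labels:
    env: development
spec:
  containers:
  - name: sise
    image: mhausenblas/simpleservice:0.5.    # oops, should be 0.5.0
    ports:
    - containerPort: 9876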

# Create the pod using labels/labels-1.yaml
$ kubectl create -f labels/labels-1.yaml
pod/labelex created

# Check the pods created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
labelex 0/1 ImagePullBackOff 0 31s

Oops! Looks like I made some mistake while specifying the image for the container. Let me check out what went wrong using the describe command:

$ kubectl describe pods labelex
Name: labelex
Namespace: default
Priority: 0
PriorityClassName: <none>
Node: gke-k8s-by-example-default-pool-41076e94-4n53/10.160.0.12
Start Time: Fri, 15 Mar 2019 18:12:37 +0530
Labels: env=development
Annotations: <none>
Status: Pending
IP: 10.12.2.8
.
.
.
Events:
 Type Reason Age From Message
 ---- ------ ---- ---- -------
 Normal Scheduled 5m10s default-scheduler Successfully assigned default/labelex to gke-k8s-by-example-default-pool-41076e94-4n53
 Normal SandboxChanged 5m1s (x2 over 5m3s) kubelet, gke-k8s-by-example-default-pool-41076e94-4n53 Pod sandbox changed, it will be killed and re-created.
 Normal Pulling 4m10s (x3 over 5m9s) kubelet, gke-k8s-by-example-default-pool-41076e94-4n53 pulling image "mhausenblas/simpleservice:0.5."
 Warning Failed 4m5s (x3 over 5m4s) kubelet, gke-k8s-by-example-default-pool-41076e94-4n53 Failed to pull image "mhausenblas/simpleservice:0.5.": rpc error: code = Unknown desc = Error response from daemon: manifest for mhausenblas/simpleservice:0.5. not found
 Warning Failed 4m5s (x3 over 5m4s) kubelet, gke-k8s-by-example-default-pool-41076e94-4n53 Error: ErrImagePull
 Normal BackOff 3m26s (x7 over 5m2s) kubelet, gke-k8s-by-example-default-pool-41076e94-4n53 Back-off pulling image "mhausenblas/simpleservice:0.5."
 Warning Failed 3s (x19 over 5m2s) kubelet, gke-k8s-by-example-default-pool-41076e94-4n53 Error: ImagePullBackOff

Skimming through the Events section I found:

Failed to pull image "mhausenblas/simpleservice:0.5.": rpc error: code = Unknown desc = Error response from daemon: manifest for mhausenblas/simpleservice:0.5. not found

Lol! I mentioned the wrong image name (mhausenblas/simpleservice:0.5. instead of mhausenblas/simpleservice:0.5.0). Let me correct that and apply the changes:

# This time the image is successfully pulled
$ kubectl describe pods labelex
# Events:
# Type Reason Age From Message
# ---- ------ ---- ---- -------
# Normal Pulled 68s kubelet, gke-k8s-by-example-default-pool-41076e94-4n53 Successfully pulled image "mhausenblas/simpleservice:0.5.0"

# List the pod created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
labelex 1/1 Running 0 11m

# Show the labels as well
$ kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
labelex 1/1 Running 0 15m env=development

# Filter by the label now
$ kubectl get pods -l env=development
NAME READY STATUS RESTARTS AGE
labelex 1/1 Running 0 16m

# Add a label to the pod
$ kubectl label pods labelex ownwer=bensooraj
pod/labelex labeled

# List them out again
$ kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
labelex 1/1 Running 0 17m env=development,ownwer=bensooraj

# Filter by the new label.
$ kubectl get pods --selector ownwer=bensooraj
NAME READY STATUS RESTARTS AGE
labelex 1/1 Running 0 19m

I am really sorry for the spelling mistake with the label ownwer=bensooraj. It hurts my eyes.

Anyways, --selector and -l mean the same thing.

Set based selectors

Kubernetes objects also support set-based selectors

We will launch another pod that has two labels, env=production and owner=bensooraj.
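labels/labels-2.yaml isn't reproduced here either; a minimal sketch matching the labels shown below:

apiVersion: v1
kind: Pod
metadata:
  name: labelex2
  labels:
    env: production
    owner: bensooraj
spec:
  containers:
  - name: sise
    image: mhausenblas/simpleservice:0.5.0
    ports:
    - containerPort: 9876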

# Create a new pod using labels/labels-2.yaml
$ kubectl apply -f labels/labels-2.yaml

# List out all the pods along with the labels
$ kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
labelex 1/1 Running 0 57m env=development,ownwer=bensooraj
labelex2 1/1 Running 0 2m env=production,owner=bensooraj

# Let's get fancy here with selecting the labels
$ kubectl get pods --show-labels -l 'env in (development)'
NAME READY STATUS RESTARTS AGE LABELS
labelex 1/1 Running 0 57m env=development,ownwer=bensooraj

# The following lists all pods that are either labelled with env=development or with env=production
$ kubectl get pods --show-labels -l 'env in (development, production)'
NAME READY STATUS RESTARTS AGE LABELS
labelex 1/1 Running 0 57m env=development,ownwer=bensooraj
labelex2 1/1 Running 0 3m env=production,owner=bensooraj

I can even delete pods like that:

$ kubectl delete pods -l 'env in (development, production)'
pod "labelex" deleted
pod "labelex2" deleted

# You can see them getting terminated
$ kubectl get pods -w
NAME READY STATUS RESTARTS AGE
labelex 1/1 Terminating 0 61m
labelex2 1/1 Terminating 0 6m34s

[GO TO TOP]

Deployments

A deployment is a supervisor for pods, giving you fine-grained control over how and when a new pod version is rolled out as well as rolled back to a previous state.
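deployments/deployment-1.yaml isn't shown in the post; a minimal sketch consistent with the output below (2 replicas, the selector app=sise, and SIMPLE_SERVICE_VERSION set to "0.9"):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sise-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sise
  template:
    metadata:
      labels:
        app: sise
    spec:
      containers:
      - name: sise
        image: mhausenblas/simpleservice:0.5.0
        ports:
        - containerPort: 9876
        env:
        - name: SIMPLE_SERVICE_VERSION
          value: "0.9"    # the value we roll forward to "1.0" below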

# Create a deployment called sise-deployment
$ kubectl apply -f deployments/deployment-1.yaml
deployment.apps/sise-deployment created

# The deployment has started creating the pods
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
sise-deployment-6b9688f8f5-8xgr4 0/1 ContainerCreating 0 25s
sise-deployment-6b9688f8f5-cwlvc 0/1 ContainerCreating 0 25s

# After a while
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
sise-deployment-6b9688f8f5-8xgr4 1/1 Running 0 63s
sise-deployment-6b9688f8f5-cwlvc 1/1 Running 0 63s

# Check the deployment as well
$ kubectl get deployments -o wide
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
sise-deployment 2 2 2 2 3m sise mhausenblas/simpleservice:0.5.0 app=sise

# List out the replica sets
$ kubectl get rs -o wide
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
sise-deployment-6b9688f8f5 2 2 2 4m sise mhausenblas/simpleservice:0.5.0 app=sise,pod-template-hash=2652449491

Note the naming of the pods and replica set, derived from the deployment name.

Check the app using the pod IPs

# Get the pod IPs
$ kubectl describe pod sise-deployment-6b9688f8f5-8xgr4 | grep IP
IP: 10.12.1.6

$ kubectl describe pod sise-deployment-6b9688f8f5-cwlvc | grep IP
IP: 10.12.2.5

SSH into one of the nodes of the cluster:

  • Navigate to GCP Console > Compute Engine > VM instances and select one of the nodes
  • Under Connect (against any one of the nodes), click on the SSH drop-down and select View gcloud command, as shown below: [Image: Deployment 1]
  • You will be presented with a command similar to: gcloud compute --project "kubernetes-practice-219913" ssh --zone "asia-south1-a" "gke-k8s-by-example-default-pool-5574bdde-7k75"
# From within the cluster, access the app running inside the pods
Bensooraj@gke-k8s-by-example-default-pool-5574bdde-7k75 ~ $ curl 10.12.1.6:9876/info
{"host": "10.12.1.6:9876", "version": "0.9", "from": "10.160.0.13"}

Bensooraj@gke-k8s-by-example-default-pool-5574bdde-7k75 ~ $ curl 10.12.2.5:9876/info
{"host": "10.12.2.5:9876", "version": "0.9", "from": "10.160.0.13"}

Rolling out an update
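deployments/deployment-2.yaml is, presumably, identical to deployment-1.yaml except for the environment variable value:

        env:
        - name: SIMPLE_SERVICE_VERSION
          value: "1.0"    # was "0.9" in deployment-1.yaml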

# Update the value of the environment variable SIMPLE_SERVICE_VERSION from "0.9" to "1.0"
$ kubectl apply -f deployments/deployment-2.yaml
deployment.apps/sise-deployment configured

# You can see the roll-out happening
$ kubectl get pods -w
NAME READY STATUS RESTARTS AGE
sise-deployment-6b9688f8f5-8xgr4 1/1 Terminating 0 41m
sise-deployment-6b9688f8f5-cwlvc 1/1 Terminating 0 41m
sise-deployment-6c7b7f88c5-8mwr2 1/1 Running 0 16s
sise-deployment-6c7b7f88c5-zxfgm 1/1 Running 0 18s

# After a while
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
sise-deployment-6c7b7f88c5-8mwr2 1/1 Running 0 2m
sise-deployment-6c7b7f88c5-zxfgm 1/1 Running 0 2m

# Check out the replica sets as well. A new replica set will be created
$ kubectl get rs -w
NAME DESIRED CURRENT READY AGE
sise-deployment-6b9688f8f5 0 0 0 42m
sise-deployment-6c7b7f88c5 2 2 2 57s

# Check out the roll-out status
$ kubectl rollout status deployment sise-deployment
deployment "sise-deployment" successfully rolled out

Remember, the value change can also be rolled out using the command: kubectl edit deploy sise-deployment.

Verify the change made to the value of the environment variable by pinging the app

# Get the new set of pod IPs
$ kubectl describe pods sise-deployment-6c7b7f88c5-8mwr2 | grep IP
IP: 10.12.2.6

$ kubectl describe pods sise-deployment-6c7b7f88c5-zxfgm | grep IP
IP: 10.12.1.7

# Curl the IPs from the node we SSHed into above
Bensooraj@gke-k8s-by-example-default-pool-5574bdde-7k75 ~ $ curl 10.12.2.6:9876/info
{"host": "10.12.2.6:9876", "version": "1.0", "from": "10.160.0.13"}

Bensooraj@gke-k8s-by-example-default-pool-5574bdde-7k75 ~ $ curl 10.12.1.7:9876/info
{"host": "10.12.1.7:9876", "version": "1.0", "from": "10.160.0.13"}

Undo the roll-out

# Check out the roll-out history
$ kubectl rollout history deployment sise-deployment
deployment.extensions/sise-deployment
REVISION CHANGE-CAUSE
1 <none>
2 <none>

# Undo the roll-out
$ kubectl rollout undo deployment sise-deployment
deployment.extensions/sise-deployment

# The roll-back has begun
$ kubectl get pods -o wide -w
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
sise-deployment-6b9688f8f5-74fnz 1/1 Running 0 8s 10.12.2.7 gke-k8s-by-example-default-pool-5574bdde-gkhk <none>
sise-deployment-6b9688f8f5-gglnk 1/1 Running 0 10s 10.12.1.8 gke-k8s-by-example-default-pool-5574bdde-dnnl <none>
sise-deployment-6c7b7f88c5-8mwr2 1/1 Terminating 0 13m 10.12.2.6 gke-k8s-by-example-default-pool-5574bdde-gkhk <none>
sise-deployment-6c7b7f88c5-zxfgm 1/1 Terminating 0 13m 10.12.1.7 gke-k8s-by-example-default-pool-5574bdde-dnnl <none>

# List the roll-out history one more time
$ kubectl rollout history deployment sise-deployment
deployment.extensions/sise-deployment
REVISION CHANGE-CAUSE
2 <none>
3 <none>

# Get the new IP addresses
$ kubectl describe pods sise-deployment-6b9688f8f5-74fnz | grep IP
IP: 10.12.2.7

$ kubectl describe pods sise-deployment-6b9688f8f5-gglnk | grep IP
IP: 10.12.1.8

# Ping the app again from within the cluster
Bensooraj@gke-k8s-by-example-default-pool-5574bdde-7k75 ~ $ curl 10.12.2.7:9876/info
{"host": "10.12.2.7:9876", "version": "0.9", "from": "10.160.0.13"}

Bensooraj@gke-k8s-by-example-default-pool-5574bdde-7k75 ~ $ curl 10.12.1.8:9876/info
{"host": "10.12.1.8:9876", "version": "0.9", "from": "10.160.0.13"}

You can see the version rolled-back from "version": "1.0" to "version": "0.9".

Also, you can explicitly roll back to a specific revision using the flag --to-revision. For example: kubectl rollout undo deployment sise-deployment --to-revision=1.

Time to clean up!

$ kubectl delete deployment sise-deployment
deployment.extensions "sise-deployment" deleted

# Pods going down! :P
$ kubectl get pods -w
NAME READY STATUS RESTARTS AGE
sise-deployment-6b9688f8f5-74fnz 1/1 Terminating 0 6m27s
sise-deployment-6b9688f8f5-gglnk 1/1 Terminating 0 6m29s

[GO TO TOP]

Services

A service is an abstraction for pods, providing a stable, so called virtual IP (VIP) address. While pods may come and go and with it their IP addresses, a service allows clients to reliably connect to the containers running in the pod using the VIP. The virtual in VIP means it is not an actual IP address connected to a network interface, but its purpose is purely to forward traffic to one or more pods. Keeping the mapping between the VIP and the pods up-to-date is the job of kube-proxy, a process that runs on every node, which queries the API server to learn about new services in the cluster.
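Neither manifest is reproduced in the post; here are minimal sketches consistent with the kubectl output below (the replica count, the app=rc-sise selector, and the 80 -> 9876 port mapping are all taken from that output):

# services/rc.yaml (sketch)
apiVersion: v1
kind: ReplicationController
metadata:
  name: rc-sise
spec:
  replicas: 2
  selector:
    app: rc-sise
  template:
    metadata:
      labels:
        app: rc-sise
    spec:
      containers:
      - name: rc-sise
        image: mhausenblas/simpleservice:0.5.0
        ports:
        - containerPort: 9876

# services/svc.yaml (sketch)
apiVersion: v1
kind: Service
metadata:
  name: simple-service
spec:
  ports:
  - port: 80            # the service's port (the VIP listens here)
    targetPort: 9876    # the port the app listens on inside the pods
  selector:
    app: rc-sise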

Create the ReplicationController from rc.yaml:

$ kubectl apply -f services/rc.yaml

# Check the ReplicationController created
$ kubectl get replicationcontrollers -o wide
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
rc-sise 2 2 2 50s rc-sise mhausenblas/simpleservice:0.5.0 app=rc-sise

# And the pods
$ kubectl get pod --show-labels -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE LABELS
rc-sise-24vg4 1/1 Running 0 1m 10.12.1.9 gke-k8s-by-example-default-pool-5574bdde-dnnl <none> app=rc-sise
rc-sise-dm4p8 1/1 Running 0 1m 10.12.2.8 gke-k8s-by-example-default-pool-5574bdde-gkhk <none> app=rc-sise

Create the Service from svc.yaml:

$ kubectl apply -f services/svc.yaml
service/simple-service created

# Get the service
$ kubectl get service -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 10.15.240.1 <none> 443/TCP 3h <none>
simple-service ClusterIP 10.15.255.188 <none> 80/TCP 29s app=rc-sise

# Get the pods
$ kubectl get pods -l app=rc-sise -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
rc-sise-24vg4 1/1 Running 0 7m 10.12.1.9 gke-k8s-by-example-default-pool-5574bdde-dnnl <none>
rc-sise-dm4p8 1/1 Running 0 7m 10.12.2.8 gke-k8s-by-example-default-pool-5574bdde-gkhk <none>

# Describe one of the pods and grab one of their IPs
$ kubectl describe pods rc-sise-24vg4 | grep IP
IP: 10.12.1.9

# This can be accessed from one of the three nodes running in the cluster
Bensooraj@gke-k8s-by-example-default-pool-5574bdde-7k75 ~ $ curl 10.12.1.9:9876/info
{"host": "10.12.1.9:9876", "version": "0.5.0", "from": "10.160.0.13"}

However, remember that pod IPs are ephemeral in nature and exist only as long as the pod exists. So, relying on pod IPs is not the right approach.

The service keeps track of the pods it forwards traffic to through the label, in our case app=rc-sise.

Let's review the service that we created one more time:

$ kubectl get svc -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 10.15.240.1 <none> 443/TCP 3h <none>
simple-service ClusterIP 10.15.255.188 <none> 80/TCP 6m app=rc-sise

# Describe them
$ kubectl describe svc simple-service
Name: simple-service
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
 {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"simple-service","namespace":"default"},"spec":{"ports":[{"port":8...
Selector: app=rc-sise
Type: ClusterIP
IP: 10.15.255.188
Port: <unset> 80/TCP
TargetPort: 9876/TCP
Endpoints: 10.12.1.9:9876,10.12.2.8:9876
Session Affinity: None
Events: <none>

Note that the Endpoints are actually pod IPs along with the port on which the application is running.

# The application can now be accessed using the ClusterIP, from within the cluster
Bensooraj@gke-k8s-by-example-default-pool-5574bdde-7k75 ~ $ curl 10.15.255.188/info
{"host": "10.15.255.188", "version": "0.5.0", "from": "10.160.0.13"}

iptables makes the VIP 10.15.255.188 forward the traffic to the pods. iptables is a long list of rules that tells the Linux kernel what to do with a given IP packet.

Let's check them out:

# From within the cluster, that is, from within a node (VM) running in the cluster
Bensooraj@gke-k8s-by-example-default-pool-5574bdde-7k75 ~ $ sudo iptables-save | grep simple-service

-A KUBE-SEP-XKHKNSMBAPANOQ3H -s 10.12.1.9/32 -m comment --comment "default/simple-service:" -j KUBE-MARK-MASQ
-A KUBE-SEP-XKHKNSMBAPANOQ3H -p tcp -m comment --comment "default/simple-service:" -m tcp -j DNAT --to-destination 10.12.1.9:9876
-A KUBE-SEP-XRG5PL6H4OXP3HUZ -s 10.12.2.8/32 -m comment --comment "default/simple-service:" -j KUBE-MARK-MASQ
-A KUBE-SEP-XRG5PL6H4OXP3HUZ -p tcp -m comment --comment "default/simple-service:" -m tcp -j DNAT --to-destination 10.12.2.8:9876
-A KUBE-SERVICES ! -s 10.12.0.0/14 -d 10.15.255.188/32 -p tcp -m comment --comment "default/simple-service: cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.15.255.188/32 -p tcp -m comment --comment "default/simple-service: cluster IP" -m tcp --dport 80 -j KUBE-SVC-LRSQWG6IZCA6IBBJ
-A KUBE-SVC-LRSQWG6IZCA6IBBJ -m comment --comment "default/simple-service:" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-XKHKNSMBAPANOQ3H
-A KUBE-SVC-LRSQWG6IZCA6IBBJ -m comment --comment "default/simple-service:" -j KUBE-SEP-XRG5PL6H4OXP3HUZ

I have no clue how to read the above table; however, this is kube-proxy defining rules that allow TCP connections back and forth between the ClusterIP 10.15.255.188 and the pod IPs 10.12.1.9:9876 and 10.12.2.8:9876.

Let's scale up our ReplicationController:

$ kubectl scale replicationcontroller --replicas=3 rc-sise
replicationcontroller/rc-sise scaled

# Check the pods
$ kubectl get pods --show-labels -o wide -w
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE LABELS
rc-sise-24vg4 1/1 Running 0 29m 10.12.1.9 gke-k8s-by-example-default-pool-5574bdde-dnnl <none> app=rc-sise
rc-sise-dm4p8 1/1 Running 0 29m 10.12.2.8 gke-k8s-by-example-default-pool-5574bdde-gkhk <none> app=rc-sise
rc-sise-p7sk9 1/1 Running 0 14s 10.12.1.10 gke-k8s-by-example-default-pool-5574bdde-dnnl <none> app=rc-sise

We have one more pod IP to handle, 10.12.1.10.

And guess what? The service simple-service has already updated itself to account for the 3rd pod added to the ReplicationController.

# Check the Endpoints key. All the 3 pod IPs are now handled by the service
$ kubectl describe service simple-service
Name: simple-service
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
 {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"simple-service","namespace":"default"},"spec":{"ports":[{"port":8...
Selector: app=rc-sise
Type: ClusterIP
IP: 10.15.255.188
Port: <unset> 80/TCP
TargetPort: 9876/TCP
Endpoints: 10.12.1.10:9876,10.12.1.9:9876,10.12.2.8:9876
Session Affinity: None
Events: <none>

Let's check out the iptables rules from within the cluster as well:

Bensooraj@gke-k8s-by-example-default-pool-5574bdde-7k75 ~ $ sudo iptables-save | grep simple-service

-A KUBE-SEP-O5OGXTSGDHX72GHE -s 10.12.1.10/32 -m comment --comment "default/simple-service:" -j KUBE-MARK-MASQ
-A KUBE-SEP-O5OGXTSGDHX72GHE -p tcp -m comment --comment "default/simple-service:" -m tcp -j DNAT --to-destination 10.12.1.10:9876
-A KUBE-SEP-XKHKNSMBAPANOQ3H -s 10.12.1.9/32 -m comment --comment "default/simple-service:" -j KUBE-MARK-MASQ
-A KUBE-SEP-XKHKNSMBAPANOQ3H -p tcp -m comment --comment "default/simple-service:" -m tcp -j DNAT --to-destination 10.12.1.9:9876
-A KUBE-SEP-XRG5PL6H4OXP3HUZ -s 10.12.2.8/32 -m comment --comment "default/simple-service:" -j KUBE-MARK-MASQ
-A KUBE-SEP-XRG5PL6H4OXP3HUZ -p tcp -m comment --comment "default/simple-service:" -m tcp -j DNAT --to-destination 10.12.2.8:9876
-A KUBE-SERVICES ! -s 10.12.0.0/14 -d 10.15.255.188/32 -p tcp -m comment --comment "default/simple-service: cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
-A KUBE-SERVICES -d 10.15.255.188/32 -p tcp -m comment --comment "default/simple-service: cluster IP" -m tcp --dport 80 -j KUBE-SVC-LRSQWG6IZCA6IBBJ
-A KUBE-SVC-LRSQWG6IZCA6IBBJ -m comment --comment "default/simple-service:" -m statistic --mode random --probability 0.33332999982 -j KUBE-SEP-O5OGXTSGDHX72GHE
-A KUBE-SVC-LRSQWG6IZCA6IBBJ -m comment --comment "default/simple-service:" -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-XKHKNSMBAPANOQ3H
-A KUBE-SVC-LRSQWG6IZCA6IBBJ -m comment --comment "default/simple-service:" -j KUBE-SEP-XRG5PL6H4OXP3HUZ

... the traffic to the service is split equally between the three pods by invoking the statistic module of iptables.

The --probability values do that: the first rule matches with probability 1/3, the second with probability 1/2 of the remaining traffic, and the last rule catches whatever is left, so each pod ends up with roughly a third of the connections.

Alrighty! Time to clean up:

$ kubectl delete replicationcontrollers rc-sise
replicationcontroller "rc-sise" deleted

$ kubectl delete svc simple-service
service "simple-service" deleted

I think it makes more sense to delete the Service first and then the ReplicationController. I will do that next time.

[GO TO TOP]

Service Discovery

Service discovery is the process of figuring out how to connect to a service. While there is a service discovery option based on environment variables available, the DNS-based service discovery is preferable. Note that DNS is a cluster add-on so make sure your Kubernetes distribution provides for one or install it yourself.
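The manifests here are close cousins of the ones from the Services section; the interesting half is service-discovery/svc.yaml which, sketched from the output below, is a plain ClusterIP service named thesvc selecting app=sise pods:

apiVersion: v1
kind: Service
metadata:
  name: thesvc
spec:
  ports:
  - port: 80
    targetPort: 9876
  selector:
    app: sise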

# Create the RC from service-discovery/rc.yaml
$ kubectl apply -f service-discovery/rc.yaml
replicationcontroller/rcsise created

# Check the pods
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
rcsise-mgdc8 1/1 Running 0 56s 10.12.2.6 gke-k8s-by-example-default-pool-dec3a359-jsgl <none>
rcsise-rwqxt 1/1 Running 0 56s 10.12.1.5 gke-k8s-by-example-default-pool-dec3a359-wrxk <none>

# Create the service as well using service-discovery/svc.yaml
$ kubectl apply -f service-discovery/svc.yaml
service/thesvc created

# Check the service that we just created
$ kubectl get svc -o wide -w
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 10.15.240.1 <none> 443/TCP 159m <none>
thesvc ClusterIP 10.15.241.194 <none> 80/TCP 11s app=sise

I will now create a jump pod in the default namespace and try to simulate connecting to the thesvc service from within the cluster, say, from another service.
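service-discovery/jumppod.yaml is nothing more than a long-running container we can exec into; a minimal sketch:

apiVersion: v1
kind: Pod
metadata:
  name: jumppod
spec:
  containers:
  - name: shell
    image: centos:7
    command:
      - "bin/bash"
      - "-c"
      - "sleep 10000"    # stay alive so we can exec in and curl around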

# Create the jump pod
$ kubectl apply -f service-discovery/jumppod.yaml
pod/jumppod created

# List the pods created
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
jumppod 1/1 Running 0 1m 10.12.2.7 gke-k8s-by-example-default-pool-dec3a359-jsgl <none>
rcsise-mgdc8 1/1 Running 0 31m 10.12.2.6 gke-k8s-by-example-default-pool-dec3a359-jsgl <none>
rcsise-rwqxt 1/1 Running 0 31m 10.12.1.5 gke-k8s-by-example-default-pool-dec3a359-wrxk <none>

The DNS add-on will make sure that our service thesvc is available via the FQDN thesvc.default.svc.cluster.local from other pods in the cluster.

# Get inside the pod `jumppod`
$ kubectl exec -it jumppod sh

# From within jumppod, ping thesvc.default.svc.cluster.local
sh-4.2# ping thesvc.default.svc.cluster.local
PING thesvc.default.svc.cluster.local (10.15.241.194) 56(84) bytes of data.
^C
--- thesvc.default.svc.cluster.local ping statistics ---
23 packets transmitted, 0 received, 100% packet loss, time 22522ms

The ping results in 100% packet loss, probably because the service's virtual IP only forwards TCP traffic to the pods (ICMP isn't supported). However, you can see thesvc.default.svc.cluster.local resolving to the ClusterIP 10.15.241.194.

Let's curl the application using the FQDN, from within the jumppod:

# Using the FQDN
sh-4.2# curl thesvc.default.svc.cluster.local/info
{"host": "thesvc.default.svc.cluster.local", "version": "0.5.0", "from": "10.12.2.7"}

# Using the name of the service `thesvc`
sh-4.2# curl thesvc/info
{"host": "thesvc", "version": "0.5.0", "from": "10.12.2.7"}

# Or
sh-4.2# curl http://thesvc/info
{"host": "thesvc", "version": "0.5.0", "from": "10.12.2.7"}

Now, let's try to reach the application from within one of the nodes in the cluster

Bensooraj@gke-k8s-by-example-default-pool-dec3a359-jsgl ~ $ curl thesvc.default.svc.cluster.local/info
# curl: (6) Couldn't resolve host 'thesvc.default.svc.cluster.local'

Bensooraj@gke-k8s-by-example-default-pool-dec3a359-jsgl ~ $ curl thesvc/info
# curl: (6) Couldn't resolve host 'thesvc'
Bensooraj@gke-k8s-by-example-default-pool-dec3a359-jsgl ~ $ curl http://thesvc/info
# curl: (6) Couldn't resolve host 'thesvc'

Bensooraj@gke-k8s-by-example-default-pool-dec3a359-jsgl ~ $ curl 10.15.241.194/info
# {"host": "10.15.241.194", "version": "0.5.0", "from": "10.160.0.18"}

The FQDN thesvc.default.svc.cluster.local resolves only from within pods, which are configured to use the cluster's DNS, and not from the nodes themselves, unlike the ClusterIP.

To access a service that is deployed in a different namespace than the one you’re accessing it from, use a FQDN in the form $SVC.$NAMESPACE.svc.cluster.local.

Let's attempt connecting to a service running in a different namespace

# Create a new namespace `other`
$ kubectl apply -f service-discovery/other-ns.yaml
namespace/other created

# Let's list the namespaces
$ kubectl get namespaces
NAME STATUS AGE
default Active 3h
kube-public Active 3h
kube-system Active 3h
other Active 7s

# Create a ReplicationController in the namespace `other`
$ kubectl apply -f service-discovery/other-rc.yaml

# List all pods across all namespaces
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default jumppod 1/1 Running 0 42m
default rcsise-mgdc8 1/1 Running 0 1h
default rcsise-rwqxt 1/1 Running 0 1h
.
.
.
other other-rc-stk98 1/1 Running 0 4m

# Create the service in the namespace `other`
$ kubectl apply -f service-discovery/other-svc.yaml
service/other-sise-service created

# List all services across all namespaces
$ kubectl get svc --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.15.240.1 <none> 443/TCP 4h
default thesvc ClusterIP 10.15.241.194 <none> 80/TCP 1h
.
.
.
other other-sise-service ClusterIP 10.15.245.253 <none> 80/TCP 3s

Get inside the jumppod and access the service running in the namespace other:

$ kubectl exec -it jumppod sh

# Ping using the FQDN
sh-4.2# curl other-sise-service.other.svc.cluster.local/info
{"host": "other-sise-service.other.svc.cluster.local", "version": "0.5.0", "from": "10.12.2.7"}

# And the shorter version as well
sh-4.2# curl other-sise-service.other/info
{"host": "other-sise-service.other", "version": "0.5.0", "from": "10.12.2.7"}

Summing up, DNS-based service discovery provides a flexible and generic way to connect to services across the cluster.

Clean up time!

# Bring down the resources in the namespace `other`
$ kubectl --namespace=other delete svc other-sise-service
service "other-sise-service" deleted

$ kubectl --namespace=other delete rc other-rc
replicationcontroller "other-rc" deleted

# And in the namespace `default`
$ kubectl delete svc thesvc
service "thesvc" deleted

$ kubectl delete rc rcsise
replicationcontroller "rcsise" deleted

$ kubectl delete pod jumppod
pod "jumppod" deleted

[GO TO TOP]

Port Forward

In the context of developing apps on Kubernetes it is often useful to quickly access a service from your local environment without exposing it using, for example, a load balancer or an ingress resource. In this case you can use port forwarding.
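port-forward/port-forward-1.yaml bundles a Deployment and a Service into one file; a sketch consistent with the output below:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sise-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sise
  template:
    metadata:
      labels:
        app: sise
    spec:
      containers:
      - name: sise
        image: mhausenblas/simpleservice:0.5.0
        ports:
        - containerPort: 9876
---
apiVersion: v1
kind: Service
metadata:
  name: simpleservice
spec:
  ports:
  - port: 80
    targetPort: 9876
  selector:
    app: sise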

# Created a Deployment and a corresponding Service using port-forward/port-forward-1.yaml
$ kubectl create -f port-forward/port-forward-1.yaml
deployment.apps/sise-deploy created
service/simpleservice created

# List the deployment
$ kubectl get deployment -o wide
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
sise-deploy 1 1 1 1 1m sise mhausenblas/simpleservice:0.5.0 app=sise

# And the service
$ kubectl get service -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
kubernetes ClusterIP 10.15.240.1 <none> 443/TCP 13h <none>
simpleservice ClusterIP 10.15.244.55 <none> 80/TCP 1m app=sise

The application running in GKE is accessible either from the pods or from any node of the cluster. We want to access the service from our local machine as well, for development.

# Fetch the pod IP
$ kubectl describe pods sise-deploy-56955c466c-dz99x | grep IP
IP: 10.12.2.9

# From the local machine, let's curl the pod's IP
$ curl 10.12.2.9:9876/info

# Or the ClusterIP created by the service
$ curl 10.15.244.55/info

They return nothing, since pod and cluster IPs are routable only from within the cluster. Let's do a port-forward now.

# To access the `simpleservice` service from the local environment on port 8080
$ kubectl port-forward service/simpleservice 8080:80
Forwarding from 127.0.0.1:8080 -> 9876
Forwarding from [::1]:8080 -> 9876

# Curl localhost:8080
$ curl localhost:8080/info
{"host": "localhost:8080", "version": "0.5.0", "from": "127.0.0.1"}

# Perfecto!

Remember that port forwarding is not meant for production traffic but for development and experimentation.

Clean up!

$ kubectl delete -f port-forward/port-forward-1.yaml
deployment.apps "sise-deploy" deleted
service "simpleservice" deleted

[GO TO TOP]

Health Checks

In order to verify if a container in a pod is healthy and ready to serve traffic, Kubernetes provides for a range of health checking mechanisms. Health checks, or probes as they are called in Kubernetes, are carried out by the kubelet to determine when to restart a container (for livenessProbe) and used by services and deployments to determine if a pod should receive traffic (for readinessProbe).
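health-checks/liveness-pod.yaml isn't reproduced in the post; a sketch matching the description in the comment below (probe /health after an initial 2-second wait, then every 5 seconds; and yes, the pod really is named readiness-pod, a naming slip I apologise for later):

apiVersion: v1
kind: Pod
metadata:
  name: readiness-pod
spec:
  containers:
  - name: sise
    image: mhausenblas/simpleservice:0.5.0
    ports:
    - containerPort: 9876
    livenessProbe:
      httpGet:
        path: /health
        port: 9876
      initialDelaySeconds: 2
      periodSeconds: 5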

# Pod which exposes /health for the liveness health check. Kubernetes will start checking the /health endpoint, after initially waiting 2 seconds, every 5 seconds.
$ kubectl apply -f health-checks/liveness-pod.yaml
pod/readiness-pod created

# List the pod
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
readiness-pod 1/1 Running 0 1m 10.12.1.6 gke-k8s-by-example-default-pool-85e67e86-2xlh <none>

# Describe the pod
$ kubectl describe pods readiness-pod

Relevant excerpt from kubectl describe pods readiness-pod:

[Image: Health 1]

Let's launch a bad/unhealthy pod now, one that has a container that randomly (in the time range of 1 to 4 seconds) does not return a 200 code.
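health-checks/bad-pod.yaml reuses the same liveness probe; if I recall correctly, the simpleservice image supports HEALTH_MIN/HEALTH_MAX environment variables (in milliseconds) that delay the /health endpoint, which is what makes the probe flaky. A sketch under that assumption:

apiVersion: v1
kind: Pod
metadata:
  name: bad-pod
spec:
  containers:
  - name: sise
    image: mhausenblas/simpleservice:0.5.0
    ports:
    - containerPort: 9876
    env:
    - name: HEALTH_MIN    # assumed knobs of the simpleservice image:
      value: "1000"       # /health now takes 1 to 4 seconds to answer,
    - name: HEALTH_MAX    # so the probe (1s default timeout) keeps failing
      value: "4000"
    livenessProbe:
      httpGet:
        path: /health
        port: 9876
      initialDelaySeconds: 2
      periodSeconds: 5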

# Launch the bad pod
$ kubectl apply -f health-checks/bad-pod.yaml
pod/bad-pod created

# List out the pods; look at the number of restarts!
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
bad-pod 1/1 Running 4 4m 10.12.0.5 gke-k8s-by-example-default-pool-85e67e86-qbpv <none>
readiness-pod 1/1 Running 0 17m 10.12.1.6 gke-k8s-by-example-default-pool-85e67e86-2xlh <none>

# Logging out the bad-pod
$ kubectl logs -f bad-pod
# 2019-03-18T06:38:49 INFO This is simple service in version v0.5.0 listening on port 9876 [at line 142]
# 2019-03-18T06:38:52 INFO /health serving from 10.12.0.5:9876 has been invoked from 10.12.0.1 [at line 79]
# 2019-03-18T06:38:55 INFO 200 GET /health (10.12.0.1) 3277.99ms [at line 1946]
# 2019-03-18T06:38:57 INFO /health serving from 10.12.0.5:9876 has been invoked from 10.12.0.1 [at line 79]
# 2019-03-18T06:39:00 INFO 200 GET /health (10.12.0.1) 3540.73ms [at line 1946]
# 2019-03-18T06:39:02 INFO /health serving from 10.12.0.5:9876 has been invoked from 10.12.0.1 [at line 79]
# 2019-03-18T06:39:06 INFO 200 GET /health (10.12.0.1) 3778.49ms [at line 1946]
# ..
# ..

# Print out the events as well
$ kubectl describe pods bad-pod

Relevant excerpt from printing out the bad-pod events:

[Image: Health 2]

Let's create a pod with a readinessProbe that signals when the container is ready to serve traffic and that kicks in after 10 seconds:
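health-checks/readiness-pod.yaml presumably just adds a readinessProbe with a 10-second initial delay; a sketch:

apiVersion: v1
kind: Pod
metadata:
  name: readiness-pod-1
spec:
  containers:
  - name: sise
    image: mhausenblas/simpleservice:0.5.0
    ports:
    - containerPort: 9876
    readinessProbe:
      httpGet:
        path: /health
        port: 9876
      initialDelaySeconds: 10    # don't route traffic for the first 10s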

# Create the pod
$ kubectl apply -f health-checks/readiness-pod.yaml
pod/readiness-pod-1 created

# List the pods now
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
bad-pod 0/1 CrashLoopBackOff 11 27m 10.12.0.5 gke-k8s-by-example-default-pool-85e67e86-qbpv <none>
readiness-pod 1/1 Running 0 40m 10.12.1.6 gke-k8s-by-example-default-pool-85e67e86-2xlh <none>
readiness-pod-1 1/1 Running 0 1m 10.12.1.7 gke-k8s-by-example-default-pool-85e67e86-2xlh <none>

# Describe the pod's events
$ kubectl describe pods readiness-pod-1

Relevant excerpt from printing out the readiness-pod-1 events:

[Image: Health 3]

Clean up time:

$ kubectl delete pods --all
pod "bad-pod" deleted
pod "readiness-pod" deleted
pod "readiness-pod-1" deleted

I just realised I messed up the pod names. Sorry!

[GO TO TOP]

Environment Variables

You can set environment variables for containers running in a pod; in addition, Kubernetes automatically exposes certain runtime information via environment variables.

Launch a pod with the environment variable SIMPLE_SERVICE_VERSION and value "1.0":
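environment-variables/env-pod.yaml sets the variable through the container's env array; a minimal sketch:

apiVersion: v1
kind: Pod
metadata:
  name: envs
spec:
  containers:
  - name: sise
    image: mhausenblas/simpleservice:0.5.0
    ports:
    - containerPort: 9876
    env:
    - name: SIMPLE_SERVICE_VERSION
      value: "1.0"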

# Use environment-variables/env-pod.yaml
$ kubectl apply -f environment-variables/env-pod.yaml
pod/envs created

# List the pods
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
envs 1/1 Running 0 2m 10.12.2.6 gke-k8s-by-example-default-pool-6c270685-0hd5 <none>

# Grab the IP address:
$ kubectl describe pod envs | grep IP
IP: 10.12.2.6

# Curl the pod IP from inside the cluster
[CLUSTER] $ curl 10.12.2.6:9876/info && echo
{"host": "10.12.2.6:9876", "version": "1.0", "from": "10.12.2.1"}

[CLUSTER] $ curl 10.12.2.6:9876/env && echo
{"version": "1.0", "env": "{'LANG': 'C.UTF-8', 'KUBERNETES_PORT_443_TCP_PROTO': 'tcp', 'KUBERNETES_PORT_443_TCP': 'tcp://10.15.240.1:443', 'SIMPLE_SERVICE_VERSION': '1.0', 'PYTHON_PIP_VERSION': '9.0.1', 'KUBERNETES_SERVICE_HOST': '10.15.240.1', 'HOSTNAME': 'envs', 'KUBERNETES_SERVICE_PORT_HTTPS': '443', 'REFRESHED_AT': '2017-04-24T13:50', 'GPG_KEY': 'C01E1CAD5EA2C4F0B8E3571504C367C218ADD4FF', 'KUBERNETES_PORT_443_TCP_ADDR': '10.15.240.1', 'KUBERNETES_PORT': 'tcp://10.15.240.1:443', 'PATH': '/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin', 'KUBERNETES_PORT_443_TCP_PORT': '443', 'HOME': '/root', 'KUBERNETES_SERVICE_PORT': '443', 'PYTHON_VERSION': '2.7.13'}"}

Or, exec into the envs pod and print out the variables:

$ kubectl exec envs -- sh -c 'env'
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://10.15.240.1:443
HOSTNAME=envs
PYTHON_PIP_VERSION=9.0.1
HOME=/root
GPG_KEY=C01E1CAD5EA2C4F0B8E3571504C367C218ADD4FF
SIMPLE_SERVICE_VERSION=1.0
KUBERNETES_PORT_443_TCP_ADDR=10.15.240.1
PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
LANG=C.UTF-8
PYTHON_VERSION=2.7.13
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP=tcp://10.15.240.1:443
KUBERNETES_SERVICE_HOST=10.15.240.1
PWD=/usr/src/app
REFRESHED_AT=2017-04-24T13:50

Clean up time:

$ kubectl delete pods --all
pod "envs" deleted

[GO TO TOP]

Namespaces

Namespaces provide a scope for Kubernetes resources, carving up your cluster into smaller units. You can think of them as workspaces you're sharing with other users.

# Create a namespace named ben-test-namespace
$ kubectl apply -f namespaces/ns.yaml
namespace/ben-test-namespace created

# List all the namespaces
$ kubectl get ns
NAME STATUS AGE
ben-test-namespace Active 14s
default Active 4h
kube-public Active 4h
kube-system Active 4h

# Know more about the namespace
$ kubectl describe ns ben-test-namespace
Name: ben-test-namespace
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
 {"apiVersion":"v1","kind":"Namespace","metadata":{"annotations":{},"name":"ben-test-namespace"}}
Status: Active

Resource Quotas
 Name: gke-resource-quotas
 Resource Used Hard
 -------- --- ---
 count/ingresses.extensions 0 1G
 count/jobs.batch 0 1G
 pods 0 1G
 services 0 1G

No resource limits.
Launching k8s resources/objects in the newly created namespaces

There are two ways you can accomplish this. First, using kubectl's --namespace flag:

$ kubectl --namespace=ben-test-namespace apply -f namespaces/pod.yaml
pod/pod-in-ben-test-namespace created

Second, by mentioning the namespace in the pod's YAML file, under metadata.namespace:

apiVersion: v1
kind: Pod
metadata:
  name: pod-in-ben-test-namespace
  namespace: ben-test-namespace
spec:

Clean up time!

# Pod in the newly created namespace
$ kubectl --namespace=ben-test-namespace delete pod pod-in-ben-test-namespace
pod "pod-in-ben-test-namespace" deleted

# Then the newly created namespace
$ kubectl delete namespaces ben-test-namespace
namespace "ben-test-namespace" deleted

[GO TO TOP]

Volumes

A Kubernetes volume is essentially a directory accessible to all containers running in a pod. In contrast to the container-local filesystem, the data in volumes is preserved across container restarts.

The medium backing a volume and its contents are determined by the volume type:

  • node-local types such as emptyDir or hostPath
  • file-sharing types such as nfs
  • cloud provider-specific types like awsElasticBlockStore, azureDisk, or gcePersistentDisk
  • distributed file system types, for example glusterfs or cephfs
  • special-purpose types like secret, gitRepo
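volumes/pod.yaml isn't reproduced in the post; a sketch that matches the kubectl describe output below (two centos:7 containers mounting the same emptyDir volume at different paths):

apiVersion: v1
kind: Pod
metadata:
  name: sharevol
spec:
  containers:
  - name: c1
    image: centos:7
    command:
      - "bin/bash"
      - "-c"
      - "sleep 10000"
    volumeMounts:
    - name: xchange
      mountPath: "/tmp/xchange"    # c1 sees the volume here
  - name: c2
    image: centos:7
    command:
      - "bin/bash"
      - "-c"
      - "sleep 10000"
    volumeMounts:
    - name: xchange
      mountPath: "/temp/data"      # c2 sees the same volume here
  volumes:
  - name: xchange
    emptyDir: {}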
# Create the pod with the two containers
$ kubectl apply -f volumes/pod.yaml
pod/sharevol created

# List the pods
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
sharevol 2/2 Running 0 48s 10.12.1.8 gke-k8s-by-example-default-pool-200a36e7-c33r <none>

# Describe the pod (showing only relevant excerpts)
$ kubectl describe pod sharevol
Name: sharevol
Namespace: default
IP: 10.12.1.8
Containers:
  c1:
    Image: centos:7
    Mounts:
      /tmp/xchange from xchange (rw)
  c2:
    Image: centos:7
    Mounts:
      /temp/data from xchange (rw)
.
.
Volumes:
  xchange:
    Type: EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:

Let's get inside the containers c1 and c2 and play around. Inside container c1:

# Exec into the first container c1
$ kubectl exec -it sharevol -c c1 -- bash
[root@sharevol /]#
[root@sharevol /]# mount | grep xchange
/dev/sda1 on /tmp/xchange type ext4 (rw,relatime,commit=30,data=ordered)
# Create a file in the shared volume
[root@sharevol /]# cd /tmp/xchange/
[root@sharevol xchange]# echo "Hannah! I love you babe :)" > love.txt
[root@sharevol xchange]# cat love.txt
Hannah! I love you babe :)

Now, let's checkout the second container c2:

$ kubectl exec -it sharevol -c c2 -- bash
[root@sharevol /]# cd /temp/data/
# Guess what!? The love.txt file which we created
# in the first container is available here
[root@sharevol data]# ls
love.txt
# Let's peek into the content too
[root@sharevol data]# cat love.txt
Hannah! I love you babe :)

Note that in each container you need to decide where to mount the volume, and that for emptyDir you currently cannot specify resource consumption limits.

Clean up time:

$ kubectl delete pods --all
pod "sharevol" deleted

[GO TO TOP]

Secrets

Secrets provide you with a mechanism to use information such as database passwords or API keys in a safe (non-plain text) and reliable way, with the following properties:

  • Secrets are namespaced objects, that is, exist in the context of a namespace
  • You can access them via a volume or an environment variable from a container running in a pod
  • The secret data on nodes is stored in tmpfs volumes
  • A per-secret size limit of 1MB exists
  • The API server stores secrets as plaintext in etcd
# Dump some random text to the file secrets/api-key.txt
$ echo -n "k2hl1bflkh4lk23b41lkdlk23b4l341234" > secrets/api-key.txt

# Create a new secret named apikey using the file secrets/api-key.txt
$ kubectl create secret generic apikey --from-file=secrets/api-key.txt
secret/apikey created

# Describe the secret we just created
$ kubectl describe secrets apikey
Name: apikey
Namespace: default
Labels: <none>
Annotations: <none>

Type: Opaque

Data
====
api-key.txt: 34 bytes

Let's now use the secret that we just created
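secrets/pod.yaml mounts the apikey secret as a volume; a sketch consistent with the /tmp/apikey path seen below:

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-secret
spec:
  containers:
  - name: shell
    image: centos:7
    command:
      - "bin/bash"
      - "-c"
      - "sleep 10000"
    volumeMounts:
    - name: apikeyvol
      mountPath: "/tmp/apikey"    # each key of the secret shows up as a file here
      readOnly: true
  volumes:
  - name: apikeyvol
    secret:
      secretName: apikey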

# Create a pod which uses the secret apikey
$ kubectl apply -f secrets/pod.yaml
pod/pod-with-secret created

# List the pod
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
pod-with-secret 1/1 Running 0 4m 10.12.1.6 gke-k8s-by-example-default-pool-635ddecf-1xsh <none>

# Get inside the pod
$ kubectl exec -it pod-with-secret -- bash

# You can see the secret file api-key.txt at "tmp/apikey/"
[root@pod-with-secret /]# ls tmp/apikey/
api-key.txt

# Let's check out its content
[root@pod-with-secret /]# cat tmp/apikey/api-key.txt
k2hl1bflkh4lk23b41lkdlk23b4l341234

Note that for service accounts Kubernetes automatically creates secrets containing credentials for accessing the API and modifies your pods to use this type of secret.

Clean up time:

$ kubectl delete pods --all
pod "pod-with-secret" deleted

[GO TO TOP]

Logging

Logging is one option to understand what is going on inside your applications and the cluster at large. Basic logging in Kubernetes makes the output a container produces available, which is a good use case for debugging.

Create a pod logme that writes to stdout and stderr:
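logging/pod.yaml isn't shown in the post; a sketch, assuming the gen container simply prints the date in a loop (tee-ing to stderr so that both streams get exercised, which would also explain the duplicated timestamps below):

apiVersion: v1
kind: Pod
metadata:
  name: logme
spec:
  containers:
  - name: gen
    image: centos:7
    command:
      - "bin/bash"
      - "-c"
      - "while true; do echo $(date) | tee /dev/stderr; sleep 1; done"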

# Create the pod using logging/pod.yaml
$ kubectl apply -f logging/pod.yaml
pod/logme created

# View the five most recent log lines of the gen container in the logme pod
$ kubectl logs --tail=5 logme -c gen
Sat Mar 23 21:55:51 UTC 2019
Sat Mar 23 21:55:52 UTC 2019
Sat Mar 23 21:55:52 UTC 2019
Sat Mar 23 21:55:53 UTC 2019
Sat Mar 23 21:55:53 UTC 2019

# Stream the log of the gen container in the logme pod
$ kubectl logs -f --since=5s logme -c gen
Sat Mar 23 21:57:28 UTC 2019
Sat Mar 23 21:57:28 UTC 2019
Sat Mar 23 21:57:29 UTC 2019
Sat Mar 23 21:57:29 UTC 2019
...

If you hadn't specified --since=5s in the above command, you would have gotten all log lines from the start of the container. You can also view logs of pods that have already completed their lifecycle.

# Create a new pod which counts down from 9 to 1
$ kubectl apply -f logging/oneshot.yaml
pod/oneshot created

# Let's log it out
$ kubectl logs -p oneshot -c gen
9
8
7
6
5
4
3
2
1

Using the -p option, you can print the logs for previous instances of the container in a pod.

Clean up time:

$ kubectl delete pods --all
pod "logme" deleted
pod "oneshot" deleted

[GO TO TOP]

Jobs

A job in Kubernetes is a supervisor for pods carrying out batch processes, that is, a process that runs for a certain time to completion, for example a calculation or a backup operation.
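jobs/job.yaml can be reconstructed almost entirely from the kubectl describe output below; a sketch:

apiVersion: batch/v1
kind: Job
metadata:
  name: countdown
spec:
  template:
    metadata:
      name: countdown
    spec:
      containers:
      - name: counter
        image: centos:7
        command:
          - "bin/bash"
          - "-c"
          - "for i in 9 8 7 6 5 4 3 2 1 ; do echo $i ; done"
      restartPolicy: Never    # a Job's pods must not restart in place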

# Create a job called countdown that supervises a pod counting from 9 down to 1:
$ kubectl apply -f jobs/job.yaml
job.batch/countdown created

# List the job
$ kubectl get jobs -o wide
NAME DESIRED SUCCESSFUL AGE CONTAINERS IMAGES SELECTOR
countdown 1 1 31s counter centos:7 controller-uid=04f4458c-4dbc-11e9-a1d6-42010aa00ff8

# List the pods as well
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
countdown-td4vz 0/1 Completed 0 2m 10.12.1.9 gke-k8s-by-example-default-pool-635ddecf-1xsh <none>

# Describe the job
$ kubectl describe jobs countdown
Name: countdown
Namespace: default
Selector: controller-uid=04f4458c-4dbc-11e9-a1d6-42010aa00ff8
Labels: controller-uid=04f4458c-4dbc-11e9-a1d6-42010aa00ff8
 job-name=countdown
Parallelism: 1
Completions: 1
Start Time: Sun, 24 Mar 2019 04:05:54 +0530
Completed At: Sun, 24 Mar 2019 04:05:55 +0530
Duration: 1s
Pods Statuses: 0 Running / 1 Succeeded / 0 Failed
Pod Template:
  Labels: controller-uid=04f4458c-4dbc-11e9-a1d6-42010aa00ff8
    job-name=countdown
  Containers:
    counter:
      Image: centos:7
      Port: <none>
      Host Port: <none>
      Command:
        bin/bash
        -c
        for i in 9 8 7 6 5 4 3 2 1 ; do echo $i ; done
      Environment: <none>
      Mounts: <none>
  Volumes: <none>
Events:
  Type Reason Age From Message
  ---- ------ ---- ---- -------
  Normal SuccessfulCreate 3m55s job-controller Created pod: countdown-td4vz

# Check out the output of the logs as well
$ kubectl logs countdown-td4vz
9
8
7
6
5
4
3
2
1

Clean up time:

$ kubectl delete jobs countdown
job.batch "countdown" deleted

[GO TO TOP]

StatefulSet

If you have a stateless app you want to use a Deployment. However, for a stateful app you might want to use a StatefulSet. Unlike a Deployment, a StatefulSet provides certain guarantees about the identity of the pods it is managing (that is, predictable names) and about the startup order. Two more things that are different compared to a Deployment: for network communication you need to create a headless service, and for persistence the StatefulSet manages a persistent volume per pod.

We will be using an educational Kubernetes-native NoSQL datastore called mehdb for this exercise.
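statefulset/mehdb-sts-svc.yaml bundles the StatefulSet and its headless service; the service half is the interesting bit, so here's a sketch of it. clusterIP: None is what makes the service headless, so DNS returns per-pod A records instead of a single VIP:

apiVersion: v1
kind: Service
metadata:
  name: mehdb
  labels:
    app: mehdb
spec:
  ports:
  - port: 9876
  clusterIP: None    # headless: no VIP, DNS resolves to the pod IPs directly
  selector:
    app: mehdb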

# Verify if the yaml file is fine
$ kubectl apply -f statefulset/mehdb-sts-svc.yaml --dry-run=true
statefulset.apps/mehdb created (dry run)
service/mehdb created (dry run)

# Create the statefulset along with the persistent volume and the headless service
$ kubectl apply -f statefulset/mehdb-sts-svc.yaml
statefulset.apps/mehdb created
service/mehdb created

# You can see how the pods are created in order
$ kubectl get pods -o wide -w
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
mehdb-0 0/1 ContainerCreating 0 24s <none> gke-k8s-by-example-default-pool-635ddecf-1xsh <none>
mehdb-0 0/1 Running 0 43s 10.12.1.10 gke-k8s-by-example-default-pool-635ddecf-1xsh <none>
mehdb-0 1/1 Running 0 86s 10.12.1.10 gke-k8s-by-example-default-pool-635ddecf-1xsh <none>
mehdb-1 0/1 Pending 0 0s <none> <none> <none>
mehdb-1 0/1 Pending 0 0s <none> <none> <none>
mehdb-1 0/1 Pending 0 5s <none> <none> <none>
mehdb-1 0/1 Pending 0 5s <none> gke-k8s-by-example-default-pool-635ddecf-30n4 <none>
mehdb-1 0/1 ContainerCreating 0 5s <none> gke-k8s-by-example-default-pool-635ddecf-30n4 <none>
mehdb-1 0/1 Running 0 32s 10.12.2.5 gke-k8s-by-example-default-pool-635ddecf-30n4 <none>
mehdb-1 1/1 Running 0 56s 10.12.2.5 gke-k8s-by-example-default-pool-635ddecf-30n4 <none>

A summary of all the resources that have been created:

# Let's checkout all the resources created
$ kubectl get sts,po,pvc,svc -o wide
NAME DESIRED CURRENT AGE CONTAINERS IMAGES
statefulset.apps/mehdb 2 2 8m shard quay.io/mhausenblas/mehdb:0.6

NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
pod/mehdb-0 1/1 Running 0 8m 10.12.1.10 gke-k8s-by-example-default-pool-635ddecf-1xsh <none>
pod/mehdb-1 1/1 Running 0 7m 10.12.2.5 gke-k8s-by-example-default-pool-635ddecf-30n4 <none>

NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/data-mehdb-0 Bound pvc-8a824c22-4f7e-11e9-9777-42010aa00008 1Gi RWO standard 8m
persistentvolumeclaim/data-mehdb-1 Bound pvc-be0d2686-4f7e-11e9-9777-42010aa00008 1Gi RWO standard 7m

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/kubernetes ClusterIP 10.15.240.1 <none> 443/TCP 2d <none>
service/mehdb ClusterIP None <none> 9876/TCP 8m app=mehdb

Let's access the StatefulSet via a jump pod:

$ kubectl run -it --rm jumppod --restart=Never --image=quay.io/mhausenblas/jump:0.2 -- sh
If you do not see a command prompt, try pressing enter.
~ $

# The headless service itself has no cluster IP and has created two endpoints for the pods mehdb-0 and mehdb-1 respectively. The DNS configuration now returns A record entries for the pods
~ $ nslookup mehdb
nslookup: cannot resolve '(null)': Name does not resolve

Name: mehdb
Address 1: 10.12.1.10 mehdb-0.mehdb.default.svc.cluster.local
Address 2: 10.12.2.5 mehdb-1.mehdb.default.svc.cluster.local

# Since there is no data in the datastore, /status?level=full should return a 0
~ $ curl mehdb:9876/status?level=full
0

# Let's put some data now
~ $ echo "test data" > /tmp/test
~ $ cat /tmp/test
test data
~ $ curl -sL -XPUT -T /tmp/test mehdb:9876/set/test
open /mehdbdata/test/content: no such file or directory

# Unable to set value for the key test. So logging out mehdb-0's activity reveals some
# permissions issue. Will debug this later
$ kubectl logs mehdb-0 -f
2019/03/26 04:21:35 mehdb serving from mehdb-0:9876 using /mehdbdata as the data directory
2019/03/26 04:21:35 I am the leading shard, accepting both WRITES and READS
2019/03/26 05:09:14 Cannot write key test due to open /mehdbdata/test/content: no such file or directory

# Found another issue
$ kubectl logs mehdb-1 -f
2019/03/26 05:29:48 Checking for new data from leader
2019/03/26 05:29:48 Cannot get keys from leader due to Get http://mehdb-0.default:9876/keys: dial tcp: lookup mehdb-0.default on 10.15.240.10:53: no such host

mehdb-1 should be querying at mehdb-0.mehdb.default:9876/keys instead of at http://mehdb-0.default:9876/keys.

Clean up time!

$ kubectl delete sts mehdb
statefulset.apps "mehdb" deleted

# We are left with the persistent volumes and the service
$ kubectl get sts,po,pvc,svc -o wide
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/data-mehdb-0 Bound pvc-8a824c22-4f7e-11e9-9777-42010aa00008 1Gi RWO standard 1h
persistentvolumeclaim/data-mehdb-1 Bound pvc-be0d2686-4f7e-11e9-9777-42010aa00008 1Gi RWO standard 1h

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/kubernetes ClusterIP 10.15.240.1 <none> 443/TCP 2d <none>
service/mehdb ClusterIP None <none> 9876/TCP 1h app=mehdb

# Explicitly delete the persistentvolumeclaims
$ for i in 0 1; do kubectl delete persistentvolumeclaims data-mehdb-$i; done
persistentvolumeclaim "data-mehdb-0" deleted
persistentvolumeclaim "data-mehdb-1" deleted

# And the service as well
$ kubectl delete service mehdb
service "mehdb" deleted

This exercise wasn't very successful though. :-/

[GO TO TOP]

Init Containers

It's sometimes necessary to prepare a container running in a pod. For example, you might want to wait for a service to become available, configure things at runtime, or initialise some data in a database. In all of these cases, init containers are useful. Note that Kubernetes will execute all init containers (and they must all exit successfully) before the main container(s) are executed.
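init-containers/deploy.yaml isn't reproduced in the post; a sketch consistent with the INIT_DONE log lines below (the init container writes the message into a shared emptyDir volume, and the main container keeps reading it back):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ic-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ic
  template:
    metadata:
      labels:
        app: ic
    spec:
      initContainers:
      - name: messenger
        image: centos:7
        command: ["bin/bash", "-c", "echo INIT_DONE > /ic/this"]
        volumeMounts:
        - name: ic
          mountPath: "/ic"
      containers:
      - name: main
        image: centos:7
        command: ["bin/bash", "-c", "while true; do cat /ic/this; sleep 5; done"]
        volumeMounts:
        - name: ic
          mountPath: "/ic"
      volumes:
      - name: ic
        emptyDir: {}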

# Create a deployment consisting of an init container that writes a message into a file at /ic/this and the main (long-running) container reading out this file:
$ kubectl apply -f init-containers/deploy.yaml
deployment.apps/ic-deploy created

# List the deployment and the pod
$ kubectl get deploy,po -o wide
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR
deployment.extensions/ic-deploy 1 1 1 1 29s main centos:7 app=ic

NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
pod/ic-deploy-bf75cbf87-z8nh5 1/1 Running 0 30s 10.12.1.14 gke-k8s-by-example-default-pool-635ddecf-1xsh <none>

# Check the pod logs
$ kubectl logs ic-deploy-bf75cbf87-z8nh5 -f
INIT_DONE
INIT_DONE
INIT_DONE
INIT_DONE
INIT_DONE
^C

Clean up time!

# Delete the deployment
$ kubectl delete deployments --all
deployment.extensions "ic-deploy" deleted

# And the pods are terminating; it can take some time
$ kubectl get deploy,po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
pod/ic-deploy-bf75cbf87-z8nh5 0/1 Terminating 0 7m 10.12.1.14 gke-k8s-by-example-default-pool-635ddecf-1xsh <none>

Time to move on though!

[GO TO TOP]

Nodes

... nodes are the (virtual) machines where your workloads, in the shape of pods, run.
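nodes/pod.yaml pins the pod to a node via a nodeSelector matching the label we are about to apply; a minimal sketch:

apiVersion: v1
kind: Pod
metadata:
  name: onspecificnode
spec:
  containers:
  - name: sise
    image: mhausenblas/simpleservice:0.5.0
    ports:
    - containerPort: 9876
  nodeSelector:
    shouldrun: here    # only schedule on nodes carrying this label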

# Let's list all the nodes:
$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
gke-k8s-by-example-default-pool-635ddecf-1xsh Ready <none> 3d v1.11.7-gke.12 10.160.0.29 35.200.209.198 Container-Optimized OS from Google 4.14.91+ docker://17.3.2
gke-k8s-by-example-default-pool-635ddecf-30n4 Ready <none> 3d v1.11.7-gke.12 10.160.0.30 35.244.18.55 Container-Optimized OS from Google 4.14.91+ docker://17.3.2
gke-k8s-by-example-default-pool-635ddecf-ll16 Ready <none> 3d v1.11.7-gke.12 10.160.0.28 35.200.217.246 Container-Optimized OS from Google 4.14.91+ docker://17.3.2

# Label one of the nodes
$ kubectl label nodes gke-k8s-by-example-default-pool-635ddecf-1xsh shouldrun=here
node/gke-k8s-by-example-default-pool-635ddecf-1xsh labeled

# Now, create a pod to run on the specific node that we just labelled
$ kubectl apply -f nodes/pod.yaml
pod/onspecificnode created

# It's running on gke-k8s-by-example-default-pool-635ddecf-1xsh, which is the one we labelled
$ kubectl get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
onspecificnode 1/1 Running 0 38s 10.12.1.15 gke-k8s-by-example-default-pool-635ddecf-1xsh <none>

# You can learn more about a node using describe (portion of the entire dump)
$ kubectl describe nodes gke-k8s-by-example-default-pool-635ddecf-1xsh
.
.
.
Addresses:
  InternalIP: 10.160.0.29
  ExternalIP: 35.200.209.198
  Hostname: gke-k8s-by-example-default-pool-635ddecf-1xsh
Capacity:
  cpu: 1
  ephemeral-storage: 98868448Ki
  hugepages-2Mi: 0
  memory: 3787656Ki
  pods: 110
Allocatable:
  cpu: 940m
  ephemeral-storage: 47093746742
  hugepages-2Mi: 0
  memory: 2702216Ki
  pods: 110
System Info:
  Kernel Version: 4.14.91+
  OS Image: Container-Optimized OS from Google
  Operating System: linux
  Architecture: amd64
  Container Runtime Version: docker://17.3.2
  Kubelet Version: v1.11.7-gke.12
  Kube-Proxy Version: v1.11.7-gke.12
PodCIDR: 10.12.1.0/24

.
.
.

[GO TO TOP]

API Server access

Sometimes it’s useful or necessary to directly access the Kubernetes API server, for exploratory or testing purposes.

# In one terminal, proxy the API to the local environment
$ kubectl proxy --port=8080
Starting to serve on 127.0.0.1:8080

# In another terminal, access the API
$ curl http://localhost:8080/api/v1
{
  "kind": "APIResourceList",
  "groupVersion": "v1",
  "resources": [
    {
      "name": "bindings",
      "singularName": "",
      "namespaced": true,
      "kind": "Binding",
      "verbs": [
        "create"
      ]
    },
    .
    .
    .
    .
    {
      "name": "services/status",
      "singularName": "",
      "namespaced": true,
      "kind": "Service",
      "verbs": [
        "get",
        "patch",
        "update"
      ]
    }
  ]
}

Another method:

# Use kubectl directly
$ kubectl get --raw=/api/v1

# Check supported API versions
$ kubectl api-versions
.
.
apps/v1
apps/v1beta1
apps/v1beta2
.
.
batch/v1
batch/v1beta1
.
.
v1

# And resources (I am showing only a few)
$ kubectl api-resources
NAME SHORTNAMES APIGROUP NAMESPACED KIND
configmaps cm true ConfigMap
endpoints ep true Endpoints
events ev true Event
namespaces ns false Namespace
nodes no false Node
persistentvolumeclaims pvc true PersistentVolumeClaim
persistentvolumes pv false PersistentVolume
pods po true Pod
replicationcontrollers rc true ReplicationController
secrets true Secret
serviceaccounts sa true ServiceAccount
services svc true Service
daemonsets ds apps true DaemonSet
deployments deploy apps true Deployment
replicasets rs apps true ReplicaSet
statefulsets sts apps true StatefulSet
cronjobs cj batch true CronJob
jobs batch true Job
storageclasses sc storage.k8s.io false StorageClass

That's the end!

[GO TO TOP]