Working With Deployments (2)

Creating a Service to Access the Pods

I tried to create the Service with the YAML manifest from the book, but it turns out the nodePort is not valid on this cluster:

❯ kubectl apply -f svc.yml
The Service "hello-svc" is invalid: spec.ports[0].nodePort: Invalid value: 30001: provided port is not in the valid range. The range of valid ports is 32768-35535

So, I used a different port from the valid range mentioned in the previous output and the Service was created:
❯ kubectl apply -f svc.yml
service/hello-svc created

Here you can see it, along with some other interesting details:

❯ cat svc.yml
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
  labels:
    app: hello-world  <<<==== Label on the Service object itself; it does not link the Service to anything.
spec:
  type: NodePort   <<<===== Service type that exposes a port on every node's IP.
  ports:
  - port: 8080
    nodePort: 32769  <<<========== Port that will listen on every node in the cluster.
    protocol: TCP
  selector:
    app: hello-world  <<<====== The selector is what actually associates this Service with the Deployment's Pods.
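
Two details worth noting: since targetPort is omitted, it defaults to port (8080), which happens to match the containerPort of the Pods; and whether the selector matched can be verified through the Endpoints object the Service maintains. I did not capture it during the lab, but the check would be:

❯ kubectl get endpoints hello-svc

If the selector matched, the ENDPOINTS column lists the ten Pod IPs on port 8080.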

Now I want to know where the Pods were scheduled, so I take a look:

❯ kubectl get pods -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP               NODE         NOMINATED NODE   READINESS GATES
hello-deploy-85fd664fff-2bvnx   1/1     Running   0          48m   192.168.255.89   mke-node-0   <none>           <none>
hello-deploy-85fd664fff-4nf7z   1/1     Running   0          48m   192.168.64.86    mke-node-1   <none>           <none>
hello-deploy-85fd664fff-7prbs   1/1     Running   0          48m   192.168.64.84    mke-node-1   <none>           <none>
hello-deploy-85fd664fff-8cbx7   1/1     Running   0          48m   192.168.64.82    mke-node-1   <none>           <none>
hello-deploy-85fd664fff-8fxvc   1/1     Running   0          48m   192.168.255.87   mke-node-0   <none>           <none>
hello-deploy-85fd664fff-g8bfk   1/1     Running   0          48m   192.168.255.88   mke-node-0   <none>           <none>
hello-deploy-85fd664fff-sthxv   1/1     Running   0          48m   192.168.255.86   mke-node-0   <none>           <none>
hello-deploy-85fd664fff-swx8b   1/1     Running   0          48m   192.168.64.81    mke-node-1   <none>           <none>
hello-deploy-85fd664fff-xp55x   1/1     Running   0          48m   192.168.64.85    mke-node-1   <none>           <none>
hello-deploy-85fd664fff-xv78n   1/1     Running   0          48m   192.168.64.83    mke-node-1   <none>           <none>
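
As a side note, the nodes' external IPs can also be retrieved without leaving the terminal, assuming the cloud provider populates the EXTERNAL-IP column:

❯ kubectl get nodes -o wide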

Knowing that the Pods are on nodes 0 and 1, I check those nodes' public IP addresses in AWS and access them in my browser:

[Screenshot: k8s-Service-nodePort, showing the web page served from each node's public IP on port 32769]


From the previous screenshot, you can see that the specified port was listening and serving the web page on each node's public IP.
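
The same check can be done from a terminal, with <node-public-ip> as a placeholder for either node's address:

❯ curl http://<node-public-ip>:32769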

NOTE: Those IPs and ports are no longer reachable, so do not try to connect. This was only for lab purposes.


Scaling Replicas - The Imperative Way

❯ kubectl scale deploy hello-deploy --replicas 5
deployment.apps/hello-deploy scaled
❯ kubectl get deployment
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
hello-deploy   5/5     5            5           17h

NOTE: One drawback of this approach is that the current state is now five (5) replicas, as set by the command, but the original YAML manifest still declares ten (10) as the desired state. If the manifest is applied again for any reason, the replica count will silently jump back to ten. This is why the declarative way is recommended over the imperative one.


Scaling Replicas - The Declarative Way

The declarative way is to scale by modifying the manifest. In other words, we tell Kubernetes what we want, instead of telling it how to do it.

First, I will scale up to twelve (12) replicas in an imperative way:

❯ kubectl scale deploy hello-deploy --replicas 12
deployment.apps/hello-deploy scaled
❯ kubectl get deploy hello-deploy
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
hello-deploy   12/12   12           6           17h
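
(AVAILABLE lags behind READY here because minReadySeconds: 10 in the manifest makes every new Pod wait ten seconds before it counts as available.)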

Now I will scale down by modifying the manifest:

❯ cat deploy.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deploy
spec:
  replicas: 10  <<<============= Desired state set back to ten from twelve.
  selector:
    matchLabels:
      app: hello-world
  revisionHistoryLimit: 5
  progressDeadlineSeconds: 300
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-pod
        image: nigelpoulton/k8sbook:1.0
        ports:
        - containerPort: 8080
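
Optionally, the pending change can be previewed before applying it (note that kubectl diff exits with a non-zero code when it finds differences, so that is not an error):

❯ kubectl diff -f deploy.yml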

Then, I will apply the manifest:

❯ kubectl apply -f deploy.yml
deployment.apps/hello-deploy configured

And finally, we have scaled down in a declarative way:

❯ kubectl get deploy hello-deploy
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
hello-deploy   10/10   10           10          17h

Rolling Update

Assuming the application is already containerized and the image is available, I only need to specify the new image in the manifest and apply it to perform an update. For example:

❯ cat deploy.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deploy
spec:
  replicas: 10
  selector:
    matchLabels:
      app: hello-world
  revisionHistoryLimit: 5
  progressDeadlineSeconds: 300
  minReadySeconds: 10
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-pod
        image: nigelpoulton/k8sbook:2.0   <================== Changed from 1.0 to 2.0. 
        ports:
        - containerPort: 8080
❯ kubectl apply -f deploy.yml
deployment.apps/hello-deploy configured
❯ kubectl rollout status deployment hello-deploy
Waiting for deployment "hello-deploy" rollout to finish: 2 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 2 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 2 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 4 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 4 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 4 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 4 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 4 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 5 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 5 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 5 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 5 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 5 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 7 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 7 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 7 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 7 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 7 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 9 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 9 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 9 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 9 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 9 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 9 of 10 updated replicas are available...
Waiting for deployment "hello-deploy" rollout to finish: 9 of 10 updated replicas are available...
deployment "hello-deploy" successfully rolled out

NOTE: If for any reason the rollout needs to be paused and resumed, the following commands can be used, respectively: “kubectl rollout pause deploy hello-deploy” and “kubectl rollout resume deploy hello-deploy”.
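
TIP: The stored revisions (up to revisionHistoryLimit, which is five in this manifest) can be listed at any time. I did not capture the output here, but the command is standard:

❯ kubectl rollout history deployment hello-deploy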


Performing a Rollback

This is an imperative way to perform a rollback, but it perfectly exemplifies how useful ReplicaSet revisions are in Kubernetes. First, notice the available ReplicaSets:

❯ kubectl get rs
NAME                      DESIRED   CURRENT   READY   AGE
hello-deploy-5445f6dcbb   10        10        10      16m
hello-deploy-85fd664fff   0         0         0       23h

Even more important, notice that one ReplicaSet is for the previous version and the other is for the current one:

❯ kubectl describe rs hello-deploy-85fd664fff
Name:           hello-deploy-85fd664fff
Namespace:      default
Selector:       app=hello-world,pod-template-hash=85fd664fff
Labels:         app=hello-world
                pod-template-hash=85fd664fff
Annotations:    deployment.kubernetes.io/desired-replicas: 10
                deployment.kubernetes.io/max-replicas: 11
                deployment.kubernetes.io/revision: 1  <<<======== First revision.
Controlled By:  Deployment/hello-deploy
Replicas:       0 current / 0 desired      <<<======================= Zero replicas since I already updated to version 2.0.
Pods Status:    0 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
  Labels:  app=hello-world
           pod-template-hash=85fd664fff
  Containers:
   hello-pod:
    Image:        nigelpoulton/k8sbook:1.0  <<<=============== Old ReplicaSet with 1.0 version.
    Port:         8080/TCP
    Host Port:    0/TCP
❯ kubectl describe rs hello-deploy-5445f6dcbb
Name:           hello-deploy-5445f6dcbb
Namespace:      default
Selector:       app=hello-world,pod-template-hash=5445f6dcbb
Labels:         app=hello-world
                pod-template-hash=5445f6dcbb
Annotations:    deployment.kubernetes.io/desired-replicas: 10
                deployment.kubernetes.io/max-replicas: 11
                deployment.kubernetes.io/revision: 2   <<<====== Second (and current) revision.
Controlled By:  Deployment/hello-deploy
Replicas:       10 current / 10 desired
Pods Status:    10 Running / 0 Waiting / 0 Succeeded / 0 Failed   <<<==== It has 10 running since it is the current version.
Pod Template:
  Labels:  app=hello-world
           pod-template-hash=5445f6dcbb
  Containers:
   hello-pod:
    Image:        nigelpoulton/k8sbook:2.0   <<<==================== New version.
    Port:         8080/TCP
    Host Port:    0/TCP
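
The pod-template-hash suffix in the names (85fd664fff vs. 5445f6dcbb) is a hash of each Pod template; Kubernetes adds it as a label so that every ReplicaSet selects only its own Pods, as you can see in the Selector lines above.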

Now, knowing that the old ReplicaSet is still available, I can undo the last update; in other words, I can perform a rollback:

❯ kubectl rollout undo deployment hello-deploy --to-revision=1
deployment.apps/hello-deploy rolled back
❯ kubectl rollout status deployment hello-deploy
Waiting for deployment "hello-deploy" rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 6 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 8 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 8 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 8 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 8 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 8 out of 10 new replicas have been updated...
Waiting for deployment "hello-deploy" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "hello-deploy" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "hello-deploy" rollout to finish: 1 old replicas are pending termination...
Waiting for deployment "hello-deploy" rollout to finish: 1 old replicas are pending termination...
deployment "hello-deploy" successfully rolled out

The output below confirms in detail that the rollback indeed worked:

❯ kubectl describe deploy hello-deploy
Name:                   hello-deploy
Namespace:              default
CreationTimestamp:      Wed, 16 Mar 2022 17:50:04 +0100
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 3
Selector:               app=hello-world
Replicas:               10 desired | 10 updated | 10 total | 10 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        10
RollingUpdateStrategy:  1 max unavailable, 1 max surge
Pod Template:
  Labels:  app=hello-world
  Containers:
   hello-pod:
    Image:        nigelpoulton/k8sbook:1.0  <<<====== Rolled back to 1.0 version
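
Note that the revision annotation now reads 3 even though we rolled back to revision 1: a rollback never reuses the old revision number. Kubernetes simply rolls forward to the old Pod template and records the result as a new revision.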
