In this part we’ll go over a lot of things you need to get started with Kubernetes. This includes terminology, your first deploy, a little bit of networking and an introduction to volumes. By the end of this part you will be able to

  • Create and run a Kubernetes cluster locally with k3d

  • Deploy applications to Kubernetes

Foreword on microservices

During this course we’ll talk about microservices and create microservices. Before we get started with anything else we’ll need to define what a microservice is.

A microservice is any service that is smaller than a monolith.

As such, the easiest method to achieve a microservice architecture is to split a single piece off of a monolith - they are then both less than a monolith. Why would you do this? For example, to scale a piece of the application separately or to have a separate team work on a piece of the application.

The misconception of microservices being a large number of extremely small services is proliferated by large enterprises. If you have an extremely large enterprise where teams don’t even know about the existence of other teams, you may have an unconventionally large number of microservices. Due to the insanity of a large number of small services without any good reasoning, we’re witnessing the term monolith trending in 2020.

For context on this unpopular opinion, Kelsey Hightower places the fault on Distributed Monoliths, where you have a large number of microservices without a good reason.

  • “Run a small team, not a tech behemoth? Embrace the monolith and make it majestic. You Deserve It!” - David Heinemeier Hansson, cofounder & CTO at Basecamp, “The Majestic Monolith”

And this evolves into “The Majestic Monolith can become The Citadel” with the following: “next step is The Citadel, which keeps the Majestic Monolith at the center, but supports it with a set of Outposts, each extracting a small subset of application responsibilities.”

Sometimes during this course we’ll do arbitrary splits to our services just to show the features of Kubernetes. We will also see at least one actual use case for microservices.

What is Kubernetes?

Let’s say you have 3 processes and 2 computers incapable of running all 3 processes. How would you approach this problem?

You’ll have to start by deciding which 2 processes go on the same computer and which 1 goes on its own. How would you fit them? By having the most demanding and the least demanding on the same machine, or by having the most demanding on its own? Maybe you want to add one process, and now you have to reorganize all of them. What happens when you have more than 2 computers and more than 3 processes? One of the processes is eating all of the memory and you need to get it away from the “critical-bank-application”. Should we virtualize everything? Containers would solve that problem, right? Would you move the most important process to a new computer? Maybe some of the processes need to communicate with each other, and now you have to deal with networking. What if one of the computers breaks? What about your Friday plans to visit the local craft brewery?

What if you could just define “This process should have 6 copies using X amount of resources” and have the 2..N computers work as a single entity to fulfill your request? That’s just one thing Kubernetes makes possible.

“Kubernetes (K8s) is an open-source system for automating deployment, scaling, and management of containerized applications. It groups containers that make up an application into logical units for easy management and discovery.” - kubernetes.io

A container orchestration system such as Kubernetes is often required when maintaining containerized applications. The main responsibility of an orchestration system is starting and stopping containers. In addition, they offer networking between containers and health monitoring. Rather than manually running docker run critical-bank-application every time the application crashes, or restarting it if it becomes unresponsive, we want the system to keep the application healthy automatically.

A more familiar orchestration system may be docker-compose, which performs the same tasks: starting and stopping containers, networking and health monitoring. What makes Kubernetes special is its robust feature set for automating all of it.

Read this comic and watch the video below to get a fast introduction. You may want to revisit these after this part!

We will get started with a lightweight Kubernetes distribution: K3s - 5 less than K8s - offers us an actual Kubernetes cluster that we can run in containers using k3d.

Kubernetes cluster with k3d

What is a cluster?

A cluster is a group of machines, nodes, that work together - in this case, they are part of a Kubernetes cluster. A Kubernetes cluster can be of any size - a single-node cluster would consist of one machine that hosts the Kubernetes control-plane (exposing the API and maintaining the cluster), and that cluster can then be expanded with up to 5000 nodes in total, as of Kubernetes v1.18.

We will use the term “server node” to refer to nodes with control-plane and “agent node” to refer to the nodes without that role.

Starting a cluster with k3d

We’ll use k3d to create a group of docker containers that run k3s, thus creating our very own Kubernetes cluster.

$ k3d cluster create -a 2

This created a Kubernetes cluster with 2 agent nodes and a server node. As they run in docker, you can confirm that they exist with docker ps.

$ docker ps
  CONTAINER ID        IMAGE                      COMMAND                  CREATED             STATUS              PORTS                             NAMES
11543a6b5015        rancher/k3d-proxy:v3.0.0   "/bin/sh -c nginx-pr…"   16 seconds ago      Up 14 seconds       80/tcp, 0.0.0.0:57734->6443/tcp   k3d-k3s-default-serverlb
  f17e07a77061        rancher/k3s:latest         "/bin/k3s agent"         26 seconds ago      Up 24 seconds                                         k3d-k3s-default-agent-1
  b135b5ac987d        rancher/k3s:latest         "/bin/k3s agent"         27 seconds ago      Up 25 seconds                                         k3d-k3s-default-agent-0
  7e5fbc8db7e9        rancher/k3s:latest         "/bin/k3s server --t…"   28 seconds ago      Up 27 seconds                                         k3d-k3s-default-server-0

Here we also see that port 6443 is published from “k3d-k3s-default-serverlb”, a useful load balancer proxy that redirects connections to 6443 into the server node; that’s how we can access the contents of the cluster. The port on our machine, 57734 in the output above, is randomly chosen. We could have opted out of the load balancer with k3d cluster create -a 2 --no-lb, and the port would then be open straight to the server node, but having a load balancer will offer us a few features we wouldn’t otherwise have.

K3d also helpfully set up a kubeconfig, the contents of which are output by k3d kubeconfig get k3s-default. Kubectl will read the kubeconfig from the location in the KUBECONFIG environment variable, or by default from ~/.kube/config, and use that information to connect to the cluster. The contents include certificates, passwords and the address of the cluster API. You can set the config manually with k3d kubeconfig merge k3s-default --switch-context.
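To confirm that kubectl now points to our new cluster you can check the current context. Assuming the default cluster name, the output should look like this:

$ kubectl config current-context
  k3d-k3s-default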

Now kubectl will be able to access the cluster:

$ kubectl cluster-info
Kubernetes master is running at https://0.0.0.0:57734
  CoreDNS is running at https://0.0.0.0:57734/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
  Metrics-server is running at https://0.0.0.0:57734/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

We can see that kubectl is connected to the container k3d-k3s-default-serverlb through (in this case) port 57734.

If you want to stop / start the cluster you can simply run

$ k3d cluster stop
  INFO[0000] Stopping cluster 'k3s-default'

$ k3d cluster start
  INFO[0000] Starting cluster 'k3s-default'
  INFO[0000] Starting Node 'k3d-k3s-default-agent-1'
  INFO[0000] Starting Node 'k3d-k3s-default-agent-0'
  INFO[0000] Starting Node 'k3d-k3s-default-server-0'
  INFO[0001] Starting Node 'k3d-k3s-default-serverlb'

For now we’re going to need the cluster, but if we want to remove it we can run k3d cluster delete.

First Deploy

Preparing for first deploy

Before we can deploy anything we’ll need a small application to deploy. During the course you will develop your own application. The technologies used for the application do not matter - for the examples we’re going to use node.js, but the example application will be offered through GitHub as well as Docker Hub.

Let’s create an application that generates and outputs a hash every 5 seconds or so.

I’ve prepared one; you can try it out with docker run jakousa/dwk-app1.
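If you’d rather write your own, the core of such an application is only a few lines. Here is a minimal node.js sketch; the file name and hash format are my own choices, not necessarily what the prepared image does:

// app.js - outputs a new hash every 5 seconds
const crypto = require('crypto');

setInterval(() => {
  const hash = crypto.randomBytes(16).toString('hex');
  console.log(`${new Date().toISOString()}: ${hash}`);
}, 5000);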

To deploy we need the cluster to have access to the image. By default, Kubernetes is intended to be used with a registry. K3d offers the import-images command, but since that won’t work when we move to non-k3d solutions, we’ll use the now very familiar registry, Docker Hub.

$ docker tag _image_ _username_/_image_
$ docker push _username_/_image_

In the future the material will use the offered applications in the commands. Follow along by changing the image to your own application.

Now we’re finally ready to deploy our first app into Kubernetes!


To deploy an application we’ll need to create a Deployment with the image.

$ kubectl create deployment hashgenerator-dep --image=jakousa/dwk-app1
  deployment.apps/hashgenerator-dep created

This action created a few things for us to look at: a Deployment and a Pod.

What is a Pod?

A Pod is an abstraction around one or more containers. Similarly to how you’ve used containers to define environments for a single process, Pods provide a context for 1..N containers so that they can share storage and a network. They can be thought of as a container of containers. Most of the same rules apply: a Pod is deleted if its containers stop running, and files will be lost with it.
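For illustration, a bare Pod could be declared as follows. We won’t be creating Pods directly during this course, and the name here is just a hypothetical example:

apiVersion: v1
kind: Pod
metadata:
  name: hashgenerator-pod # hypothetical name
spec:
  containers:
    - name: hashgenerator
      image: jakousa/dwk-app1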

What is a Deployment?

A Deployment takes care of deployment. It’s a way to tell Kubernetes what container you want, how the containers should be running and how many of them should be running.

The Deployment also created a ReplicaSet, which is a way to tell how many replicas of a Pod you want. It will delete or create Pods until the desired number of Pods is running. ReplicaSets are managed by Deployments and you should not have to manually define or modify them.
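Even though you don’t manage it yourself, you can see the ReplicaSet that the Deployment created. Its name is the Deployment name with a generated suffix, so the output will look something like this:

$ kubectl get replicasets
  NAME                         DESIRED   CURRENT   READY   AGE
  hashgenerator-dep-6965c5c7   1         1         1       54s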

You can view the deployment:

$ kubectl get deployments
  NAME                READY   UP-TO-DATE   AVAILABLE   AGE
  hashgenerator-dep   1/1     1            1           54s

And the pods:

$ kubectl get pods
  NAME                               READY   STATUS    RESTARTS   AGE
  hashgenerator-dep-6965c5c7-2pkxc   1/1     Running   0          2m1s

1/1 replicas are ready and its status is Running! We will try multiple replicas later.

To see the output we can run kubectl logs -f hashgenerator-dep-6965c5c7-2pkxc.

Use source <(kubectl completion bash) to save yourself a lot of headaches. Add it to your .bashrc for automatic loading. (Also available for zsh.)
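For example, you can append the line to your .bashrc like this:

$ echo 'source <(kubectl completion bash)' >> ~/.bashrc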

A helpful list of other commands translated from docker-cli to kubectl is available here.

Exercise 1.01: Getting started

Exercises can be done with any language and framework you want.

Create an application that generates a random string on startup, stores this string into memory and outputs it every 5 seconds with a timestamp. e.g.

2020-03-30T12:15:17.705Z: 8523ecb1-c716-4cb6-a044-b9e83bb98e43
2020-03-30T12:15:22.705Z: 8523ecb1-c716-4cb6-a044-b9e83bb98e43

Deploy it into your Kubernetes cluster and confirm that it’s running with kubectl logs ...

In future exercises this application will be referred to as the “Main application”. We will revisit some exercise applications during the course.

Exercise 1.02: Project v0.1

The project can be done with any language and framework you want.

The project is a simple todo application with the familiar features of create, read, update, and delete (CRUD). We’ll develop it during all parts of this course. At the end it will look like this:


Keep this in mind if you want to avoid doing more work than necessary.

Let’s get started!

Create a web server that outputs “Server started in port NNNN” when it’s started and deploy it into your Kubernetes cluster. You won’t have access to the port yet but that’ll come soon.

Declarative configuration with YAML

We created the deployment with

$ kubectl create deployment hashgenerator-dep --image=jakousa/dwk-app1

If we wanted to scale it 4 times and update the image:

$ kubectl scale deployment/hashgenerator-dep --replicas=4

$ kubectl set image deployment/hashgenerator-dep dwk-app1=jakousa/dwk-app1:78031863af07c4c4cc3c96d07af68e8ce6e3afba

Things start to get really cumbersome. In the dark ages, deployments were created similarly by running commands one after another in the “correct” order. We’ll now use a declarative approach, where we define how things should be. This is more sustainable in the long term than the imperative approach.

Before redoing the previous steps, let’s take the deployment down.

$ kubectl delete deployment hashgenerator-dep
  deployment.apps "hashgenerator-dep" deleted

and create a new folder named manifests in the project, and inside it a file called deployment.yaml with the following contents (you can check the example here):


apiVersion: apps/v1
kind: Deployment
metadata:
  name: hashgenerator-dep
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hashgenerator
  template:
    metadata:
      labels:
        app: hashgenerator
    spec:
      containers:
        - name: hashgenerator
          image: jakousa/dwk-app1:78031863af07c4c4cc3c96d07af68e8ce6e3afba

I personally use VS Code to create these yaml files. It has helpful autofill, definitions and syntax checking for Kubernetes with the Kubernetes extension by Microsoft. Even now it helpfully warns us that we haven’t defined resource limitations.

This looks a lot like the docker-compose.yaml files we have previously written. Let’s ignore what we don’t know for now, which is mainly the labels, and focus on the things that we know:

  • We’re declaring what kind it is (kind: Deployment)
  • We’re declaring a name for it in the metadata (name: hashgenerator-dep)
  • We’re declaring that there should be one of them (replicas: 1)
  • We’re declaring that it has a container created from a certain image, with a name

Apply the deployment with the apply command:

$ kubectl apply -f manifests/deployment.yaml
  deployment.apps/hashgenerator-dep created

That’s it, but for the sake of revision let’s delete it and create it again:

$ kubectl delete -f manifests/deployment.yaml
  deployment.apps "hashgenerator-dep" deleted

$ kubectl apply -f <url-to-deployment.yaml>
  deployment.apps/hashgenerator-dep created

Woah! The fact that you can apply a manifest straight from the internet just like that will come in handy.

Exercise 1.03: Declarative approach

In your main application project create the folder for manifests and move your deployment into a declarative file. Make sure everything still works by restarting and following logs.

Exercise 1.04: Project v0.2

Create a deployment for your project. You won’t have access to the port yet but that’ll come soon.

Networking Part 1

Restarting and following logs has been a treat. Next we’ll open an endpoint to the application and access it via HTTP.

Simple networking application

Let’s develop our application so that it has an HTTP server responding with two hashes: a hash that is stored until the process exits, and a hash that is request-specific. The response body can be something like “Application abc123. Request 94k9m2”. Choose any port to listen on.

I’ve prepared one here. By default it will listen on port 3000.
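The gist of such a server fits in a few lines. Here is a node.js sketch of the idea; details such as hash length may differ from the prepared image:

// server.js
const http = require('http');
const crypto = require('crypto');

// Generated once at startup, stays the same until the process exits
const appHash = crypto.randomBytes(3).toString('hex');

const server = http.createServer((req, res) => {
  // Generated separately for every request
  const requestHash = crypto.randomBytes(3).toString('hex');
  res.end(`Application ${appHash}. Request ${requestHash}`);
});

server.listen(3000);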

$ kubectl apply -f <url-to-deployment.yaml>
  deployment.apps/hashresponse-dep created

Connecting from outside of the cluster

We can confirm that the hashresponse-dep is working with the port-forward command. Let’s look up the name of the pod first and then port forward to it:

$ kubectl get po
  NAME                                READY   STATUS    RESTARTS   AGE
  hashgenerator-dep-5cbbf97d5-z2ct9   1/1     Running   0          20h
  hashresponse-dep-57bcc888d7-dj5vk   1/1     Running   0          19h

$ kubectl port-forward hashresponse-dep-57bcc888d7-dj5vk 3003:3000
Forwarding from 127.0.0.1:3003 -> 3000
  Forwarding from [::1]:3003 -> 3000

Now we can view the response from http://localhost:3003 and confirm that it is working as expected.

Exercise 1.05: Project v0.3

Have the application return something to a GET request sent to the application. A simple html page is good, or you can deploy something more complex like a single-page application.

Use kubectl port-forward to confirm that the application is accessible and works in the cluster.

External connections with docker used the -p flag, e.g. -p 3003:3000, or the ports declaration in docker-compose. Unfortunately Kubernetes isn’t as simple. We’re going to use either a Service resource or an Ingress resource.
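As a reminder, this is roughly what publishing a port looked like in docker-compose; the service and image names here are hypothetical:

services:
  hashresponse:
    image: jakousa/dwk-app1 # hypothetical image
    ports:
      - "3003:3000"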

Before anything else

Because we are running our cluster inside docker with k3d, we will have to do a few preparations. Opening a route from outside of the cluster to the pod will not be enough if we have no means of accessing the cluster running inside the containers!

$ docker ps
  CONTAINER ID        IMAGE                      COMMAND                  CREATED             STATUS              PORTS                             NAMES
b60f6c246ebb        rancher/k3d-proxy:v3.0.0   "/bin/sh -c nginx-pr…"   2 hours ago         Up 2 hours          80/tcp, 0.0.0.0:57734->6443/tcp   k3d-k3s-default-serverlb
  553041f96fc6        rancher/k3s:latest         "/bin/k3s agent"         2 hours ago         Up 2 hours                                            k3d-k3s-default-agent-1
  aebd23c2ef99        rancher/k3s:latest         "/bin/k3s agent"         2 hours ago         Up 2 hours                                            k3d-k3s-default-agent-0
  a34e49184d37        rancher/k3s:latest         "/bin/k3s server --t…"   2 hours ago         Up 2 hours                                            k3d-k3s-default-server-0

K3d has helpfully prepared a port for us to access the API (6443) and has, in addition, opened port 80. All requests to the load balancer here will be proxied to the same ports on all server nodes of the cluster. However, for testing purposes we’ll want an individual port open for a single node. Let’s delete our old cluster and create a new one with port 8082 open:

$ k3d cluster delete
  INFO[0000] Deleting cluster 'k3s-default'               
  INFO[0002] Successfully deleted cluster k3s-default!    

$ k3d cluster create --port '8082:30080@agent[0]' -p 8081:80@loadbalancer --agents 2
  INFO[0000] Created network 'k3d-k3s-default'
  INFO[0021] Cluster 'k3s-default' created successfully!
  INFO[0021] You can now use it like this:
  kubectl cluster-info

$ kubectl apply -f <url-to-deployment.yaml>
  deployment.apps/hashresponse-dep created

Now we have access through port 8081 to our server node (actually all nodes) and through port 8082 to port 30080 on one of our agent nodes. They will be used to showcase different methods of communicating with the servers.

We will have a limited number of ports available in the future, but that’s ok for your own machine.

Your OS may support using the host network so no ports need to be opened.

What is a Service?

As Deployment resources took care of deployments for us, a Service resource will take care of serving the application to connections from outside of the cluster.

Create a file service.yaml in the manifests folder. We need the service to do the following things:

  1. Declare that we want a Service
  2. Declare which port to listen to
  3. Declare the application where the request should be directed to
  4. Declare the port where the request should be directed to

This translates into a yaml file with the following contents:


apiVersion: v1
kind: Service
metadata:
  name: hashresponse-svc
spec:
  type: NodePort
  selector:
    app: hashresponse
  ports:
    - name: http
      nodePort: 30080 # This is the port that is available outside. Value for nodePort can be between 30000-32767
      protocol: TCP
      port: 1234 # This is a port that is available to the cluster, in this case it can be ~ anything
      targetPort: 3000 # This is the target port

$ kubectl apply -f manifests/service.yaml
  service/hashresponse-svc created

As we’ve mapped port 8082 to nodePort 30080, we can now access the application via http://localhost:8082.
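You can also test it from the command line; the hashes in the response will naturally vary:

$ curl localhost:8082
  Application abc123. Request 94k9m2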

We’ve now defined a NodePort service with type: NodePort. NodePorts are simply ports that Kubernetes opens on all of the nodes, and the service will handle requests arriving at that port. NodePorts are not flexible and require you to assign a different port for every application. As such, NodePorts are not used in production, but they are helpful to know about.

What we’d want to use instead of NodePort is a LoadBalancer type service, but this “only” works with cloud providers as it configures a, possibly costly, load balancer for the application. We’ll get to know load balancers in part 3.

There’s one additional resource that will help us with serving the application, Ingress.

Exercise 1.06: Project v0.4

Use a NodePort Service to enable access to the project!

What is an Ingress?

Ingress (Incoming Network Access resource) is a completely different type of resource from Services. If you’ve got your OSI model memorized, Ingress works on layer 7 while Services work on layer 4. You could see these used together: first the aforementioned LoadBalancer and then Ingress to handle routing. In our case, as we don’t have a load balancer available, we can use the Ingress as the first stop. If you’re familiar with reverse proxies like Nginx, an Ingress should seem familiar.

Ingresses are implemented by various different “controllers”. This means that ingresses do not automatically work in a cluster; it also gives you the freedom of choosing which ingress controller works best for you. K3s has Traefik installed already. Other options include Istio and Nginx Ingress Controller; more are listed here.

Switching to an Ingress will require us to create an Ingress resource. The Ingress will route incoming traffic forward to Services, but the old NodePort Service won’t do.

$ kubectl delete -f manifests/service.yaml
  service "hashresponse-svc" deleted

A ClusterIP type Service resource gives the Service an internal IP that’ll be accessible within the cluster.

The following will direct TCP traffic arriving at port 2345 to port 3000:


apiVersion: v1
kind: Service
metadata:
  name: hashresponse-svc
spec:
  type: ClusterIP
  selector:
    app: hashresponse
  ports:
    - port: 2345
      protocol: TCP
      targetPort: 3000

For the second resource, the new Ingress, we need to:

  1. Declare that it should be an Ingress
  2. Route all traffic to our service


apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dwk-material-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: hashresponse-svc
          servicePort: 2345

Then we can apply everything and view the result:

$ kubectl apply -f manifests/service.yaml
  service/hashresponse-svc created
$ kubectl apply -f manifests/ingress.yaml
  ingress.extensions/dwk-material-ingress created

$ kubectl get svc
NAME               TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
  kubernetes         ClusterIP   10.43.0.1      <none>        443/TCP    23h
  hashresponse-svc   ClusterIP   10.43.236.27   <none>        2345/TCP   4m23s

$ kubectl get ing
NAME                   HOSTS   ADDRESS   PORTS   AGE
  dwk-material-ingress   *                 80      77s

We can see that the ingress is listening on port 80. As we already mapped port 8081 to port 80 on the load balancer, we can access the application at http://localhost:8081.

Exercise 1.07: External access with Ingress

In addition to outputting the timestamp and hash, save it to memory and display it when accessing the main application via HTTP. Then use ingress to access it with a browser.

Exercise 1.08: Project v0.5

Start using Ingress instead of NodePort to access the project.

Exercise 1.09: More services

Develop a second application that simply responds with “pong 0” to a GET request and increases a counter (the 0) so that you can see how many requests have been sent. The counter should be in memory so it may reset at some point. Create a new deployment for it and use ingress to route requests directed to ‘/pingpong’ to it.

In future exercises this second application will be referred to as the “ping/pong application”.

This is not required, but you can add the following annotation to your Ingress so that the path in the ingress is stripped from the request. This’ll allow you to use the “/pingpong” path while the ping/pong application listens on “/”:

  annotations:
    traefik.ingress.kubernetes.io/rule-type: "PathPrefixStrip"

Volumes Part 1

Storage in Kubernetes is hard. In part 1 we will look into a very basic method of using storage and return to this topic later. Where almost everything else in Kubernetes is very much dynamic, moving between nodes and replicating with ease, storage does not have the same possibilities.

There are multiple types of volumes and we’ll get started with two of them.

Simple Volume

Where volumes in docker and docker-compose essentially meant something persistent, here that is not the case. There are multiple types of volumes; emptyDir volumes are shared filesystems inside a pod, which means that their lifecycle is tied to the pod. When the pod is destroyed, the data is lost.

Before we can get started with this, we need an application that shares data with another application. In this case the volume will work as a method to share files between the two applications. We’ll need to develop the apps:

App 1 will check if /usr/src/app/files/image.jpg exists and, if not, download a random image and save it as image.jpg. Any HTTP request will trigger a new image generation.

App 2 will check for /usr/src/app/files/image.jpg and show it if it is available.
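As a rough node.js sketch of app 1, assuming node 18+ for the built-in fetch and Lorem Picsum as the image source; both are assumptions, not necessarily what the prepared images use:

// image-finder sketch
const fs = require('fs');
const path = require('path');

const filePath = path.join('/usr/src/app/files', 'image.jpg');

// Download a random image into the shared volume
const fetchImage = async () => {
  const res = await fetch('https://picsum.photos/1200'); // assumed source
  fs.writeFileSync(filePath, Buffer.from(await res.arrayBuffer()));
};

// Only fetch when no image is cached yet
if (!fs.existsSync(filePath)) {
  fetchImage();
}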

They share a deployment so that both of them are inside the same pod. My version is available here. The example includes an ingress and a service to access the application.


apiVersion: apps/v1
kind: Deployment
metadata:
  name: images-dep
spec:
  replicas: 1
  selector:
    matchLabels:
      app: images
  template:
    metadata:
      labels:
        app: images
    spec:
      volumes: # Define volume
        - name: shared-image
          emptyDir: {}
      containers:
        - name: image-finder
          image: jakousa/dwk-app3-image-finder:a04092af0393067d08280db7a79057eaab67692b
          volumeMounts: # Mount volume
          - name: shared-image
            mountPath: /usr/src/app/files
        - name: image-response
          image: jakousa/dwk-app3-image-response:31a78aec1090d7ea44446d2f9af621a2c59efe72
          volumeMounts: # Mount volume
          - name: shared-image
            mountPath: /usr/src/app/files

As the display is dependent on the volume, we can confirm that it works by accessing the image-response and getting the image. The provided ingress uses the previously opened port 8081: http://localhost:8081.

Note that all data is lost when the pod goes down.

Exercise 1.10: Even more services

Split the main application into two different containers:

One generates a new timestamp every 5 seconds and saves it into a file. The other reads that file and outputs it with its hash for the user to see.

Persistent Volumes

This type of storage is what you probably had in mind when we started talking about volumes. Unfortunately we’re quite limited with the options here and will return to PersistentVolumes briefly in Part 2 and again in Part 3 with GKE.

The reason for the difficulty is that you should not store data with the application or create a dependency on the filesystem for the application. Kubernetes supports cloud providers very well, and you can also run your own storage system. During this course we are not going to run our own storage system, as that would be a huge undertaking, and most likely “in real life” you are going to use something hosted by a cloud provider. This topic could be a course part of its own, but let’s scratch the surface and try something you can use to run something at home.

A local volume is a PersistentVolume that binds a path from the node to use as storage. This ties the volume to the node.

For the PersistentVolume to work, you first need to create the local path on the node we are binding it to. Since our k3d cluster runs via docker, let’s create a directory at /tmp/kube in the k3d-k3s-default-agent-0 container. This can simply be done with docker exec k3d-k3s-default-agent-0 mkdir -p /tmp/kube.


apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi # Could be e.g. 500Gi. Small amount is to preserve space when testing locally
  volumeMode: Filesystem # This declares that it will be mounted into pods as a directory
  accessModes:
  - ReadWriteOnce
  local:
    path: /tmp/kube
  nodeAffinity: ## This is only required for local, it defines which nodes can access it
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - k3d-k3s-default-agent-0

As this is bound to that node, avoid using this in production.

The local type we’re using now can not be dynamically provisioned. A new PersistentVolume needs to be defined only rarely, for example when a new physical disk is added to your personal cluster. After that, a PersistentVolumeClaim is used to claim a part of the storage for an application. If we create multiple PersistentVolumeClaims but only have the one PersistentVolume, the rest will stay in a Pending state, waiting for a suitable PersistentVolume.


apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: image-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

Modify the previously introduced deployment to use it:


      volumes:
        - name: shared-image
          persistentVolumeClaim:
            claimName: image-claim
      containers:
        - name: image-finder
          image: jakousa/dwk-app3-image-finder:a04092af0393067d08280db7a79057eaab67692b
          volumeMounts:
          - name: shared-image
            mountPath: /usr/src/app/files
        - name: image-response
          image: jakousa/dwk-app3-image-response:31a78aec1090d7ea44446d2f9af621a2c59efe72
          volumeMounts:
          - name: shared-image
            mountPath: /usr/src/app/files

And apply it

$ kubectl apply -f manifests/deployment.yaml
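To check that the claim actually got bound to our PersistentVolume, you can list the volumes. The output should look roughly like this:

$ kubectl get pv
  NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                 STORAGECLASS   REASON   AGE
  example-pv   1Gi        RWO            Retain           Bound    default/image-claim   manual                  2m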

With the previous service and ingress we can access it at http://localhost:8081. To confirm that the data is persistent we can run:

$ kubectl delete -f manifests/deployment.yaml
  deployment.apps "images-dep" deleted
$ kubectl apply -f manifests/deployment.yaml
  deployment.apps/images-dep created

And the same file is available again.

If you are interested in learning more about running your own storage, there are several solutions you can check out.


Exercise 1.11: Persisting data

Create both a PersistentVolume and a PersistentVolumeClaim and alter the Deployment. As PersistentVolumes are often maintained by cluster administrators rather than developers, and they are not application specific, you should keep their definitions separate from the application.

In the end, the two pods should share a persistent volume between the two applications. Save the number of requests to the ping/pong application into a file in the volume, and output it with the timestamp and hash when sending a request to our main application. The browser should then display the following when accessing the main application:

  2020-03-30T12:15:17.705Z: 8523ecb1-c716-4cb6-a044-b9e83bb98e43.
  Ping / Pongs: 3

Exercise 1.12: Project v0.6

Since the project looks really boring at the moment let’s add some outside resources.

Let’s add a daily image: every day, a new image is fetched on the first request.

Get an image from Lorem Picsum, e.g. https://picsum.photos/1200, and display it in the project. Make sure to cache the image into a volume so that we don’t spam the API for new images every time we access the application or the container crashes.

Exercise 1.13: Project v0.7

We’ll need to do some coding to start seeing results in the next part.

  • Add an input field into the project and a send button. The input should not take todos that are over 140 characters long.

  • Add a list of the existing todos with some hard coded todos.

Submit your completed exercises through the submission application


In this part we learned about k8s, k3s and k3d. We learned about the resources that are used in Kubernetes to run software, as well as how to manage storage for some use cases, for example caching and sharing data between Pods.

By now we know what the following are and how to use them:

  • Pods
  • Deployments
  • Services
  • Ingress
  • Volume

With them we’re ready to deploy simple software to a Kubernetes cluster. In the next part we’ll learn more about management as well as a number of cases where the tools we have acquired so far are not enough. Part 2