Introduction to Networking
Now back to development! Restarting and following logs has been a treat. Next, we'll open an endpoint to the application and access it via HTTP.
Let's develop our application so that it has an HTTP server responding with two hashes: a hash that stays the same until the process exits and a hash that is request-specific. The response body can be something like "Application abc123. Request 94k9m2". Choose any port to listen to.
I've prepared one here. By default, it will listen on port 3000.
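The implementation of the prepared app isn't shown here; a minimal sketch of such a server in Python (the names and structure below are an assumption for illustration, not the course's actual code) could look like this:

```python
# Sketch of the two-hash responder; the course's actual example app
# may be implemented differently.
import uuid
from http.server import BaseHTTPRequestHandler, HTTPServer

# Generated once at startup; stays the same until the process exits.
APP_HASH = uuid.uuid4().hex[:6]

def build_response(app_hash: str, request_hash: str) -> str:
    """Compose the response body from the two hashes."""
    return f"Application {app_hash}. Request {request_hash}"

class HashHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Generated anew for every request.
        request_hash = uuid.uuid4().hex[:6]
        body = build_response(APP_HASH, request_hash).encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def run(port: int = 3000):
    # Listen on all interfaces so the pod can receive traffic.
    HTTPServer(("", port), HashHandler).serve_forever()
```

Repeated requests to such a server would show the same application hash but a different request hash every time.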
```console
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes-hy/material-example/master/app2/manifests/deployment.yaml
  deployment.apps/hashresponse-dep created
```
We can confirm that the hashresponse-dep is working with the port-forward command. Let's see the name of the pod first and then port forward there:
```console
$ kubectl get po
  NAME                                READY   STATUS    RESTARTS   AGE
  hashgenerator-dep-5cbbf97d5-z2ct9   1/1     Running   0          20h
  hashresponse-dep-57bcc888d7-dj5vk   1/1     Running   0          19h

$ kubectl port-forward hashresponse-dep-57bcc888d7-dj5vk 3003:3000
  Forwarding from 127.0.0.1:3003 -> 3000
  Forwarding from [::1]:3003 -> 3000
```
Now we can view the response from http://localhost:3003 and confirm that it is working as expected.
External connections with Docker used the `-p` flag, e.g. `-p 3003:3000`, or the ports declaration in docker-compose. Unfortunately, Kubernetes isn't as simple. We're going to use either a Service resource or an Ingress resource.
Because we are running our cluster inside Docker with k3d, we will have to do some preparations.
Opening a route from outside of the cluster to the pod will not be enough if we have no means of accessing the cluster inside the containers!
```console
$ docker ps
  CONTAINER ID   IMAGE                      COMMAND                  CREATED       STATUS       PORTS                             NAMES
  b60f6c246ebb   rancher/k3d-proxy:v3.0.0   "/bin/sh -c nginx-pr…"   2 hours ago   Up 2 hours   80/tcp, 0.0.0.0:58264->6443/tcp   k3d-k3s-default-serverlb
  553041f96fc6   rancher/k3s:latest         "/bin/k3s agent"         2 hours ago   Up 2 hours                                     k3d-k3s-default-agent-1
  aebd23c2ef99   rancher/k3s:latest         "/bin/k3s agent"         2 hours ago   Up 2 hours                                     k3d-k3s-default-agent-0
  a34e49184d37   rancher/k3s:latest         "/bin/k3s server --t…"   2 hours ago   Up 2 hours                                     k3d-k3s-default-server-0
```
K3d has helpfully prepared a port for us to access the API at 6443 and, in addition, has opened a port to 80. All requests to the load balancer here will be proxied to the same ports of all server nodes of the cluster. However, for testing purposes, we'll want an individual port open for a single node. Let's delete our old cluster and create a new one with some ports open.
The K3d documentation tells us how the ports are opened: we'll open local 8081 to 80 in k3d-k3s-default-serverlb and local 8082 to 30080 in k3d-k3s-default-agent-0. The 30080 is chosen almost completely randomly, but it needs to be a value in the range 30000-32767 for the next step:
```console
$ k3d cluster delete
  INFO Deleting cluster 'k3s-default' ...
  INFO Successfully deleted cluster k3s-default!

$ k3d cluster create --port 8082:30080@agent:0 -p 8081:80@loadbalancer --agents 2
  INFO Created network 'k3d-k3s-default' ...
  INFO Cluster 'k3s-default' created successfully!
  INFO You can now use it like this:
  kubectl cluster-info

$ kubectl apply -f https://raw.githubusercontent.com/kubernetes-hy/material-example/master/app2/manifests/deployment.yaml
  deployment.apps/hashresponse-dep created
```
Now we have access through port 8081 to our server node (actually all nodes) and through 8082 to port 30080 of one of our agent nodes. They will be used to showcase different methods of communicating with the servers.
Just as Deployment resources took care of deployments for us, a Service resource will take care of serving the application to connections from outside of the cluster.
Create a file service.yaml in the manifests folder. We need the service to do the following things:
- Declare that we want a Service
- Declare which port to listen to
- Declare the application to which the requests should be directed
- Declare the port to which the requests should be directed
This translates into a YAML file with the following contents:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: hashresponse-svc
spec:
  type: NodePort
  selector:
    app: hashresponse # This is the app as declared in the deployment.
  ports:
    - name: http
      nodePort: 30080 # This is the port that is available outside. Value for nodePort can be between 30000-32767
      protocol: TCP
      port: 1234 # This is a port that is available to the cluster, in this case it can be ~ anything
      targetPort: 3000 # This is the target port
```
```console
$ kubectl apply -f manifests/service.yaml
  service/hashresponse-svc created
```
As we've mapped local port 8082 to the agent's port 30080, we can now access the application at http://localhost:8082.
We've now defined a NodePort service with `type: NodePort`. NodePorts are simply ports that Kubernetes opens on all of the nodes, and the service will handle requests arriving at that port. NodePorts are not flexible and require you to assign a different port for every application. As such, NodePorts are not used in production but are helpful to know about.
What we'd want to use instead of NodePort would be a LoadBalancer type service but this "only" works with cloud providers as it configures a, possibly costly, load balancer for it. We'll get to know them in part 3.
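For comparison, a LoadBalancer version of the service would differ mainly in the type field. The following is only a sketch (the port values are illustrative assumptions; the details are covered in part 3):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hashresponse-svc
spec:
  type: LoadBalancer # A cloud provider would provision an external load balancer for this
  selector:
    app: hashresponse
  ports:
    - port: 80          # Port exposed by the provisioned load balancer (assumed value)
      protocol: TCP
      targetPort: 3000  # Port the application listens to in the pod
```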
There's one additional resource that will help us with serving the application, Ingress.
Incoming Network Access resource Ingress is a completely different type of resource from Services. If you've got your OSI model memorized, it works in layer 7 while services work on layer 4. You could see these used together: first the aforementioned LoadBalancer and then Ingress to handle routing. In our case, as we don't have a load balancer available we can use the Ingress as the first stop. If you're familiar with reverse proxies like Nginx, Ingress should seem familiar.
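To illustrate the layer 7 part: an Ingress can route based on HTTP concepts such as paths and hostnames, which a layer 4 Service cannot see. A sketch with hypothetical service names (not from the course material):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-routing-ingress # hypothetical, for illustration only
spec:
  rules:
    - http:
        paths:
          - path: /api          # HTTP path prefix routed to one service
            pathType: Prefix
            backend:
              service:
                name: api-svc       # hypothetical service
                port:
                  number: 2345
          - path: /             # everything else routed to another
            pathType: Prefix
            backend:
              service:
                name: frontend-svc  # hypothetical service
                port:
                  number: 2345
```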
Ingresses are implemented by various different "controllers". This means that ingresses do not automatically work in a cluster, but it gives you the freedom of choosing the ingress controller that works best for you. K3s has Traefik installed already. Other options include Istio and Nginx Ingress Controller, among others.
Switching to Ingress will require us to create an Ingress resource. The Ingress will route incoming traffic forward to a Service, but the old NodePort Service won't do.
```console
$ kubectl delete -f manifests/service.yaml
  service "hashresponse-svc" deleted
```
A ClusterIP type Service resource gives the Service an internal IP that'll be accessible in the cluster.
The following will route TCP traffic from port 2345 to port 3000:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: hashresponse-svc
spec:
  type: ClusterIP
  selector:
    app: hashresponse
  ports:
    - port: 2345
      protocol: TCP
      targetPort: 3000
```
Resource 2 is the new Ingress. It needs to:
- Declare that it should be an Ingress
- And route all traffic to our service
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dwk-material-ingress
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hashresponse-svc
                port:
                  number: 2345
```
Then we can apply everything and view the result:
```console
$ kubectl apply -f manifests/
  ingress.networking.k8s.io/dwk-material-ingress created
  service/hashresponse-svc configured

$ kubectl get svc,ing
  NAME                       TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE
  service/kubernetes         ClusterIP   10.43.0.1    <none>        443/TCP    5m22s
  service/hashresponse-svc   ClusterIP   10.43.0.61   <none>        2345/TCP   45s

  NAME                                             CLASS    HOSTS   ADDRESS                            PORTS   AGE
  ingress.networking.k8s.io/dwk-material-ingress   <none>   *       172.21.0.3,172.21.0.4,172.21.0.5   80      16s
```
We can see that the ingress is listening on port 80. As we already opened local port 8081 to the load balancer's port 80, we can access the application at http://localhost:8081.