Kubernetes Networking 101
Networking is central to microservice-based architectures, and Kubernetes provides first-class support for a range of networking configurations. Essentially, it gives you a simple, abstracted cluster-wide network. Behind the scenes, Kubernetes networking can be quite complex due to its range of networking plugins. It helps to keep the simpler concepts in mind before trying to trace the flow of individual network packets.
A good understanding of Kubernetes’ range of service types and ingresses will help you choose appropriate configurations for your clusters and minimize the complexity and resources (like provisioned load balancers) involved.
To begin with, here are some useful facts:
1. Every pod is assigned a unique IP address
2. Pods run within a virtual network (specified by the pod networking CIDR)
3. Containers within an individual pod share the same Linux network namespace. This means they are all reachable via localhost and share the same port space.
4. All containers are configured to use a DNS server managed by Kubernetes.
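As a quick illustration of point 3, here is a minimal sketch (the pod name and sidecar command are placeholders) of a pod whose two containers share one network namespace, so the sidecar can reach the web container via localhost:

```yaml
# Hypothetical two-container pod: both containers share the same
# network namespace, so they can reach each other via localhost
# (and must bind distinct ports).
apiVersion: v1
kind: Pod
metadata:
  name: shared-netns-demo
spec:
  containers:
  - name: web
    image: kennethreitz/httpbin   # serves HTTP on port 80
  - name: sidecar
    image: busybox
    # Fetch from the web container over localhost, then idle.
    command: ["sh", "-c", "sleep 5 && wget -qO- localhost:80/get && sleep 3600"]
```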
Providing external access into your cluster
Providing external access into your cluster works slightly differently than simply listening on an open port. Instead, an ingress, LoadBalancer service, or NodePort service is used, each of which we will cover below.
Inspecting a pod IP address
It is often useful to identify a pod IP address. This value is held in metadata within the Kubernetes cluster state.
You can inspect the IP with the following command:
$ kubectl get pod -o yaml busybox | grep podIP
  podIP: 10.10.3.4
Doing so will save you the trouble of having to manually exec into the container and run ip addr or similar. You can also view the IP with the -o wide argument to kubectl get pods.
$ kubectl get pods -o wide
NAME      READY   STATUS      RESTARTS   AGE    IP          NODE     NOMINATED NODE   READINESS GATES
busybox   0/1     Completed   0          2d8h   10.10.3.4   node-1   <none>           <none>
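If an application needs its own pod IP at runtime, one option (a sketch; the pod and variable names here are arbitrary) is to expose it through the downward API rather than shelling out to ip addr:

```yaml
# Sketch: inject the pod's IP into the container environment
# via the downward API.
apiVersion: v1
kind: Pod
metadata:
  name: pod-ip-demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo my pod IP is $MY_POD_IP && sleep 3600"]
    env:
    - name: MY_POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP   # populated by the kubelet at runtime
```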
A wide variety of service configurations are supported. However, there are four basic types of services.
ClusterIP
This is the default service type and one of the simplest. Two main properties are defined: the name of the service and the selector. The name is just a unique identifier, while the selector specifies which pods the service should route traffic to.
NodePort
NodePorts are similar to ClusterIPs, except that every node allocates a specified (or random) port for the service. Network requests to that port on any of the nodes are proxied to the service.
LoadBalancer
LoadBalancers are similar to ClusterIPs, but they are externally provisioned and have public IP addresses assigned. The load balancer itself is implementation specific; this type is most often used on cloud platforms.
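As a sketch (the service name is a placeholder), a LoadBalancer service selecting pods labeled app: web might look like this; on a cloud platform, creating it would provision an external load balancer with a public IP:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service-lb
spec:
  type: LoadBalancer   # the cloud provider assigns an external IP
  ports:
  - name: http
    port: 80
  selector:
    app: web
```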
ExternalName
Two main properties are defined: the name of the service and the external domain. In some sense this is a domain alias. It allows you to define a service that is referenced in multiple places (by pods or other services) while managing the endpoint/external domain in one place. It also abstracts the domain behind a service, so you can swap it for another Kubernetes service later on.
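A minimal sketch of such a service (the name and domain are placeholders): pods can resolve my-database by name, and you can later repoint or replace it without touching the pods.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-database
spec:
  type: ExternalName
  # DNS queries for my-database return a CNAME to this domain.
  externalName: db.example.com
```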
Configuring a simple service
Create a new file named web-app-service.yaml with contents of:
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  ports:
  - name: http
    port: 80
  selector:
    app: web
Create and describe it:
$ kubectl create -f web-app-service.yaml
service/web-service created
$ kubectl describe services web-service
Name:              web-service
Namespace:         default
Labels:            <none>
Annotations:       <none>
Selector:          app=web
Type:              ClusterIP
IP:                10.97.7.119
Port:              http  80/TCP
TargetPort:        80/TCP
Endpoints:         <none>
Session Affinity:  None
Events:            <none>
In the above output, we see Endpoints: <none>. This field lists the pod IP addresses that match the specified selector app=web; in this case, <none> means there are no matches.
So let’s go ahead and create two pods with the appropriate labels to match the selector. We can do this by creating two manually managed pods (as opposed to a deployment) with the following commands:
$ kubectl run httpbin --generator=run-pod/v1 --image=kennethreitz/httpbin --labels="app=web"
pod/httpbin created
$ kubectl run httpbin-2 --generator=run-pod/v1 --image=kennethreitz/httpbin --labels="app=web"
pod/httpbin-2 created
Once those pods are scheduled and successfully running, we can inspect the service again. We should see the following for Endpoints:
$ kubectl describe services web-service | grep "Endpoints"
Endpoints:         172.17.0.3:80,172.17.0.4:80
Those IP addresses belong to the pods we just created!
Accessing a service
As mentioned earlier, Kubernetes creates a DNS entry for each service defined. In the case of the service we created, the Kubernetes DNS server will resolve the web-service hostname to the service, which in turn routes to one of the matching pods. To demonstrate this, we can exec into one of the containers and use curl, installing curl first since it isn’t included in the image by default:
$ kubectl exec -it httpbin -- /bin/bash
$ apt update
...
$ apt install curl
...
$ curl web-service
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>httpbin.org</title>
...
Configuring external access via a NodePort service
One of the simplest ways to provide external access into your Kubernetes pods is through a NodePort. In order to configure a NodePort service, we need to explicitly set the spec type (which otherwise defaults to ClusterIP) in a service configuration:
spec:
  type: NodePort
To configure one, create a new file named web-app-nodeport-service.yaml with contents of:
apiVersion: v1
kind: Service
metadata:
  name: web-service-nodeport
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
  selector:
    app: web
Create and inspect it:
$ kubectl create -f web-app-nodeport-service.yaml
service/web-service-nodeport created
$ kubectl get services web-service-nodeport
NAME                   TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
web-service-nodeport   NodePort   10.101.203.150   <none>        80:32285/TCP   23s
Taking a look at the PORT(S) field, we can see the service has been allocated port 32285. This port is opened on each of our Kubernetes nodes and will in turn proxy to the appropriate pods.
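If you’d rather not receive a random port, you can pin one explicitly with the nodePort field. A sketch (the value 30080 is arbitrary; it must fall within the cluster’s node port range, 30000-32767 by default):

```yaml
spec:
  type: NodePort
  ports:
  - name: http
    port: 80
    nodePort: 30080   # explicit node port instead of a random allocation
```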
We can test this with the following (note that you’ll need to use your own node IP or domain; in my case, it’s the internal node IP 192.168.122.188):
$ curl 192.168.122.188:32285
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
...
Ingresses
Ingresses are another Kubernetes object. They are essentially a more feature-rich version of a service, with functionality that mostly revolves around the routing of HTTP requests. You may need to set up or configure a particular ingress controller, as one won’t necessarily be installed by default. In addition, multiple ingress controllers can run at the same time; each controller usually only manages ingresses that have a kubernetes.io/ingress.class annotation matching that specific controller.
Ingresses target services and not pods. Some functionalities supported by ingresses include SSL, domain/path-based routing, and configuration of load balancers.
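For example, path-based routing might look like the following sketch (the backend service names are hypothetical), sending /api and / to different services:

```yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: fanout-ingress
spec:
  rules:
  - http:
      paths:
      - path: /api
        backend:
          serviceName: api-service   # hypothetical API backend
          servicePort: 80
      - path: /
        backend:
          serviceName: web-service   # everything else goes here
          servicePort: 80
```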
Although ingress controllers conform to a common specification or interface, they often include additional implementation specific configuration. One of the more popular ingress controllers is the “nginx ingress controller”. This usually refers to the following project https://github.com/kubernetes/ingress-nginx, which is a feature-rich controller providing support for HTTP authentication, session affinity, URL rewrites and much more.
Configuring a simple ingress
Create a new file named app-ingress.yaml with the code below. Notice we’re setting a rule for a host of example.com.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: web-service
          servicePort: 80
Create and describe it:
$ kubectl create -f app-ingress.yaml
ingress.networking.k8s.io/app-ingress created
$ kubectl describe ingresses app-ingress
Name:             app-ingress
Namespace:        default
Address:          192.168.122.188
Default backend:  default-http-backend:80 (172.17.0.8:8080)
Rules:
  Host         Path  Backends
  ----         ----  --------
  example.com
               /     web-service:80 (172.17.0.3:80,172.17.0.4:80)
Annotations:    nginx.ingress.kubernetes.io/rewrite-target: /
Events:
  Type    Reason  Age                From                      Message
  ----    ------  ----               ----                      -------
  Normal  CREATE  21m                nginx-ingress-controller  Ingress default/app-ingress
  Normal  UPDATE  67s (x5 over 20m)  nginx-ingress-controller  Ingress default/app-ingress
If we test sending an HTTP request with curl to the ingress IP:
$ curl -H "Host: example.com" http://192.168.122.188
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>httpbin.org</title>
    <link href="https://fonts.googleapis.com/css?family=Open+Sans:400,700|Source+Code+Pro:300,600|Titillium+Web:400,600,700"
...
On the other hand, if we try a host that we did not configure:
$ curl -H "Host: example123.com" http://192.168.122.188
default backend - 404
We get a 404 not found response from the default backend, which is exactly what we’d expect for an unconfigured host.
Configuring an ingress with SSL
We’ll use a self-signed certificate to demonstrate SSL functionality.
We haven’t yet covered “secrets”, but we’ll need to set one up to hold the SSL key and certificate for an ingress.
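If you don’t already have a key and certificate on hand, one way to generate a throwaway self-signed pair (assuming the openssl CLI is available; the subject name is a placeholder) is:

```shell
# Generate a self-signed certificate and key, valid for 365 days,
# without a passphrase (-nodes), for the example.com hostname.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout ssl.key -out ssl.cert \
  -days 365 -subj "/CN=example.com"
```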
We can do this by running the following command:
$ kubectl create secret tls ssl-example-cert --key ssl.key --cert ssl.cert secret/ssl-example-cert created
We can add an SSL certificate by referencing the secret under the tls key in the ingress spec:
tls:
- secretName: ssl-example-cert
In other words our ingress file will have contents of:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
  - secretName: ssl-example-cert
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: web-service
          servicePort: 80
If we now try the HTTPS endpoint of the ingress (also adding the -k parameter to curl to ignore the self-signed certificate error), we’ll see:
$ curl -k -H "Host: example.com" https://192.168.122.188/
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <title>httpbin.org</title>
    <link href="https://fonts.googleapis.com/css?family=Open+Sans:400,700|Source+Code+Pro:300,600|Titillium+Web:400,600,700"
...
Now that you’ve been introduced to Kubernetes networking, it’s time to learn more. Our ebook The Beginner’s Guide to Kubernetes will show you what else can be done with Kubernetes networking as well as provide resources into Kubernetes deployments, volumes, and security.