A step-by-step introduction to running your app in Kubernetes. For Generation X, but not only. Part IV

Andrew Kornilov
6 min read · Jul 9, 2022


I have been working in the software engineering industry for more than two decades. In this series of blog posts, I will do my best to turn a simple API service from an old-fashioned application running on my laptop into a modern microservice running in a Kubernetes cluster. Along the way, we will try to make the app less old-fashioned (aka cloud-native). Step by step. One step at a time.

Part IV. Let’s open it…

https://www.delish.com/kitchen-tools/cookware-reviews/a28931484/how-to-open-wine-bottle/

Previous recipe

You’ll need a new piece of the Kubernetes puzzle, called an ingress controller. There are multiple controllers available, but the simplest and most popular one is the Nginx ingress controller, built on top of the famous Nginx web server (and reverse proxy). Installation is as easy as executing the following command:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.2.1/deploy/static/provider/cloud/deploy.yaml

Check that your ingress controller was installed and is up and running.
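For example, you can list the controller pods with kubectl, or wait until the controller reports Ready (the manifest above installs everything into the ingress-nginx namespace):

kubectl get pods -n ingress-nginx

kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=120s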

When the ingress controller is up and running, we are ready to expose the application “bottled” in the cluster to the outside world. For that, let’s create a new type of Kubernetes resource, named Ingress.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-app
spec:
  ingressClassName: nginx
  rules:
    - host: demo-app.localdev.me
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-app
                port:
                  number: 80

The important parts of the manifest are the domain name for our application (demo-app.localdev.me) and the name of our application’s service (demo-app).

Finally, after applying the manifest, you can send the first request to the application from your local machine:
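For example, with curl (assuming you saved the manifest above as ingress.yaml; demo-app.localdev.me is a public wildcard domain that resolves to 127.0.0.1, so on Docker Desktop it should just work, otherwise port-forward the controller service first):

kubectl apply -f ingress.yaml

curl -i http://demo-app.localdev.me/hello

# If nothing in your cluster listens on port 80 of your machine,
# port-forward the controller and use port 8080 instead:
kubectl port-forward -n ingress-nginx svc/ingress-nginx-controller 8080:80
curl -i http://demo-app.localdev.me:8080/hello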

Can we improve the setup of our application further? Of course. Right now the application sends an uncompressed response, and the network transfer can be quite big.

To speed up delivery of the response to the client, a good practice is to compress it with one of the well-known compression methods supported by Nginx.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-app
  annotations:
    nginx.ingress.kubernetes.io/server-snippet: |
      gzip on;
      gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
spec:
  ingressClassName: nginx
  rules:
    - host: demo-app.localdev.me
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-app
                port:
                  number: 80

You can find more possible gzip tweaks in the following article — https://www.digitalocean.com/community/tutorials/how-to-improve-website-performance-using-gzip-and-nginx-on-ubuntu-20-04. Although the article is about the Nginx web server itself, all the parameters are available for the Nginx ingress controller as well.
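For example, two commonly tuned directives are the compression level and the minimum response size that is worth compressing; the values below are only illustrative, not required for our setup:

nginx.ingress.kubernetes.io/server-snippet: |
  gzip on;
  gzip_comp_level 5;
  gzip_min_length 256;
  gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;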

Apply the updated manifest and check the new result:

curl -i -H 'Accept-Encoding: gzip' http://demo-app.localdev.me/hello

First of all, you should notice from the Content-Encoding header that the content is compressed with the gzip method, which is why you can no longer read the response body: it is binary now.
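The beginning of the response should look roughly like this:

HTTP/1.1 200 OK
Content-Type: application/json
Content-Encoding: gzip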

This way you can customize a lot of the ingress controller’s functionality, for example CORS headers.
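As a sketch, the controller has dedicated annotations for that, so it could look something like this (the allowed origin below is just a placeholder):

metadata:
  name: demo-app
  annotations:
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-origin: "https://example.com"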

Probably later we will add SSL certificate handling and, as a result, move encryption outside of your application (making it simpler for development and focused on its main functionality instead of handling security too).

That’s actually it for the current chapter. But at the end of the article we will also improve the setup and reliability of our deployment. As you remember, we have a service exposing the multiple pods of our deployment and acting as a load balancer for them. But what if one of our pods is not healthy and actually cannot serve requests? The service knows nothing about those troubles and will continue sending requests to the unhealthy instance of the application. To make Kubernetes aware of a pod’s health, we will add a liveness probe:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  selector:
    matchLabels:
      app: demo-app
  replicas: 2
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: frutik777/k8s-example:latest
          imagePullPolicy: Always
          resources:
            requests:
              cpu: "150m"
              memory: "110Mi"
            limits:
              cpu: "150m"
              memory: "210Mi"
          livenessProbe:
            httpGet:
              path: /hello
              port: 7777
            initialDelaySeconds: 1
          env:
            - name: SOME_ENT_VAR1
              value: "qqqqqqqq"
          envFrom:
            - configMapRef:
                name: demo-app-configmap
      restartPolicy: Always

Apply the updated manifest and check the logs of a pod:
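For example (assuming the manifest is saved as deployment.yaml; the kubelet will now call /hello periodically, so the “Request!” warnings from our handler should show up in the logs):

kubectl apply -f deployment.yaml

kubectl get pods -l app=demo-app
kubectl logs -f deployment/demo-app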

After adding the liveness probe, Kubernetes will make sure that the pods in your deployment are still alive (able to respond at the specified location) and will restart failed (unhealthy) instances, so they do not keep receiving requests. Of course, you don’t want to use one of your real endpoints, which may handle some probably heavy logic, for that check. You can create a lightweight endpoint just for this purpose, for example /status.

import os
import uvicorn

from fastapi import FastAPI
from fastapi.logger import logger


app = FastAPI()


@app.get("/hello")
def hello():
    logger.warning('Request!')
    return {"Hello": "At vero eos et accusamus et iusto odio dignissimos ducimus qui blanditiis praesentium voluptatum deleniti atque corrupti quos dolores et quas molestias excepturi sint occaecati cupiditate non provident, similique sunt in culpa qui officia deserunt mollitia animi, id est laborum et dolorum fuga. Et harum quidem rerum facilis est et expedita distinctio. Nam libero tempore, cum soluta nobis est eligendi optio cumque nihil impedit quo minus id quod maxime placeat facere possimus, omnis voluptas assumenda est, omnis dolor repellendus. Temporibus autem quibusdam et aut officiis debitis aut rerum necessitatibus saepe eveniet ut et voluptates repudiandae sint et molestiae non recusandae. Itaque earum rerum hic tenetur a sapiente delectus, ut aut reiciendis voluptatibus maiores alias consequatur aut perferendis doloribus asperiores repellat."}


@app.get("/status")
def status():
    return 'OK'


if __name__ == "__main__":
    app_port = int(os.getenv('APP_PORT', 9000))
    uvicorn.run(app, host="0.0.0.0", port=app_port)

To create and publish an updated version of the Docker image:

docker-compose build
docker-compose push

And restart the deployment:

kubectl rollout restart deployment demo-app

After that, you can change the path in your liveness probe from /hello to the new location /status and apply the updated manifest.

But what if your application requires a slow bootstrap process and is not ready to start serving requests right away? To cover that scenario, you need a readiness probe:

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  selector:
    matchLabels:
      app: demo-app
  replicas: 2
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: frutik777/k8s-example:latest
          imagePullPolicy: Always
          resources:
            requests:
              cpu: "150m"
              memory: "110Mi"
            limits:
              cpu: "150m"
              memory: "210Mi"
          livenessProbe:
            httpGet:
              path: /status
              port: 7777
            initialDelaySeconds: 1
          readinessProbe:
            httpGet:
              path: /status
              port: 7777
            initialDelaySeconds: 180
          env:
            - name: SOME_ENT_VAR1
              value: "qqqqqqqq"
          envFrom:
            - configMapRef:
                name: demo-app-configmap
      restartPolicy: Always
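To see the readiness probe in action, watch the pods after applying the manifest: each pod stays at READY 0/1 until its readiness probe succeeds (with the 180-second initial delay above, that takes roughly three minutes), and only ready pods receive traffic from the service. For example (the file name is again an assumption):

kubectl apply -f deployment.yaml

kubectl get pods -l app=demo-app -w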

So, these are the main points covered in this article:

  • We’ve exposed our application to the outside world
  • We’ve added gzip compression to its responses (without changing the application)
  • We’ve improved the stability of the application with liveness and readiness checks.
