A step-by-step introduction to running your app in Kubernetes. For Generation X, but not only. Part III

Andrew Kornilov
Jul 3, 2022


I have been working in the software engineering industry for more than two decades. In this series of blog posts, I will try to do my best and turn a simple API service from an old-fashioned application running on my laptop into a modern microservice running in the Kubernetes cluster. Also, we will try to make the app less old-fashioned (aka, cloud-native). Step by step. One step at a time.

Part III. Add some cool kubes…


Previous recipe

You’ll need a Kubernetes environment. A simple one is bundled with recent versions of Docker Desktop; you just have to enable it.

Or you can use Minikube.

When the Kubernetes cluster is up and running, you can use the kubectl command-line client to interact with it.

For example, to check that nothing is running in your cluster currently:
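A couple of kubectl commands do the job; the exact output on a fresh cluster (only the built-in kubernetes service listed) is what you would typically see:

```shell
kubectl get pods   # "No resources found in default namespace." on a fresh cluster
kubectl get all    # lists everything; only the built-in "kubernetes" service remains
```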

The smallest execution unit in Kubernetes is a pod. You can create a text (YAML) file, pod.yml, with a definition (called a manifest) of your first pod.

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  labels:
    site: demo-app
spec:
  containers:
    - name: demo-app
      image: frutik777/k8s-example:latest

This manifest defines a pod with a single container (a pod can have more than one, but we will discuss that later) that uses the Docker image of our application, created in the previous articles and hosted on Docker Hub. You can apply your manifest and check what’s running in the cluster.

kubectl apply -f pod.yml
kubectl get pods

So, now you have an instance of your application, running in the Kubernetes cluster. You can even check more details about your pod:

kubectl describe pod demo-pod

Currently, your application is running inside the cluster and is not accessible from the outside world (your machine is that outside world from the cluster’s point of view). Let’s go inside the cluster and verify that the application is alive (responds on its default port) but not accessible from your machine (exit the container shell with ctrl+d, or the exit command, before executing the second check).

kubectl exec demo-pod -it -- /bin/bash
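Inside the container you can probe the application locally; the port below is an assumption — replace 5000 with whatever port your application actually listens on:

```shell
# Inside the pod's shell (after kubectl exec demo-pod -it -- /bin/bash);
# the port 5000 is an assumption, use your app's real default port:
curl -s http://localhost:5000/

# Back on your machine (after exiting the container with ctrl+d),
# the same request fails because the pod's port is not exposed:
curl -s --max-time 3 http://localhost:5000/ || echo "not reachable from the host"
```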

That’s it for pods. In real setups, bare pods are not very useful on their own. So, let’s delete our pod and try to set up something more practical.

kubectl delete pod demo-pod

A more practical way to run an application in Kubernetes is a deployment. A deployment is the way to start and manage multiple instances (replicas) of the same pod. We will get back to replicas later; for now, let’s create a deployment running a single instance of the pod.

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  selector:
    matchLabels:
      app: demo-app
  replicas: 1
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: frutik777/k8s-example:latest
          imagePullPolicy: Always
      restartPolicy: Always

Apply it as usual

kubectl apply -f simple-deployment.yml

Now you can check your deployment and pod
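The usual get commands show both resources; the pod name suffix below is illustrative — yours will differ:

```shell
kubectl get deployments
kubectl get pods
# the pod name now carries a generated suffix, e.g. demo-app-6d4b75cb6d-xxxxx
```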

You can go inside your pod and check if the application is running properly (see the exec command above; just adjust the name of the pod to the real value).

After that, you can manage your application at the deployment level, for example by increasing and, later, decreasing the number of replicas

kubectl scale deployment demo-app --replicas 3
kubectl scale deployment demo-app --replicas 1

Unfortunately, all the replica instances of the pod act as independent applications, and the pod names have random suffixes. We will implement a few improvements in our setup and then address this issue.

As a first improvement, we will configure the application through environment variables.

The simplest way to do that is to define the required variables in the deployment manifest

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  selector:
    matchLabels:
      app: demo-app
  replicas: 1
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: frutik777/k8s-example:latest
          imagePullPolicy: Always
          env:
            - name: APP_PORT
              value: "7001"
      restartPolicy: Always

After applying the updated manifest, our application no longer runs on its default port but on the port defined in the environment variable.
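You can confirm that the variable actually reached the container (kubectl exec accepts a deploy/<name> target, so you don’t need the generated pod name):

```shell
kubectl exec deploy/demo-app -- env | grep APP_PORT
# APP_PORT=7001
```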

But this is not the best way to manage environment variables in the Kubernetes ecosystem. Let’s create a ConfigMap

kind: ConfigMap
apiVersion: v1
metadata:
  name: demo-app-configmap
data:
  APP_PORT: '7777'
  SOME_ENV_VAR2: 'SOME VALUE'

And adjust/apply our deployment

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  selector:
    matchLabels:
      app: demo-app
  replicas: 1
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: frutik777/k8s-example:latest
          imagePullPolicy: Always
          env:
            - name: SOME_ENV_VAR1
              value: "qqqqqqqq"
          envFrom:
            - configMapRef:
                name: demo-app-configmap
      restartPolicy: Always

You can mix both approaches to environment management; note that variables defined directly under env take precedence over those coming from envFrom when keys clash.

The second improvement is assigning resource requests and limits to (every) pod

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  selector:
    matchLabels:
      app: demo-app
  replicas: 1
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: frutik777/k8s-example:latest
          imagePullPolicy: Always
          resources:
            requests:
              cpu: "550m"
              memory: "1100Mi"
            limits:
              cpu: "650m"
              memory: "1100Mi"
          env:
            - name: SOME_ENV_VAR1
              value: "qqqqqqqq"
          envFrom:
            - configMapRef:
                name: demo-app-configmap
      restartPolicy: Always

After applying the updated manifest, you can check whether the limits were taken into account

kubectl describe pod demo-app-b44b55995-d4p6s

The Kubernetes scheduler will make sure that the requested resources are available for a pod. Otherwise, the pod will not be started (we will talk about that case later).

And now, after all the improvements to our deployment, let’s make our replicas act as a single application. We need a service resource for that.

First of all, let’s update the deployment’s manifest to start 2 replicas of the pod.

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  selector:
    matchLabels:
      app: demo-app
  replicas: 2
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: frutik777/k8s-example:latest
          imagePullPolicy: Always
          resources:
            requests:
              cpu: "550m"
              memory: "1100Mi"
            limits:
              cpu: "650m"
              memory: "1100Mi"
          env:
            - name: SOME_ENV_VAR1
              value: "qqqqqqqq"
          envFrom:
            - configMapRef:
                name: demo-app-configmap
      restartPolicy: Always

Apply the updated manifest and create a manifest for the service

apiVersion: v1
kind: Service
metadata:
  name: demo-app
spec:
  ports:
    - port: 80
      targetPort: 7777
      protocol: TCP
      name: http
  selector:
    app: demo-app

  1. “spec.selector.app: demo-app” must match the “app: demo-app” label from the deployment manifest.
  2. “targetPort: 7777” must match the port your application listens on inside the pods (APP_PORT from the ConfigMap).

Apply it and check the result

kubectl apply -f service.yml
kubectl get svc

Kubernetes provides a built-in DNS server, so you can access your service by the hostname <service-name>.<namespace>. We will discuss namespaces later; since we didn’t specify a namespace, the default namespace, named “default”, was used. So, let’s try to call the service of our application. You should go inside any of your running pods.
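A minimal check, assuming curl is available in the container image (the pod name is whatever kubectl get pods shows):

```shell
kubectl exec -it <any-demo-app-pod> -- /bin/bash
# inside the container: the service listens on port 80, as defined in the manifest above
curl -s http://demo-app.default/
# repeat the request a few times; it is load-balanced across the replicas
```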

Our service acts as a load balancer and reverse proxy for all replicas of our application.

As a result, we have:

  • A dockerized application.
  • Running in a Kubernetes cluster.
  • Configuration from environment variables.
  • Each instance of the application has its own defined and guaranteed resources.
  • Multiple instances of the application running behind a load balancer, so if one of the instances crashes, the service continues to serve requests.

Next chapter
