Kubernetes 101: How to run it locally

Daniel S. Blanco
4 min read · Feb 22, 2024


In this post we are going to see the easiest way to create your own local Kubernetes cluster and, in doing so, get a first taste of this technology. In the development world it is increasingly necessary to have at least a basic knowledge of DevOps.

And that is what this post is about: getting a working notion of what certain concepts mean. Although the example is simple, and a couple of files will give us everything we need, we will need a bit more theory than usual to understand it all. The goal is to start a cluster with two instances of our microservice, each exposing a simple healthcheck, and to be able to test it from outside the cluster.

Let’s start with the basic concepts of the technology to be used:

  • Kubernetes: An open source platform to automate the deployment, scaling, and management of containerized applications.
  • Kind: A tool to run local Kubernetes clusters using Docker container “nodes”. Although it was born for testing Kubernetes itself, it works well for local deployments.
  • Kubectl: The command-line tool to manage your Kubernetes cluster.

So the first thing to do is to install Docker or Podman, as well as Kind and Kubectl, if we don’t have them yet. Before showing the cluster configuration, we must know a couple more concepts of the Kubernetes architecture. These are:

  • Node or worker: A worker machine that runs Kubernetes workloads. It can be a physical or virtual machine, and it executes applications in containers.
  • Control Plane: Manages the workers and Pods in the cluster.

Knowing this, we will be able to understand the cluster configuration file a little better. It is stored in a YAML file, as we can do with any other Kubernetes object.

apiVersion: kind.x-k8s.io/v1alpha4
kind: Cluster
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30000
    hostPort: 30000
    listenAddress: "0.0.0.0" # Optional, defaults to "0.0.0.0"
    protocol: tcp # Optional, defaults to tcp
- role: worker

In this file, we can see that we will have two nodes: the control-plane node and the worker node that will run our containers. We can declare as many workers as we want, within the limits of our machine. We also expose port 30000 from the cluster to the outside; 30000 is the first port in the default NodePort range (30000–32767), since lower ports cannot be used. We will see why later.

If we want to start our cluster we must execute this command:

kind create cluster --config kind-config.yaml --name kind-basic

And if we want to destroy it, this one:

kind delete cluster --name kind-basic

The next thing will be to prepare the configuration files of the Kubernetes objects. Although there are more, for now we only need to know about these:

  • Pods: The smallest deployable units that can be created and managed in Kubernetes. They are composed of a single container in the most common case, or of several.
  • Deployment: An object that describes how an application should be deployed and updated in the cluster.
  • Service: A method for exposing a network application that is running as one or more Pods in your cluster. It can describe the ports and load balancers associated with it.
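To make the first concept concrete: a Pod can also be declared on its own, without a Deployment. A minimal manifest would be a sketch like the following (the name and image here are illustrative, not part of our example):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod        # hypothetical name
  labels:
    app: example
spec:
  containers:
  - name: example          # single-container Pod, the common case
    image: nginx:1.25      # any container image
```

In practice we rarely create bare Pods like this; a Deployment, as we configure next, creates and replaces Pods for us.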

Knowing this, we will first configure a Deployment, which lets us indicate which application we want to deploy, its version, how many replicas to run in the cluster, and many other options. In this file we declare the name and label associated with our deployment and, more importantly, how many replicas we want and which image to use.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ms-k8s
  labels:
    app: ms-k8s
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ms-k8s
  template:
    metadata:
      labels:
        app: ms-k8s
    spec:
      containers:
      - name: ms-k8s
        imagePullPolicy: IfNotPresent
        image: deesebc/ms-k8s:1.0.0-SNAPSHOT

Finally, we must configure the Service, through which we will expose our application to the outside. Up to this point the configuration had no real difficulty beyond knowing how Kubernetes expects its objects to be defined. But for the Service we need to know what types exist and which one fits our example.

  • ClusterIP: The default option. It exposes the Service on an internal IP address of the cluster, associated with a port that is also internal.
  • NodePort: Exposes the associated Service on each node’s IP at a static port, allowing external access through that port.
  • LoadBalancer: Exposes the Service through a cloud provider’s load balancer. The option to use in production.
  • ExternalName: Works similarly to the other types, but instead of returning the IP associated with the Service, it returns a CNAME record with the indicated value.

Therefore, given that we want the application to be accessible from outside, and that this is an example deployed locally, the option we need is NodePort, associated with port 30000, the port we exposed earlier when creating the cluster.

apiVersion: v1
kind: Service
metadata:
  name: ms-k8s-service
spec:
  type: NodePort
  selector:
    app: ms-k8s
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
    nodePort: 30000

It is not necessary to keep both configurations in separate files; we can include them in the same file as long as we separate them with a line containing only three dashes (---). To deploy them to our cluster, we run:
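For reference, the combined file would be structured like this, with the full Deployment and Service bodies from above going where indicated:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ms-k8s
# ... rest of the Deployment shown above ...
---
apiVersion: v1
kind: Service
metadata:
  name: ms-k8s-service
# ... rest of the Service shown above ...
```

The `---` separator is standard YAML for multiple documents in one file, and kubectl applies each document as a separate object.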

kubectl apply -f configuration.yaml
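Before testing, we can check that everything came up correctly. The two Pods declared in the Deployment should appear as Running, and the Service should show node port 30000 (the exact Pod names will differ in your cluster, since Kubernetes generates a random suffix):

```shell
kubectl get pods -l app=ms-k8s
kubectl get service ms-k8s-service
```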

Now we only have to test it. To do so, we execute the following command:

curl --location 'http://localhost:30000/q/health/live'
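If everything is wired up, the endpoint should answer with HTTP 200 and a small JSON body. The /q/health/live path suggests the microservice is a Quarkus application, whose health extension typically responds with something along these lines (the exact checks listed depend on the application):

```json
{
  "status": "UP",
  "checks": []
}
```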

And that’s all, friends. I hope it has helped you to have a minimum knowledge of Kubernetes and how to perform a basic configuration.
