In this blog post, we are going to deploy a simple Dockerized Node.js application on Kubernetes using Helm. Before we move to the actual deployment process, we will get a brief intro to Docker, Kubernetes and Helm in order to understand why we are using these technologies in the first place.
What is Docker and why is it used?
Docker is a Platform as a Service (PaaS) product that was developed to build, deploy and run applications using the concept of containers. Instead of installing an application and all of its dependencies on every server where it needs to run, a container packages the application together with all the dependencies required to run it. The use of Docker has made code deployment and portability easier and more efficient.
Instead of virtualizing a whole operating system the way a virtual machine does, Docker containers share the Linux kernel of the host they run on and only need to ship the dependencies not already present on that host. Docker containers are commonly deployed with Kubernetes, which provides a solid and scalable deployment pipeline for products and services.
Although the concept of containers resolves numerous deployment issues that developers generally face when an application is deployed, it still doesn’t solve problems such as:
- Scaling applications efficiently and easily when traffic increases or decreases; our apps still need to be scaled up or down manually.
- Decreasing the downtime caused by application updates.
- Utilizing cloud services to their fullest potential in a cost-effective manner; manual deployment carries the risk of distributing resources inefficiently as traffic fluctuates.
To resolve the issues mentioned above in an efficient and scalable manner, we rely on a container orchestration service called Kubernetes. Similar functionality is also provided by Docker Swarm, but its features and automation don’t come close to Kubernetes. Docker Swarm needs human intervention every now and then, whereas Kubernetes resolves various issues on its own based on the config files provided at the time of deployment.
What is Kubernetes and why is it used?
Kubernetes (K8s) is an open-source system for automating the deployment, scaling, and management of containerized applications. Kubernetes is the most commonly used container orchestration service available in the market right now. It can be used to run billions of containers for your services, or even an entire production platform. Among other things, Kubernetes provides:
- Automation of load balancing between application components.
- Replication of microservices/application components to avoid a single point of failure.
- Sharing of storage across numerous services/applications using Volume Mounts and Persistent Volume Claims.
- Scaling of microservices/applications. Kubernetes can autoscale clusters without human intervention.
- Suppose you’ve deployed an app that can handle 10,000 concurrent requests, but it receives more public attention than anticipated. To avoid the drop in user experience caused by a deployment that can’t sustain the increased load, you could set an upper limit of 10,000 concurrent requests per cluster. If that threshold is exceeded, Kubernetes automatically spins up additional resources based on your configuration. Once the traffic subsides and dips below the threshold, it automatically scales back down to the original configuration.
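The autoscaling scenario above maps naturally onto a HorizontalPodAutoscaler. Here is a sketch of what such a resource could look like; the deployment name and the CPU target are assumptions for illustration, and in a Helm chart these values would normally be driven from values.yaml:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nodejs-sample-hpa        # assumed name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nodejs-sample          # assumed deployment name
  minReplicas: 2                 # baseline capacity
  maxReplicas: 10                # ceiling when traffic spikes
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out above 70% average CPU
```

Kubernetes continuously compares observed CPU utilization against the target and adds or removes pod replicas between the min and max bounds without human intervention.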
What is Helm?
Helm is a tool for Kubernetes that helps with the installation and management of applications. Helm uses a concept called charts: a Helm chart is simply a collection of files packaged in a particular way so that Helm can use them.
Helm has two parts:
Helm CLI: It runs on the command line, helps generate the YAML template files, and is used to interact with the Helm server (Tiller) running on the Kubernetes cluster.
Helm Tiller: Tiller is the actual Helm server running inside the Kubernetes cluster. Tiller is responsible for managing Helm releases and maintaining the Helm history for all our deployments. Maintaining a history for each release is important because, if something goes wrong with a new deployment, we can easily roll back to the last stable version using the release history. (Note: Tiller is part of Helm 2; in Helm 3 it was removed and the CLI talks to the cluster directly.)
Structure of Helm
The structure of any generated Helm package follows the format below.
nodejs-sample-chart/
  Chart.yaml
  values.yaml
  requirements.yaml
  .helmignore
  charts/
  templates/
Chart.yaml: stores metadata about the chart, including version information.
values.yaml: contains the configuration values that are interpolated inside the templates.
requirements.yaml: lists the dependencies of your application, e.g. a database required by the Node.js application.
charts/: this directory contains the packages of the dependencies we add in the requirements file.
templates/: the core directory that contains the actual templates of our application: application.yaml, deployment.yaml, ingress.yaml, service.yaml and serviceaccount.yaml.
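To make this concrete, a minimal Chart.yaml and values.yaml for this chart might look like the following sketch (the image repository, tag and port are placeholders you would replace with your own; the two files are shown together here for brevity but live separately in the chart):

```yaml
# Chart.yaml
apiVersion: v1                  # Helm 2 charts use apiVersion v1
name: nodejs-sample-chart
description: A Helm chart for a sample Node.js application
version: 0.1.0                  # chart version, bumped on every chart change
appVersion: "1.0"               # version of the application itself
---
# values.yaml
replicaCount: 1
image:
  repository: your-registry/nodejs-sample   # placeholder image name
  tag: latest
  pullPolicy: IfNotPresent
service:
  type: ClusterIP
  port: 3000                    # must match the port the Node.js app listens on
```

Every key under values.yaml becomes available to the templates as .Values, which is what makes the same chart reusable across environments.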
Working of Kubernetes deployment using Helm
At the time of deployment, the files in the templates/ directory interpolate the variables from the values.yaml file and use the functions from the _helpers.tpl file, and the resultant YAML is sent to the Helm server installed on the Kubernetes cluster, i.e. Tiller. Tiller creates a release of the chart, which in turn generates the services and deployments of our application. Deployments are where the pods of our application actually run, whereas services are the entry points for those deployments, generally connected to an external ingress service such as Nginx Ingress, Argo Ingress, etc.
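As an illustration of this interpolation, a fragment of a deployment template might reference values and helpers like this (a sketch; the exact keys depend on what your values.yaml defines):

```yaml
# templates/deployment.yaml (fragment)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "nodejs-sample-chart.fullname" . }}   # helper from _helpers.tpl
spec:
  replicas: {{ .Values.replicaCount }}          # interpolated from values.yaml
  template:
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          ports:
            - containerPort: 3000               # the port the Node.js app listens on
```

At render time Helm replaces each {{ … }} expression with the corresponding value, producing plain Kubernetes YAML.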
Steps to deploy a Node.js application using Helm on Kubernetes
Prerequisites
Create a Node.js application of your choice and create a Dockerfile for it. You can use a Node.js framework such as Express.js to make development easier and more streamlined.
Node.js Reference Link: https://nodejs.org/en/
Express.js Reference Link: https://expressjs.com/
Docker Reference Link: https://docs.docker.com/
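A Dockerfile for such an application could look like the sketch below, assuming the entry point is named server.js and the app listens on port 3000 (adjust both to match your own project):

```dockerfile
# Use a small official Node.js base image.
FROM node:18-alpine

WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY package*.json ./
RUN npm install --production

# Copy the application source.
COPY . .

# The port the Node.js server listens on (assumed to be 3000).
EXPOSE 3000

CMD ["node", "server.js"]
```

Copying package.json before the rest of the source means Docker only re-runs npm install when the dependencies change, which keeps rebuilds fast.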
Steps
- Go to the root directory of the Node.js application, which contains the package.json file and the Dockerfile, and run the following command:
helm create nodejs-sample-chart
This command creates a subdirectory named nodejs-sample-chart inside our root directory, at the same level as the Dockerfile and package.json file, containing the files and folders explained above in the “Structure of Helm” section.
- Create a requirements.yaml file inside the nodejs-sample-chart directory for additional resources that we want to run alongside our Node.js application, such as a database like MongoDB or PostgreSQL. Add the dependencies for our application, such as our database, in the following format:
dependencies:
  - name: YOUR_DEPENDENCY_NAME
    repository: LINK_TO_THE_REPOSITORY
    version: VERSION_OF_THE_DEPENDENCY
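For example, to pull in MongoDB as a dependency, the file might look like this (the repository URL points at the legacy stable chart repository, and the version number is illustrative; check the repository for current values):

```yaml
# requirements.yaml
dependencies:
  - name: mongodb
    repository: https://charts.helm.sh/stable   # legacy stable chart repository
    version: 7.8.10                             # illustrative version number
```

Pinning an explicit version here keeps deployments reproducible instead of silently picking up whatever the repository serves next.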
- After adding the dependencies to requirements.yaml, go to the nodejs-sample-chart directory and run the following command:
helm dependency update
All the dependencies mentioned in requirements.yaml will be downloaded as .tgz files into the charts/ directory.
- Create an application.yaml file in the templates/ directory which is located inside the nodejs-sample-chart directory.
Note: The _helpers.tpl file will also be present in this templates/ directory; it contains our helper labels and functions, as explained above in the section “Working of Kubernetes deployment using Helm”.
- The application.yaml file is the core file which contains information about the deployment and the service of the Node.js application.
application.yaml
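The original embedded file is not reproduced here; below is a representative sketch of such a combined deployment-plus-service manifest, following the naming conventions and the DATABASE_URL / DATABASE_HOST_LINK placeholders discussed in this post:

```yaml
# templates/application.yaml (sketch)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "nodejs-sample-chart.fullname" . }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ template "nodejs-sample-chart.fullname" . }}
  template:
    metadata:
      labels:
        app: {{ template "nodejs-sample-chart.fullname" . }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          ports:
            - containerPort: 3000
          env:
            - name: DATABASE_URL         # only needed if linking a database
              value: DATABASE_URL
            - name: DATABASE_HOST        # only needed if linking a database
              value: DATABASE_HOST_LINK
---
apiVersion: v1
kind: Service
metadata:
  name: {{ template "nodejs-sample-chart.fullname" . }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: 3000
  selector:
    app: {{ template "nodejs-sample-chart.fullname" . }}
```

The service’s selector must match the deployment’s pod labels, which is why both sides use the same fullname helper.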
_helpers.tpl
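As with application.yaml, the original file is not reproduced here; the fullname helper that helm create generates typically looks something like this sketch:

```yaml
{{/* _helpers.tpl (sketch): generate a unique, DNS-safe name per release */}}
{{- define "nodejs-sample-chart.fullname" -}}
{{- printf "%s-%s" .Release.Name .Chart.Name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
```

Because the release name is part of the result, every release gets a distinct resource name, which is what makes rollbacks via the release history possible.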
Key Points
- The {{ template “nodejs-sample-chart.fullname” . }} expression is used in the application.yaml file instead of a hardcoded name so that a unique name is generated for each release, letting us revert to an old stable release any time we want. This helper is created automatically inside the _helpers.tpl file when we run the command “helm create nodejs-sample-chart”. We can define similar labels for other resources, such as a database, inside the _helpers.tpl file.
- Replace the DATABASE_URL and DATABASE_HOST_LINK placeholders in the application.yaml file with the actual values. Providing values for DATABASE_URL and DATABASE_HOST_LINK is only necessary if you want to link a database deployment with the Node.js application. Otherwise you can just omit these lines (lines 19 and 20) completely from the configuration file and create a MongoDB deployment separately, which is the preferred approach.
- Run the following command to see the actual .yaml that would be generated for the deployment:
helm template .
This is essential for debugging our configuration files, letting us rectify any mistakes we made while creating them. Running this command DOES NOT deploy the application on the actual Kubernetes cluster.
- Run the following command to package the helm chart and actually release it on the Kubernetes cluster:
helm install .
Note: The application.yaml file we wrote looks quite similar to a regular Kubernetes YAML file. The major benefit is that, in a regular deployment without Helm, we would have to create each of the deployments and services manually by running kubectl commands, whereas with Helm all the services, deployments and the pods inside those deployments are created automatically as per the configuration file by running a single command: helm install .
- That’s it! Your Node.js application is deployed on your Kubernetes cluster by using Helm. You can play around and get additional information about deployment and services using kubectl commands.
Credits: kubectl Commands: https://kubernetes.io/docs/reference/kubectl/cheatsheet/