This post details my experience deploying a simple Node.js "Hello World" application across development, staging, and production environments using Kubernetes, Helm, and GitHub Actions. While the application itself is basic, the process highlights key concepts applicable to more complex projects. I'll share the challenges I faced, the decisions I made, and suggestions for future improvements.
## Prerequisites

Before starting, I ensured I had:

- A Kubernetes cluster (I used Minikube for local development; for production, a cloud-based solution would be preferable).
- `kubectl` installed and configured correctly. This took a bit of troubleshooting initially, ensuring the right context was set.
- Helm installed and functioning.
- A GitHub account.
## Step 1: The Application – A Simple Node.js Server
I created a minimal Node.js server (`server.js`):
```javascript
const express = require('express');
const app = express();
const port = process.env.PORT || 3000;

app.get('/', (req, res) => {
  res.send('Hello from ' + process.env.NODE_ENV + '!');
});

app.listen(port, () => {
  console.log(`Server listening on port ${port}`);
});
```
And its corresponding `Dockerfile`:
```dockerfile
FROM node:18
WORKDIR /app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD [ "node", "server.js" ]
```
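Incidentally, although the image is named `multi-stage-app`, this Dockerfile is single-stage. A true multi-stage build would shrink the final image by installing dependencies in a throwaway build stage and copying only the runtime artifacts into a slim base. A sketch (the stage names and `node:18-slim` base are my choices, not a requirement):

```dockerfile
# Stage 1: install production dependencies in a full build image
FROM node:18 AS deps
WORKDIR /app
COPY package*.json ./
RUN npm install --omit=dev

# Stage 2: copy only what the runtime needs into a slim image
FROM node:18-slim
WORKDIR /app
COPY --from=deps /app/node_modules ./node_modules
COPY server.js package.json ./
EXPOSE 3000
CMD [ "node", "server.js" ]
```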
Building and pushing the image to Docker Hub was straightforward:
```shell
docker build -t <my-docker-hub-username>/multi-stage-app:dev .
docker push <my-docker-hub-username>/multi-stage-app:dev
```
## Step 2: Defining Environments and Namespaces
Creating separate namespaces in Kubernetes was crucial for isolation:
```shell
kubectl create namespace dev
kubectl create namespace staging
kubectl create namespace production
```
## Step 3: Creating and Configuring the Helm Chart
I started with `helm create multi-stage-app` and then modified the `templates/deployment.yaml` and `templates/service.yaml` files. The `deployment.yaml` file was particularly important for setting up environment variables and specifying the image. I made sure to use `{{ .Values.env.NODE_ENV }}` to dynamically set the `NODE_ENV` variable based on the values file used during deployment.
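For reference, the relevant part of the container spec in `templates/deployment.yaml` looked roughly like this (a sketch from my setup, not the full chart the `helm create` scaffold generates):

```yaml
# templates/deployment.yaml (excerpt)
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          ports:
            - containerPort: 3000
          env:
            - name: NODE_ENV
              value: {{ .Values.env.NODE_ENV | quote }}
```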
## Step 4: Helm Values Configuration
This is where the magic of Helm's flexibility came in. I created separate values files: `values.yaml`, `values-dev.yaml`, `values-staging.yaml`, and `values-prod.yaml`. The `values-staging.yaml` and `values-prod.yaml` files allowed me to easily adjust replica counts and other parameters based on the environment. I chose a `LoadBalancer` service type for `dev` for easy local access, and `ClusterIP` for `staging` and `production` to minimize costs.
## Step 5: Helm Deployments
Deploying with Helm was surprisingly easy:
```shell
helm install dev-release ./multi-stage-app -n dev -f values-dev.yaml
helm install staging-release ./multi-stage-app -n staging -f values-staging.yaml
helm install prod-release ./multi-stage-app -n production -f values-prod.yaml
```
## Step 6: Automating with CI/CD (GitHub Actions)
The GitHub Actions workflow (`deploy.yml`) was the most challenging part. I initially struggled with the manual approval step for production deployments. The solution was to use conditional logic within the workflow.
```yaml
# ... (workflow definition) ...
  deploy:
    # ...
    needs: manual-approval   # job-level dependency: runs only after the approval job succeeds
    steps:
      - name: Deploy to Production (Manual Approval)
        if: github.event_name == 'push' && github.ref == 'refs/heads/main'
        run: |
          helm upgrade --install prod-release ./multi-stage-app -n production -f values-prod.yaml

  manual-approval:
    # ...
```

Note that `needs:` belongs at the job level, not inside a step; my first attempt had it nested under the step, which GitHub Actions rejects.
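Worth noting: GitHub Actions can also gate a job on manual approval natively via environment protection rules, which avoids a hand-rolled approval job entirely. A sketch, assuming an environment named `production` has been created in the repository settings with required reviewers:

```yaml
  deploy-production:
    runs-on: ubuntu-latest
    # The job pauses here until a required reviewer approves,
    # per the environment's protection rules.
    environment: production
    if: github.ref == 'refs/heads/main'
    steps:
      - uses: actions/checkout@v4
      - name: Deploy with Helm
        run: helm upgrade --install prod-release ./multi-stage-app -n production -f values-prod.yaml
```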
## Suggestions for Improvement
- More Robust Health Checks: The current deployment lacks robust health checks. Integrating a liveness and readiness probe within the Kubernetes deployment definition would significantly improve reliability.
- Secrets Management: Hardcoding sensitive information (like database credentials) directly in the values files is insecure. A secrets management solution (like Kubernetes Secrets or HashiCorp Vault) should be implemented.
- Advanced Ingress Controller: For production, an ingress controller (like NGINX Ingress or Traefik) would provide features like TLS termination and host- or path-based routing.
- Automated Rollbacks: The CI/CD pipeline should include automated rollbacks in case of deployment failures.
- Monitoring and Logging: Integrating monitoring and logging tools (like Prometheus and Grafana) is crucial for observability and troubleshooting.
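As a starting point for the first suggestion, liveness and readiness probes could be added to the container spec in `templates/deployment.yaml` along these lines (probing `/` only works because the root route exists; a dedicated `/healthz` endpoint would be better, and the timings below are guesses to tune):

```yaml
          livenessProbe:
            httpGet:
              path: /
              port: 3000
            initialDelaySeconds: 10
            periodSeconds: 15
          readinessProbe:
            httpGet:
              path: /
              port: 3000
            initialDelaySeconds: 5
            periodSeconds: 10
```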
Remember to replace `<my-docker-hub-username>` with your actual Docker Hub username.