Micro-Frontend in Kubernetes: Utilizing NGINX Pods as Web Servers for React Applications with NGINX Ingress Controller
Before we begin, I would like to give credit to Stefan Moraru for the demo project that I use later in this article. You can visit his project at this link.
Overview
The concept of micro frontends emerged as a solution to the limitations posed by traditional monolithic frontend architectures, particularly in the context of increasingly complex web applications.
Micro frontends extend the principles of microservices — a well-established architectural pattern that divides backend services into smaller, independently deployable units — to the frontend layer of applications. Just as microservices allow backend components to be developed, deployed, and scaled independently, micro frontends enable frontend components to be treated as autonomous modules. This modularity allows teams to work on different parts of an application simultaneously without needing to coordinate releases with other teams, thus enhancing agility and reducing bottlenecks in development.
Benefits of Micro Frontends
The adoption of micro frontends offers several advantages:
- Flexibility: Teams can use different technologies or frameworks for different micro frontends, so development is not locked into a single stack.
- Improved Maintenance: Smaller codebases are easier to manage and maintain, reducing the complexity associated with large monolithic applications.
- Enhanced User Experience: By allowing for more frequent updates and iterations on specific features without affecting others, micro frontends can lead to a better overall user experience.
Single Nginx Pod vs Multiple Nginx Pods
Deploying React components in a Kubernetes environment can be approached in two primary ways: using a single Nginx pod to serve all components or deploying multiple pods, each serving different components. Each approach has its own advantages and disadvantages.
Pros and Cons of Deploying All React Components in One Nginx Pod
Pros
- Simplicity: Managing a single pod can simplify deployment and configuration. You only need to handle one Nginx configuration, which can reduce complexity in routing and service management.
- Resource Efficiency: A single pod may use fewer resources than multiple pods, as there is less overhead from Kubernetes managing multiple instances. This can be beneficial in scenarios where resource constraints are a concern.
- Reduced Latency: Since all components are served from the same pod, inter-component communication can be faster due to shared networking and reduced latency compared to communicating across multiple pods.
Cons
- Scalability Limitations: A single pod may become a bottleneck as traffic increases, limiting the ability to scale effectively. If the pod fails, all components go down together, impacting availability.
- Complex Configuration Management: As the application grows, managing a single Nginx configuration for multiple components can become complex. Changes or updates may require careful coordination to avoid service disruptions.
- Resource Contention: All components share the same resources (CPU, memory), which can lead to contention issues if one component consumes excessive resources, potentially affecting the performance of others.
Pros and Cons of Deploying Multiple Pods
Pros
- Improved Scalability: Each component can be scaled independently based on its specific needs. This allows for better resource allocation and handling of varying loads across different parts of the application.
- Enhanced Fault Isolation: If one pod fails, others remain operational. This improves overall application resilience and minimizes downtime for users.
- Simpler Configuration Management: Each pod can have its own configuration tailored to its specific component, making it easier to manage changes and updates without affecting the entire application.
Cons
- Increased Overhead: More pods mean more overhead in terms of resource usage for Kubernetes management, which could lead to higher costs if not managed properly.
- Complex Networking: Managing network communication between multiple pods can introduce complexity, requiring additional configurations such as services or ingress controllers to route traffic correctly.
- Potential Latency Issues: Communication between different pods may introduce latency compared to a single-pod setup, since separate pods do not share a network namespace.
In conclusion, the choice between deploying all React components in one Nginx pod versus multiple pods should be guided by the specific requirements of your application, including factors like expected load, resource availability, and operational complexity.
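In the multi-pod setup, the resource-contention and independent-scaling points above can be made concrete with per-container resource settings. Below is a minimal sketch of the `resources` stanza a Deployment's container could carry; the values are placeholders, not recommendations:

```yaml
# Per-container resource requests/limits (placeholder values, tune per component)
resources:
  requests:
    cpu: 100m        # scheduling guarantee for this micro frontend
    memory: 128Mi
  limits:
    cpu: 250m        # hard ceiling so one component cannot starve the others
    memory: 256Mi
```

With one pod per component, each Deployment gets its own stanza like this; in the single-pod setup, all components share one budget.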
For demo purposes I have used a React project from GitHub, and I have created and changed some files in the project to deploy it onto Kubernetes, so you can refer to my project directly on GitHub.
Code Changes
Before proceeding, please keep in mind that Shell Service is our primary service. All configurations have been tailored to support this.
I have updated the URLs and ports for all services in the following files:
- ‘micropods.config.ts’ located in the main folder
- ‘rs.config.ts’ located in each service folder
Feel free to modify these configurations further according to your specific requirements.
Note: Use http instead of https locally. (I have exposed the demo publicly with an SSL certificate for testing.)
Nginx Configuration
I have added an nginx.conf to every service, with caching disabled.
Note: Only the location block changes per service; everything else remains the same.
# This is complete nginx.conf file (used in shell service)
server {
    listen 80;
    server_name localhost;

    gzip on;
    gzip_types text/plain text/css text/html application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    # Handle main application routes
    location / {
        alias /usr/share/nginx/html/;
        include /etc/nginx/mime.types;
        try_files $uri $uri/ /index.html;
        # Don't cache files
        add_header Cache-Control "no-store, no-cache, must-revalidate";
    }

    # Handle 404 errors
    error_page 404 /index.html;

    # Handle server errors
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
        internal;
    }
}
For the other services, change only the location block as shown below.
# for other services
location /dashboard {
    alias /usr/share/nginx/html/;
    include /etc/nginx/mime.types;
    try_files $uri $uri/ /index.html;
    # Don't cache HTML files
    add_header Cache-Control "no-store, no-cache, must-revalidate";
}
#---------------------
location /invoices {
    alias /usr/share/nginx/html/;
    include /etc/nginx/mime.types;
    try_files $uri $uri/ /index.html;
    # Don't cache HTML files
    add_header Cache-Control "no-store, no-cache, must-revalidate";
}
#---------------------
location /server {
    alias /usr/share/nginx/html/;
    include /etc/nginx/mime.types;
    try_files $uri $uri/ /index.html;
    # Don't cache HTML files
    add_header Cache-Control "no-store, no-cache, must-revalidate";
}
#---------------------
location /ui {
    alias /usr/share/nginx/html/;
    include /etc/nginx/mime.types;
    try_files $uri $uri/ /index.html;
    # Don't cache HTML files
    add_header Cache-Control "no-store, no-cache, must-revalidate";
}
Docker Configuration
I have created a Dockerfile to build the Docker images.
FROM iaminci/node:18.20.4-slim AS build
ARG SERVICE_NAME
WORKDIR /app
COPY . .
RUN pnpm install --force
RUN pnpm --filter=${SERVICE_NAME} build dev
FROM nginx:alpine AS production
ARG SERVICE_NAME
COPY --from=build /app/pods/${SERVICE_NAME}/dist /usr/share/nginx/html/
COPY --from=build /app/pods/${SERVICE_NAME}/nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
# Build an image per service, e.g. SERVICE_NAME=shell
docker build --build-arg SERVICE_NAME=<service_name> -t <docker_image>:<docker_tag> .
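Since the same Dockerfile serves every service, the per-service builds can be scripted. A dry-run sketch that prints the build command for each service; the service names and image repository (`myrepo/mf-*`) are assumptions to match to the folders under pods/, and removing the leading `echo` performs the actual builds:

```shell
# Dry run: print the docker build command for each micro frontend.
# Service names and the "myrepo" image prefix are assumptions -- adjust them.
# Remove the leading "echo" to actually build the images.
for SERVICE in shell dashboard invoices server ui; do
  echo docker build --build-arg SERVICE_NAME="$SERVICE" \
    -t "myrepo/mf-$SERVICE:latest" .
done
```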
Kubernetes Configurations
I have also added Kubernetes manifests inside the k8s-manifests folder in the project.
Note: Update the image field in all manifest files to point to your own images.
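As an illustration of the general shape, a Deployment plus Service for one micro frontend looks roughly like this; the names and image placeholder are assumptions, and the actual manifests live in the k8s-manifests folder:

```yaml
# Illustrative manifest for one micro frontend (names and image are assumptions)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shell
spec:
  replicas: 1
  selector:
    matchLabels:
      app: shell
  template:
    metadata:
      labels:
        app: shell
    spec:
      containers:
        - name: shell
          image: <your_docker_image>:<your_docker_tag>  # change to your image
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: shell
spec:
  selector:
    app: shell
  ports:
    - port: 80
      targetPort: 80
```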
K3D Configuration (Optional)
You can run Kubernetes on your local system to test these changes using minikube, kind, or k3d. Among these, I like to use k3d.
Docker is the only prerequisite; to install k3d, just run one of the commands below.
You can use either wget or curl to download and install k3d.
# wget:
wget -q -O - https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash
# curl:
curl -s https://raw.githubusercontent.com/k3d-io/k3d/main/install.sh | bash
Create a config.yaml with the configuration below, then run the cluster-create command that follows.
apiVersion: k3d.io/v1alpha5
kind: Simple
metadata:
  name: k3d
servers: 1
agents: 2
kubeAPI:
  hostIP: "0.0.0.0"
  hostPort: "6445"
nodeFilters:
  - server:0
  - agent:*
options:
  k3d: # k3d runtime settings
    wait: true # wait for cluster to be usable before returning; same as `--wait` (default: true)
    timeout: "60s" # wait timeout before aborting; same as `--timeout 60s`
    disableLoadbalancer: false # same as `--no-lb`
    disableImageVolume: false # same as `--no-image-volume`
    disableRollback: true # same as `--no-rollback`
# Do not change this command
k3d cluster create k8s --config /path/to/your/config.yaml -p "80:80@loadbalancer" -p "443:443@loadbalancer"
# Run these commands to remove the default Traefik ingress controller that k3s ships with
kubectl delete job.batch/helm-install-traefik -n kube-system
kubectl delete job.batch/helm-install-traefik-crd -n kube-system
kubectl delete deployment.apps/traefik -n kube-system
kubectl delete service/traefik -n kube-system
Install NGINX Ingress Controller
To install the NGINX Ingress Controller, run the commands below.
# Install Helm by following the instructions at the link below
https://helm.sh/docs/intro/install/
# Add nginx ingress repo
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
# Install nginx ingress controller using helm
helm install ingress-nginx ingress-nginx/ingress-nginx --namespace kube-system --set controller.ingressClassResource.name=nginx
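Once the controller is running, an Ingress resource can route each path prefix to its service, mirroring the nginx location blocks from earlier. A sketch, where the service names and paths are assumptions to be matched to your manifests:

```yaml
# Illustrative Ingress routing path prefixes to micro frontend Services
# (service names and paths are assumptions; adjust to your manifests)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: micro-frontend
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /dashboard
            pathType: Prefix
            backend:
              service:
                name: dashboard
                port:
                  number: 80
          - path: /            # catch-all routed to the primary shell service
            pathType: Prefix
            backend:
              service:
                name: shell
                port:
                  number: 80
```

Additional paths such as /invoices, /server, and /ui would follow the same pattern, each pointing at its own Service.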
Deploying Kubernetes manifests
To deploy the application, use the commands below.
# Run the command below from the micro-frontend-demo directory to deploy everything at once.
kubectl apply -f k8s-manifests
# Run this command to check that everything is running.
kubectl get all
Result
Now open your app in the browser at your URL; you should see something similar.
I have also added these steps to the README file, which you can refer to.
If you have any questions, feel free to ask and I will try to answer as soon as possible.