Self Hosting Part V - A Self-Provisioned Edge Load Balancer
22 Jun 2023

Let’s set up a self-provisioned edge load balancer for the Kubernetes cluster using NGINX and a Raspberry Pi.
The Kubernetes cluster is up and running, and I’m really impressed with the kubespray deployment process. Let’s tackle the next challenge.
When you create a LoadBalancer service in a cloud provider like AKS, EKS or GKE, a network load balancer is provisioned as a single entry point connecting clients to the applications running in the cluster. All you need to do is specify a few parameters, such as a static public IP address, port and FQDN, and you’re done.
Kubernetes in a bare-metal environment doesn’t provide such load balancers out of the box. Some possible solutions are described in the Bare-metal considerations section of the NGINX Ingress Controller documentation.
The self-provisioned edge approach prevents clients from accessing the cluster nodes directly, and if at some point I want to expose services to the Internet, I can expose just the edge device.
So the goal is to end up with something like this:
(Diagram: clients reach the edge load balancer, which forwards traffic to the Ingress inside the k8s cluster; the Ingress applies routing rules to a Service backed by a Deployment with three Pods.)
Prerequisites
- routing on the Raspberry Pi;
- an Ingress Controller in the Kubernetes cluster.
Raspberry Pi routing
The Raspberry Pi (see diagram below) must route HTTP traffic between its network interfaces wlan0 (network 192.168.1.0/24) and eth0 (network 192.168.10.0/24), e.g.:
sudo ufw route allow in on wlan0 out on eth0 \
to 192.168.10.0/24 port 80 proto tcp
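The ufw route rule alone isn’t enough if kernel packet forwarding is disabled. Assuming a Debian-based Raspberry Pi OS with ufw, forwarding can be enabled persistently via sysctl (either in /etc/sysctl.conf or in ufw’s own /etc/ufw/sysctl.conf):

```
# /etc/sysctl.conf (apply afterwards with: sudo sysctl -p)
net.ipv4.ip_forward=1
```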
Ingress Controller
We’re going to use the NGINX Ingress Controller, but don’t confuse it with the NGINX edge load balancer, which we’ll configure later on the Raspberry Pi. The Ingress Controller is deployed and runs inside the Kubernetes cluster.
Deploy the NGINX Ingress Controller in the Kubernetes cluster:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.8.0/deploy/static/provider/baremetal/deploy.yaml
Check if the installation succeeded and the ingress controller’s version is as expected:
POD_NAMESPACE=ingress-nginx
POD_NAME=$(kubectl get pods -n $POD_NAMESPACE -l app.kubernetes.io/name=ingress-nginx --field-selector=status.phase=Running -o name)
kubectl exec $POD_NAME -n $POD_NAMESPACE -- /nginx-ingress-controller --version
Get the ingress controller’s NodePort. In this case, each Kubernetes node proxies the same port number (NodePort 30535 for http, and 32451 for https) to the NGINX ingress controller.
kubectl get service ingress-nginx-controller --namespace=ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller NodePort 10.233.43.219 <none> 80:30535/TCP,443:32451/TCP 12m
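If you want to grab the HTTP NodePort programmatically, one option is to parse it out of the PORT(S) column. The snippet below runs against the sample output above; with a live cluster you would capture the command’s output instead:

```shell
# Sample line from `kubectl get service ingress-nginx-controller` above;
# in practice, capture the live command's output instead.
svc_line='ingress-nginx-controller   NodePort   10.233.43.219   <none>   80:30535/TCP,443:32451/TCP   12m'

# Extract the NodePort mapped to service port 80 (the http one)
http_nodeport=$(echo "$svc_line" | sed -n 's/.*80:\([0-9]*\)\/TCP.*/\1/p')
echo "$http_nodeport"
```

With a live cluster, a jsonpath query such as `kubectl get service ingress-nginx-controller -n ingress-nginx -o jsonpath='{.spec.ports[?(@.name=="http")].nodePort}'` should give the same result without text parsing (assuming the port is named `http`, as in the bare-metal manifest).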
Kubernetes demo app
Before we go ahead with the NGINX installation and configuration on the Raspberry Pi, let’s deploy an application on Kubernetes. It will be useful for testing the whole scenario at the end.
Create a Namespace.
apiVersion: v1
kind: Namespace
metadata:
  name: lb-demo
  labels:
    name: lb-demo
Create a Deployment for a website that shows the pod’s hostname on its index.html page.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment
  namespace: lb-demo
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        lifecycle:
          postStart:
            exec:
              command: ["/bin/sh", "-c", "echo \"<html><body><p>hostname is: $(hostname)</p></body></html>\" > /usr/share/nginx/html/index.html"]
        ports:
        - containerPort: 80
Create a Service to expose the Deployment. Without an explicit type, a ClusterIP service is created by default.
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
  namespace: lb-demo
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
Create the Ingress with the rules to access the service. It will receive requests at http://IP:PORT/webapp and route them internally to http://webapp-service/ (the rewrite is done with the nginx.ingress.kubernetes.io/rewrite-target: / annotation).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webapp-ingress
  namespace: lb-demo
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /webapp
        pathType: Prefix
        backend:
          service:
            name: webapp-service
            port:
              number: 80
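With the manifests above saved to files (filenames here are placeholders) and applied, the app can already be sanity-checked through the NodePort, before the edge load balancer exists. The snippet below just composes the test URL from the example node IP and NodePort used in this walkthrough:

```shell
# Example values from this walkthrough; any cluster node IP works
node_ip=192.168.10.10
http_nodeport=30535
url="http://${node_ip}:${http_nodeport}/webapp"
echo "$url"

# Against the live cluster (not run here):
#   kubectl apply -f namespace.yaml -f deployment.yaml -f service.yaml -f ingress.yaml
#   curl "$url"
```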
NGINX Edge Load Balancer Installation
Now we will install and set up NGINX on the Raspberry Pi to act as an external self-provisioned edge load balancer for the Kubernetes cluster.
Compiling NGINX from source provides more flexibility to add modules (including third-party ones) and to patch security vulnerabilities, so this is the chosen installation method.
Install the dependencies.
sudo apt-get install libpcre3 libpcre3-dev zlib1g zlib1g-dev libssl-dev
Visit the download section of http://nginx.org/ and download the desired version (here, v1.24).
wget http://nginx.org/download/nginx-1.24.0.tar.gz
tar -zxvf nginx-1.24.0.tar.gz
cd nginx-1.24.0
Configure, compile and install. Check the docs for the full list of configuration parameters.
./configure --sbin-path=/usr/bin --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --with-pcre --pid-path=/var/run/nginx.pid --with-http_ssl_module
make
sudo make install
Check that the configuration was placed correctly and the version is as expected.
ls -l /etc/nginx
nginx -V
To run nginx as a systemd service, edit the following unit file with the parameters used for ./configure before the installation and save it as /lib/systemd/system/nginx.service.
[Unit]
Description=The NGINX HTTP and reverse proxy server
After=syslog.target network-online.target remote-fs.target nss-lookup.target
Wants=network-online.target
[Service]
Type=forking
PIDFile=/var/run/nginx.pid
ExecStartPre=/usr/bin/nginx -t
ExecStart=/usr/bin/nginx
ExecReload=/usr/bin/nginx -s reload
ExecStop=/bin/kill -s QUIT $MAINPID
PrivateTmp=true
[Install]
WantedBy=multi-user.target
Start nginx as a service and check its status (to start it automatically at boot, also run sudo systemctl enable nginx).
sudo systemctl start nginx
sudo systemctl status nginx
NGINX bare-metal load balancer configuration
The configuration file is /etc/nginx/nginx.conf
, and this is the relevant part.
http {
    upstream k8s_http {
        server 192.168.10.10:30535;
        server 192.168.10.20:30535;
        server 192.168.10.30:30535;
    }

    server {
        location /lb-demo {
            proxy_pass http://k8s_http/webapp;
        }
    }
}
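Note that nginx.conf must be a complete file: besides the http block shown above, a valid configuration needs at least an events block, e.g. with defaults:

```
# required in nginx.conf alongside the http block; empty means defaults
events {}
```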
Requests coming through http://<RASPBERRY-IP>/lb-demo are redirected to the ingress controller in the Kubernetes cluster, which exposes NodePort 30535 for http requests.
The default load balancing method is round-robin; check the docs if you want to use a different one.
If a request to one of the backend nodes in the upstream k8s_http section fails, it is automatically forwarded to the next available node, and so on.
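This passive failover behaviour, as well as the balancing method, can be tuned with standard NGINX upstream directives. The values below are illustrative, not part of the original setup:

```
upstream k8s_http {
    least_conn;   # pick the backend with the fewest active connections
    # mark a node as unavailable for 10s after 3 failed attempts
    server 192.168.10.10:30535 max_fails=3 fail_timeout=10s;
    server 192.168.10.20:30535 max_fails=3 fail_timeout=10s;
    server 192.168.10.30:30535 max_fails=3 fail_timeout=10s;
}
```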
Let’s give it a try!
$ curl 192.168.2.200/lb-demo
<html><body><p>hostname is: webapp-deployment-5d9c556dd7-b5tfq</p></body></html>
$ curl 192.168.2.200/lb-demo
<html><body><p>hostname is: webapp-deployment-5d9c556dd7-822tc</p></body></html>
$ curl 192.168.2.200/lb-demo
<html><body><p>hostname is: webapp-deployment-5d9c556dd7-822tc</p></body></html>
$ curl 192.168.2.200/lb-demo
<html><body><p>hostname is: webapp-deployment-5d9c556dd7-w7ff4</p></body></html>