How to set up HAProxy for Redis Sentinel on Kubernetes

Yaniv Ben Hemo
5 min read · Apr 14, 2021


Apparently, it's not an easy task, but a very crucial one.

In order to achieve a true Active-Active experience during a Redis master failover, we need something in front of Redis to route our traffic towards the replica that took over the master role.

As an HAProxy newbie, it took me a while to figure out how to do it correctly with all the guides around, so I decided to create my own.

Thanks to @KNF (https://github.com/fkocik) for a smoother approach.

Quick architecture -

Architecture diagram credit: https://github.com/selcukusta/redis-sentinel-with-haproxy

This guide assumes you have already deployed Redis Sentinel on your Kubernetes cluster. I used the following Helm chart command -

$ helm install redis -n redis --set sentinel.enabled=true,sentinel.quorum=3,cluster.slaveCount=3 bitnami/redis
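
Before moving on, it is worth making sure the Sentinel pods are up and that the headless service exists, since its name is referenced later in haproxy.cfg (a quick sanity check; exact pod and service names depend on your release name and chart version):

$ kubectl get pods -n redis
$ kubectl get svc -n redis
# The headless service (redis-headless here) is the one HAProxy's
# service discovery will later resolve through DNS SRV records.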

The HAProxy part.

We will perform the following -

  1. Create a k8s.yaml file (contains the Service, ConfigMap, and Deployment)
  2. Build and push the new image to our registry
  3. Deploy the image to our k8s cluster
  4. Check that it works

GitHub repo with all the files needed: https://github.com/yanivbhemo/haproxy-redis

The haproxy.cfg file (it will be embedded inside the .yaml file as a ConfigMap)

global
  daemon
  maxconn 256

defaults
  mode tcp
  timeout connect 5000ms
  timeout client 50000ms
  timeout server 50000ms

frontend http
  bind :8080
  default_backend stats

backend stats
  mode http
  stats enable
  stats uri /
  stats refresh 1s
  stats show-legends
  stats admin if TRUE

resolvers k8s
  parse-resolv-conf
  hold other 10s
  hold refused 10s
  hold nx 10s
  hold timeout 10s
  hold valid 10s
  hold obsolete 10s

frontend redis-read
  bind *:6380
  default_backend redis-online

frontend redis-write
  bind *:6379
  default_backend redis-primary

backend redis-primary
  mode tcp
  balance first
  option tcp-check
  tcp-check send AUTH\ kKwpFLhMQ4\r\n
  tcp-check expect string +OK
  tcp-check send info\ replication\r\n
  tcp-check expect string role:master
  server-template redis 3 _tcp-redis._tcp.redis-headless.redis.svc.cluster.local:6379 check inter 1s resolvers k8s init-addr none

backend redis-online
  mode tcp
  balance roundrobin
  option tcp-check
  tcp-check send AUTH\ kKwpFLhMQ4\r\n
  tcp-check expect string +OK
  tcp-check send PING\r\n
  tcp-check expect string +PONG
  server-template redis 3 _tcp-redis._tcp.redis-headless.redis.svc.cluster.local:6379 check inter 1s resolvers k8s init-addr none

A quick guide for the file above -

Instead of traffic going directly to Redis, we route it through our HAProxy.

Traffic enters the HAProxy pod, and by looking at the destination port of the packet (for example 6379), HAProxy knows which frontend it should be bound to, and from there which backend it should be forwarded to.

As you can see from our config file, packets arriving on port 6379 will hit “frontend redis-write” and be forwarded to “backend redis-primary”.

The “tcp-check” rules in each backend block determine which node is the master and which nodes are alive. With this config file in place you can keep password protection (requirepass) enabled on your Redis nodes.
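
If you want to see those health checks in action, you can reproduce the same exchange manually with redis-cli from inside the cluster (a sketch; the throwaway client pod, the haproxy-service name, and the password are assumptions based on the manifests in this guide):

# Launch a temporary Redis client pod in the same namespace.
$ kubectl run redis-client --rm -it -n redis --image=bitnami/redis -- bash

# Through the write port (6379) you should always land on the master...
$ redis-cli -h haproxy-service -p 6379 -a kKwpFLhMQ4 info replication | grep ^role
role:master

# ...while the read port (6380) round-robins over every node that answers PING.
$ redis-cli -h haproxy-service -p 6380 -a kKwpFLhMQ4 ping
PONG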

With the “resolvers k8s” section in place, we can rely on HAProxy’s service discovery features and the Kubernetes DNS conventions (https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#srv-records) to dynamically register backends through the server-template instruction in each backend block, instead of static server lines:

server-template redis 3 _tcp-redis._tcp.redis-headless.redis.svc.cluster.local:6379 check inter 1s resolvers k8s init-addr none

The number of servers in the pool (3) must match the replica count passed to the Redis chart (cluster.slaveCount in the command above; replica.replicaCount in newer chart versions), while the service short name (_tcp-redis) is the name of the port in the generated headless Redis service.
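
You can check what HAProxy will actually resolve by querying that SRV record yourself (a sketch; assumes a pod with dnsutils-style nslookup available, and the pod hostname in the example answer is illustrative and will vary with your release name):

# Run from any pod inside the cluster.
$ nslookup -type=SRV _tcp-redis._tcp.redis-headless.redis.svc.cluster.local
# Each answer maps one Redis pod to port 6379, e.g.:
# _tcp-redis._tcp.redis-headless.redis.svc.cluster.local service = 0 33 6379 redis-node-0.redis-headless.redis.svc.cluster.local.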

haproxy.yaml (this is the file needed to deploy HAProxy on k8s)

It can also be found in the GitHub repository:

apiVersion: v1
kind: Service
metadata:
  name: haproxy-service
  namespace: redis
spec:
  type: ClusterIP
  ports:
  - name: dashboard
    port: 8080
    targetPort: 8080
  - name: redis-write
    port: 6379
    targetPort: 6379
  - name: redis-read
    port: 6380
    targetPort: 6380
  selector:
    app: haproxy
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: haproxy-config
  namespace: redis
data:
  haproxy.cfg: |
    global
      daemon
      maxconn 256
    defaults
      mode tcp
      timeout connect 5000ms
      timeout client 50000ms
      timeout server 50000ms
    frontend http
      bind :8080
      default_backend stats
    backend stats
      mode http
      stats enable
      stats uri /
      stats refresh 1s
      stats show-legends
      stats admin if TRUE
    resolvers k8s
      parse-resolv-conf
      hold other 10s
      hold refused 10s
      hold nx 10s
      hold timeout 10s
      hold valid 10s
      hold obsolete 10s
    frontend redis-read
      bind *:6380
      default_backend redis-online
    frontend redis-write
      bind *:6379
      default_backend redis-primary
    backend redis-primary
      mode tcp
      balance first
      option tcp-check
      tcp-check send AUTH\ XXXXXXXX\r\n
      tcp-check expect string +OK
      tcp-check send info\ replication\r\n
      tcp-check expect string role:master
      server-template redis 3 _tcp-redis._tcp.redis-headless.redis.svc.cluster.local:6379 check inter 1s resolvers k8s init-addr none
    backend redis-online
      mode tcp
      balance roundrobin
      option tcp-check
      tcp-check send AUTH\ XXXXXXXX\r\n
      tcp-check expect string +OK
      tcp-check send PING\r\n
      tcp-check expect string +PONG
      server-template redis 3 _tcp-redis._tcp.redis-headless.redis.svc.cluster.local:6379 check inter 1s resolvers k8s init-addr none
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: haproxy-deployment
  namespace: redis
  labels:
    app: haproxy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: haproxy
  template:
    metadata:
      name: haproxy-pod
      labels:
        app: haproxy
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - haproxy
            topologyKey: "kubernetes.io/hostname"
      containers:
      - name: haproxy
        image: haproxy:2.3
        ports:
        - containerPort: 8080
        - containerPort: 6379
        - containerPort: 6380
        volumeMounts:
        - name: config
          mountPath: /usr/local/etc/haproxy/haproxy.cfg
          subPath: haproxy.cfg
          readOnly: true
      restartPolicy: Always
      volumes:
      - name: config
        configMap:
          name: haproxy-config

kubectl apply -f haproxy.yaml

Our namespace’s pods should now show the Redis nodes alongside two HAProxy replicas - you can verify with the commands below.
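
A quick check from the command line (the label and namespace match the manifest above):

# Both HAProxy replicas should be Running, on different nodes thanks to the anti-affinity rule.
$ kubectl get pods -n redis -l app=haproxy -o wide

# The service should expose ports 8080, 6379 and 6380.
$ kubectl get svc haproxy-service -n redis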

Let’s check if our HAProxy is OK and routing properly -

$ kubectl port-forward service/haproxy-service 8080:8080

Head to your browser and enter localhost:8080; you should see the HAProxy dashboard -
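
To verify routing end to end (and not just the dashboard), you can also port-forward the two Redis ports and push a write and a read through the proxy (a sketch; assumes redis-cli is installed locally and uses the example password from the config above):

# Forward both proxied Redis ports to the local machine.
$ kubectl port-forward service/haproxy-service 6379:6379 6380:6380

# Writes go through the master-only frontend (6379)...
$ redis-cli -p 6379 -a kKwpFLhMQ4 set hello world
OK

# ...and reads are balanced across all live nodes via 6380.
$ redis-cli -p 6380 -a kKwpFLhMQ4 get hello
"world"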

Possible issues along the way -

  1. No connectivity between HAProxy and the Redis nodes - this is often caused by a wrong service name in the haproxy.cfg file.
  2. A wrong Redis password in the tcp-check sections - make sure to place the proper password in haproxy.cfg.
  3. Ports 6379/6380 not being open on the HAProxy k8s service.

The commands below can help narrow these down.
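
A few debugging commands that map to the issues above (assuming the names used in the manifests of this guide):

# 1. Check that the headless service name referenced in haproxy.cfg actually exists.
$ kubectl get svc -n redis

# 2. Watch HAProxy's logs - failed tcp-checks (e.g. a wrong AUTH password)
#    typically show up as servers going DOWN.
$ kubectl logs -n redis deploy/haproxy-deployment

# 3. Confirm the service exposes all three ports.
$ kubectl describe svc haproxy-service -n redis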

Hope you found this guide helpful; feel free to reach out if you have any questions -

yaniv@memphis.dev


Yaniv Ben Hemo

A developer, technologist, and entrepreneur. Co-Founder and CEO at Memphis.dev. Trying to make developers' lives a bit easier.