HAProxy Configuration for Log Forwarding¶
HAProxy can act as a TCP-level forwarder for the HTTPS WebSocket connections from LogMan.io Collector to LogMan.io Receiver. Because the encrypted traffic is forwarded at the TCP level, this setup is useful for load balancing and high-availability scenarios.
Overview¶
HAProxy operates at the TCP layer, forwarding encrypted HTTPS traffic without decrypting it. This means:
- No SSL termination: HAProxy forwards the encrypted traffic as-is to NGINX
- Simpler configuration: No need to manage SSL certificates in HAProxy
- High performance: TCP-level forwarding is efficient for WebSocket connections
- Load balancing: Built-in support for distributing connections across multiple NGINX instances
Architecture¶
graph LR
lmio-collector -- "HTTPS WebSocket" --> haproxy[HAProxy Forwarder]
haproxy[HAProxy Forwarder] -- "HTTPS WebSocket" --> internet[Internet /<br/>Dedicated Link]
internet[Internet /<br/>Dedicated Link] -- "HTTPS WebSocket" --> nginx[NGINX]
nginx[NGINX] -- "HTTPS WebSocket" --> lmio-receiver[LogMan.io Receiver]
Diagram: HAProxy forwarding setup
In this architecture:
- HAProxy Forwarder receives connections from collectors and forwards the TCP connections
- Internet / Dedicated Link carries the encrypted HTTPS traffic between HAProxy and NGINX
- NGINX handles SSL termination and forwards to LogMan.io Receiver
- LogMan.io Receiver processes the incoming logs
Prerequisites¶
- HAProxy 2.0 or newer (recommended for better WebSocket support)
- NGINX configured in front of LogMan.io Receiver on remote machine(s)
- Network connectivity between HAProxy machine and remote NGINX instances
- Docker (if using containerized setup)
- Port 443 available on the HAProxy machine (when using network_mode: host)
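To confirm the version prerequisite, you can ask the HAProxy binary directly, either on the host or via the official Docker image:

haproxy -v
# or, with the container image:
docker run --rm haproxy:latest haproxy -v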
Basic HAProxy Configuration¶
HAProxy Configuration File¶
Create an HAProxy configuration file (haproxy.cfg):
global
    log stdout format raw local0
    maxconn 4096

defaults
    mode tcp
    log global
    option tcplog
    timeout connect 10s
    timeout client 1h
    timeout server 1h

# Frontend: Listen for incoming HTTPS connections from collectors
frontend commlink_frontend
    bind *:443
    mode tcp
    default_backend commlink_backend

# Backend: Forward to NGINX instances (which forward to the receiver)
# NGINX and the receiver run on remote machines
backend commlink_backend
    mode tcp
    balance roundrobin
    option tcp-check

    # Remote NGINX instances (adjust IP addresses/hostnames as needed)
    server nginx-1 192.168.1.20:443 check

    # Backup NGINX instances (optional, for HA)
    server nginx-2 192.168.1.21:443 check backup
    server nginx-3 192.168.1.22:443 check backup
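Before (re)starting HAProxy, it is worth validating the file; haproxy -c performs a configuration check without starting the proxy:

haproxy -c -f haproxy.cfg
# or, with the container image:
docker run --rm -v "$(pwd)/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro" haproxy:latest haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg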
Configuration Explanation¶
- mode tcp: HAProxy operates in TCP mode, forwarding encrypted HTTPS traffic without inspection
- frontend commlink_frontend: Listens on port 443 for incoming HTTPS connections from collectors
- backend commlink_backend: Forwards connections to remote NGINX instances (which handle SSL termination and forward to the receiver)
- balance roundrobin: Distributes connections evenly across available NGINX instances
- option tcp-check: Performs health checks on NGINX instances
- server nginx-1: Primary NGINX instance on a remote machine (specify an IP address or hostname)
- server nginx-2/3 backup: Backup NGINX instances on remote machines, used when the primary is unavailable
Timeout Configuration¶
The timeout settings are important for WebSocket connections:
- timeout client 1h: Maximum time to wait for client data (1 hour for long-lived WebSocket connections)
- timeout server 1h: Maximum time to wait for a server response (1 hour for long-lived WebSocket connections)
- WebSocket connections are long-lived (they can last hours or days)
- HAProxy must keep the TCP tunnel open for the entire WebSocket session
- Adjust these values based on your requirements (1 hour is typically sufficient)
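If some deployments need longer sessions, the timeouts can also be overridden per proxy rather than in the defaults section. A minimal sketch (the 4-hour value is illustrative, not a recommendation):

# Per-proxy timeout overrides
frontend commlink_frontend
    timeout client 4h    # overrides the 1h value from the defaults section

backend commlink_backend
    timeout server 4h    # keep the server side in sync for long-lived tunnels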
Docker Compose Setup¶
Here is a docker-compose.yaml example for running HAProxy. NGINX and LogMan.io Receiver run on separate machines:
version: '3.8'

services:
  haproxy:
    image: haproxy:latest
    container_name: haproxy-commlink
    network_mode: host
    volumes:
      - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
    restart: unless-stopped
Network Mode
Using network_mode: host allows HAProxy to bind directly to the host's network interface, avoiding port mapping complexity. This is useful when HAProxy is the only service running on the machine.
Remote NGINX/Receiver
NGINX and LogMan.io Receiver run on separate machines. HAProxy forwards connections to remote NGINX instances specified in the haproxy.cfg backend configuration.
Directory Structure¶
.
├── docker-compose.yaml
└── haproxy.cfg
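With both files in place, start the forwarder and watch its logs (assuming Docker Compose v2; older installations use the docker-compose binary):

docker compose up -d
docker compose logs -f haproxy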
Advanced Configuration¶
Health Checks¶
HAProxy can perform health checks on NGINX instances:
backend commlink_backend
    mode tcp
    balance roundrobin
    option tcp-check
    tcp-check connect
    # Hex-encoded "GET / HTTP/1.1\r\n\r\n"
    tcp-check send-binary 474554202f20485454502f312e310d0a0d0a
    # NGINX answers a plain HTTP request on a TLS port with a plaintext
    # "400 Bad Request" error, so the response starts with "HTTP"
    tcp-check expect string "HTTP"
    server nginx-1 192.168.1.20:443 check inter 5s fall 3 rise 2
    server nginx-2 192.168.1.21:443 check inter 5s fall 3 rise 2 backup
- inter 5s: Health check interval (every 5 seconds)
- fall 3: Mark a server as down after 3 consecutive failures
- rise 2: Mark a server as up after 2 consecutive successes
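To observe health-check results at runtime, you can query the HAProxy runtime API. This sketch assumes a stats socket has been added to the global section (it is not part of the base configuration above) and that socat is installed:

# Assumed addition to the global section of haproxy.cfg:
#     stats socket /var/run/haproxy.sock mode 660 level admin

echo "show servers state commlink_backend" | socat stdio /var/run/haproxy.sock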
Load Balancing Algorithms¶
HAProxy supports various load balancing algorithms:
backend commlink_backend
    mode tcp
    balance leastconn       # Use the least-connections algorithm
    # balance roundrobin    # Round-robin (default)
    # balance source        # Source IP hash
    server nginx-1 192.168.1.20:443 check
    server nginx-2 192.168.1.21:443 check
    server nginx-3 192.168.1.22:443 check
- leastconn: Routes to the server with the fewest active connections (recommended for WebSocket)
- roundrobin: Distributes connections evenly in round-robin fashion
- source: Uses a source IP hash for consistent routing
SSL Passthrough with SNI¶
If you need to handle multiple domains, you can use SNI (Server Name Indication) inspection:
frontend commlink_frontend
    bind *:443
    mode tcp
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend commlink_backend if { req_ssl_sni -i recv.logman.example.com }
    default_backend commlink_backend
This configuration:
- Inspects the SSL handshake to extract SNI
- Routes based on the server name in the SNI
- Falls back to the default backend if the SNI doesn't match
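With multiple domains, each SNI can be routed to its own backend. A sketch in which recv-eu.logman.example.com and commlink_backend_eu are hypothetical names, not part of the setup above:

frontend commlink_frontend
    bind *:443
    mode tcp
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend commlink_backend if { req_ssl_sni -i recv.logman.example.com }
    # Hypothetical second domain routed to its own pool of NGINX instances
    use_backend commlink_backend_eu if { req_ssl_sni -i recv-eu.logman.example.com }
    default_backend commlink_backend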
Collector Configuration¶
Configure the collector to connect through HAProxy:
connection:CommLink:commlink:
  url: https://recv.logman.example.com/
Where recv.logman.example.com resolves to the HAProxy server IP address.
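Since HAProxy passes the TLS handshake through untouched, the certificate presented end-to-end is the one served by NGINX, not by HAProxy. A quick check from the collector machine:

openssl s_client -connect recv.logman.example.com:443 -servername recv.logman.example.com </dev/null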
High Availability Setup¶
For high availability, deploy multiple HAProxy instances with DNS round-robin:
Architecture¶
graph TB
subgraph "Collectors"
c1[Collector 1]
c2[Collector 2]
c3[Collector 3]
end
subgraph "HAProxy Layer"
h1[HAProxy 1]
h2[HAProxy 2]
h3[HAProxy 3]
end
subgraph "NGINX Layer"
n1[NGINX 1]
n2[NGINX 2]
n3[NGINX 3]
end
subgraph "Receiver Layer"
r1[Receiver 1]
r2[Receiver 2]
r3[Receiver 3]
end
c1 --> h1
c2 --> h2
c3 --> h3
h1 --> n1
h1 --> n2
h1 --> n3
h2 --> n1
h2 --> n2
h2 --> n3
h3 --> n1
h3 --> n2
h3 --> n3
n1 --> r1
n2 --> r2
n3 --> r3
Multiple HAProxy Instances¶
Deploy HAProxy on multiple machines, each with its own docker-compose.yaml:
# docker-compose.yaml on haproxy-node-1
version: '3.8'

services:
  haproxy:
    image: haproxy:latest
    container_name: haproxy-commlink
    network_mode: host
    volumes:
      - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
    restart: unless-stopped
Deploy the same configuration on multiple HAProxy nodes, and configure DNS to resolve recv.logman.example.com to all HAProxy IP addresses. Each HAProxy instance should forward to all available remote NGINX instances.
DNS Configuration¶
Configure DNS with multiple A records pointing to HAProxy instances:
recv.logman.example.com. 60 IN A 192.168.1.10 # HAProxy node 1
recv.logman.example.com. 60 IN A 192.168.1.11 # HAProxy node 2
recv.logman.example.com. 60 IN A 192.168.1.12 # HAProxy node 3
Set a low TTL (60 seconds) to enable quick failover.
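You can verify the round-robin records with dig (the addresses mirror the example records above; the order may rotate between queries):

dig +short recv.logman.example.com
# 192.168.1.10
# 192.168.1.11
# 192.168.1.12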