High-availability load balancing can be configured fairly easily on virtualized computing instances in the cloud. This post walks through deploying HAProxy and Heartbeat on Rackspace Cloud Servers running Debian 5.0 Lenny.
The goal is a redundant load balancer pair in an active/passive configuration, distributing requests across two Apache web servers, so that any one load balancer and any one web server can fail while the environment remains operational.
Example Server List
Server | Public IP     | Private IP    |
lb1    | 175.200.90.50 | 10.180.75.200 |
lb2    | 175.200.90.51 | 10.180.75.201 |
web-a  | 175.200.90.52 | 10.180.75.202 |
web-b  | 175.200.90.53 | 10.180.75.203 |
Virtual IP: 175.200.90.100
Step 1: Obtaining a Virtual IP
A Virtual IP is a static, public failover IP which can move between load balancers as needed. This is the IP you will use for your ‘A’ records when configuring DNS for your domain(s). You can easily request the failover IP via the Rackspace Cloud ticketing system (http://manage.rackspacecloud.com), but make sure to be very deliberate in the wording of your ticket or you may just get an additional IP provisioned which won’t share properly. Here is some sample ticket verbiage which may help: “Please provision a failover IP for lb1 and ensure that it is also shared with lb2. I understand and agree to the $2/mo additional charge for the IP.”
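Once the ticket is completed, you can optionally sanity-check the failover IP before Heartbeat manages it. A minimal sketch, assuming eth0 is the public interface (as is typical on Rackspace Cloud Servers) and using the example virtual IP and a /24 public netmask; remove the address again afterwards so Heartbeat can manage it cleanly:
ip addr add 175.200.90.100/24 dev eth0    # temporarily bind the virtual IP on lb1
ping -c 3 175.200.90.100                  # run from your local machine; the IP should answer
ip addr del 175.200.90.100/24 dev eth0    # remove it again when done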
Step 2: Configuring Web Servers (perform on both web-a & web-b)
nano -w /etc/apache2/apache2.conf
Comment out the following line:
#LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
Add this line in its place:
LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
HAProxy checks the health of each web server by requesting the check.txt file in the /var/www web root. Add the following lines to the default vhost configuration so these health checks don't fill the access logs and skew traffic statistics.
nano -w /etc/apache2/sites-available/default
SetEnvIf Request_URI "^/check\.txt$" dontlog
CustomLog /var/log/apache2/access.log combined env=!dontlog
Create the check.txt file:
touch /var/www/check.txt
Restart Apache:
/etc/init.d/apache2 restart
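With Apache restarted, you can verify both changes from the web server itself. A quick sketch, assuming curl is installed (apt-get install curl if not); 203.0.113.7 is just a placeholder client address:
curl -I http://localhost/check.txt                 # the health-check URL should return 200 OK
tail /var/log/apache2/access.log                   # check.txt requests should NOT appear here
curl -s -H "X-Forwarded-For: 203.0.113.7" http://localhost/ > /dev/null
tail -n 1 /var/log/apache2/access.log              # this line should begin with 203.0.113.7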
Step 3: Installing and Configuring HAProxy (perform on both lb1 & lb2)
apt-get install haproxy
mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.old
Paste the following configuration file, adjusting the listen address, stats credentials, and server lines to match your environment:
nano -w /etc/haproxy/haproxy.cfg

global
        log 127.0.0.1   local0
        log 127.0.0.1   local1 notice
        #log loghost    local0 info
        maxconn 4096
        #debug
        #quiet
        user haproxy
        group haproxy

defaults
        log     global
        mode    http
        option  httplog
        option  dontlognull
        retries 3
        redispatch
        maxconn 2000
        contimeout      5000
        clitimeout      50000
        srvtimeout      50000

listen webfarm 175.200.90.100:80
        mode http
        stats enable
        stats auth someuser:somepassword
        balance roundrobin
        cookie JSESSIONID prefix
        option httpclose
        option forwardfor
        option httpchk HEAD /check.txt HTTP/1.0
        server webA 10.180.75.202:80 cookie A check
        server webB 10.180.75.203:80 cookie B check
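Before going further, it's worth validating the file. HAProxy can check a configuration for syntax errors without starting:
haproxy -c -f /etc/haproxy/haproxy.cfg
Note that HAProxy won't be able to bind 175.200.90.100:80 on whichever node doesn't currently hold the virtual IP until net.ipv4.ip_nonlocal_bind is enabled in the final step, so hold off on starting the service until then.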
Next, enable the HAProxy script:
nano -w /etc/default/haproxy
ENABLED=1
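An optional aside: the haproxy.cfg above sends logs to 127.0.0.1 via syslog, which only works if the local syslog daemon accepts UDP input. A minimal sketch for rsyslog (the default syslog daemon on Lenny); the haproxy.log path is just a suggestion:
nano -w /etc/rsyslog.conf
$ModLoad imudp
$UDPServerAddress 127.0.0.1
$UDPServerRun 514
local0.* -/var/log/haproxy.log
/etc/init.d/rsyslog restart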
Step 4: Installing Heartbeat (perform on both lb1 & lb2)
apt-get install heartbeat
Create the same authentication key file on both servers:
nano -w /etc/heartbeat/authkeys
auth 1
1 sha1 APasswordYouLike
chmod 600 /etc/heartbeat/authkeys
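Rather than inventing a password, you can generate a random key for the authkeys file. One common approach, assuming openssl is installed (use the resulting hash as the string after "1 sha1"):
dd if=/dev/urandom bs=512 count=1 2>/dev/null | openssl sha1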
Create the haresources file *identically* on both servers, replacing "lb1" with your Master LB hostname and the IP with your virtual IP:
nano -w /etc/heartbeat/haresources
lb1 175.200.90.100/24
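Since Heartbeat requires authkeys and haresources to match on both nodes, a quick way to verify is to compare checksums on lb1 and lb2 and make sure they are identical:
md5sum /etc/heartbeat/authkeys /etc/heartbeat/haresources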
Step 5: Configuring Heartbeat
Paste the following config file on lb1 (Master), adjusting the values flagged in the comments (the slave's private IP and the node hostnames):
nano -w /etc/heartbeat/ha.cf
logfacility daemon
keepalive 2
deadtime 15
warntime 5
initdead 120
udpport 694
ucast eth1 10.180.75.201 # The Private IP address of your SLAVE server.
auto_failback on
node lb1 # The hostname of your MASTER Server.
node lb2 # The hostname of your SLAVE Server.
respawn hacluster /usr/lib/heartbeat/ipfail
use_logd yes
/etc/init.d/heartbeat restart
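Heartbeat logs through the daemon facility (logfacility daemon), so on a default Debian install its startup messages should land in the syslog. To watch it come up on lb1 while you configure lb2:
tail -f /var/log/syslog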
Paste the following config file on lb2 (Slave), adjusting the values flagged in the comments (the master's private IP and the node hostnames):
nano -w /etc/heartbeat/ha.cf
logfacility daemon
keepalive 2
deadtime 15
warntime 5
initdead 120
udpport 694
ucast eth1 10.180.75.200 # The Private IP address of your MASTER server.
auto_failback on
node lb1 # The hostname of your MASTER Server.
node lb2 # The hostname of your SLAVE Server.
respawn hacluster /usr/lib/heartbeat/ipfail
use_logd yes
/etc/init.d/heartbeat restart
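With Heartbeat running on both nodes, lb1 should acquire the virtual IP once its initial startup window (initdead, 120 seconds) has passed. A couple of quick checks; cl_status ships with the heartbeat package:
ip addr show | grep 175.200.90.100    # on lb1: the virtual IP should be bound (typically as an eth0:0 alias)
cl_status hbstatus                     # on either node: confirm Heartbeat is up
cl_status listnodes                    # both lb1 and lb2 should be listed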
Finally, allow HAProxy to bind to the shared virtual IP even when it is not currently assigned to the server (net.ipv4.ip_nonlocal_bind), add some network performance tuning, and start the HAProxy service (perform on both lb1 & lb2):
nano -w /etc/sysctl.conf
net.ipv4.ip_nonlocal_bind=1
net.ipv4.tcp_window_scaling = 1
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_fin_timeout = 15
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_tw_buckets = 360000
sysctl -p
/etc/init.d/haproxy start
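After running sysctl -p and starting HAProxy, confirm that it is listening on the virtual IP (the nonlocal bind setting lets even the passive node do this) and that the stats page answers. HAProxy's default stats URL is /haproxy?stats; substitute the stats auth credentials you set in haproxy.cfg:
netstat -lntp | grep haproxy
curl -s -u someuser:somepassword -o /dev/null -w '%{http_code}\n' 'http://175.200.90.100/haproxy?stats'    # expect 200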
Step 6: Testing
You should now have a fully configured HA load balanced environment. Let’s run a few tests to verify:
Testing HAProxy
- Enter http://175.200.90.100 into your browser.
- Issue /etc/init.d/apache2 stop on web-a and then on web-b, one at a time (restarting each before stopping the other), and refresh the page after each to confirm the site stays up.
Testing Heartbeat – pull up 3 terminal windows (the exact commands are sketched after this list):
- Run a continuous ping loop for your vIP on your local machine
- On lb1, run ifconfig and verify vIP is currently bound to the net interface
- On lb2, run watch ifconfig and notice no vIP is listed
- Reboot lb1 and notice that ifconfig on lb2 updates automatically, the ping loop continues (aside from perhaps a few dropped packets during the failover window), and once lb1 is back up, the vIP automatically fails back over to it.
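A concrete version of the three-terminal Heartbeat test above, using the example virtual IP from this post:
ping 175.200.90.100       # terminal 1 (your local machine): continuous ping of the vIP
ifconfig                  # terminal 2 (lb1): the vIP should be bound here
watch -n 1 ifconfig       # terminal 3 (lb2): watch for the vIP to appear after lb1 is rebooted
reboot                    # then run this on lb1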
That’s it! If you have questions or need any further information, please post in the comments section or email me!
Further reading and resources referenced for this post:
HowtoForge: Setting up HAProxy / Heartbeat on Debian Lenny
Rackspace Cloud Wiki: IP Failover – High Availability Explained
Rackspace Cloud Wiki: IP Failover – Setup and Installing Heartbeat