HAProxy & Heartbeat on Cloud Servers

High availability load balancing can be easily configured on virtualized computing instances in the Cloud.  This post explores deploying HAProxy and Heartbeat on Rackspace Cloud Servers running Debian 5.0 Lenny.

The desired result of this project is a redundant load balancer pair in an active/passive configuration, distributing requests across two Apache web servers, so that any one load balancer and any one web server can fail while the environment remains operational.

Example Server List

Server   Public IP       Private IP
lb1      175.200.90.50   10.180.75.200
lb2      175.200.90.51   10.180.75.201
web-a    175.200.90.52   10.180.75.202
web-b    175.200.90.53   10.180.75.203

Virtual IP: 175.200.90.100

Step 1: Obtaining a Virtual IP

A Virtual IP is a static, public failover IP which can move between load balancers as needed.  This is the IP you will use for your ‘A’ records when configuring DNS for your domain(s).  You can easily request the failover IP via the Rackspace Cloud ticketing system (http://manage.rackspacecloud.com), but make sure to be very deliberate in the wording of your ticket or you may just get an additional IP provisioned which won’t share properly. Here is some sample ticket verbiage which may help: “Please provision a failover IP for lb1 and ensure that it is also shared with lb2.  I understand and agree to the $2/mo additional charge for the IP.”

Step 2: Configuring Web Servers (perform on both web-a & web-b)

nano -w /etc/apache2/apache2.conf

Comment out the following line:

#LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined

Add this line in its place:

LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
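
Before moving on, it doesn't hurt to confirm Apache still parses the edited configuration; apache2ctl ships with the apache2 package:

apache2ctl configtest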

HAProxy checks the health of each web server by requesting the check.txt file in the /var/www web root.  Add the following lines to the vhost config so these health checks don't fill up the access logs and skew traffic statistics.

nano -w /etc/apache2/sites-available/default
SetEnvIf Request_URI "^/check\.txt$" dontlog
CustomLog /var/log/apache2/access.log combined env=!dontlog
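
For context, here is a rough sketch of where those two directives might sit inside the default vhost; the surrounding directives are simply whatever your existing vhost already contains:

<VirtualHost *:80>
    DocumentRoot /var/www

    # Tag HAProxy health checks so they are excluded from the access log below
    SetEnvIf Request_URI "^/check\.txt$" dontlog
    CustomLog /var/log/apache2/access.log combined env=!dontlog
</VirtualHost>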

Create the check.txt file:

touch /var/www/check.txt

Restart Apache:

/etc/init.d/apache2 restart
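
With Apache back up, a quick request from one of the load balancers (assuming curl is installed) should confirm the health-check target responds; the IP below is web-a's private address from the example list:

curl -I http://10.180.75.202/check.txt
# Expect "HTTP/1.1 200 OK"; HAProxy's httpchk will issue a similar HEAD request.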


Step 3: Installing and Configuring HAProxy (perform on both lb1 & lb2)

apt-get install haproxy
mv /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.old

Paste the following configuration file, adjusting the IP addresses and stats credentials for your environment:

nano -w /etc/haproxy/haproxy.cfg
global
        log 127.0.0.1   local0
        log 127.0.0.1   local1 notice
        #log loghost    local0 info
        maxconn 4096
        #debug
        #quiet
        user haproxy
        group haproxy

defaults
        log     global
        mode    http
        option  httplog
        option  dontlognull
        retries 3
        redispatch
        maxconn 2000
        contimeout      5000
        clitimeout      50000
        srvtimeout      50000

listen webfarm 175.200.90.100:80
        mode http
        stats enable
        stats auth someuser:somepassword
        balance roundrobin
        cookie JSESSIONID prefix
        option httpclose
        option forwardfor
        option httpchk HEAD /check.txt HTTP/1.0
        server webA 10.180.75.202:80 cookie A check
        server webB 10.180.75.203:80 cookie B check
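
Before relying on this file, let HAProxy validate it; the -c flag runs a configuration check without starting the proxy:

haproxy -c -f /etc/haproxy/haproxy.cfg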

Next, enable the HAProxy script:

nano -w /etc/default/haproxy
ENABLED=1


Step 4: Installing Heartbeat (perform on both lb1 & lb2)

apt-get install heartbeat

Create the same authentication key file on both servers:

nano -w /etc/heartbeat/authkeys
auth 1
1 sha1 APasswordYouLike

Then restrict its permissions:

chmod 600 /etc/heartbeat/authkeys
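
If you'd rather not invent a passphrase, a random key works just as well; one option, assuming openssl is installed, is:

openssl rand -hex 20    # paste the output in place of APasswordYouLike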

Create an identical haresources file on both servers, replacing "lb1" with your master LB's hostname and the IP with your Virtual IP.

nano -w /etc/heartbeat/haresources
lb1 175.200.90.100/24
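
That one-line shorthand tells Heartbeat to manage the address with its IPaddr resource agent. If you ever need to pin the alias to a specific interface, the longer form can be spelled out; eth0 here is an assumption about which interface carries your public network:

lb1 IPaddr::175.200.90.100/24/eth0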


Step 5: Configuring Heartbeat

Paste the following config file on lb1 (Master), adjusting the IPs and hostnames for your environment:

nano -w /etc/heartbeat/ha.cf
logfacility daemon
keepalive 2
deadtime 15
warntime 5
initdead 120
udpport 694
ucast eth1 10.180.75.201 # The Private IP address of your SLAVE server.
auto_failback on
node lb1 # The hostname of your MASTER Server.
node lb2 # The hostname of your SLAVE Server.
respawn hacluster /usr/lib/heartbeat/ipfail
use_logd yes

Then restart Heartbeat:

/etc/init.d/heartbeat restart

Paste the following config file on lb2 (Slave), adjusting the IPs and hostnames for your environment:

nano -w /etc/heartbeat/ha.cf
logfacility daemon
keepalive 2
deadtime 15
warntime 5
initdead 120
udpport 694
ucast eth1 10.180.75.200 # The Private IP address of your MASTER server.
auto_failback on
node lb1 # The hostname of your MASTER Server.
node lb2 # The hostname of your SLAVE Server.
respawn hacluster /usr/lib/heartbeat/ipfail
use_logd yes

Then restart Heartbeat:

/etc/init.d/heartbeat restart
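
Once Heartbeat is running on both nodes, the master should be holding the Virtual IP. A quick check on lb1 (eth0 is assumed to be the public interface, and the exact log location depends on your logd settings):

ip addr show eth0 | grep 175.200.90.100    # the VIP should appear as an additional address on lb1
tail -n 20 /var/log/ha-log                 # Heartbeat's log, if logd writes there on your setup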

Finally, allow HAProxy to bind to the Virtual IP even when it isn't currently assigned to the server (net.ipv4.ip_nonlocal_bind), add some network performance tuning, and start the HAProxy service:

nano -w /etc/sysctl.conf
net.ipv4.ip_nonlocal_bind=1
net.ipv4.tcp_window_scaling = 1
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_fin_timeout = 15
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_tw_buckets = 360000

Apply the settings and start HAProxy:

sysctl -p
/etc/init.d/haproxy start
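
To confirm HAProxy is actually listening on the Virtual IP, check the sockets on whichever node currently holds it (netstat comes from the net-tools package); since no stats uri was set above, the stats page should live at HAProxy's default location:

netstat -tlnp | grep haproxy
# Then browse to http://175.200.90.100/haproxy?stats and log in with someuser:somepassword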


Step 6: Testing

You should now have a fully configured, highly available load-balanced environment.  Let's run a few tests to verify:

Testing HAProxy

  1. Enter http://175.200.90.100 into your browser.
  2. Issue /etc/init.d/apache2 stop on web-a and then on web-b, one at a time, and refresh the page to confirm the site is still being served (a simple curl loop, sketched after this list, makes the behavior easier to watch).
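
If you'd rather watch from a terminal, a loop against the Virtual IP (assuming curl on your local machine) prints the response code once a second while you stop and start Apache on the web nodes:

while true; do curl -s -o /dev/null -w "%{http_code}\n" http://175.200.90.100/; sleep 1; done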

Testing Heartbeat – pull up 3 terminal windows:

  1. Run a continuous ping against your Virtual IP from your local machine (example commands follow this list)
  2. On lb1, run ifconfig and verify the Virtual IP is currently bound to a network interface
  3. On lb2, run watch ifconfig and notice the Virtual IP is not listed
  4. Reboot lb1 and watch: ifconfig on lb2 updates automatically, the ping loop is not interrupted, and once lb1 is back up, the Virtual IP automatically fails back over.
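
The three terminals might look something like this (the alias interface name is an assumption; Heartbeat's IPaddr agent typically adds the VIP as an alias such as eth0:0):

ping 175.200.90.100    # terminal 1, on your local machine (continuous by default on Linux)
ifconfig               # terminal 2, on lb1: the Virtual IP should show up, often as an eth0:0 alias
watch ifconfig         # terminal 3, on lb2: the Virtual IP appears here only after failover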

That’s it! If you have questions or need any further information, please post in the comments section or email me!

Further reading and resources referenced for this post:
HowtoForge: Setting up HAProxy / Heartbeat on Debian Lenny
Rackspace Cloud Wiki: IP Failover – High Availability Explained
Rackspace Cloud Wiki: IP Failover – Setup and Installing Heartbeat

5 thoughts on “HAProxy & Heartbeat on Cloud Servers”

  1. That's all great, but could you show detailed step-by-step instructions for HAProxy and Heartbeat on CentOS 5?

    Thanks a lot!

  2. Thanks for the great article. We are using a hardware-based load balancer, but the more I read about HAProxy, the more I feel we could save up to $3,000 per month in hardware costs by going this route while still meeting all of our bandwidth and connection needs.

    I have one specific question. In your article, you have one VIP servicing 2 servers. On our hardware LB, we have many VIPs for different farms. I realize that one way to achieve this is to run one of these VM pairs per VIP. However, is it possible to consolidate all of that into a single HA HAProxy setup? I assume I can configure the multiple farms by providing different "listen" configurations?

    Also, how can I obtain multiple VIPs that are shared by two VMs through Rackspace? Is this even possible? If so, what are the pros and cons?

    Cheers

    1. Hi Se Hee,

      At Rackspace, you can provision up to 4 additional IPs per Cloud Server for a total of 5. Any or all of those 4 additional IPs can absolutely be VIPs, so yes, you could achieve the multi-farm configuration you describe, either with different listen ports or by simply assigning certain web nodes to certain VIPs.

      Since writing this post, Rackspace has come out with Cloud Load Balancers. This product is fantastic, and I've saved a massive amount of money and time by converting all of my HAProxy nodes to this service. I highly recommend that you look into it.

      Let me know if I can help with anything else!

      Andrei
