Replacing my load balancers with cheaper solutions

Did I say cheaper? I really meant free, because cheap is too expensive…

I don’t have a big budget. In fact, if you want to get really technical, I don’t have a budget at all – I just sort of have to ask management for everything I want to buy. I’m really hoping that changes this year, but the point is that right now, it’s really hard for me to spend any money. This is inconvenient at times like the present, because I’ve got a critical piece of infrastructure at my backup site that has started a death spiral.

To give you some backstory, the application that we use to provide our clients’ information to them is written in Java and delivered via Tomcat. We’ve got multiple application (read: Tomcat) servers, and we wanted to be able to provide high-availability failover. Load balancing was secondary, due to the limited volume of traffic we get, but it was vital that the application be available.

The solution that we decided on about 4 years ago was a Kemp LoadMaster 1500. I’d link to the actual device, but it’s so old that it’s been EOL’d and doesn’t exist on their website anymore.

The original LM1500 we got served us well…so well, in fact, that we bought two more, which we installed into our production site, and moved the original to the secondary site. This is the situation that we’re currently in, and the original is now starting to die.

Of course, since it’s been years, the original is no longer under a service contract (I’ve recently been in discussions with management about whether it’s cheaper all the way around to purchase extended service contracts or replace infrastructure, but that’s another blog post). This means that the support team at Kemp is sympathetic, but ultimately unwilling to help, except for having a sales guy call me to sell me a new set of load balancers.

This has the advantage of putting me in a position to make choices. It’s unfortunate that all of my choices are unpleasant, though.

On one hand, I suppose I could continue indefinitely having the colocation staff trudge out to my rack and cold-reboot the load balancer twice a day. Or I could beg management for a couple thousand dollars to replace this load balancer (and then twice whatever that amount is in another year, when the production load balancers fail). Or I could find another solution.

Assuming you’re not insane or a masochist (Oh who am I kidding? You’re a sysadmin, you’re a masochist), you picked the last option, which is also the one I picked (despite my insanity AND my masochism), and I went looking for solutions.

Because I firmly believe that all of us are smarter than any of us, I turned to the hivemind for my answer, and I put out this Twitter message:

I was actually kind of surprised at the sheer volume of responses I got. There were dozens of suggestions, but by far the most frequent was HAproxy. Their list of users is impressive, but even more so was the fact that every time I searched for something related to it, the author, Willy Tarreau, was commenting and offering advice. It’s really great to see someone that in tune with the users of his product.

After spending a bit of time educating myself on how load balancers work in general, and how HAproxy works specifically, I’ve got a running configuration with nginx terminating HTTPS and doing a proxy handoff to HAproxy, which balances between the application servers (and performs mail and FTP proxying, too). The configuration isn’t where I want it to be yet, but I’ll be playing with it in the coming days, and when it’s in closer-to-final form, I’ll post another entry with the configuration. After all of that is done, I’ll be moving on to an HA load balancer setup, most likely using keepalived.
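To make the shape of that setup concrete, here’s a minimal sketch of the two pieces. This is not my actual config — every hostname, IP, port, and certificate path here is a placeholder — but it shows the handoff: nginx owns the SSL side and proxies plain HTTP to an HAproxy frontend, which health-checks and balances the Tomcat backends.

```
# --- nginx: terminate HTTPS, hand plain HTTP to HAproxy on localhost ---
server {
    listen 443 ssl;
    server_name app.example.com;                  # placeholder hostname

    ssl_certificate     /etc/ssl/certs/app.example.com.crt;
    ssl_certificate_key /etc/ssl/private/app.example.com.key;

    location / {
        proxy_pass http://127.0.0.1:8080;         # HAproxy frontend below
        proxy_set_header Host            $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

# --- haproxy.cfg: balance across the Tomcat application servers ---
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend app_in
    bind 127.0.0.1:8080
    default_backend tomcat_servers

backend tomcat_servers
    balance roundrobin
    option httpchk GET /                          # simple HTTP health check
    server app1 10.0.0.11:8080 check              # placeholder backends
    server app2 10.0.0.12:8080 check
```

The nice property of this split is that the Tomcat servers never see SSL at all, and HAproxy’s `check` directive quietly pulls a dead backend out of rotation — which is exactly the failover behavior the Kemp box was there for.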

If you’re new to the whole idea of load balancers, you could do much worse than reading this document written by Willy Tarreau about making applications scale with load balancers. It’s pretty enlightening if you haven’t spent much time in the arena. Also, because client access is only part of the picture, if you want to learn more about making your infrastructure go fast, I really, really recommend John Allspaw’s The Art of Capacity Planning, which is an amazingly great book not only because it’s full of very useful information, but because it makes you want to optimize your infrastructure.

Until next time, I’m back to the load balancer salt mines to produce a good working configuration…

  • meanasspenguin

Just something else to consider: the Pen load balancer.

  • Jason Faulkner

    If you end up going with keepalived, come party with me in the IRC… and I’ll help with whatever I can.


  • You could also review this PDF from Brocade, which has a good introductory section on load balancing. It’s simple, but good quality.

  • Awesome post! It seems like most people end up going with a combo of HAProxy and keepalived in the end. Consider yourself Twitter-followed :).

  • tara

    Try wackamole: it’s peer-to-peer and in Debian stable.

  • Hi Matt,
    Before you reached the punchline I was going to jump in and say “HAProxy”. It saved our bacon recently when we had to get multiple machines in rotation for an onslaught of new users.

    It seems like there are hidden gems like this all over the OSS world now. I think we’ve reached a point where we are overflowing with choices, when there used to be too few of them.

  • We’ve been using HAproxy+keepalived in production for a little over a year now and they are both rock solid.

    Be sure to join the haproxy mailing list, there are a lot of helpful and knowledgeable folks on it.

  • niczar

    @meanasspenguin: Pen is junk compared to HAproxy. I looked at the code, and IIRC it’s based on a rather simplistic model (contrast with HAproxy’s FSM-based architecture), and it doesn’t make use of advanced APIs like vmsplice.

  • meanasspenguin

    @niczar: I’m not comparing the products, merely providing options for everyone’s benefit to compare themselves. I suggest Pen precisely for simplistic implementations; Pen’s own description says it is simplistic. It’s got a low learning curve and is fast to deploy. It’s good for some things and not for others. HAproxy is a fine product and I never said otherwise. Take it easy.

  • I like CARP for cheap failover: you can give the web site a pool of IPs and if one server croaks another will pick up the IP. Pretty crude, though.

    Pound (a reverse proxy) appealed to me last time I was shopping for a solution. LVS gave me heartache.

    Good luck!


  • Hi Matt,

    If you’re going with nginx to do the SSL job, you might also consider making it deliver static files and/or do caching. You can generally achieve nice architectures by combining HAproxy, nginx, and keepalived; I’m sure you won’t regret your choice!


  • Muahahah, my Willy Tarreau trap was successful ;-)

    Thanks a lot for the tips. We really aren’t even close to needing any performance tweaks (outside of optimizing the actual application), but I’ve been trying to figure out the best way to serve everything static from a separate server while maintaining the in-application security. I haven’t figured it out completely, but I have some ideas that I’ll be playing with and benchmarking.

  • Keepalived and HAProxy. You can also use nginx on your load balancers to do SSL offloading, so the connection will be encrypted between the clients and the load balancers, and clear text between the load balancers and your webservers.
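For anyone curious what the keepalived half of the HAproxy + keepalived pairing mentioned in these comments looks like, here’s a rough sketch of an active/passive pair sharing a floating IP. Again, the interface name, IPs, and password are placeholders, not a real deployment:

```
# /etc/keepalived/keepalived.conf -- active/passive load balancer pair
vrrp_script chk_haproxy {
    script "killall -0 haproxy"    # succeeds while an haproxy process exists
    interval 2
}

vrrp_instance VI_1 {
    state MASTER                   # set to BACKUP on the standby box
    interface eth0                 # placeholder interface
    virtual_router_id 51
    priority 101                   # standby gets a lower priority, e.g. 100
    authentication {
        auth_type PASS
        auth_pass changeme         # placeholder shared secret
    }
    virtual_ipaddress {
        192.0.2.10/24              # floating IP that clients connect to
    }
    track_script {
        chk_haproxy
    }
}
```

If HAproxy dies on the master, the health check fails, VRRP priority drops, and the standby takes over the floating IP — no colocation staff required for the cold reboot.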