[H-GEN] Constructing load-balancing high-traffic web servers on cheap hardware?

Bruce Campbell bc at thehub.com.au
Tue Feb 23 03:47:24 EST 1999


On Tue, 23 Feb 1999, Ben Carlyle wrote:

> First up: I'm not asking in any official capacity...
>           just putting out feelers.
> 
> What is the best way to setup a group of redundant web
> servers for low-cost and high-reliability?

Apart from suggesting a Cisco Warp device, I'd suggest a poor man's
solution: a pair (or more) of Squid proxies, load-balanced via round-robin
DNS, providing a caching service in front of the actual web servers
(depending on the platform and how it's expanding, three or so?).
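For the round-robin DNS part, the zone file would look something like this (a sketch only; hostnames and addresses here are made up):

```
; hypothetical BIND zone fragment: multiple A records for one name.
; The name server rotates the order of the answers, so clients are
; spread across the proxies (crude, but free load balancing).
www     IN  A   192.0.2.10   ; proxy1
www     IN  A   192.0.2.11   ; proxy2
```

Note that round-robin DNS gives no failover: if proxy1 dies, roughly half the clients still get its address until you pull the record and the TTL expires, so keep TTLs short.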

> I've seen a fair bit of documentation on the squid http
> accelerator option, and some round-robin apache web
> serving.  Has anyone in humbug used these kinds of
> setups for reasonably sized sites?  How do they perform
> in practice?

The tests I did with TheHub proxies showed little difference in perceived
user performance unless you have dedicated proxy servers.  (These were
acting as customer proxies, so they could not be tweaked with such
niceties as giving images from a particular site a longer expire time to
avoid excessive 'has this changed?' querying.)  There was, however, a
noticeable saving on web server load.  With suitable tweaking of rules
for known static content, a larger performance improvement could be
obtained.
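For what it's worth, the sort of tweaking I mean looks roughly like this in squid.conf (the patterns and times are just examples, not recommendations):

```
# Give known-static image content a long minimum freshness, so squid
# serves it from cache instead of repeatedly asking the origin server
# 'has this changed?'.
#
#               regexp                  min(mins) percent max(mins)
refresh_pattern -i \.(gif|jpg|jpeg)$    1440      50%     10080
refresh_pattern .                       0         20%     4320
```

The first matching pattern wins, so the catch-all `.` rule goes last.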

> My current idea is to use the squid accelerator with
> virtual hosts enabled based on a round robin script
> and very short term (a few seconds) caching.  More
> static parts of the site can be housed on other servers.
> 
> Does this sound like reasonable proposal to give to my
> superiors?

Don't run the proxies on the same machines as the web servers, unless
you've got suitably beefed-up machines.
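An accelerator setup along the lines Ben describes would be configured roughly like this in squid 2.x (directive names are from that era's squid.conf; the values are illustrative):

```
# Listen on the web port and accelerate whichever virtual host the
# client's Host: header names, rather than one fixed backend server.
http_port 80
httpd_accel_host virtual
httpd_accel_port 80
httpd_accel_uses_host_header on
```

The very-short-term caching Ben mentions would then come from the refresh rules, keeping freshness times down to a few seconds for the dynamic parts of the site.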

--==--
Bruce.


-
This is list (humbug) general handled by majordomo at humbug.org.au.
Postings are accepted only from subscribed addresses of lists general or
general-post.


