LVS

Revision as of 11:34, 14 September 2006

lvsmon is used as LVS load balancer control script between the Squids and the Apaches. In front of the Squids we're using another script, PyBal.

Apache pool

Director setup

Dalembert is functioning as an LVS-DR director. Installing a new LVS director is just a matter of

yum install ipvsadm
ip addr add 10.0.5.3 dev eth0        # bring up the service VIP on the director
cp ~tstarling/lvs/* /usr/local/bin/
screen
lvsmon
^AD                                  # Ctrl-A D: detach from the screen session
run-icpagent.sh

Apache setup

When installing new apaches, one has to be careful of the "ARP problem". If you add the LVS virtual IP to an interface of anything other than the director without setting arp_announce and arp_ignore on all ethernet interfaces, that apache may steal the IP from the director. Presumably icpagent won't be running on the apache, so squid would automatically fall back to perlbal (assuming it's running), so it wouldn't be an unmitigated disaster. But it's probably best not to try it out.

The procedure is as follows:

cat /home/config/others/etc/sysctl.conf.local >> /etc/sysctl.conf
sysctl -w net.ipv4.conf.eth0.arp_ignore=1
sysctl -w net.ipv4.conf.eth0.arp_announce=2
sysctl -w net.ipv4.conf.eth1.arp_ignore=1
sysctl -w net.ipv4.conf.eth1.arp_announce=2

The last two commands will probably give you an error since eth1 usually doesn't exist, but you may as well run them anyway just in case. Now, I haven't tried this myself yet, but I think it would be sensible to run a test to make sure ARP is configured correctly. 10.0.5.4 is a reserved service IP and should not be used anywhere.

ip addr add 10.0.5.4 dev lo
ssh zwinger ping 10.0.5.4

This should give "destination host unreachable". This test could easily be automated and run concurrently in apache setup scripts. If you get a response, fix it before continuing to the next step. This is the scary step.

ip addr del 10.0.5.4 dev lo
ip addr add 10.0.5.3 dev lo

Then add the new apache to the apaches node group and restart lvsmon on the director.
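The ARP test above could be scripted, as suggested. Here is a minimal sketch, assuming the reserved test IP 10.0.5.4 and a remote host to ping from; the helper name `arp_check_verdict` is hypothetical, not part of the existing scripts:

```shell
#!/bin/sh
# Hypothetical helper for an automated ARP check: inspect captured ping
# output and report whether the test VIP leaked onto the network.
arp_check_verdict() {
    # "Destination Host Unreachable" means nobody answered ARP for the
    # test IP, which is what we want.
    if echo "$1" | grep -qi 'destination host unreachable'; then
        echo "ok"
    else
        echo "FAIL"
    fi
}

# Usage on a new apache (requires root and network access, untested here):
#   ip addr add 10.0.5.4 dev lo
#   out=$(ssh zwinger "ping -c 3 -w 5 10.0.5.4" 2>&1)
#   ip addr del 10.0.5.4 dev lo
#   [ "$(arp_check_verdict "$out")" = ok ] || { echo "ARP leak!"; exit 1; }
```

A check like this could run concurrently in the apache setup scripts and abort before the "scary step" if the test IP answers.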

lvsmon

Lvsmon is 80 lines of PHP code written by Tim to monitor apaches and configure ipvsadm accordingly. It should be run in a screen session, with no arguments. It uses curl to request http://en.wikipedia.org/w/health-check.php. Because it's so short, I'd recommend reading the code if you want to know the details. But here's an important point: it gets its list of apaches from the dsh node group, and then tests them on their unique 10/8 addresses, not on the VIP. So if a machine is running apache but isn't set up for LVS rotation, it's important to remove it from the apaches node group: lvsmon will see it as healthy and put it in rotation, but since it can't accept traffic addressed to the VIP, intermittent "connection refused" errors will be returned to the user.

If you kill lvsmon, LVS will keep working, it just won't notice apache state changes anymore.
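The behaviour described above can be sketched roughly in shell (the real lvsmon is PHP, and its source is linked below; the VIP, port, node-group path, and timeouts here are assumptions for illustration):

```shell
#!/bin/sh
# Rough sketch of lvsmon's logic -- not the actual tool.
# Assumed values: the service VIP/port and the dsh node group file.
VIP=10.0.5.3:80
NODES=/etc/dsh/group/apaches

# Build the ipvsadm command reflecting one health-check result.
# (Kept as a pure helper so the decision logic is visible.)
ipvs_action() {  # $1 = realserver address, $2 = "up" or "down"
    if [ "$2" = up ]; then
        echo "ipvsadm -a -t $VIP -r $1 -g -w 10"
    else
        echo "ipvsadm -d -t $VIP -r $1"
    fi
}

# Main loop: check each apache on its unique 10/8 address, not the VIP,
# exactly as described above. Single-threaded, one host at a time.
# while true; do
#     while read host; do
#         if curl -s -m 5 -o /dev/null "http://$host/w/health-check.php"; then
#             eval "$(ipvs_action "$host" up)" 2>/dev/null
#         else
#             eval "$(ipvs_action "$host" down)" 2>/dev/null
#         fi
#     done < "$NODES"
#     sleep 10
# done
```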

For a copy of the source, click here

Removing apaches

Apaches can be removed from the pool temporarily by simply shutting down apache. Because lvsmon runs in a single thread, checking apaches in turn, every permanently dead apache adds a connection timeout to each monitoring pass, so it's better to remove those from the apaches node group entirely.

Diagnosing problems

Run ipvsadm -l on the director. Healthy output looks like this:

IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  upload.pmtpa.wikimedia.org:h wlc
  -> sq10.pmtpa.wmnet:http        Route   10     5202       5295
  -> sq1.pmtpa.wmnet:http         Route   10     8183       12213
  -> sq4.pmtpa.wmnet:http         Route   10     7824       13360
  -> sq5.pmtpa.wmnet:http         Route   10     7843       12936
  -> sq6.pmtpa.wmnet:http         Route   10     7930       12769
  -> sq8.pmtpa.wmnet:http         Route   10     7955       11010
  -> sq2.pmtpa.wmnet:http         Route   10     7987       13190
  -> sq7.pmtpa.wmnet:http         Route   10     8003       7953

All the servers are getting a decent amount of traffic; the differences are just normal variation.

If a realserver is refusing connections or doesn't have the VIP configured, it will look like this:

IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  upload.pmtpa.wikimedia.org:h wlc
  -> sq10.pmtpa.wmnet:http        Route   10     2          151577
  -> sq1.pmtpa.wmnet:http         Route   10     2497       1014
  -> sq4.pmtpa.wmnet:http         Route   10     2459       1047
  -> sq5.pmtpa.wmnet:http         Route   10     2389       1048
  -> sq6.pmtpa.wmnet:http         Route   10     2429       1123
  -> sq8.pmtpa.wmnet:http         Route   10     2416       1024
  -> sq2.pmtpa.wmnet:http         Route   10     2389       970
  -> sq7.pmtpa.wmnet:http         Route   10     2457       1008

Active connections for the problem server are depressed, while inactive connections are normal or above normal. This must be fixed immediately: in wlc mode, LVS sends new connections to the server with the fewest weighted active connections, so a server that is refusing connections looks least loaded and ends up getting most of the traffic.
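One way to take the pressure off while you fix the realserver, sketched here on the assumption that the broken server is sq10 from the output above, is to drop its weight to 0 so wlc stops assigning it new connections (this is standard ipvsadm usage, not a documented Wikimedia procedure):

```shell
# Quiesce the broken realserver: weight 0 means no new connections,
# but existing entries are left to drain.
ipvsadm -e -t upload.pmtpa.wikimedia.org:http -r sq10.pmtpa.wmnet:http -w 0

# Or remove it from the pool entirely:
# ipvsadm -d -t upload.pmtpa.wikimedia.org:http -r sq10.pmtpa.wmnet:http
```

Note that lvsmon may re-add or re-weight the server on its next pass, so this only buys time while the underlying problem is addressed.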

LVS director list

Cluster          Director    VIP
pmtpa apaches    dalembert   10.0.5.3
upload squids    avicenna    207.142.131.228
yaseo apaches    yf1018      211.115.107.161
yaseo squids     yf1018      211.115.107.162