Network design
The purpose of this page is to give an overview of the current design of the network of the Wikimedia servers, and to provide a place to develop a new and improved network scheme.
Current situation
Wikimedia servers reside in two racks along with Bomis servers, hosted at Candidhosting. Wikimedia/Bomis have a dedicated IP range, 207.142.131.192/26. There are two gateways, 207.142.131.193 and 207.142.131.225, but both resolve to the same MAC address, so they are almost certainly the same router. Total burstable bandwidth is 200 Mbit/s, delivered through two separate 100BaseTx uplinks connected to a broadcast domain that is shared with other customers.
Wikimedia owns three switches. As the two uplinks must not form a loop, they have to be connected to different switches that are not connected to each other (as long as STP is not used), which is not an ideal situation. A third switch is currently used to connect internal servers that don't have public IPs and should not be reachable from the Internet. The IP range used for this internal network is 10.0.0.0/8.
Problems
The current network setup is not optimal in many ways, as will be described here.
Multiple uplinks
Recently, Wikimedia traffic spiked to 100 Mbit/s several times, which is the limit of a single 100BaseTx connection. Average outgoing traffic at the moment is about 45 Mbit/s, so it is clear that Wikimedia was slowly becoming network limited. However, the colo provider charges $400 per month just to provide us with a Gigabit uplink, unless we commit to 60 Mbit/s average traffic or higher. Instead, they decided to give us a second 100BaseTx uplink for free.
This does pose some problems, though. Because the two uplinks come from the same broadcast domain, we cannot connect them internally, or we would create a loop. One solution is to connect the uplinks to different switches that are not connected to each other, but this means that hosts on the two switches can only exchange traffic through the uplinks. That traffic is graphed and billed twice, and is a bottleneck, as it has to traverse both relatively slow uplinks.
It appears that, even though Wikimedia has a dedicated IP range, the broadcast domain is shared with other customers: running tethereal shows a lot of non-Wikimedia traffic. It's odd that Wikimedia doesn't have its own broadcast domain (probably implemented as a separate VLAN at the upstream provider), as there doesn't seem to be a reason not to.
Within a shared broadcast domain, other customers can snoop Wikimedia traffic, spoof our IPs, and cause unnecessary traffic through our uplinks.
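One way to verify the shared broadcast domain is to capture traffic that neither originates from nor is destined for our own range; anything that matches belongs to other customers. A sketch using tethereal (the interface name eth1 is an assumption, not taken from the actual server configuration):

```shell
# Disable name resolution (-n) and apply a capture filter that excludes
# all traffic involving the Wikimedia range; any packets shown are
# other customers' traffic on our broadcast domain.
tethereal -n -i eth1 -f 'not net 207.142.131.192/26'
```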
Inflexible internal network setup
The Wikimedia network was recently split into two parts: the external, publicly visible network containing machines that need to be reachable from the Internet (mostly the Squids), and an internal network for machines that are only accessed by other Wikimedia servers (Apaches, DB servers, management devices). Some servers, like the Squids, need to be on both networks because they serve as gateways between the Internet and the internal machines.
The internal network is currently implemented as a physically separate switch. This switch is not connected to the other two, and the only paths to the external network are through the servers that are on both networks. These servers use separate interfaces to connect to the different networks (eth0 for internal, eth1 for external).
Using physically separate switches for different networks is inflexible: it does not permit efficient use of resources like switch ports and bandwidth, and it requires extra switches when the internal network is full, even if the switches for the external network have plenty of free ports. The switches currently in use support VLANs (including 802.1Q tagging) and all of their advantages, so it would be good to use them.
- Plan is to switch to a VLAN once we find out what's connected to each switch port - Kate
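With 802.1Q, a single physical switch port (and server NIC) can carry both networks as tagged VLANs. A minimal Linux-side sketch; the VLAN IDs (2 for external, 3 for internal) and the IP addresses are made up for illustration:

```shell
# Load the 802.1Q kernel module and create tagged subinterfaces on eth0.
modprobe 8021q
vconfig add eth0 2    # creates eth0.2 (external VLAN, ID assumed)
vconfig add eth0 3    # creates eth0.3 (internal VLAN, ID assumed)

# Addresses below are illustrative, not the real assignments.
ifconfig eth0.2 207.142.131.226 netmask 255.255.255.192 up
ifconfig eth0.3 10.0.0.5 netmask 255.0.0.0 up
```

The switch port on the other end would be configured as an 802.1Q trunk carrying both VLANs, so a dual-homed server like a Squid no longer needs two physical interfaces.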
Failover default routing using BGP
Because the internal servers are not directly connected to the Internet, both Zwinger and Albert are set up to source-NAT traffic originated by these internal servers, allowing them to reach Internet servers for management purposes.
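A source-NAT setup of this kind boils down to a single netfilter rule on the gateway; a sketch of what such a rule looks like (the public address and interface name are illustrative, not the actual configuration on Zwinger or Albert):

```shell
# Allow the kernel to forward packets between interfaces.
echo 1 > /proc/sys/net/ipv4/ip_forward

# Rewrite the source address of packets leaving via the external
# interface from the internal 10.0.0.0/8 network to a public IP.
iptables -t nat -A POSTROUTING -s 10.0.0.0/8 -o eth1 \
         -j SNAT --to-source 207.142.131.194
```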
Two hosts are configured as routers to provide failover support. However, this is done using BGP and Quagga on all boxes, which seems excessive: better and simpler solutions exist for this job, namely VRRP and CARP. These only need to be implemented on the routers, and don't require complicated daemons and protocols running on each host.
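With VRRP, the internal hosts simply point their default route at a shared virtual IP, and the routers negotiate which of them answers for it. A sketch of a keepalived configuration for the master router; the virtual IP 10.0.0.254, router ID, and priorities are assumptions for illustration:

```
# /etc/keepalived/keepalived.conf on the master router (sketch).
# The backup router runs the same block with state BACKUP and a
# lower priority; failover then happens without any daemon running
# on the internal hosts.
vrrp_instance internal_gw {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 150
    advert_int 1
    virtual_ipaddress {
        10.0.0.254
    }
}
```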
Limited switch features
Proposed solutions
This section discusses some possible solutions to the problems mentioned.
Gigabit uplink
The easiest and probably best solution to the multiple uplinks problem is to just get a single Gigabit (1000BaseT) uplink. This would solve all bandwidth problems for quite a while, and save us from having to design our network to prevent loops. It isn't as redundant as two links, but switch port failures are quite uncommon, and when they do occur they often affect multiple ports at once anyway.
As it turns out, this option currently costs Wikimedia an extra $400 per month until we generate a higher monthly average (60 Mbit/s), and is therefore not likely to happen soon.
LACP trunks
An alternative solution to the multiple uplinks problem, and one that actually takes advantage of the two uplinks we have, is to configure them as an LACP trunk. This means aggregating the two links into one logical 200 Mbit/s link using the IEEE 802.3ad protocol. This gives both the performance and reliability benefits of two physical links, but does not pose problems for our current network design, as no loop is created when both uplinks are aggregated on one switch. Our current switches do support 802.3ad link aggregation, so we could start using it right away, as long as the colo provider is willing to cooperate.
This also means that our internal traffic does not have to traverse the uplinks, and therefore cannot be graphed and billed. We can interconnect our switches at full capacity without any problems, so there won't be performance bottlenecks.
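On the server or router side, the Linux bonding driver provides the 802.3ad end of such a trunk. A sketch, assuming the two uplink-facing interfaces are eth0 and eth1 and using an illustrative IP address:

```shell
# Load the bonding driver in 802.3ad (LACP) mode; miimon polls link
# state every 100 ms so a failed link is taken out of the aggregate.
modprobe bonding mode=802.3ad miimon=100

# Bring up the logical interface, then enslave the physical links.
# The switch ports on the far end must also be configured for LACP.
ifconfig bond0 207.142.131.200 netmask 255.255.255.192 up
ifenslave bond0 eth0 eth1
```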
Proposed design
-- Mark 15:46, 22 Oct 2004 (UTC)