MySQL
Source
The patched MySQL 4.0 source is available in a special repository on svn.wikimedia.org.
Sun has officially stopped maintaining MySQL 4.0, but community development continues to some extent, mainly in the form of backported patches from MySQL 5.0. Domas was responsible for patch selection in Wikimedia's fork of MySQL, and he called the result "version 4.0.40", thinking the number suitably poetic.
Packaging
This source is packaged as the wikimedia-mysql4 package, available from apt.wikimedia.org, with its debian/ directory checked in to /mediawiki/trunk/debs as usual. The package version number contains the revision number of the mysql repository described above.
The package contains a Wikimedia-specific default configuration file. Note that this configuration file must be edited manually after installation to give it a unique server ID.
Installation
- Partition the hard drive with a small root partition and the bulk of the space mounted at /a. XFS and JFS are considered appropriate filesystems for /a.
- Install the wikimedia-mysql4 package. This creates a mysql user.
- Create directory /a/tmp, owned by mysql:mysql.
- Create directory /a/sqldata.
- /a/sqldata is generally copied from an existing MySQL server, so that the new server can be added to that replicated cluster.
- For those rare cases when you need to start a data directory from scratch, the shell script /usr/mysql-wikimedia/bin/mysql_install_db should be used. This creates necessary administrative files in /a/sqldata.
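As a command sketch, the directory setup and (rarely needed) from-scratch bootstrap above might look like the following, run as root. This assumes the packaged /etc/my.cnf already points the data directory at /a/sqldata; verify the mysql_install_db options against the installed version.

```
# Create the working directories with the ownership the mysql user needs
mkdir -p /a/tmp /a/sqldata
chown mysql:mysql /a/tmp /a/sqldata

# Only for a from-scratch data directory (normally /a/sqldata is copied
# from an existing slave instead):
/usr/mysql-wikimedia/bin/mysql_install_db --user=mysql
```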
- Edit /etc/my.cnf and change:
- The server-id variable, to a unique ID derived from the IP address
- The innodb_buffer_pool_size, to around 80% of physical memory.
- Note that many settings need to be changed if MyISAM is to be used, e.g. for external storage.
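Taken together, the edits above amount to a my.cnf fragment along these lines (the values shown are illustrative, not the package defaults):

```
# /etc/my.cnf (excerpt) -- illustrative values
server-id               = 100241    # unique per server, derived from the internal IP
innodb_buffer_pool_size = 12000M    # roughly 80% of RAM on a 16 GB machine
```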
Copying the data directory
- Depool a slave from the same cluster, which will be used as the data source.
- Shut down MySQL on it.
- Find the following settings in /etc/my.cnf on the source server and copy them to the destination server:
- innodb_data_file_path. The names and sizes of the data files must match.
- innodb_log_file_size. Note that if this is small (<100MB) it can cause performance problems.
- Copy the data. Various methods have been tried for this. Rsync is good but has some nasty performance cases that can cause problems. An uncompressed tarpipe over netcat is probably best. First start the receiver:
nc -l 9999 | tar -xv -C/a
Then start the sender:
tar -cv -C/a sqldata | nc <dest_server> 9999
This should work with the nc shipped with Ubuntu; other netcat flavours may need additional command-line options so that the connection is closed after EOF instead of being held open.
De-pooling / Clean up / Shutdown
- The file we are going to be working on is: /home/wikipedia/common/wmf-deployment/wmf-config/db-pmtpa.php
- In this file, find the cluster you are working with. In this case, we want to add a slave to the commons db, ergo the cluster is: s2commons
- The 'sectionLoads' array will show the current members of the cluster. The FIRST member is ALWAYS the MASTER! In this case, the master would be db13.
- Pick a slave. For this we will use db3, and we will be working on installing db8 as the new slave.
- De-pool db3 so that we can copy data from it when the data is not changing. In the same array, comment out db3, and sync-file db-pmtpa.php.
- Clean up the slave by running reset-mysql-slave db3.pmtpa.wmnet on Zwinger
- Shut down mysqld on db3 and wait for the shutdown to complete (this can take up to half an hour).
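The de-pool sequence above can be sketched as the following session. sync-file and reset-mysql-slave are the Wikimedia tools named in the steps; their exact invocations, and the use of mysqladmin for the shutdown, are assumptions here.

```
# On the deployment host (zwinger), after commenting out 'db3' in the
# sectionLoads array of db-pmtpa.php:
sync-file wmf-config/db-pmtpa.php

# Still on zwinger: clean up replication state on the depooled slave
reset-mysql-slave db3.pmtpa.wmnet

# On db3 itself: stop mysqld and wait for a clean shutdown
# (can take up to half an hour)
mysqladmin shutdown
```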
Installing required packages on the "new" slave:
- On db8 (our new slave server), install the following:
- wikimedia-mysql4 (our default mysql server package)
- pv (to monitor the data transmission as described below)
Tweak configuration file on the new slave:
- edit the /etc/my.cnf file and change the line:
# id = 1 + zero-filled last 4 digits of internal IP
server-id = 100241
Binary copy:
As usual, there is more than one way to do it. The most efficient way, however, is the nc+tar combination, transferring everything directly from one slave to the other. You could instead use mysqldump or mydumper plus scp and re-import everything, but that is considerably more time-consuming.
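Concretely, this is the same tar-over-netcat copy shown earlier, with pv inserted on each end to monitor throughput. The destination host name and port are illustrative, and the pv placement is an assumption, not part of the documented procedure.

```shell
# On db8, the receiving (new) slave: listen, show progress, unpack under /a
nc -l 9999 | pv | tar -x -C /a

# On db3, the depooled source slave, after mysqld has shut down cleanly:
tar -c -C /a sqldata | pv | nc db8.pmtpa.wmnet 9999
```

The tar side of this pipeline is what actually preserves the directory layout; nc only carries the byte stream, so the same tar pair works locally or over any other transport.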