Setting up a MySQL replica

From Wikitech

Revision as of 19:36, 1 August 2011

Prerequisite

  • (Deprecated: this is now in puppet, but it's good to verify anyway: dpkg -l xtrabackup.) Install the current xtrabackup, i.e. xtrabackup_1.4-193.lucid.25_amd64.deb, on all servers involved

New Server Setup

  1. Ensure hardware RAID is properly configured (RAID-10, 256k stripe, writeback cache, no read-ahead)
    ToDo: add a MegaCli line to show the RAID level clearly
    root@db1047:~# MegaCli -LDInfo -Lall -Aall | egrep -e "Current Cache Policy" -e "Stripe Size"
    Stripe Size: 256kB
    Current Cache Policy: WriteBack, ReadAheadNone, Direct, No Write Cache if Bad BBU
    • if the cache policy says 'ReadAdaptive' instead of 'ReadAheadNone', run MegaCli -LDSetProp NORA -Lall -Aall
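For the ToDo above, one way to surface the RAID level is to grep the same LDInfo output (a sketch; MegaCli's exact labels vary by version, and RAID-10 typically appears as a spanned Primary-1/Secondary-0 level rather than the literal string "RAID-10"):

```
root@db1047:~# MegaCli -LDInfo -Lall -Aall | grep -i "RAID Level"
```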
  2. Setup /a xfs volume according to Domas' xfsfix (i.e. mkfs.xfs -f -d sunit=512,swidth=4096 -L data /dev/sda3 with uuid preservation)
    • Can be run from root@fenari via: (To Do: put this somewhere standard, /home/w/bin?)
      • cat /home/midom/xfsfix | ssh NEWDB "python" to preview what will run; if it looks OK:
      • cat /home/midom/xfsfix | ssh NEWDB "python | bash"
    • Note: xfsfix needs to be updated; the XFS settings do not apply correctly. Domas has blessed the following output as "good enough":
    root@db1047:~# xfs_info /a | grep swidth
    = sunit=64 swidth=512 blks
    • Note xfsfix will erase the contents of /a. This is fine because there's no data there yet, but be aware.
  3. Update /etc/fstab to mount /a with the nobarrier,noatime options instead of defaults. Verify it is mounted via UUID.
    root@db1047:~# cat /etc/fstab
    [excerpted]
    # /a was on /dev/sda6 during installation
    UUID=f1363f7d-8a44-4abe-9e38-bf2171e265c8 /a xfs nobarrier,noatime 0 2
  4. Remount /a: umount /a && mount /a
  5. mkdir /a/sqldata /a/tmp ; chown -R mysql: /a
    • Note: Puppet makes these directories if they don't exist, so don't be surprised if they magically appear or already exist. You will still have to recreate them after reformatting the partition above.
  6. Install current prod mysql version
    • Currently: copy /usr/local/mysql-at-facebook from the source db and symlink it to /usr/local/mysql (To Do: make a dpkg of our facebook-patched mysql 5.1)
    • Copy over /root/.my.cnf from an existing DB host
    • run 'aptitude update; aptitude install mysql-client-core-5.1'
  7. Empty out /etc/mysql/my.cnf; it is not used. (/etc/my.cnf is used instead)
    • [ -s /etc/mysql/my.cnf ] && > /etc/mysql/my.cnf
    • Note: if you skip this step, you will see mysql trying to start with /var/* as its working directories.
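The truncation idiom above can be sanity-checked on a scratch file first (hypothetical temp path; the real target is /etc/mysql/my.cnf):

```shell
# Demonstrate the "[ -s file ] && > file" idiom: truncate a file
# only if it currently has content, leaving an empty file behind.
F=$(mktemp)
echo "some-config-line" > "$F"   # file is now non-empty
[ -s "$F" ] && > "$F"            # -s: true if file exists and is non-empty
wc -c < "$F"                     # prints 0: the file still exists but is empty
```

The file is emptied rather than deleted, so the package's expectation that the path exists is preserved.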

On New/Intended Slave

  • cd /a/sqldata
  • nc -l 9210 | tar xvi
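Setting the nc transport aside, the tar streaming pair can be sanity-checked locally by replacing the network hop with a pipe (the scratch directories and test.ibd file here are hypothetical stand-ins for the sqldata tree):

```shell
# Local stand-in for "tar stream | nc" feeding "nc -l | tar xvi":
# stream a source tree as tar on stdout and unpack it on the other end.
SRC=$(mktemp -d); DST=$(mktemp -d)
echo "ibdata" > "$SRC/test.ibd"           # hypothetical stand-in for real data files
tar -cf - -C "$SRC" . | tar -xvi -f - -C "$DST"   # -i matches the recipe's tar xvi
ls "$DST"
```

The -i (--ignore-zeros) flag matters because innobackupex's streamed tar can contain zero blocks mid-stream; without it, extraction stops early.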

On Existing DB

  • innobackupex-1.5.1 --stream=tar /a/sqldata --user=root --slave-info | nc NEW-SERVER 9210
  • This results in a brief write lock towards the end of the process, while the table definition files (but not the data) are copied. For enwiki, this is around 200MB. For this reason, this shouldn't be run on a write master if slaves are available.

On New Slave, after data copy

Do this before starting mysql!

  • innobackupex-1.5.1 --apply-log --use-memory=22G --ibbackup=xtrabackup_51 /a/sqldata
  • This is required to make the copied data consistent to a specific binlog position.
  • Start mysql, then CHANGE MASTER based on the information in /a/sqldata/xtrabackup_slave_info on the new host. Important: this should point to the correct binlog position on the cluster's actual write master at which to start replication. innobackupex also displays the log position on the slave the copy was taken from, but we don't want chained replication.
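A sketch of that CHANGE MASTER step. All values here are hypothetical placeholders: the real MASTER_LOG_FILE/MASTER_LOG_POS come from /a/sqldata/xtrabackup_slave_info (translated to the write master's coordinates), and the host and credentials are whatever the cluster actually uses:

```
mysql -e "CHANGE MASTER TO
  MASTER_HOST='db-master.example',         -- hypothetical: the cluster's write master
  MASTER_USER='repl',                      -- hypothetical replication account
  MASTER_PASSWORD='********',
  MASTER_LOG_FILE='db-master-bin.000123',  -- placeholder: from xtrabackup_slave_info
  MASTER_LOG_POS=456789;                   -- placeholder position
START SLAVE;"
mysql -e "SHOW SLAVE STATUS\G" | grep -e Slave_IO_Running -e Slave_SQL_Running
```

Both Slave_IO_Running and Slave_SQL_Running should report Yes once replication is caught up and healthy.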

Make any use-specific tweaks

Some DBs have differences from the normally generated config. These should come from puppet (and many do). This section serves to document some of those tweaks and why they're made.

Research Slave

  • Hosts: db42, db1047, db1048
  • Purpose: slaves of the s1 cluster dedicated to intense queries by the Summer of Research team (aka community intern statisticians).
  • Contact:
  • Tweaks: (note that these are not yet done on db1047)
    • disable binlogs on these slaves
    • set innodb-locks-unsafe-for-binlog
      • They get lots of ugly long-running (4-24 hour) write queries in the style of 'insert mydb.foo select enwiki.revision …'. Without locks-unsafe-for-binlog, such queries set a read lock on the enwiki tables for the duration of the queries, which breaks replication.
    • set slave_transaction_retries to something insanely high; the researchers occasionally find other ways to set read locks despite locks-unsafe-for-binlog
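These tweaks could be expressed as a my.cnf fragment roughly like the following (a sketch: the retry count shown is an arbitrary large value, not a blessed number, and this belongs in the puppet-generated config rather than hand edits):

```ini
# Research-slave tweaks (sketch)
[mysqld]
skip-log-bin                          # no binlog on these slaves (or simply omit log-bin)
innodb_locks_unsafe_for_binlog = 1    # long research INSERT..SELECTs must not hold read locks
slave_transaction_retries = 100000    # "insanely high": retry instead of dying on lock waits
```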