Dumps

 
Docs for end-users of the data dumps at [[MetaWikipedia:Data dumps]].
 
For a list of various information sources about the dumps, see [[Dumps/Other information sources]].
  
*For documentation on the "adds/changes" dumps, see [[Dumps/Adds-changes dumps]].
*For documentation on the media dumps, see [[Dumps/media]].
*For current development plans, see [[Dumps/Development 2012]].
*For historical information about the dumps, see [[Dumps/History]].

{| cellspacing="0" cellpadding="0" style="clear: {{{clear|right}}}; margin-bottom: .5em; float: right; padding: .5em 0 .8em 1.4em; background: none; width: {{{width|{{{1|auto}}}}}};"
| __TOC__
|}
 
== Overview ==
 
  
User-visible files appear at http://download.wikipedia.org/backup-index.html

Dump activity involves a monitor node (running status sweeps) and arbitrarily many worker nodes running the dumps.

=== Status ===

For which hosts are serving data, see [[Dumps/Dump servers]].  For which hosts are generating which dumps, see [[Dumps/Snapshot hosts]].

We want mirrors!  For more information see [[Dumps/Mirror status]].

=== Worker nodes ===

The worker processes go through the set of available wikis to dump automatically.  Dumps are run on a "longest without a dump runs next" schedule.  The plan is to have a complete dump for each wiki every 2 weeks, except for enwikipedia, which should have a complete dump once a month.

The shell script <code>worker</code> which starts one of these processes simply runs the python script <code>worker.py</code> in an endless loop.  Multiple such workers can run at the same time on different hosts, as well as on the same host.

The worker.py script creates a lock file on the filesystem containing the dumps (as of this writing, <code>/mnt/data/xmldatadumps/</code>) in the subdirectory <code>private/name-of-wiki/lock</code>.  No other process will try to write dumps for that project while the lock file is in place.

Local copies of the shell script and the python script live on the snapshot hosts in the directory <code>/backups</code> but currently are run out of /backups-atg (since this code is not yet in trunk) in screen sessions on the various hosts, as the user "backup".

=== Monitor node ===

The monitor node checks for and removes stale lock files from dump processes that have died, and updates the central index.html file which shows the dumps in progress and the status of the dumps that have completed (i.e. http://dumps.wikimedia.org/backup-index.html ).  It does not start or stop worker processes.
 
The shell script <code>monitor</code> which starts the process simply runs the python script <code>monitor.py</code> in an endless loop.
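
The wrapper does nothing beyond looping; a minimal sketch of what it amounts to (illustrative only, the real script in the repo may also sleep between runs or pass options):
<pre>
#!/bin/bash
# illustrative sketch of the monitor wrapper, not the actual script
while true; do
    python monitor.py
    sleep 60    # interval is an assumption
done
</pre>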
 
  
As with the worker nodes, local copies of the shell script and the python script <small>live on the snapshot hosts in the directory <code>/backups</code> but</small> currently are run out of /backups-atg (since this code is not yet in trunk) in a screen session on one host, as the user "backup".
  
 
=== Code ===
 
  
Check [https://gerrit.wikimedia.org/r/gitweb?p=operations/dumps.git;a=tree;f=xmldumps-backup;hb=ariel /operations/dumps.git, branch 'ariel'] for the python code in use.  Eventually this will make its way back into master; it's still a bit gross right now.

Getting a copy:
: <code>git clone https://gerrit.wikimedia.org/r/p/operations/dumps.git</code>
: <code>git checkout ariel</code>

Getting a copy as a committer:
: <code>git clone ssh://<user>@gerrit.wikimedia.org:29418/operations/dumps.git</code>
: <code>git checkout ariel</code>

=== Programs used ===

See also [[Dumps/Software dependencies]].

The scripts call mysqldump, getSlaveServer.php, eval.php, dumpBackup.php, and dumpTextPass.php directly for dump generation. These in turn require backup.inc and backupPrefetch.inc and may call ActiveAbstract/AbstractFilter.php and fetchText.php.
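
For illustration, a stub dump pass is roughly the kind of call that gets made (the real invocations, including prefetch, chunk and filter arguments, are assembled by worker.py; the wiki name and output path here are made up):
<pre>
php maintenance/dumpBackup.php --wiki=aawiki --full --stub --quiet \
    --output=gzip:/mnt/data/xmldatadumps/public/aawiki/20120620/aawiki-20120620-stub-meta-history.xml.gz
</pre>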

The generation of XML files relies on Export.php under the hood and of course the entire MW infrastructure.

The worker.py script relies on a few C programs for various bz2 operations: checkforbz2footer and recompressxml, both in /usr/local/bin/. These are in the git repo, see [https://gerrit.wikimedia.org/r/gitweb?p=operations/dumps.git;a=tree;f=xmldumps-backup/mwbzutils;h=e76ee6cb52fd40e570e2e62a969f8b57902de1b9;hb=ariel].
  
 
== Setup ==
 
=== Adding a new worker box ===

Install and add to site.pp, copying one of the existing snapshot stanzas in puppet.  This does, among other things:

# set up the base MW install without apache running
# Add worker to /etc/exports/ on dataset2
# Add /mnt/data to /etc/fstab of worker host
# Build the utfnormal php module (done for lucid)
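
For illustration, the exports and fstab entries from steps 2 and 3 above look something like the following (host names, export path and mount options here are made up; puppet has the real values):
<pre>
# /etc/exports on the dataset host (illustrative)
/data    snapshot4.example.wmnet(rw,sync,no_subtree_check)

# /etc/fstab on the new worker host (illustrative)
dataset2:/data    /mnt/data    nfs    rw,hard,intr    0 0
</pre>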
 
For now:
 
# Backups are running test code out of /backups-atg on each host so grab a copy of that from any existing host and copy it into /backups-atg on the new host. This will include conf files; you don't need to specify them separately.
#: '''In transition, being moved to /backups. To be updated as soon as move is complete.'''
# Check over the configuration file and make sure it looks sane, all the paths point to things that exist, etc.  For too many details see [https://gerrit.wikimedia.org/r/gitweb?p=operations/dumps.git;a=blob_plain;f=xmldumps-backup/README.config;hb=ariel the README.config file in the git repo].
#* We run enwiki on its own host.  If this host is going to do that work, check <code>/backups-atg/wikidump.conf.enwiki</code>.
#* The next 8 or so largest wikis are run on their own separate host so they don't backlog the smaller wikis.  For that, check <code>/backups-atg/wikidump.conf.bigwikis</code>.
#* The remainder of the wikis run on one host.  Check <code>/backups-atg/wikidump.conf</code> for those.

<!--We will eventually do...
# '''git pull something for public repo ... /backups'''
# '''git pull something else for private repo with config files in it... /backups/conf'
# mv wikidump.conf ../.-->
 
  
 
== Dealing with problems ==
 
=== Space ===

If the host serving the dumps runs low on disk space, you can reduce the number of backups that are kept.  Edit the appropriate file <code>/backups-atg/wikidump.conf*</code> on the host running the set of dumps you would like to adjust (enwiki = wikidump.conf.enwiki, the next 8 or so big wikis = wikidump.conf.bigwikis, the rest = wikidump.conf) and change the line that says "keep=<some value>" to some smaller number.
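
For example, on the host handling the big wikis you might drop the number of kept runs like this (the value is illustrative; only the keep line changes):
<pre>
# in /backups-atg/wikidump.conf.bigwikis
keep=5
</pre>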
  
 
===Failed runs===
 
Logs will be kept of each run. You can find them in the directory for the particular dump, filename <code>dumplog.txt</code>.  You can look at them to see if there are any error messages that were generated for a given run.
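
For example (the path is illustrative; use the actual output directory for the wiki and run date in question):
<pre>
grep -i error /mnt/data/xmldatadumps/public/elwiki/20120615/dumplog.txt
</pre>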
  
 
The worker script can send email if a dump does not complete successfully.  (Better enable this.)  It currently sends email to...
 
 
When one or more steps of a dump fail, the index.html file for that dump includes a notation of the failure and sometimes more information about it. Note that one step of a dump failing does not prevent other steps from running unless they depend on the data from that failed step as input.
 
  
See [[Dumps/Rerunning a job]] for how to rerun all or part of a given dump. This also explains what files may need to be cleaned up before rerunning.
  
===Dumps not running===
This covers restarting the dumps after a host reboot, after a reboot of the dataset host with the nfs share where dumps are written (which may cause dumps to hang), or when the dumps stop running for other reasons.
  
If the host crashes while the script is running, the status files are left as-is and the display shows it as still running until the monitor node decides the lock file is stale enough to mark it as aborted.  To restart, start a screen session on the host as root and fire up the appropriate number of worker scripts with the appropriate config file option.  See [[Dumps/Snapshot hosts]] for which hosts do what; this lists which commands get run on each host in how many windows.  If the monitor script is not running, restart it in a separate window of the same screen session; see the Dump servers page for the command and for which host it runs on.
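
A rough sketch of such a restart on a worker host (illustrative only: the exact commands, config files and number of windows per host are listed on [[Dumps/Snapshot hosts]], and the option name passed to the worker script is an assumption here):
<pre>
# on the snapshot host (illustrative only)
screen -S dumps                                  # new screen session, one window per worker
cd /backups-atg
./worker --configfile wikidump.conf.bigwikis     # hypothetical option name; check README.config for the real one
</pre>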
  
If the worker script encounters more than three failed dumps in a row (currently configured as such? or did I hardcode that?) it will exit; this avoids generation of piles of broken dumps which later would need to be cleaned up.  Once the underlying problem is fixed, you can go to the screen session of the host running those wikis and rerun the previous command in all the windows. See [[Dumps/Snapshot hosts]] for which hosts do what if you're not sure.
  
===Running a specific dump on request===

See [[Dumps/Rerunning a job]] for how to run a specific dump. This is done for special cases only.

== Deploying new code ==
  
See [[Dumps/How to deploy]] for this.

== Bugs, known limitations, etc. ==

See [[Dumps/Known issues and wish list]] for this.

== File layout ==

* <base>/
** [http://dumps.wikimedia.org/index.html index.html] - Information about the server
** [http://dumps.wikimedia.org/backup-index.html backup-index.html] - List of all databases and their last-touched status
** [http://dumps.wikimedia.org/afwiki/ <db>/]
*** <date>/
**** [http://dumps.wikimedia.org/afwiki/20060122/ index.html] - List of items in the database

Sites are identified by raw database name currently. A 'friendly' name/hostname can be added for convenience of searching in future.
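
A full URL for a single dump file then follows the usual <db>-<date>-<contents> naming, for example (file name illustrative; the per-run index.html lists the real ones):
<pre>
wget http://dumps.wikimedia.org/afwiki/20060122/afwiki-20060122-pages-articles.xml.bz2
</pre>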
 
  
 
[[Category:How-To]]
 
[[Category:Risk management]]
[[Category:dumps]]
