[[Category:Software]]
[[Category:Search]]
Latest revision as of 01:44, 30 January 2013
Note: this page is about Wikimedia's Lucene implementation, not Lucene generally.
= Usage =

lucene-search is a search extension for MediaWiki built on the Apache Lucene search engine. This page describes the extension, how it is deployed in the Wikimedia cluster, and some details of the Lucene search engine itself.
= Overview =

== Software ==
The system has two major software components, Extension:MWSearch and lsearchd.
The Lucene version is 2.1 and the JDK is sun-j2sdk1.6_1.6.0+update30.
=== Extension:MWSearch ===

Extension:MWSearch is a MediaWiki extension that overrides the default search backend and sends requests to lsearchd.
=== lsearchd ===

lsearchd (Extension:Lucene-search) is a versatile Java daemon that can act as a frontend, backend, searcher, indexer, highlighter, spellchecker, and more. We use it to search, highlight, spell-check, and act as an incremental indexer.
==== Essentials ====

* configuration files:
** <tt>/etc/lsearch.conf</tt> - per-host local configuration
*** in puppet: pmtpa: <tt>puppet/templates/lucene/lsearch.conf</tt>, eqiad: <tt>puppet/templates/lucene/lsearch.new.conf</tt>
** <tt>/home/wikipedia/conf/lucene/lsearch-global-2.1.conf</tt> - cluster-wide shared configuration
*** in puppet: pmtpa: <tt>puppet/templates/lucene/lsearch-global-2.1.conf.pmtpa.erb</tt>, eqiad: <tt>puppet/templates/lucene/lsearch-global-2.1.conf.eqiad.erb</tt>
* started via <tt>/etc/init.d/lsearchd</tt> in pmtpa and <tt>/etc/init.d/lucene-search-2</tt> in eqiad
* search frontend port 8123, index frontend port 8321; backend - RMI (RMI registry port 1099)
* logs in <tt>/a/search/logs</tt>
* indexes in <tt>/a/search/indexes</tt>
* jar in <tt>/a/search/lucene-search</tt>
* test with <tt>curl http://localhost:8123/search/enwiki/test</tt>
==== Installation ====

lsearchd is now deployed via puppet, without NFS, by adding the class <tt>role::lucene::front-end::(pool[1-4]|prefix)</tt> to the host.

See [[#Cluster Host Hardware Failure]] for more details on bringing up a host.
==== Configuration ====

There is a shared configuration file, <tt>/home/wikipedia/conf/lucene/lsearch-global-2.1.conf</tt>, that describes the roles hosts are assigned in the search cluster. This way lsearchd daemons can communicate with each other to obtain the latest index versions, forward requests if necessary, search over many hosts if the index is split, etc.

The per-host local configuration file is <tt>/etc/lsearch.conf</tt>. Most importantly it defines <tt>SearcherPool.size</tt>, which should be set to the local number of CPUs + 1 if only one index is searched; this prevents CPUs from locking each other out. The other important property is <tt>Search.updatedelay</tt>, which staggers index updates so that all searchers do not try to update their working copies of the index at the same time and cause noticeable performance degradation.
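For illustration, a minimal <tt>/etc/lsearch.conf</tt> might contain something like the following. The values here are hypothetical (an 8-CPU host searching a single index); the real file is generated by puppet from the templates listed above.

```
# Hypothetical values - the deployed file is generated by puppet
# Searcher pool sized to CPUs + 1 (e.g. an 8-CPU host searching one index)
SearcherPool.size=9
# Stagger index updates so hosts do not all refresh simultaneously
Search.updatedelay=60
```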
== Indexing ==

In pmtpa, searchidx2 is the indexer; in eqiad, searchidx1001 is the indexer.
* the search indexer serves as the sole indexer for the cluster
* the indexer's lsearchd daemon is configured to act as an indexer, alongside a second process, the incremental updater
* the incremental updater process is started with:
 root@searchidx1001:~# su -s /bin/bash -c "/a/search/lucene.jobs.sh inc-updater-start" lsearch
* other indexing jobs (indexing private wikis, spell-check rebuilds, etc.) are in lsearch's crontab on the indexer
* the indexer runs rsyncd to allow cluster members to fetch indexes
* other cluster hosts fetch indexes by rsync every 30 seconds, as defined by <tt>Search.updateinterval</tt> in <tt>lsearch-global-2.1.conf</tt>
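The fetch cycle each cluster host runs can be sketched as below. This is illustrative only: the real transfer is performed internally by lsearchd, and the rsync module name "search" is an assumption.

```shell
# Sketch of the periodic index fetch a cluster host performs.
INDEXER="searchidx1001"    # eqiad indexer (searchidx2 in pmtpa)
UPDATE_INTERVAL=30         # seconds, per Search.updateinterval

fetch_indexes() {
    # Dry-run: print the transfer that would refresh the local working copy
    echo "rsync -a rsync://${INDEXER}/search/ /a/search/indexes/"
}

fetch_indexes
# In the real cycle this repeats every $UPDATE_INTERVAL seconds.
```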
== Search Cluster: Shards, Pools, and Load Balancing Oh My! ==

This section has been derived from the following configuration files:
* /home/wikipedia/common/wmf-config/lucene.php
* /home/wikipedia/conf/lucene/lsearch-global-2.1.conf
* /home/wikipedia/conf/pybal/pmtpa/search_pool[1-3]
=== Index Sharding ===

We shard search indexes across hosts in the cluster to accommodate index data footprint, hardware limitations, and utilization.
=== Pools ===

We use a mixture of single-host and multi-host pools to direct requests to the servers that host the appropriate indexes. Where multi-host pools are employed we use pybal/LVS load balancing (running on lvs3) or in-code load balancing. As of Feb 2012 we have the following pool configuration:
{| class="wikitable sortable"
|-
! host !! mw(?) pool !! lvs pool !! indexed data
|-
| search1 || enwiki || search_pool1 || enwiki.nspart1.sub1<br>enwiki.nspart1.sub2
|-
| search2 || - || - || enwiki.nspart1.sub1.hl<br>enwiki.spell
|-
| search3 || enwiki || search_pool1 || enwiki.nspart1.sub1<br>enwiki.nspart1.sub2
|-
| search4 || enwiki || search_pool1 || enwiki.nspart1.sub1<br>enwiki.nspart1.sub2
|-
| search5 || - || - || enwiki.nspart1.sub2.hl<br>enwiki.spell
|-
| search6 || dewiki<br>frwiki<br>jawiki || search_pool2 || dewiki.nspart1<br>dewiki.nspart2<br>frwiki.nspart1<br>frwiki.nspart2<br>itwiki.nspart1.hl<br>jawiki.nspart1<br>jawiki.nspart2
|-
| search7 || itwiki<br>nlwiki<br>plwiki<br>ptwiki<br>ruwiki<br>svwiki<br>zhwiki || search_pool3 || itwiki.nspart1<br>nlwiki.nspart1<br>plwiki.nspart1<br>ptwiki.nspart1<br>ruwiki.nspart1<br>svwiki.nspart1<br>zhwiki.nspart1
|-
| search8 || enwiki.prefix || - || enwiki.prefix
|-
| search9 || enwiki || search_pool1 || enwiki.nspart1.sub1<br>enwiki.nspart1.sub2
|-
| search10 || - || - || dewiki.spell<br>eswiki.spell<br>frwiki.spell<br>itwiki.spell<br>nlwiki.spell<br>plwiki.spell<br>ptwiki.spell<br>ruwiki.spell<br>svwiki.spell
|-
| search11 || catch-all || - || *?<br>commonswiki.nspart1<br>commonswiki.nspart1.hl<br>commonswiki.nspart2<br>commonswiki.nspart2.hl
|-
| search12 || - || - || (?!(enwiki.|dewiki.|frwiki.|itwiki.|nlwiki.|ruwiki.|svwiki.|plwiki.|eswiki.|ptwiki.|jawiki.|zhwiki.))*.hl<br>enwiki.spell
|-
| search13 || - || - || enwiki.nspart2*
|-
| search14 || eswiki || - || enwiki.nspart1.sub1.hl<br>eswiki
|-
| search15 || dewiki<br>frwiki<br>jawiki || search_pool2 || dewiki.nspart1<br>dewiki.nspart2<br>frwiki.nspart1<br>frwiki.nspart2<br>itwiki.nspart1.hl<br>itwiki.nspart2<br>itwiki.nspart2.hl<br>jawiki.nspart1<br>jawiki.nspart2<br>nlwiki.nspart1.hl<br>nlwiki.nspart2<br>nlwiki.nspart2.hl<br>plwiki.nspart2<br>ptwiki.nspart1.hl<br>ptwiki.nspart2<br>ptwiki.nspart2.hl<br>ruwiki.nspart1.hl<br>ruwiki.nspart2<br>ruwiki.nspart2.hl<br>svwiki.nspart2<br>zhwiki.nspart2
|-
| search16 || - || - || dewiki.nspart1.hl<br>dewiki.nspart2.hl<br>eswiki.hl<br>frwiki.nspart1.hl<br>frwiki.nspart2.hl<br>itwiki.nspart1.hl<br>itwiki.nspart2.hl<br>nlwiki.nspart1.hl<br>nlwiki.nspart2.hl<br>plwiki.nspart1.hl<br>plwiki.nspart2.hl<br>ptwiki.nspart1.hl<br>ptwiki.nspart2.hl<br>ruwiki.nspart1.hl<br>ruwiki.nspart2.hl<br>svwiki.nspart1.hl<br>svwiki.nspart2.hl
|-
| search17 || - || - || dewiki.nspart1.hl<br>dewiki.nspart2.hl<br>eswiki.hl<br>frwiki.nspart1.hl<br>frwiki.nspart2.hl<br>itwiki.nspart1.hl<br>itwiki.nspart2.hl<br>nlwiki.nspart1.hl<br>nlwiki.nspart2.hl<br>plwiki.nspart1.hl<br>plwiki.nspart2.hl<br>ptwiki.nspart1.hl<br>ptwiki.nspart2.hl<br>ruwiki.nspart1.hl<br>ruwiki.nspart2.hl<br>svwiki.nspart1.hl<br>svwiki.nspart2.hl
|-
| search18 || *.prefix || - || *.prefix
|-
| search19 || - || - || (?!(enwiki.|dewiki.|frwiki.|itwiki.|nlwiki.|ruwiki.|svwiki.|plwiki.|eswiki.|ptwiki.))*.spell<br>enwiki.nspart1.sub1.hl<br>enwiki.nspart1.sub2.hl
|-
| search20 || - || - || enwiki.nspart1.sub1.hl<br>enwiki.nspart1.sub2.hl
|}
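As a quick illustration of the wiki-to-pool mapping in the table above, a hypothetical helper (not part of the deployed tooling) might encode it like this:

```shell
# Map a wiki database name to its LVS search pool, per the Feb 2012 table.
# Wikis served by hosts with no LVS pool (spell/highlight-only) get "none".
pool_for() {
    case "$1" in
        enwiki) echo search_pool1 ;;
        dewiki|frwiki|jawiki) echo search_pool2 ;;
        itwiki|nlwiki|plwiki|ptwiki|ruwiki|svwiki|zhwiki) echo search_pool3 ;;
        *) echo none ;;
    esac
}

pool_for dewiki   # prints "search_pool2"
```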
= Administration =

== Dependencies ==

* all requests from apaches depend on LVS
* each front-end node depends on the indexer for updated indexes
* the indexer depends on querying all database shards for its incremental updates
* the crons for private wikis depend on database access to the external stores
* the front-end nodes depend on rsync from /home/w/common for up-to-date MediaWiki configs
== Health/Activity Monitoring ==

Currently, the only Nagios monitoring is a TCP check on 8321, the index frontend port the daemon listens on. More monitoring is in the works.

Ganglia graphs are extremely useful for telling when a node's daemon is stuck in some way, the disk is full, etc.
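The Nagios check is essentially a TCP connect. A hand-rolled equivalent for ad-hoc checks (a sketch, not the actual Nagios plugin configuration) could be:

```shell
# Minimal stand-in for the Nagios TCP check on the index frontend port.
# Prints OK if the port accepts a connection, CRITICAL otherwise.
check_port() {
    local host="$1" port="$2"
    if timeout 2 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
        echo "OK"
    else
        echo "CRITICAL"
    fi
}

check_port localhost 8321
```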
== Software Updates ==

The LuceneSearch.jar is now installed via a package in our apt repo. Deploying a new version of the software involves building a package and adding it to the repo; puppet will then install the newer version. A manual restart of the daemon will probably be required.
== Stopping and falling back to MediaWiki's search ==

To disable Lucene and fall back to MediaWiki's built-in search, set <tt>$wgUseLuceneSearch = false</tt> in <tt>CommonSettings.php</tt>.

Note: py: I do not believe that this is a workable solution any longer.
== Adding new wikis ==

When a new wiki is created, an initial index build needs to be made. First restart the indexer on searchidx2 and searchidx1001 to make sure the indexer knows about the new wiki, then run the <code>import-db</code> script on the appropriate wiki database name (i.e. replace wikidb below with the wiki database name, e.g. wikimania2012wiki). Once the initial indices are in place, restart the incremental indexer.

On each individual indexer (currently searchidx1001.eqiad and searchidx2.pmtpa) run:

 root@searchidx1001:~# sudo -u lsearch /a/search/lucene.jobs.sh import-db wikidb
 root@searchidx1001:~# killall -g java
 root@searchidx1001:~# /etc/init.d/lucene-search-2 start
 root@searchidx1001:~# sudo -u lsearch /a/search/lucene.jobs.sh inc-updater-start

Then restart lsearchd (<tt>/etc/init.d/lucene-search-2 restart</tt>) on each search node that should contain an index for the new wiki. This includes every host in its pool (i.e. all search-pool4 nodes, not just the ones that receive front-end queries via lvs) as well as hosts that are shared amongst all pools, such as those running the search-prefix indices.
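The steps above could be wrapped in a small helper. This is an illustrative sketch (the wrapper and its DRY_RUN guard are hypothetical; the individual commands are the documented ones):

```shell
# Hypothetical wrapper around the documented new-wiki indexing steps.
# With DRY_RUN set, commands are printed instead of executed.
run() {
    if [ -n "$DRY_RUN" ]; then echo "$*"; else "$@"; fi
}

new_wiki_index() {
    local wikidb="$1"   # e.g. wikimania2012wiki
    run sudo -u lsearch /a/search/lucene.jobs.sh import-db "$wikidb"
    run killall -g java
    run /etc/init.d/lucene-search-2 start
    run sudo -u lsearch /a/search/lucene.jobs.sh inc-updater-start
}

# Preview the commands without executing anything:
DRY_RUN=1 new_wiki_index wikimania2012wiki
```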
= Trouble =

== What to do if you get a page about a search pool ==

* Check if any search nodes are unresponsive. This is usually pretty obvious in [http://ganglia.wikimedia.org/latest/?r=hour&cs=&ce=&m=cpu_report&s=descending&c=Search+eqiad&h=&host_regex=&max_graphs=0&tab=m&vn=&sh=1&z=small&hc=4 ganglia] (no cpu activity). Restart anything that's stuck.
* People love to DoS search. With the pmtpa cluster it was very easy; with the eqiad cluster it will be slightly harder. Check the api logs to see if an IP is making excessive queries for bogus terms, and block the IP.
* Check pybal logs on the low-traffic nodes for the data center. Make sure nodes are pooled.
* Look at /a/search/log/log; there might be pointers there.
* To test functionality of a node, do something like:
 curl http://NODENAME:8123/search/??wiki/SomeTerm
where ??wiki is some index that should be on that node (enwiki, dewiki, etc.)

Which hosts are in which pool? pybal links for [http://noc.wikimedia.org/pybal/eqiad/search_pool1 search_pool1], [http://noc.wikimedia.org/pybal/eqiad/search_pool2 search_pool2], [http://noc.wikimedia.org/pybal/eqiad/search_pool3 search_pool3], [http://noc.wikimedia.org/pybal/eqiad/search_pool4 search_pool4], and [http://noc.wikimedia.org/pybal/eqiad/search_prefix search_prefix].
== Main indexer on searchidx2/searchidx1001 is stuck ==

The search indexers very occasionally fall over. This looks like the ganglia load/traffic graphs falling to near zero, with cpu idle near 100%.

If indexing is stuck on searchidx2, run this script as user rainman (so he can restart later if necessary):

 root@searchidx2:~# sudo -u rainman /home/rainman/scripts/search-restart-indexer

If indexing is stuck on searchidx1001, do the following:

 root@searchidx1001:~# killall -g java
 root@searchidx1001:~# /etc/init.d/lucene-search-2 start
 root@searchidx1001:~# su -s /bin/bash -c "/a/search/lucene.jobs.sh inc-updater-start" lsearch
== Individual lsearchd processes are crashing or nonresponsive ==

* Try starting the lsearchd process in the foreground so you can watch what it does:
 start-stop-daemon --start --user lsearch --chuid lsearch --pidfile /var/run/lsearchd.pid --make-pidfile --exec /usr/bin/java -- -Xmx20000m -Djava.rmi.server.codebase=file:///a/search/lucene-search/LuceneSearch.jar -Djava.rmi.server.hostname=$HOSTNAME -jar /a/search/lucene-search/LuceneSearch.jar
* Check the log at /a/search/log/log for indications of obvious issues:
<pre>
root@search3:~# grep "^Caused by" /a/search/log/log|tail -20
Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
(oops, we hit java's memory limit)
</pre>
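A slightly more general triage of the same log can rank the distinct root causes. This is a sketch; the log path is the documented one, and the sample log written to /tmp below is purely for demonstration.

```shell
# Count the distinct "Caused by" root causes in an lsearchd log,
# most frequent first.
summarize_causes() {
    grep '^Caused by' "$1" | sort | uniq -c | sort -rn
}

# Demonstration against a small sample log:
cat > /tmp/lsearchd-sample.log <<'EOF'
Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
Caused by: java.lang.OutOfMemoryError: GC overhead limit exceeded
Caused by: java.net.ConnectException: Connection refused
EOF
summarize_causes /tmp/lsearchd-sample.log
```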
== Space Issues on Cluster Host ==

* Check /a/search/indexes for unintended indexes, i.e. cruft from previous configurations; the daemon doesn't know to delete indexes that are no longer in use.
* You can also create new shards. This involves making a new lvs pool and adding new entries to the hash structure in manifests/roles/lucene.pp.
== Cluster Host Hardware Failure ==

* If a host in lvs fails, lvs '''should''' depool it automatically, and at least one other host will pick up the load. If the host is not in lvs and is instead accessed via RMI, then RMI will take care of the depooling.
* To bring up a new node with the same indexes/role, add it to the hash structure in manifests/roles/lucene.pp, and into site.pp with the appropriate role class (i.e. the same as the failed node).
* Bring up the new node with puppet, make sure that the lucene-search-2 daemon is running, and that the rsync of the indexes from the indexer has finished.
* If the node has main namespace indexes, something of the form ??wiki.nspart[12] or ??wiki.nspart[12].sub[12], you can test that it's giving proper responses with something of the form
 curl http://NODENAME:8123/search/??wiki/SomeTerm
* If the failed node had main namespace indexes, you will need to adjust the pool's pybal configs accordingly (i.e. out with the old host, in with the new).
== Indexer Host Hardware Failure ==

We are not currently set up to deal gracefully with a complete indexer failure. Having an indexer in each of the two data centers is the best we have managed so far. The general procedure for bringing up a replacement indexer is to include role::lucene::indexer, stop the lucene daemon and the incremental indexer, rsync over the contents of /a/search/indexes from any surviving indexer, and then start the lucene daemon and incremental updater.
== Excess Load on a Cluster Host ==

* [which logs etc. to check for evidence of e.g. abuse, configuration issues, etc.]