Eqiad Migration Planning

Revision as of 21:44, 20 December 2012

Coordination

  • Weekly Countdown meeting http://etherpad.wmflabs.org/pad/p/EqiadMigration - meeting minutes

Risk & Mitigation

Identify the high-risk migration tasks and ensure we have a way to mitigate or revert without extended downtime.

  • What could make fallback to Tampa difficult should the migration fail?
    • What if Ceph fails?
    • What if Swift@Tampa fails?

Outstanding Server/System Readiness

  • App, Imagescalers, Bits, Jobrunners and API Apaches
    • Image scalers: Ready to deploy @ Eqiad - PY
    • Apache/API: Ready to deploy @ Eqiad (mw1017-mw1019 puppetized for deploy testing) - PY
    • Jobrunners: Ready to test deploy @ Eqiad - PY
  • Deployment system
    • using Git-deploy & ready for testing @ EQIAD - RyanLane, PY and ChrisM
    • need to change apache and mw cfg - (AI - tbd/RobLa)
    • Need to identify / document test requirements and success criteria (what are the use cases?) - CM/PY
    • Chris M will work with Ops (PY as lead) on setting up the tests
      • Overview of existing UI tests: https://github.com/wikimedia/qa-browsertests/tree/master/features
    • Deploy from deployment host to all application servers
    • rsync the deployment code from the primary deployment server to the secondary; requires a clean git repo
    • Application servers in the other datacenter will use the secondary deployment system for rsync
  • Swift in Tampa & Ceph in EQIAD
    • Current plan is to have Ceph running at Eqiad (final decision - end of Dec by Mark/Faidon)
    • Swift @ Tampa is in production already
    • servers online; cluster replication still needs to be enabled (netapp replication enabled)
    • Still need to migrate Math, Captcha, Misc objects from ms7 to Swift - Aaron
    • Might have to run Swift and ImageScalers in Tampa while the rest of the stack runs in Eqiad
    • Aaron to test performance lag
    • Ceph update
      • overcame several issues and a steep learning curve; cluster is more stable
      • currently performing stability & stress tests
      • Servers are being provisioned - Faidon
      • MW multiwrite for thumbs - Aaron/Mark to discuss details (already happening with NAS); see the file backend sketch after this list
  • Memcached servers
    • mc01 - mc16 (Tampa) in production - done
    • mc1001-mc1016 OS installed, ready for puppet to be run.
    • Decided to use Redis, using the MW multi-write feature to write to both the existing MC and the new MC servers, then enabling Redis replication from Tampa to Eqiad; see the cache multi-write sketch after this list
  • Parser Cache servers
    • servers are provisioned; awaiting parser cache sharding - Asher/Tim (see the sharding sketch after this list)
  • Databases
    • servers and replication - ready for switchover
    • Grants needed (SQL)
  • Poolcounter
    • Done: helium and potassium are installed and puppetized
  • Netapp
    • /home/wikipedia for deployments (probably not using it; use git-deploy)
    • /home - completed in Tampa, not strictly necessary in eqiad
  • Deployment server (fenari's deployment support infrastructure part, misc::deployment etc)
    • Done: server name is Tin
    • This might not be needed if we are using git-deploy
  • Hume equivalent (misc::maintenance) - postponed
  • Application logging server - for mediawiki wmerrors + apache syslog
    • eqiad version of the udp2log instance on nfs1 that writes to /home/w/logs
    • Done: server 'fluorine' for apache logs
  • Set up and deploy Parsoid servers @ Eqiad
  • Upload Varnish - done
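
A note on the "MW multiwrite for thumbs" item above: MediaWiki ships a FileBackendMultiWrite wrapper that mirrors every file write to several stores, which is what would let Swift@Tampa and Ceph@Eqiad stay in sync during the transition. A minimal sketch of such a configuration, assuming Ceph is fronted by its Swift-compatible API; all names, hosts and keys below are illustrative placeholders, not the production values:

  <?php
  // Sketch only: two Swift-API backends, one per datacenter.
  $wgFileBackends[] = array(
      'class'        => 'SwiftFileBackend',
      'name'         => 'local-swift-pmtpa',               // placeholder name
      'wikiId'       => wfWikiID(),
      'lockManager'  => 'nullLockManager',
      'swiftAuthUrl' => 'http://ms-fe.pmtpa.example/auth', // placeholder URL
      'swiftUser'    => 'mw:media',
      'swiftKey'     => $swiftKey,
  );
  $wgFileBackends[] = array(
      'class'        => 'SwiftFileBackend',
      'name'         => 'local-ceph-eqiad',                // Ceph radosgw speaks the Swift API
      'wikiId'       => wfWikiID(),
      'lockManager'  => 'nullLockManager',
      'swiftAuthUrl' => 'http://ms-fe.eqiad.example/auth', // placeholder URL
      'swiftUser'    => 'mw:media',
      'swiftKey'     => $cephKey,
  );
  // The wrapper: writes go to both backends, reads are served by the master.
  $wgFileBackends[] = array(
      'class'    => 'FileBackendMultiWrite',
      'name'     => 'local-multiwrite',
      'wikiId'   => wfWikiID(),
      'backends' => array(
          array( 'template' => 'local-swift-pmtpa', 'isMultiMaster' => true ),
          array( 'template' => 'local-ceph-eqiad' ),
      ),
  );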
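
The Memcached item's "MW multi-write feature" is MultiWriteBagOStuff, which fans writes out to several cache tiers while reading from the first. A sketch of dual-writing the old memcached pool and the new Redis pool, with placeholder addresses (RedisBagOStuff is assumed available in the deployed MediaWiki branch):

  <?php
  // Sketch only: tier 0 (memcached) serves reads; tier 1 (Redis) gets
  // every write so it is warm before the cutover.
  $wgObjectCaches['mc-pmtpa'] = array(
      'class'   => 'MemcachedPhpBagOStuff',
      'servers' => array( '10.0.0.1:11211', '10.0.0.2:11211' ), // placeholders
  );
  $wgObjectCaches['redis-eqiad'] = array(
      'class'   => 'RedisBagOStuff',
      'servers' => array( '10.0.1.1:6379', '10.0.1.2:6379' ),   // placeholders
  );
  $wgObjectCaches['migration-multiwrite'] = array(
      'class'  => 'MultiWriteBagOStuff',
      'caches' => array(
          $wgObjectCaches['mc-pmtpa'],
          $wgObjectCaches['redis-eqiad'],
      ),
  );
  $wgMainCacheType = 'migration-multiwrite';

Redis replication from Tampa to Eqiad would then be plain Redis master-slave replication, configured outside MediaWiki.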
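
For the Parser Cache item, SqlBagOStuff can hash keys across several MySQL servers and split each server's data over many tables, which is presumably the sharding being awaited. A sketch with illustrative host names and credentials:

  <?php
  // Sketch only: shard the parser cache across three DB hosts by key hash.
  $wgObjectCaches['parsercache-sharded'] = array(
      'class'       => 'SqlBagOStuff',
      'servers'     => array(
          array( 'type' => 'mysql', 'host' => 'pc1.example',   // placeholders
                 'dbname' => 'parsercache', 'user' => 'wikiuser',
                 'password' => $pcPassword, 'flags' => 0 ),
          array( 'type' => 'mysql', 'host' => 'pc2.example',
                 'dbname' => 'parsercache', 'user' => 'wikiuser',
                 'password' => $pcPassword, 'flags' => 0 ),
          array( 'type' => 'mysql', 'host' => 'pc3.example',
                 'dbname' => 'parsercache', 'user' => 'wikiuser',
                 'password' => $pcPassword, 'flags' => 0 ),
      ),
      'tableName'   => 'pc',
      'shards'      => 256, // spread each server's objects over 256 tables
      'purgePeriod' => 0,   // rely on external pruning of expired rows
  );
  $wgParserCacheType = 'parsercache-sharded';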

Software / Config Requirements


  • Varnish software to handle media streaming efficiently
    • awaiting patch from Varnish Software (target Sept?) - done
  • MediaWiki deploy support for per colo config variances (Bugzilla 39082: https://bugzilla.wikimedia.org/show_bug.cgi?id=39082); see the sketch after this list
  • replicating the git checkouts, etc. to new /home
    • not an issue
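
Bugzilla 39082 above tracks per-colo config variance support. The general shape is simple: each host learns its datacenter from a puppet-managed file and config branches on it. A sketch; the file path and settings shown are illustrative assumptions, not the agreed mechanism:

  <?php
  // Sketch only: pick up the local datacenter and vary config on it.
  $wmfDatacenter = 'pmtpa';
  if ( is_readable( '/etc/wikimedia-site' ) ) {            // placeholder path
      $wmfDatacenter = trim( file_get_contents( '/etc/wikimedia-site' ) );
  }

  switch ( $wmfDatacenter ) {
      case 'eqiad':
          $wgUDPProfilerHost = 'profiler.eqiad.example';   // placeholder host
          break;
      default: // pmtpa
          $wgUDPProfilerHost = 'profiler.pmtpa.example';
          break;
  }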

Actually Failing Over

  • deploy db.php with all shards set to read-only in both pmtpa and eqiad (see the sketch after this list)
  • deploy squid and mobile + bits varnish configs pointing to eqiad apaches
  • master swap every core db and writable es shard to eqiad
  • deploy db.php in eqiad removing the read-only flag, leave it read-only in pmtpa
    • the above master-swap + db.php deploys can be done shard by shard to limit the time certain projects are read-only
  • dns changes - our current steady state is to point wikipedia-lb.wikimedia.org in the US to eqiad but future scenarios may include external dns switches.
  • Swift replication reversal - from Eqiad to Tampa
  • Rollback plan - details still need to be added
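
The read-only steps above would be a db.php change. A sketch assuming LBFactoryMulti's readOnlyBySection setting and the s1/s2/... section naming; both are assumptions here, not confirmed details of the actual deploy:

  <?php
  // Sketch only: mark every section read-only for the switchover, then
  // re-deploy in eqiad with entries removed shard by shard as each
  // master moves.
  $wgLBFactoryConf['readOnlyBySection'] = array(
      'DEFAULT' => 'Datacenter migration in progress',
      's2'      => 'Datacenter migration in progress',
      's3'      => 'Datacenter migration in progress',
      // ... one entry per remaining section
  );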

Improving Failover

  • pre-generate squid + varnish configs for different primary datacenter roles
  • implement MHA to better automate the mysql master failovers
  • migrate session storage to redis, with redundant replicas across colos (see the sketch below)
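
For the session-storage item, a sketch using MediaWiki's RedisBagOStuff (assumed available in the deployed branch; addresses are placeholders). Cross-colo redundancy would come from ordinary Redis master-slave replication configured outside MediaWiki:

  <?php
  // Sketch only: keep sessions in Redis rather than the memcached pool.
  $wgObjectCaches['redis-sessions'] = array(
      'class'             => 'RedisBagOStuff',
      'servers'           => array( '10.0.1.10:6379', '10.0.1.11:6379' ), // placeholders
      'persistent'        => true,
      'automaticFailover' => true, // try the next listed server on failure
  );
  $wgSessionCacheType = 'redis-sessions';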

Parking Lot Issues

  • Identify and plan around the deployment/migration date - tentatively Oct 15, 2012 [see below]. Need to communicate the date.
    • Migration needs to happen before Fundraising season starts in Nov.
    • Vacation 'freeze'; all hands on deck week before and after deployment
      • Why? Not every person is vital to the migration. --seconded; if you're not vital to the migration, this seems like overkill - who are you, please?
    • migrate ns1 from Tampa to Ashburn, but not a critical item.
  • An update from CT Woo from October 2012 regarding the status of the migration is available here. It looks like it'll be pushed back to January or February 2013 (post-annual fundraiser).