Profiling

From Wikitech
Revision as of 21:38, 7 June 2006

One way is UDP:

What and Where

  • $wgProfiler = new ProfilingSimpleUDP; (in index or Settings)
  • $wgUDPProfilerHost = '10.0.6.30'; (in Settings)
  • running on professor:
    • /usr/udpprofile/sbin/collector svn root
      • listens on udp:3811 for profiling packets, provides xml dumps on tcp:3811
    • /usr/udpprofile/sbin/profiler-to-carbon
      • polls collector every minute, inserts count and timing (in ms) data into whisper DBs
    • /opt/graphite/bin/carbon-cache.py
      • updates whisper db files for graphite
  • graphite-based web interface - uses Labs LDAP for auth
  • aggregate report web interface
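The pipeline above starts with the app server firing small datagrams at the collector on udp:3811. Here is a minimal sketch of what a sender might look like, assuming a simple newline-delimited text payload; the actual wire format is whatever ProfilingSimpleUDP and the collector agree on, and the field layout below is illustrative only:

```python
import socket

# Sketch of a profiling-sample sender (assumption: newline-delimited
# text fields; the real format is defined by ProfilingSimpleUDP and the
# collector on professor).
def send_profile_sample(host, port, db, func, count, time_ms):
    payload = f"{db} {func} {count} {time_ms:.3f}\n".encode("utf-8")
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # UDP is fire-and-forget: no connection and no delivery guarantee,
        # which keeps profiling overhead on the app servers negligible.
        sock.sendto(payload, (host, port))
    finally:
        sock.close()

# e.g. send_profile_sample("10.0.6.30", 3811, "enwiki", "Parser::parse", 3, 12.5)
```

Fire-and-forget delivery is the point of using UDP here: a dropped sample costs nothing, whereas a blocking send on a hot request path would.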

Using The Graphite Dashboard

Finding Metrics

  • The left sidebar of the graphite dashboard provides two drop-down menus: "Metric Type", which provides shortcuts or aliases to certain metrics (hardcoded in the dashboard.conf located in puppet/files/graphite), and "Category". Below them is a hierarchical finder of everything under the chosen category. This is all straightforward, except that when Category = *, only a single level of the hierarchy is shown - you don't want this! If Metric Type is "Everything", make sure to select a class in Category.

Graphitemenu.png

  • The Dashboard menu option allows sets of graphs to be saved as a named dashboard. Share provides a direct URL to a saved dashboard, and Finder lists all shared dashboards.

Combining Metrics

  • Just drag graphs on top of each other to combine.

Types of Metrics

  • count - the number of calls made in the last minute. Note that MediaWiki profiles 100% of a few request types, but most are sampled at about 1.5%.
  • tavg - average time in ms over the sampling interval: total-time/count
  • tp50 - 50th percentile in ms, calculated from a bucket of 300 samples
  • tp90 - 90th percentile
  • tp99 - 99th percentile
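As a rough illustration of how these statistics relate, here is a sketch that computes tavg and nearest-rank percentiles over one bucket of per-call timings (the text says percentile buckets hold 300 samples); the collector's actual percentile convention may differ:

```python
import math

# Compute the aggregate metrics described above from a bucket of
# per-call timings in milliseconds.
def bucket_stats(samples_ms):
    ordered = sorted(samples_ms)
    n = len(ordered)

    def tp(p):
        # Nearest-rank percentile: the smallest sample with at least
        # p% of the bucket at or below it.
        rank = max(1, math.ceil(p / 100 * n))
        return ordered[rank - 1]

    return {
        "count": n,                # calls seen in the interval
        "tavg": sum(ordered) / n,  # total-time / count
        "tp50": tp(50),
        "tp90": tp(90),
        "tp99": tp(99),
    }
```

The spread between tavg and tp99 is what makes both worth graphing: a stable average can hide a badly degraded tail.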

Examples

  • When the Job Queue depth alerts but jobs appear to be running, it's been difficult to know whether the insertion rate has spiked or execution has slowed. Plotting the pop rate over the insert rate provides an easy-to-read picture of health.

Job-queue.png

Pcache tp99 deploys.png

  • The 8 slowest parser functions, based on time averages (the 99th percentiles here are too scary). This shows how to group and limit across many stats.

Slow Parser Funcs Avg.png

  • The 8 slowest DB write queries, by 99th-percentile time

Top-master-writes-by-time.png

Another way is with SQL:

14:38 TimStarling: ok, here's how to do the profiling
14:38 TimStarling: I'm doing it now if you want to watch :)
14:39 TimStarling: you log into suda, and run a "truncate table profiling" query on en
14:39 TimStarling: then (usually in a different window) you open CommonSettings.php
14:40 TimStarling: you search for profiling
14:40 TimStarling: you find a commented-out section
14:40 TimStarling: uncomment it
14:41 TimStarling: like that
14:41 TimStarling: the sample rate is 50 by default, that means it profiles 1 in every 50 requests
14:41 TimStarling: it will take a while at this rate to build up some decent data
14:42 TimStarling: run: sql enwiki -vvv < ~tstarling/sql/profiling.sql
14:43 TimStarling: make a local copy of that SQL and adjust the screen width if you desire

14:45 TimStarling: anyway, once you have enough data, you comment out the section in CommonSettings.php again
14:46 TimStarling: then run the SQL again, copy the data to a subpage of m:Profiling, and analyse the results
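The 1-in-50 sampling Tim mentions can be sketched as follows; the function name and parameter here are illustrative, not MediaWiki's actual configuration knobs:

```python
import random

# Sketch of 1-in-N request sampling: with the default rate of 50, each
# request has a 1/50 (2%) chance of being profiled.
def should_profile(sample_rate=50):
    return random.randint(1, sample_rate) == 1
```

Over many requests, roughly count/sample_rate of them get profiled, which is why the transcript notes that building up decent data takes a while at this rate.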