Profiling
From Wikitech
Revision as of 15:27, 23 August 2007
One way is udp:

== What and Where ==
- $wgProfiler = new ProfilingSimpleUDP; (in index or Settings)
- $wgUDPProfilerHost = '10.0.6.30'; (in Settings)
- running on professor:
  - /usr/udpprofile/sbin/collector (svn root)
    - listens on udp:3811 for profiling packets, provides xml dumps on tcp:3811
  - /usr/udpprofile/sbin/profiler-to-carbon
    - polls the collector every minute, inserts count and timing (in ms) data into whisper db's
  - /opt/graphite/bin/carbon-cache.py
    - updates whisper db files for graphite
- graphite based web interface - uses labs ldap for auth
- aggregate report web interface
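The client side of the setup above can be sketched as a fire-and-forget UDP sender. The wire format and function names here are assumptions for illustration; only the port (3811) and the lossy-by-design datagram approach come from this page.

```python
import socket

# Sketch of what ProfilingSimpleUDP does on the client side: send one
# datagram per profiling sample to the collector on udp:3811
# ($wgUDPProfilerHost in the settings above). The payload format below
# is illustrative, not MediaWiki's actual wire format.
COLLECTOR_PORT = 3811  # the collector listens here for profiling packets

def send_profile_sample(db, func, count, time_ms, host="127.0.0.1",
                        port=COLLECTOR_PORT):
    """Send one profiling sample as a single datagram; returns the payload."""
    payload = f"{db} {func} {count} {time_ms:.3f}".encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        # UDP is lossy by design: profiling must never block a request.
        sock.sendto(payload, (host, port))
    return payload

send_profile_sample("enwiki", "Parser::parse", 1, 123.456)
```

Because nothing waits for an acknowledgement, a down collector costs the web servers nothing.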
== Using The Graphite Dashboard ==
=== Finding Metrics ===
- The left sidebar of the graphite dashboard provides two drop-down menus: "Metric Type", which provides shortcuts or aliases to certain metrics (hardcoded in the dashboard.conf located in puppet/files/graphite), and "Category". Below them is a hierarchical finder of everything under the chosen category. This is all straightforward, with one catch: when Category = *, only a single level of the hierarchy is shown, which is not what you want. If Metric Type is "Everything", make sure to select a class in Category.
- The "Dashboard" menu option allows sets of graphs to be saved as a named dashboard. "Share" provides a direct URL to a saved dashboard, and "Finder" lists all shared dashboards.
=== Combining Metrics ===
- Just drag graphs on top of each other to combine.
=== Types of Metrics ===
- count - the number of calls made in the last minute. Note that mediawiki profiles 100% of requests for a few request types, but most are sampled at about 1.5%.
- tavg - average time in ms over everything collected in the sampling window: total-time / count
- tp50 - 50th percentile in ms, calculated from a bucket of 300 samples
- tp90 - 90th percentile
- tp99 - 99th percentile
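The metric types above can be sketched from one bucket of samples. Nearest-rank percentiles are used here for illustration; whisper/graphite's exact method may differ.

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile; graphite's exact method may differ."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

# A made-up bucket of 300 timing samples in ms, as described above.
bucket = [float(i) for i in range(1, 301)]  # 1.0 .. 300.0

count = len(bucket)           # count: calls seen in the window
tavg = sum(bucket) / count    # tavg: total-time / count
tp50 = percentile(bucket, 50)
tp90 = percentile(bucket, 90)
tp99 = percentile(bucket, 99)

# With ~1.5% sampling, scale the observed count up to estimate real traffic:
estimated_calls = count / 0.015
```

On this synthetic bucket, tavg is 150.5 ms and tp99 is 297.0 ms; the shape of the math, not the numbers, is the point.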
=== Examples ===
- When the Job Queue depth alerts but jobs appear to be running, it has been difficult to tell whether the insertion rate has spiked or execution has slowed. Plotting the pop rate over the insert rate gives an easy-to-parse picture of health.
- url to generate - http://graphite.wikimedia.org/render?width=800&from=-4hours&until=now&height=600&target=*.job-pop.count&target=*.job-insert.count
- 99% ParserCache get times, with cluster deploys overlaid as vertical lines
- The url to generate this was - http://graphite.wikimedia.org/render?from=-24hours&until=now&width=800&height=600&target=ParserCache.get.tp99&target=drawAsInfinite(deploy.any)
- Overlaying metrics is as simple as appending multiple target options. "&target=drawAsInfinite(deploy.any)" can be added to any graph for the deploy lines.
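The render URLs above all follow one pattern, so a small helper can assemble them. The base URL and parameter names come straight from the examples; the helper itself is a sketch.

```python
from urllib.parse import quote, urlencode

# Assembling graphite /render URLs like the ones above. "target" may be
# repeated: each occurrence overlays another series on the same graph.
BASE = "http://graphite.wikimedia.org/render"

def render_url(targets, width=800, height=600, frm="-4hours", until="now"):
    params = [("width", width), ("from", frm), ("until", until),
              ("height", height)] + [("target", t) for t in targets]
    # keep graphite's wildcard and function characters readable in the URL
    return BASE + "?" + urlencode(params, safe="*()", quote_via=quote)

url = render_url(["*.job-pop.count", "*.job-insert.count",
                  "drawAsInfinite(deploy.any)"])
```

Appending "drawAsInfinite(deploy.any)" as one more target adds the deploy lines to any graph, exactly as noted above.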
- The 8 Slowest Parser functions, based on time averages. (99th percentiles here are too scary.) This shows how to group by and limit across lots of stats.
- url to generate: http://graphite.wikimedia.org/render?width=800&from=-8hours&until=now&height=600&vtitle=time_ms&target=highestMax(Parser.*.*.tavg%2C8)&title=8_Slowest_Parser_Functions_Avg
- times are in ms, so 10k = 10 seconds.
- This takes the tavg for everything profiled under the Parser class and selects the 8 with the highest value in the last 8 hours.
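The selection that highestMax performs can be sketched in a few lines. The series names and values below are made up for illustration; only the "keep the n series with the largest maximum in the window" behavior is from the example above.

```python
# Sketch of what highestMax(Parser.*.*.tavg, 8) does server-side: from all
# series matching the pattern, keep the n whose maximum value over the
# requested window is largest. Series names here are hypothetical.
def highest_max(series, n):
    """series: dict mapping series name -> list of datapoints."""
    return sorted(series, key=lambda name: max(series[name]), reverse=True)[:n]

series = {
    "Parser.braceSubstitution.tavg": [120.0, 95.0, 410.0],
    "Parser.internalParse.tavg": [300.0, 280.0, 350.0],
    "Parser.replaceVariables.tavg": [80.0, 70.0, 60.0],
}
slowest = highest_max(series, 2)  # the two series with the highest peaks
```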
- 8 slowest db write queries, by 99% time
http://noc.wikimedia.org/cgi-bin/report.py?db=all&sort=real&limit=50 <- this one actually works
clear with:
- clear-profile
Another with SQL:
14:38 TimStarling: ok, here's how to do the profiling
14:38 TimStarling: I'm doing it now if you want to watch :)
14:39 TimStarling: you log into suda, and run a "truncate table profiling" query on en
14:39 TimStarling: then (usually in a different window) you open CommonSettings.php
14:40 TimStarling: you search for profiling
14:40 TimStarling: you find a commented-out section
14:40 TimStarling: uncomment it
14:41 TimStarling: like that
14:41 TimStarling: the sample rate is 50 by default, that means it profiles 1 in every 50 requests
14:41 TimStarling: it will take a while at this rate to build up some decent data
14:42 TimStarling: run: sql enwiki -vvv < ~tstarling/sql/profiling.sql
14:43 TimStarling: make a local copy of that SQL and adjust the screen width if you desire
14:45 TimStarling: anyway, once you have enough data, you comment out the section in CommonSettings.php again
14:46 TimStarling: then run the SQL again, copy the data to a subpage of m:Profiling, and analyse the results
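The 1-in-50 sampling Tim describes can be sketched as below. Only the rate and the scale-up reasoning come from the transcript; the function and variable names are illustrative, not MediaWiki's actual ones.

```python
import random

# Each request independently decides whether to profile itself, so counts
# collected under sampling must be multiplied by the sample rate to
# estimate real traffic. Names here are hypothetical.
SAMPLE_RATE = 50  # profile 1 in every 50 requests

def should_profile(rng=random):
    return rng.randrange(SAMPLE_RATE) == 0

def estimate_total(sampled_count, rate=SAMPLE_RATE):
    """Scale a sampled count back up to an estimate of real request volume."""
    return sampled_count * rate

rng = random.Random(1234)
profiled = sum(should_profile(rng) for _ in range(10_000))  # expect ~200
```

This is also why the transcript warns that decent data takes a while: at 1 in 50, rare code paths need many requests before they show up at all.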




