Improving Trac Performance
This is the developer-oriented side of TracPerformance. While in the latter we try to analyse the different factors that come into play, here we'll discuss the practical solutions we can imagine.
For a start, here's a raw list of the tickets tagged with the performance keyword:
Performance Analysis
Load Testing
- using JMeter; see TracPerformance/LoadTestingWithJmeter
- using ab (ApacheBench); see 0.11.5

For timing template generation I used ApacheBench and tossed out "warmup" requests, sometimes testing with keep-alive (getting slightly better req/s):

ab [-k] -c 1 -n 10 url
Profiling
- #7490 contains some profiling data.
- Shane Caraveo gave some instructions about profiling Trac (0.11.5); see also #8507 which contains his scripts.
Improvement Opportunities
Genshi
The impact of Genshi is especially important for requests generating a large amount of data, like the changeset view or the file browser view, particularly when compared to ClearSilver.
From 0.11.5, we can get the following ideas:
- revert to out.write(_encode(u''.join(list(iterator)))) instead of using the StringIO (that was a change done during #6614, but we could have a favor_speed_over_memory setting)
- avoid the whitespace filter (same setting)
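To make the trade-off concrete, here is a minimal, dependency-free sketch (not Trac's actual code; _encode is a stand-in name taken from the snippet above) contrasting the two ways of draining a Genshi serialization iterator: joining everything into one string and encoding once, versus encoding and writing chunk by chunk into a buffer.

```python
# Illustrative sketch of the speed vs. memory trade-off discussed above.
from io import BytesIO

def _encode(u):
    # Stand-in for the internal encoder referenced in the snippet above.
    return u.encode('utf-8')

def drain_join(iterator):
    # Faster path: build one big unicode string, then encode once.
    # Peak memory holds all chunks plus the joined string at the same time.
    return _encode(u''.join(list(iterator)))

def drain_stringio(iterator):
    # Memory-friendlier path (the #6614 change): write chunk by chunk.
    buf = BytesIO()
    for chunk in iterator:
        buf.write(_encode(chunk))
    return buf.getvalue()

chunks = [u'<p>', u'hello', u'</p>']
assert drain_join(iter(chunks)) == drain_stringio(iter(chunks))
```

A favor_speed_over_memory setting would simply pick the first path over the second.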
Additionally, there's the idea of increasing the size of the template cache for Genshi (#7842), another thing that could be done when a favor_speed_over_memory configuration is active.
We could also possibly gain some speed by not passing all the content through all the filters, but pre-rendering some parts in a faster way, then wrapping them in Markup objects. That would make them opaque to the filters, which would be OK most of the time (or the plugins that really need to see that content could somehow selectively turn this optimization off). See #5499 (browser) and #7975 (changeset).
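A sketch of that opacity idea, using a minimal stand-in for genshi.core.Markup so it stays dependency-free (in Trac the real Markup class and real stream filters would be used; the filter here is hypothetical):

```python
# Pre-rendered fragments wrapped in a Markup-like type pass through filters
# untouched, so only raw content pays the filtering cost.

class Markup(str):
    """Marks a string as already-rendered markup, opaque to filters."""

def expensive_filter(fragment):
    # A hypothetical filter that transforms raw fragments but must skip
    # pre-rendered ones.
    if isinstance(fragment, Markup):
        return fragment                      # opaque: passed through as-is
    return fragment.replace('&', '&amp;')

pre_rendered = Markup('<td>R &amp; D</td>')  # e.g. a browser-view row rendered fast
raw = 'R & D'

assert expensive_filter(pre_rendered) == '<td>R &amp; D</td>'
assert expensive_filter(raw) == 'R &amp; D'
```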
Also, the generated XHTML itself could certainly be improved: for example, using <pre> instead of table rows for rendering lines in the browser view (#7055).
gc.collect
According to Shane's analysis, the systematic gc.collect() call after every request is one of the most critical performance killers for the average request (0.11.5).
See proposed implementation of a secondary thread taking care of this, in #8507.
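A rough sketch of that secondary-thread approach (names and interval are illustrative, not the actual #8507 implementation): collect periodically in a daemon thread instead of at the end of every request.

```python
# Move gc.collect() out of the request path into a background thread.
import gc
import threading

def start_gc_thread(interval=5.0):
    """Run gc.collect() every `interval` seconds in a daemon thread.

    Returns an Event; call .set() on it to shut the collector down.
    """
    stop = threading.Event()

    def loop():
        # Event.wait() returns False on timeout, True once stop is set,
        # so this collects every `interval` seconds until shutdown.
        while not stop.wait(interval):
            gc.collect()

    t = threading.Thread(target=loop, name='gc-collector')
    t.daemon = True   # don't keep the process alive on exit
    t.start()
    return stop

stop = start_gc_thread(interval=0.1)
stop.set()   # shut the collector down again
```

Requests then skip the per-request collection entirely, trading slightly delayed garbage reclamation for lower request latency.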
Database-level optimizations
- #4425
- Some MySQL specific changes have been suggested, see #6986.
- #6654 - pad revision numbers with leading zeroes
Also, we should simply fix the current behavior when it's known to be problematic:
- timeout when trying to get a db connection (happens again on t.e.o, could be related to #8443)
- prefer short-lived transactions as much as possible (#3446). We should fetch all data in memory instead of iterating over cursors and doing some work while iterating. This is a memory vs. speed (and concurrency) trade-off.
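The fetch-all-then-process pattern from the last point, as a minimal sqlite3 sketch (table and data are made up for illustration): the cursor is drained and closed before any application work happens, keeping the read transaction short at the cost of holding the whole result set in memory.

```python
# Short-lived read: materialize the result set, then release the cursor.
import sqlite3

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE ticket (id INTEGER, summary TEXT)')
con.executemany('INSERT INTO ticket VALUES (?, ?)',
                [(1, 'slow changeset view'), (2, 'db timeout')])

cursor = con.execute('SELECT id, summary FROM ticket ORDER BY id')
rows = cursor.fetchall()   # all data pulled into memory at once
cursor.close()             # database work is finished here...

# ...and the (possibly slow) per-row work happens afterwards, outside
# the transaction, instead of while iterating the live cursor.
summaries = [summary.upper() for _, summary in rows]
```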
On the topic of transactions, we should probably use the transaction idea from the WikiRename branch, as refined by rblank on Trac-dev (googlegroups:trac-dev:21d21ad9866fc12b). This would help to visualize the span of write transactions in the code and better handle query failures (#8379).
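The shape of that idea can be sketched with a context manager (the names here are hypothetical, not the actual WikiRename branch API): the with block makes the span of the write transaction visible in the code, commits on success, and rolls back on a query failure.

```python
# A write transaction whose span is explicit in the source.
import sqlite3
from contextlib import contextmanager

@contextmanager
def transaction(con):
    """Commit the block on success, roll it back on any exception."""
    try:
        yield con
        con.commit()
    except Exception:
        con.rollback()
        raise

con = sqlite3.connect(':memory:')
con.execute('CREATE TABLE wiki (name TEXT)')

# Successful transaction: the insert is committed.
with transaction(con) as db:
    db.execute("INSERT INTO wiki VALUES ('TracPerformance')")

# Failed transaction: the insert is rolled back.
try:
    with transaction(con) as db:
        db.execute("INSERT INTO wiki VALUES ('Doomed')")
        raise RuntimeError('simulated query failure')
except RuntimeError:
    pass

names = [n for (n,) in con.execute('SELECT name FROM wiki')]
assert names == ['TracPerformance']
```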