
Version 2 (modified by figaro, 7 years ago): Cosmetic changes, added attachments

Trac Load Testing

While not necessarily definitive, some load testing has been done, and the results are provided here for the benefit of anyone interested in some insight into how Trac scales and performs under load.


This testing was prompted primarily by the investigation of issue #7490: attempting to reproduce the issue, or at least to determine the extent of the performance impact of moving from Trac-0.10x to Trac-0.11x, as well as the impact of various configurations (tracd vs. mod_python).

Additional testing may be done in response to these results, but I figured that while I was testing I should capture the information somewhere, as someone may find it of interest (so here it is).

Test Hardware and Services Configuration

The server that Trac was installed on is a Linux server (Gentoo flavored) with the following characteristics:

  • AMD Athlon(tm) 64 X2 Dual Core Processor 4200+
  • Linux 2.6.28-gentoo-r5 #6 SMP Tue May 26 19:03:14 PDT 2009 i686
  • MemTotal: 3633084 kB
  • SwapTotal: 2096472 kB
  • nVidia Corporation MCP55 Ethernet (rev a2)
  • IDE interface: nVidia Corporation MCP55 IDE (rev a1)
  • IDE interface: nVidia Corporation MCP55 SATA Controller (rev a2)
  • Disk (from dmesg)
    • ata8.00: ATA-7: ST3320620AS, 3.AAD, max UDMA/133
    • ata8.00: 625142448 sectors, multi 1: LBA48 NCQ (depth 31/32)
    • ata8.00: configured for UDMA/133
    • scsi 7:0:0:0: Direct-Access ATA ST3320620AS 3.AA PQ: 0 ANSI: 5
    • sd 7:0:0:0: [sda] 625142448 512-byte hardware sectors: (320 GB/298 GiB)
    • sd 7:0:0:0: [sda] Write Protect is off
    • sd 7:0:0:0: [sda] Mode Sense: 00 3a 00 00
    • sd 7:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
  • GCC info:
    • Using built-in specs.
    • Thread model: posix
    • gcc version 4.3.2 (Gentoo 4.3.2-r3 p1.6, pie-10.1.5)
    • CFLAGS="-march=native -O2 -pipe -fomit-frame-pointer"

The server was attached to a 100Mbit switch. The load driver (client) had a 1Gbit network card attached to a 1Gbit unmanaged switch, which in turn was attached to the 100Mbit managed switch. The load was generated from a separate client rather than on the server itself, since generating it on the server could have skewed the results.

Software Configurations

Testing was conducted using three different versions of Trac:

  • Trac Branch 0.11-stable
  • Trac Trunk (r8253)
  • Trac Branch 0.10-stable

Get output from /about on the site being tested, as suggested in an email thread — lance

All the tests were conducted on a "bare" installation of Trac, that is, a base install with only one ticket created (required because one of the read tests fetches ticket #1, which will fail unless/until the first ticket has been created by the ticket-creating clients); the installation was, however, also attached to a real (but empty) svn repository. Obviously, the amount of change per day (tickets/day and changesets/day) has an impact on a number of the reports (the timeline most directly) and will vary significantly from installation to installation. These specifics are difficult to factor into the testing and can also significantly reduce the reproducibility of the results. Since the focus of this testing is to determine the performance impact from version to version of Trac, as well as the performance of various configurations of Trac, I thought it more important to maintain reproducibility (for the sake of others), and to enable others to compare their specific hardware and software configurations against the results presented here, than it was to contrive "the most realistic" test possible (which, in my (hopefully) humble opinion, while an interesting goal, is generally not achievable).
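For anyone wanting to reproduce the "bare" starting state described above, the setup can be sketched roughly as follows. The paths and project name are placeholders, not the ones used in these tests:

```shell
# Create an empty (but real) Subversion repository -- hypothetical path.
svnadmin create /var/svn/empty

# Create a bare Trac environment backed by the default SQLite database
# and attach it to the empty repository (Trac 0.11 initenv arguments:
# project name, db string, repository type, repository path).
trac-admin /var/trac/loadtest initenv "Load Test" sqlite:db/trac.db svn /var/svn/empty

# Serve it with tracd (run from the command line, non-daemonized).
tracd --port 8000 /var/trac/loadtest
```

The first ticket would then be created by the load clients themselves, as noted above.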

Testing should also probably be done against a test environment that has a little more history/data, but that is also reproducible (archived in some way so that it can be reused over and over for the testing, starting from a known state other than "empty")… — lance

Testing did not include authentication, which might have an impact. I should probably go back and set up an environment with authentication (as I would assume any "real" or useful scenario would require it) and see whether that has an impact on the numbers. — lance

Expand the test scope to include backends other than sqlite — lance

Testing also covered two different configurations for accessing Trac:

  • Tracd (run from the command line, non-daemonized)
    • you could perhaps also try with the new --http11 option (#8020)? Would be nice to get some numbers, I expect something like a 3x improvement, i.e. at least match mod_python performance if not outperform it ;-) — cboos
    • will do and thanks for the pointer! — lance
  • Mod_Python - version 3.3.1-r1 (Gentoo calendar)

For Mod_Python, Apache version 2.2.11 was used, configured with the "worker" MPM (more details on Apache config).

  • impact of other MPMs? I do know that the choice of MPM has an impact on memory usage and potentially thread/process swapping, as well as the cost of processing a request, depending on the MPM — lance
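For reference, a mod_python setup of the kind used here might look like the following. This is a sketch based on the standard Trac mod_python front-end configuration, not the exact config used in these tests, and the paths are placeholders:

```apache
# Hypothetical paths; standard Trac mod_python front-end directives.
<Location /trac>
    SetHandler mod_python
    PythonInterpreter main_interpreter
    PythonHandler trac.web.modpython_frontend
    PythonOption TracEnv /var/trac/loadtest
    PythonOption TracUriRoot /trac
</Location>
```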

All configurations used the default Sqlite backend (3.6.13).

Python was version 2.5.4-r2 (Gentoo calendar)

Testing Methodology

Initial testing was done with a proprietary tool; however, I have switched to the Apache JMeter project for load testing trac for several reasons:

  • It is OSS and thus easier/cheaper for others to obtain and use to verify/test their specific configurations, making the data obtained here more easily verifiable by others.
  • It is not platform specific. The original tool was only (mostly) available on Windows; JMeter, however, is 100% Java and ships with both *.bat and *.sh launchers, so it should be usable regardless of platform (for the client, that is; the server was independent of the load generator anyway, being driven over HTTP).

For the testing, I attempted to make the load placed on the server realistic by ensuring that some form of "think time" was used in the clients. The current testing scenario includes a set of clients that are read only and a set of clients that are creating new tickets. Obviously, the fact that tickets are being created (more tickets added over time) means that a number of the read-only requests take longer and longer as the testing continues, since they have to render successively more data.

In some of my initial scenarios this had a side effect: the more transactions per second a server sustained, the more data the writer clients added. If you are not careful, the data writers thus have a tendency to "equalize" the system: a faster server accumulates more tickets, so its "average" performance appears closer to that of a slower configuration. With the current configuration in JMeter, servers that provide more TPS (transactions per second) *should* not be penalized in this way; I have worked to engineer the testing such that this effect is minimized.
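The closed-loop, think-time style of client described above can be sketched in a few lines of Python. The timer parameters and the `fetch` callable are illustrative assumptions, not the values or requests used in the actual JMeter test plans:

```python
import random
import time

def think_time(mean=5.0, jitter=0.5):
    """Sample a per-request think time in seconds.

    Uniform around `mean` (+/- `jitter` fraction), loosely mimicking a
    JMeter random timer.  These parameters are illustrative only.
    """
    return random.uniform(mean * (1.0 - jitter), mean * (1.0 + jitter))

def run_reader(fetch, n_requests=10, mean_think=5.0):
    """Closed-loop read-only client: fetch a page, think, repeat.

    `fetch` is any zero-argument callable (e.g. wrapping urllib to GET
    the wiki start page); returns the observed response times.
    """
    timings = []
    for _ in range(n_requests):
        start = time.time()
        fetch()
        timings.append(time.time() - start)
        time.sleep(think_time(mean_think))
    return timings
```

Because each client waits out its think time before issuing the next request, a faster server does not automatically receive proportionally more requests, which is the pacing property discussed above.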

Testing was conducted on a "warm" machine: did not reboot between tests and the first few test runs to warm up the machine/configuration were not recorded.

Test Cases

This section provides the details (raw text) of the test cases created for JMeter. Thankfully, JMeter saves its test cases in XML format, so you can copy and paste these into an XML file, then download and use JMeter to reproduce these tests on your own hardware for comparison with the results posted here.
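Once saved to a `.jmx` file, a test plan can be replayed from the command line using JMeter's non-GUI mode (file names here are placeholders):

```shell
# Non-GUI run: -n = no GUI, -t = test plan file, -l = results log file.
jmeter -n -t trac-loadtest.jmx -l results.jtl
```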

Two different test cases were used, one for tracd and one for configurations using Apache. The reason is that (for the most part, and due to Python) tracd is pretty much single threaded, while configurations leveraging Apache are not. Since the test server was a dual core, the Apache configurations could sustain a higher TPS than tracd, so I increased the number of client threads when testing the Apache configurations in order to better represent their capabilities: rather than showing up only as reduced response times, they can be shown to sustain a higher TPS while still maintaining a low response time.



Testing Results

While the testing resulted in many MB of log files and test data, only the key metrics are provided here. The testing could most likely be recreated, and (one would hope) similar results obtained, by anyone who wishes to verify them.


Testing done on the stable branch of Trac-0.11 (r???).

Tracd Results (without --http11)

needs to be redone since I fixed the issue with the errors on "get ticket #1" — lance

New Ticket711156752953517.48319678667450.00.119577537612188151.0946482788834499374.0
Report All Active Tickets4911664444965848.40686961910490.00.816846672389584319.44447433795492224375.617107942973
Get Wiki Start Page5021054384006545.50242553043090.00.83831395358546213.2550158979060523976.0
Browse Repo495810432633417.173409539942950.00.82832156949366964.9456621834807576114.0
Get Ticket #15401910295026910.12636213974610.094444444444444440.906572651725006313.17396666194493414880.375925925926
Create Ticket7124069557541247.45023248356820.00.116752312435765681.636603931140801614354.169014084508
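The run-together digits make these rows hard to read, but they appear to be JMeter Aggregate Report rows, whose last three fields are throughput (requests/sec), KB/sec, and average bytes, related by KB/sec = throughput × average bytes ÷ 1024. As a sanity check, the trailing fields tentatively re-split from the "New Ticket" row above are consistent with that relation:

```python
def kb_per_sec(throughput_rps, avg_bytes):
    """JMeter Aggregate Report relation: KB/sec = throughput * avg bytes / 1024."""
    return throughput_rps * avg_bytes / 1024.0

# Trailing fields tentatively read from the "New Ticket" row above:
# throughput ~0.1196 req/s, average response size 9374 bytes.
print(kb_per_sec(0.11957753761218815, 9374.0))  # ~1.0946, matching the row's KB/sec field
```

The "Browse Repo" rows re-split the same way (throughput ~0.828 req/s, 6114 bytes, ~4.946 KB/sec) are also consistent, which supports this reading of the columns.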

Trac-0.11-stable (with --http11)

needs to be redone since I fixed the issue with the errors on "get ticket #1" — lance

New Ticket7113731553840597.95169015993790.00.120395916034870721.10213995792077969374.0
Browse Repo507882433776489.67175657439460.00.84604627722950725.0514911513488346114.0
Get Ticket #15232049295540868.18252803263810.0152963671128107070.872428587633199613.49646301780638415841.271510516252
Report All Active Tickets5011718435545925.22019782346930.00.840705588594595320.18252709691858724582.8123752495
Get Wiki Start Page4681156413727563.01479328894740.00.79063402767219093.06988368557092933976.0
Create Ticket7126149566461276.93823705678320.00.105963981201094261.491178067153553714410.239436619719


New Ticket82611761353254.625475567258430.00.14102213531589821.29095849262815379374.0
Get Wiki Start Page910244371529183.27234746411320.01.52454858283994925.919536294308243976.0
Get Ticket #18286411152079306.541574802742960.01.383116622790020222.08127013391007816348.021739130434
Browse Repo866258431288179.2129207578610.01.44970963956468538.655785875291496114.0
Report All Active Tickets87422356872071606.44832030585670.01.463942460696327442.15722812953294629488.181922196796
Create Ticket82844971909440.540673062373060.00.132650453114535581.848373353597577314268.585365853658


Testing done on the "stable" branch of the 0.10 version (r???).


About the presentation

  • what are the units used in the tables above? (asked by Shane Caraveo on Trac-Users)
  • maybe you could put the JMeter config files as attachments instead of inline (cboos)

About the tests

I tried to run the tests locally, and I noticed that the retrieval operations were done only on the main page; the sub-resources (images, CSS, etc.) were not queried. Maybe you should switch that option on, as I think this is more representative of a real load? (cboos)
