
Changes between Version 2 and Version 3 of TracPerformance/LoadTestingWithJmeter


Timestamp: Feb 6, 2015, 9:08:26 AM
Author: figaro
Comment: Cosmetic changes

Legend: unchanged lines are shown without a prefix; lines removed in v3 are prefixed with "-", lines added in v3 are prefixed with "+".
  • TracPerformance/LoadTestingWithJmeter (v2 → v3)

= Trac Load Testing

- While not necessarily definitive, some load testing has been done and the results are provided here for the benefit of anyone that has an interest in at least some insight into how Trac scales/performs under load.
+ While not necessarily definitive, some load testing has been done and the results are provided here for some insight into how Trac scales/performs under load.

== Rationale

This testing was prompted primarily by the investigation of issue #7490, attempting to reproduce the issue or at least determine the extent of the impact of moving from Trac 0.10.x to Trac 0.11.x, as well as the impact of various configurations (tracd vs. mod_python).

- Additional testing may be done in response to these results, but I figured while I was testing that I should probably capture the information somewhere as someone may find it of interest (so here it is).
+ Additional testing may be done in response to these results, but I figured while I was testing that I should probably capture the information somewhere as someone may find it of interest.

== Test Hardware and Services Configuration
     
''Get output from /about on the site being tested as suggested in an email thread -- lance''

- All the test were conducted on a "bare" installation of Trac, that is base install with only one ticket created (required as one of the read tests is for ticket #1, which will fail unless/until the first ticket is created by the clients creating tickets); however, it was also "attached" to a real (but empty) svn repo.  Obviously, depending on the amount of change per day (tickets/day and change sets/day) has an impact on a number of the reports (timeline most directly) and will vary significantly from installation to installation.  These specifics are difficult to factor into the testing and also can significantly reduce the "reproducability" of the results.  Since the focus of this testing is to determine the performance impact from version to version of Trac as well as the performance of various configurations of Trac, I thought it more important to maintain reproducibility (for the sake of others) and also better enable others to compare the performance of their specific hardware and software configurations to the results presented here than it was to contrive or develop "the most realistic" test possible (which, in my (hopefully) humble opinion, while an interesting goal, it is generally not possible).
+ All the tests were conducted on a "bare" installation of Trac, that is, a base install with only one ticket created (required because one of the read tests requests ticket #1, which will fail unless/until the first ticket is created by the clients creating tickets); however, it was also "attached" to a real but empty svn repo.
+
+ Obviously, the amount of change per day (tickets/day and changesets/day) has an impact on a number of the reports (the timeline most directly) and will vary significantly from installation to installation. These specifics are difficult to factor into the testing and can also significantly reduce the "reproducibility" of the results. Since the focus of this testing is to determine the performance impact from version to version of Trac, as well as the performance of various Trac configurations, I thought it more important to maintain reproducibility and to enable others to compare the performance of their specific hardware and software configurations to the results presented here, than to contrive or develop "the most realistic" test possible (which, in my (hopefully) humble opinion, while an interesting goal, is generally not possible).

''Testing should also probably be done against a test environment that has a little more history/data, but that is also reproducible (archived in some way that it can be reused over and over for the testing but from a known state other than "empty")... -- lance''
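
For reproducibility, a bare test environment like the one described above can be recreated in a few commands. The following is only a minimal sketch: the paths, project name, database string and port are illustrative assumptions rather than the values used for these tests, and ticket #1 still has to be created once (through the web UI or by the ticket-creating clients) before the ticket read test will pass.

{{{
# Illustrative commands only; paths and names are assumptions.
svnadmin create /srv/test/svnrepo        # empty Subversion repository to "attach" to Trac

# initenv arguments as used by trac-admin in the 0.10/0.11 series:
# <projectname> <db> <repostype> <repospath>
trac-admin /srv/test/tracenv initenv "LoadTest" sqlite:db/trac.db svn /srv/test/svnrepo

tracd --port 8000 /srv/test/tracenv      # serve the bare environment with the standalone tracd server
}}}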
     

For mod_python, Apache version 2.2.11 was used and was configured to use the "worker" MPM (see the configuration sketch below).
-   - ''impact of other MPMs??  I do know that the choice of MPM has an impact on memory usage and potentially thread/process swaping as well as cost of processing a request, depending on MPM -- lance''
+   - ''impact of other MPMs?? I do know that the choice of MPM has an impact on memory usage and potentially thread/process swapping as well as cost of processing a request, depending on MPM -- lance''

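As a point of reference for the "worker" MPM question above, a worker + mod_python setup along these lines would look roughly as follows. This is a sketch only, not the httpd.conf actually used for the tests; the sizing values are the stock Apache 2.2 worker settings, and the paths reuse the hypothetical environment from the earlier sketch.

{{{
# Illustrative httpd.conf excerpt; not the configuration used for these tests.

# Worker MPM sizing (stock Apache 2.2 values); memory usage and thread/process
# churn depend heavily on these directives.
<IfModule mpm_worker_module>
    StartServers          2
    MaxClients          150
    MinSpareThreads      25
    MaxSpareThreads      75
    ThreadsPerChild      25
    MaxRequestsPerChild   0
</IfModule>

# Trac served through mod_python (TracEnv path is the assumed one from the sketch above).
<Location /trac>
    SetHandler mod_python
    PythonInterpreter main_interpreter
    PythonHandler trac.web.modpython_frontend
    PythonOption TracEnv /srv/test/tracenv
    PythonOption TracUriRoot /trac
</Location>
}}}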
- All configurations used the default Sqlite backend (3.6.13).
+ All configurations used the default SQLite backend, version 3.6.13.

- Python was version 2.5.4-!r2 (Gentoo calendar)
+ Python was version 2.5.4-!r2 (Gentoo calendar).

== Testing Methodology
     
Two different test cases were used, one for tracd and one for configurations using Apache. The reason for this is that (for the most part, and due to Python) tracd is pretty much single-threaded, while configurations leveraging Apache are not. Since the test server was a dual core, the Apache configurations could sustain a higher TPS than the tracd configurations, so I decided to increase the number of client threads when testing the Apache configurations in order to better represent their capabilities beyond just a reduced response time, i.e. to show that they can sustain a higher TPS while still maintaining a low response time.

- === tracd ===
-
- === mod_python ===
+ The tracd and mod_python test plan files are attached.

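The two plans can then be run non-interactively from the JMeter command line. The plan and log file names below are placeholders rather than the names of the attached files, and the thread counts are arbitrary; if a plan reads its thread count with ${__P(threads,1)}, the -J option shown here can override it per configuration.

{{{
# Placeholders for the attached test plans; -n = non-GUI, -t = test plan, -l = results log,
# -J = set a JMeter property that the plan can read via ${__P(...)}.
jmeter -n -t tracd_plan.jmx      -l results_tracd.jtl      -Jthreads=1
jmeter -n -t modpython_plan.jmx  -l results_modpython.jtl  -Jthreads=10
}}}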
== Testing Results

- While the testing resulted in many MB of results from log files and test data, I am only providing key metrics for the testing here.  If desired the testing could most likely be recreated and (one would hope) similar results could be obtained if anyone so desired.
+ While the testing resulted in many MB of results from log files and test data, I am only providing key metrics for the testing.

=== Trac-0.11-stable