Changes between Version 2 and Version 3 of TracPerformance/LoadTestingWithJmeter
Timestamp: Feb 6, 2015, 9:08:26 AM
Legend:
- Unmodified: no prefix
- Added (new in v3): prefixed with "+"
- Removed (v2 only): prefixed with "-"
- Elided unmodified sections: shown as "…"
TracPerformance/LoadTestingWithJmeter
= Trac Load Testing

- While not necessarily definitive, some load testing has been done and the results are provided here for the benefit of anyone that has an interest in at least some insight into how Trac scales/performs under load.
+ While not necessarily definitive, some load testing has been done and the results are provided here for some insight into how Trac scales/performs under load.

== Rationale

…

This testing was prompted primarily due to investigation of issue #7490 and attempting to reproduce the issue, or at least determine the extent of the impact of moving from Trac-0.10x to Trac-0.11x, as well as the impact of various configurations (tracd vs. mod_python).

- Additional testing may be done in response to these results, but I figured while I was testing that I should probably capture the information somewhere as someone may find it of interest (so here it is).
+ Additional testing may be done in response to these results, but I figured while I was testing that I should probably capture the information somewhere as someone may find it of interest.

== Test Hardware and Services Configuration

…

''Get output from /about on the site being tested as suggested in an email thread -- lance''

- All the tests were conducted on a "bare" installation of Trac, that is, a base install with only one ticket created (required as one of the read tests is for ticket #1, which will fail unless/until the first ticket is created by the clients creating tickets); however, it was also "attached" to a real (but empty) svn repo. Obviously, the amount of change per day (tickets/day and change sets/day) has an impact on a number of the reports (timeline most directly) and will vary significantly from installation to installation. These specifics are difficult to factor into the testing and can also significantly reduce the "reproducibility" of the results. Since the focus of this testing is to determine the performance impact from version to version of Trac, as well as the performance of various configurations of Trac, I thought it more important to maintain reproducibility (for the sake of others) and to better enable others to compare the performance of their specific hardware and software configurations to the results presented here than it was to contrive or develop "the most realistic" test possible (which, in my (hopefully) humble opinion, while an interesting goal, is generally not possible).
+ All the tests were conducted on a "bare" installation of Trac, that is, a base install with only one ticket created (required as one of the read tests is for ticket #1, which will fail unless/until the first ticket is created by the clients creating tickets); however, it was also "attached" to a real but empty svn repo.
+
+ Obviously, the amount of change per day (tickets/day and change sets/day) has an impact on a number of the reports (timeline most directly) and will vary significantly from installation to installation. These specifics are difficult to factor into the testing and can also significantly reduce the "reproducibility" of the results. Since the focus of this testing is to determine the performance impact from version to version of Trac, as well as the performance of various configurations of Trac, I thought it more important to maintain reproducibility and to better enable others to compare the performance of their specific hardware and software configurations to the results presented here than it was to contrive or develop "the most realistic" test possible (which, in my (hopefully) humble opinion, while an interesting goal, is generally not possible).

''Testing should also probably be done against a test environment that has a little more history/data, but that is also reproducible (archived in some way that it can be reused over and over for the testing, but from a known state other than "empty")... -- lance''

…

For Mod_Python, Apache version 2.2.11 was used and was configured to use the "worker" MPM (more details on Apache config).

- ''impact of other MPMs?? I do know that the choice of MPM has an impact on memory usage and potentially thread/process swaping as well as cost of processing a request, depending on MPM -- lance''
+ ''impact of other MPMs?? I do know that the choice of MPM has an impact on memory usage and potentially thread/process swapping as well as cost of processing a request, depending on MPM -- lance''

- All configurations used the default Sqlite backend (3.6.13).
+ All configurations used the default Sqlite backend version 3.6.13.

- Python was version 2.5.4-!r2 (Gentoo calendar)
+ Python was version 2.5.4-!r2 (Gentoo calendar).

== Testing Methodology

…

Two different test cases were used, one for tracd and one for configurations using Apache. The reason for this is that (for the most part, and due to Python) tracd is pretty much single threaded, while configurations leveraging Apache are not. Since the test server was a dual core, this enabled configurations using Apache to sustain a higher TPS than the configurations with tracd, and thus I decided to increase the number of client threads for testing of the Apache configurations in order to better represent its capabilities other than just through a reduced response time, i.e. show that it can sustain a higher TPS while still maintaining a low response time.

- === tracd
-
- === mod_python
+ The tracd file and mod_python file are attached.
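The configuration notes above mention Apache 2.2 with the worker MPM and mod_python but defer the Apache details. For anyone trying to reproduce a comparable setup, a typical Trac-on-mod_python stanza of that era looks like the following sketch (the environment path and URI root here are placeholders, not the values used in these tests):

```apache
# Illustrative mod_python setup for Trac; not the exact tested config.
<Location /trac>
    SetHandler mod_python
    PythonInterpreter main_interpreter
    PythonHandler trac.web.modpython_frontend
    PythonOption TracEnv /path/to/trac-env
    PythonOption TracUriRoot /trac
</Location>
```

The choice of worker MPM matters here because it serves requests from threads in a small number of processes, which changes memory use and request-dispatch cost relative to prefork, as the inline comment above about MPMs suggests.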
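The methodology above reports results in terms of sustained TPS and response time. As a rough, hypothetical illustration of how such key metrics can be pulled from JMeter's CSV result logs (assuming the default `timeStamp` and `elapsed` columns; JMeter's own aggregate report computes throughput somewhat more precisely):

```python
import csv
import io

def summarize(jtl_csv):
    """Rough key metrics from JMeter CSV results: returns (throughput in
    requests/sec, mean response time in ms). Assumes the default
    'timeStamp' (ms since epoch) and 'elapsed' (ms) columns."""
    rows = list(csv.DictReader(io.StringIO(jtl_csv)))
    stamps = [int(r["timeStamp"]) for r in rows]
    elapsed = [int(r["elapsed"]) for r in rows]
    # Span from first to last request start: a crude approximation of
    # the test duration.
    duration_s = (max(stamps) - min(stamps)) / 1000.0
    tps = len(rows) / duration_s if duration_s else float(len(rows))
    mean_ms = sum(elapsed) / float(len(elapsed))
    return tps, mean_ms

# Three samples spanning one second: 3.0 requests/sec, mean 100 ms.
sample = "timeStamp,elapsed\n1000,120\n1500,80\n2000,100\n"
tps, mean_ms = summarize(sample)
```

This is only a sketch for interpreting the numbers that follow, not the tooling used to produce them.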
…

== Testing Results

- While the testing resulted in many MB of results from log files and test data, I am only providing key metrics for the testing here. If desired, the testing could most likely be recreated and (one would hope) similar results could be obtained if anyone so desired.
+ While the testing resulted in many MB of results from log files and test data, I am only providing key metrics for the testing.

=== Trac-0.11-stable