Edgewall Software

Changes between Version 4 and Version 5 of TracPerformance/LoadTestingWithJmeter


Timestamp:
Feb 7, 2015, 4:30:44 PM
Author:
figaro
Comment:

Cosmetic changes

  • TracPerformance/LoadTestingWithJmeter

    v4 v5  
    55== Rationale
    66
    7 This testing was prompted primarily due to investigation of issue #7490 and attempting to reproduce the issue or at least determine the extent of the impact of moving from Trac-0.10x to Trac-0.11x as well as the impact of various configurations (tracd vs. mod_python).
     7This testing was prompted primarily by issue #7490: attempting to reproduce the issue, or at least determine the extent of the impact of moving from Trac-0.10x to Trac-0.11x, as well as the impact of various configurations (tracd vs. mod_python).
    88
    9 Additional testing may be done in response to these results, but I figured while I was testing that I should probably capture the information somewhere as someone may find it of interest.
     9Additional testing may be done in response to these results, and this is for reference only.
    1010
    1111== Test Hardware and Services Configuration
    1212
    13 The server that Trac was install on is a Linux server (Gentoo flavored) with the following characteristics:
     13The server that Trac was installed on is a Linux server ([https://www.gentoo.org/ Gentoo flavored]) with the following characteristics:
    1414
    1515 *  AMD Athlon(tm) 64 X2 Dual Core Processor 4200+
     
    5252All the tests were conducted on a "bare" installation of Trac, that is, a base install with only one ticket created (required because one of the read tests is for ticket #1, which will fail unless/until the first ticket is created by the clients creating tickets); however, it was also "attached" to a real but empty svn repo.
    5353
    54 Obviously, depending on the amount of change per day (tickets/day and change sets/day) has an impact on a number of the reports (timeline most directly) and will vary significantly from installation to installation. These specifics are difficult to factor into the testing and also can significantly reduce the "reproducability" of the results. Since the focus of this testing is to determine the performance impact from version to version of Trac as well as the performance of various configurations of Trac, I thought it more important to maintain reproducibility and also better enable others to compare the performance of their specific hardware and software configurations to the results presented here than it was to contrive or develop "the most realistic" test possible (which, in my (hopefully) humble opinion, while an interesting goal, it is generally not possible).
     54Obviously, the amount of change per day (tickets/day and change sets/day) has an impact on a number of the reports (timeline most directly) and will vary significantly from installation to installation. These specifics are difficult to factor into the testing and also can significantly reduce the "reproducibility" of the results. Since the focus of this testing is to determine the performance impact from version to version of Trac as well as the performance of various configurations of Trac, I thought it more important to maintain reproducibility and also better enable others to compare the performance of their specific hardware and software configurations to the results presented here than it was to contrive or develop "the most realistic" test possible.
    5555
    5656''Testing should also probably be done against a test environment that has a little more history/data, but that is also reproducible (archived in some way so that it can be reused over and over for the testing, but from a known state other than "empty")... -- lance''
     
    7575== Testing Methodology
    7676
    77 Initial testing was done with a proprietary tool; however, I have switched to the Apache JMeter project for load testing trac for several reasons:
    78  *  It is OSS and thus easier/cheaper for others to obtain and potentially use to verify/test their specific configurations and thus compare them with the data obtained here (more easily verifiable by others)
    79  *  Not platform specific.  The original tool was only (mostly) available on Windows; however JMeter is 100% java and provides both *.bat and *.sh so should be usable regardless of platform (for client, sever was independent of load generator anyway, being HTTP).
     77[http://jmeter.apache.org/ Apache JMeter project] was used for the following reasons:
     78 *  It is open source software and thus easier and cheaper for others to obtain and potentially use to verify/test their specific configurations and thus compare them with the data obtained here.
     79 *  Not platform specific: JMeter is 100% Java and provides both *.bat and *.sh launchers, so it should be usable regardless of platform (this applies to the client; the server was independent of the load generator anyway, being HTTP).
    8080
    81 For the testing, I attempted to be realistic in the load that was placed on the server from the standpoint of ensuring that some form of "think time" was used in the clients.  The current testing scenario also includes a set of clients that are read only and a set of clients that are creating new tickets. Obviously, the fact that tickets are being created (more tickets added over time) means that a number of the read only requests get longer and longer (more time to generate) as the testing continues, since they have to render successively more data as data is added to the system. In some of my initial scenarios, this had the effect that the more transactions per second, the more data that was added by the clients; however, with the current configuration in JMeter, servers that provide more TPS (transactions per second) *should* not be penalized. That is, if you are not careful, the data writers (clients adding new tickets) have a tendency to "equalize" the system by being able to add more tickets to a system with more TPS and thus the "average" performance would seem to be more equal between configurations with two different TPS. I have worked to engineer the testing such that this effect is minimized.
     81For the testing, I attempted to be realistic in the load that was placed on the server from the standpoint of ensuring that some form of "think time" was used in the clients. The current testing scenario also includes a set of clients that are read only and a set of clients that are creating new tickets. Obviously, the fact that tickets are being created (more tickets added over time) means that a number of the read only requests get longer and longer (more time to generate) as the testing continues, since they have to render successively more data as data is added to the system. In some of my initial scenarios, this had the effect that the more transactions per second, the more data that was added by the clients; however, with the current configuration in JMeter, servers that provide more TPS (transactions per second) *should* not be penalized. That is, if you are not careful, the data writers (clients adding new tickets) have a tendency to "equalize" the system by being able to add more tickets to a system with more TPS and thus the "average" performance would seem to be more equal between configurations with two different TPS. I have worked to engineer the testing such that this effect is minimized.
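The interaction between think time and TPS described above can be made concrete with a back-of-the-envelope closed-loop model (this sketch is not from the original test plan; the client counts and timings below are illustrative assumptions):

```python
# Back-of-the-envelope model of how think time bounds the request rate
# of a closed-loop client pool. With N clients each looping
# "request -> think", Little's Law gives approximately:
#   TPS ~= N / (response_time + think_time)

def closed_loop_tps(clients: int, response_time_s: float, think_time_s: float) -> float:
    """Approximate steady-state transactions per second for a closed loop."""
    return clients / (response_time_s + think_time_s)

# Illustrative numbers only: 10 clients with a 5 s think time.
slow_server = closed_loop_tps(10, response_time_s=1.0, think_time_s=5.0)
fast_server = closed_loop_tps(10, response_time_s=0.1, think_time_s=5.0)

# When think time dominates, a 10x faster server gains well under 2x TPS,
# so the ticket-writing clients cannot "equalize" configurations by
# flooding a faster server with proportionally more tickets.
print(f"slow: {slow_server:.2f} TPS, fast: {fast_server:.2f} TPS")
```

This is why adding think time in the JMeter clients keeps the faster configurations from being penalized by extra data growth: the write rate is capped by the think time rather than by server speed.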
    8282
    8383Testing was conducted on a "warm" machine: the machine was not rebooted between tests, and the first few test runs used to warm up the machine/configuration were not recorded.
     
    8585== Test Cases
    8686
    87 This section provides the details (raw text) of the test cases created for JMeter. Thankfully, JMeter saves its test cases in XML format, so you can copy and past these into an XML file and download and use JMeter to potentially reproduce these tests on your own hardware for comparison with the results I have posted here.
     87This section provides the details (raw text) of the test cases created for JMeter. JMeter saves its test cases in XML format, so you can copy and paste these into an XML file and download and use JMeter to reproduce these tests on your own hardware for comparison with the results posted here.
    8888
    89 Two different test cases were used, one for tracd and one for configurations using Apache.  The reason for this is that (for the most part and due to Python) tracd is pretty much single threaded while configurations leveraging Apache are not. Since the test server was a dual core, this enabled configurations using Apache to sustain greater/higher TPS than the configurations with tracd and thus I decided to increase the number of client threads for testing of the Apache configurations in order to better represent it's capabilities other than just through a reduced response time, ie show that it can sustain a higher TPS while still maintaining a low response time.
     89Two different test cases were used, one for tracd and one for configurations using Apache. The reason for this is that (for the most part and due to Python) tracd is pretty much single threaded while configurations using Apache are not. Since the test server was a dual core, this enabled configurations using Apache to sustain higher TPS than the configurations with tracd, and thus I decided to increase the number of client threads for testing of the Apache configurations in order to better represent its capabilities other than just through a reduced response time, i.e. show that it can sustain a higher TPS while still maintaining a low response time.
    9090
    9191The [attachment:tracd.xml tracd file] and [attachment:mod_python.xml mod_python file] are attached.
     
    9393== Testing Results
    9494
    95 While the testing resulted in many MB of results from log files and test data, I am only providing key metrics for the testing.
     95While the testing resulted in many MB of results from log files and test data, only the key metrics are provided.
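Key metrics like the ones below can be distilled from raw JMeter output with a short script. This is a sketch, not part of the original test run; it assumes JMeter's CSV results format with its default column names (`timeStamp` in ms, `elapsed` in ms, `success` as `"true"`/`"false"`):

```python
# Summarize a JMeter CSV results file (.jtl) into a few key metrics:
# sample count, mean/median response time, error rate, and throughput.
import csv
from statistics import mean, median

def summarize_jtl(path: str) -> dict:
    timestamps, elapsed, errors = [], [], 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            timestamps.append(int(row["timeStamp"]))  # request start, ms epoch
            elapsed.append(int(row["elapsed"]))       # response time, ms
            if row["success"] != "true":
                errors += 1
    # Wall-clock span of the run; fall back to 1 s for a single sample.
    duration_s = (max(timestamps) - min(timestamps)) / 1000.0 or 1.0
    return {
        "samples": len(elapsed),
        "mean_ms": mean(elapsed),
        "median_ms": median(elapsed),
        "error_rate": errors / len(elapsed),
        "tps": len(elapsed) / duration_s,
    }
```

The throughput figure here is samples divided by the wall-clock span of the run, which matches the TPS notion used in the discussion above closely enough for comparing configurations.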
    9696
    9797=== Trac-0.11-stable
     
    148148=== About the presentation
    149149
    150  - what are the units used in the table above (asked by Shane Caraveo on Trac-Users)
    151  - maybe you could put the jmeter config files as attachments instead of inline (cboos)
     150What are the units used in the table above? (asked by Shane Caraveo on Trac-Users).
    152151
    153152=== About the tests