Edgewall Software

Opened 16 years ago

Closed 14 years ago

Last modified 20 months ago

#7490 closed defect (fixed)

Trac 0.11 really slow, high cpu usage

Reported by: spam@…
Owned by: Christian Boos
Priority: high
Milestone: 0.11.6
Component: web frontend
Version: 0.11-stable
Severity: critical
Keywords: needinfo
Cc: eric.petit@…, info@…, xtophe+trac@…, alphafighter1@…, dgollub@…, nathan.schindler@…, seth@…, wellington@…, mail@…, curtnelson@…, ValentinVoigt@…, igor.poboiko@…
Branch:
Release Notes:
API Changes:
Internal Changes:

Description (last modified by Christian Boos)

Editorial Note

There are lots of possible causes for slow performance. This ticket has helped identify and fix some of them.

Please go carefully through the TracPerformance checklist first.

If you really don't find a way to fix your problem there, then please don't reopen this ticket (see comment:124); follow up on #8813 instead. Thank you.

Original Description

Since I switched to Trac 0.11, my Trac is really slow. I've tried some performance improvements, but none of them was sufficient. Looking through the log file, I noticed that the slowdown occurs in the "Prepare chrome data for request" step:

2008-07-31 13:09:27,957 Trac[__init__] DEBUG: Dispatching <Request "GET u'/timeline'">
2008-07-31 13:09:28,127 Trac[__init__] DEBUG: Subversion bindings imported
2008-07-31 13:09:28,141 Trac[__init__] DEBUG: action controllers for ticket workflow: ['ConfigurableTicketWorkflow']
2008-07-31 13:09:28,148 Trac[__init__] DEBUG: Retrieving session for ID 'languitar'
2008-07-31 13:09:28,239 Trac[__init__] DEBUG: Prepare chrome data for request
2008-07-31 13:09:30,890 Trac[__init__] DEBUG: Updating wiki page index
2008-07-31 13:09:31,242 Trac[__init__] DEBUG: 1255 unreachable objects found.

or

2008-07-31 13:15:05,724 Trac[__init__] DEBUG: Dispatching <Request "GET u'/browser'">
2008-07-31 13:15:05,999 Trac[__init__] DEBUG: Subversion bindings imported
2008-07-31 13:15:06,026 Trac[__init__] DEBUG: action controllers for ticket workflow: ['ConfigurableTicketWorkflow']
2008-07-31 13:15:06,051 Trac[__init__] DEBUG: Retrieving session for ID 'languitar'
2008-07-31 13:15:06,059 Trac[__init__] DEBUG: Prepare chrome data for request
2008-07-31 13:15:09,287 Trac[__init__] DEBUG: 1598 unreachable objects found.

Attachments (6)

WikiStart-ItMill.txt (4.9 KB) - added by Christian Boos 15 years ago.
ItMill - converting to 0.11 style layout (as discussed in comment:18)
apache2_2.2.8_backtrace_nopython_symbols.txt (2.9 KB) - added by dgollub 15 years ago.
Apache 2.2.8 backtrace on CPU peak - mod_python debug-symbols missing
trac profile top-down view.txt (31.7 KB) - added by anonymous 15 years ago.
This is a sampling of my system while trac was loading the default welcome page in a new trac env. This was taken by Shark on Leopard, Dual 2.0 GHz G5, 3.5 GB RAM.
trac profile bottom-up view.txt (22.2 KB) - added by anonymous 15 years ago.
This is the same sampling, only in a bottom-up oriented view. Taken by Shark on Leopard, Dual 2.0 GHz G5, 3.5 GB RAM.
cProfile-stats.txt (2.3 KB) - added by lloyd@… 15 years ago.
Profile of gateway.run(dispatch_request)
profile.txt.gz (45.4 KB) - added by Sebastian Krysmanski <sebastian@…> 15 years ago.
profiling report


Change History (131)

comment:1 by Jonas Borgström, 16 years ago

You'll have to give us some more information about your system, such as your OS, Python version, database, and frontend (FastCGI, mod_python, etc.).

comment:2 by languitar, 16 years ago

Debian on a v-host, Python 2.4, mod_python. Database is SQLite.

What I already tried is this:

        Alias /trac/chrome/common /usr/lib/python2.4/site-packages/Trac-0.11-py2.4.egg/trac/htdocs
        <Directory /usr/lib/python2.4/site-packages/Trac-0.11-py2.4.egg/trac/htdocs/>
                Order allow,deny
                Allow from all
        </Directory>

That's already reflected in the log above.

comment:3 by Jonas Borgström, 16 years ago

Trac 0.11 is known to be a bit slower than 0.10 due to the change of template engine. For virtual servers with limited RAM it's best to avoid mod_python and instead use tracd or FastCGI, since their memory footprint is usually much lower. One option to lower the Apache+mod_python memory footprint is to make sure the MPM mode is "worker" instead of "prefork".

comment:4 by languitar, 16 years ago

But I don't notice any swapping while loading Trac. How could memory be the problem, then?

comment:5 by Alex, 16 years ago

I have the same problem with high CPU load when the Trac scripts start. (apache 2.0.55, mod_python 3.3.1, python 2.4.3-8, python-sqlite 1.1.6-1, sqlite 3.1.2-3, subversion 1.4.2, swig 1.3.35, trac 0.11rc1, setuptools 0.6c8, pygments 0.9, docutils 0.4, textile 2.0.11). And I see the same entries in the Trac log file.

2008-08-18 10:44:48,660 Trac[main] DEBUG: Dispatching <Request "GET u'/wiki'">
2008-08-18 10:44:48,730 Trac[svn_fs] DEBUG: Subversion bindings imported
2008-08-18 10:44:48,742 Trac[chrome] DEBUG: Prepare chrome data for request
2008-08-18 10:44:48,753 Trac[api] DEBUG: Updating wiki page index
2008-08-18 10:44:48,828 Trac[session] DEBUG: Retrieving session for ID u'alex'
2008-08-18 10:44:49,210 Trac[main] DEBUG: 500 unreachable objects found.

comment:6 by martin, 16 years ago

I can confirm that it is not a memory problem. I'm using plain CGI, not FastCGI.

comment:7 by greg, 16 years ago

+1 here.

  • system:
    cat /etc/debian_version
    4.0
    srv:~# uname -a
    Linux srv 2.6.18-5-amd64 #1 SMP Sat Dec 22 20:43:59 UTC 2007 x86_64 GNU/Linux
    
  • packages:
    dpkg -l | egrep "python|apache|sqlite"
    ii  apache2-mpm-prefork        2.2.3-4+etch5                        Traditional model for Apache HTTPD 2.1
    ii  apache2-utils              2.2.3-4+etch5                        utility programs for webservers
    ii  apache2.2-common           2.2.3-4+etch5                        Next generation, scalable, extendable web se
    ii  collectd-apache            3.10.4-1                             statistics collection daemon (Apache plugin)
    ii  libapache2-mod-php5        5.2.0-8+etch11                       server-side, HTML-embedded scripting languag
    ii  libapache2-svn             1.4.2dfsg1-2                         Subversion server modules for Apache
    ii  libsqlite3-0               3.3.8-1.1                            SQLite 3 shared library
    ii  python                     2.4.4-2                              An interactive high-level object-oriented la
    ii  python-central             0.5.12                               register and build utility for Python packag
    ii  python-crypto              2.0.1+dfsg1-1.2                      cryptographic algorithms and protocols for P
    ii  python-genshi              0.3.4-1                              python XML-based template engine
    ii  python-minimal             2.4.4-2                              A minimal subset of the Python language (def
    ii  python-paramiko            1.5.2-0.1                            make SSH2 connections with python
    ii  python-pyopenssl           0.6-2.3                              Python wrapper around the OpenSSL library (d
    ii  python-pysqlite2           2.3.2-2                              python interface to SQLite 3
    ii  python-setuptools          0.6c3-3                              Python Distutils Enhancements
    ii  python-soappy              0.11.3-1.7                           SOAP Support for Python (SOAP.py)
    ii  python-subversion          1.4.2dfsg1-2                         Python bindings for Subversion
    ii  python-support             0.5.6                                automated rebuilding support for python modu
    ii  python-xml                 0.8.4-6                              XML tools for Python
    ii  python2.4                  2.4.4-3+etch1                        An interactive high-level object-oriented la
    ii  python2.4-minimal          2.4.4-3+etch1                        A minimal subset of the Python language (ver
    ii  sqlite3                    3.3.8-1.1                            A command line interface for SQLite 3
    
  • apache2 :
            # Map the Trac CGI into /trac/
            ScriptAliasMatch ^/trac(.*) "/usr/lib/cgi-bin/trac.cgi$1"
    
            SetEnv LC_TIME "fr_FR"
            SetEnv TRAC_ENV "/data/trac/"
    
            # You need something like this to authenticate users
            <Location "/trac/login">
                    AuthType Basic
                    AuthName "Sysadmin Access"
                    AuthUserFile /etc/nagios/htpasswd.users
                    Require valid-user
            </Location>
    
  • cpu consumption for a simple wiki page call:
     5470 www-data  18   0  127m  20m 5264 R   98  0.3   0:00.77 trac.cgi
    

comment:8 by Christian Boos, 16 years ago

Well, it's old news that TracCgi is not fast…

The original report was about mod_python though.

comment:9 by anonymous, 16 years ago

I've migrated to WSGI: same behavior, but with apache2 processes at 100%. Should I migrate to MySQL/PostgreSQL?

-rw-r--r-- 1 www-data www-data  24M 2008-09-16 17:04 trac.db

comment:10 by greg, 16 years ago

(sorry: previous post posted by greg)

comment:11 by Christian Boos, 16 years ago

Well, 24M for a trac.db is still quite small, so using SQLite shouldn't be a problem unless you have dozens of simultaneous requests.

If that's not the case and you're seeing high CPU usage while viewing a wiki page with only a single request involved, then something is really odd. You'd need to debug the apache2 process under GDB (as explained in TracTroubleshooting#SystemErrors), break the process while it's at 100% CPU, and see where it's spending its time (ideally with a Python backtrace obtained within GDB, as explained on that page).

comment:12 by greg, 16 years ago

I've migrated the data to a PostgreSQL database: it is somewhat better. PostgreSQL uses ~1% CPU at most, but apache2 still eats 100% on each request.

We are only 2 users on this Trac.

comment:13 by jani.laakso@…, 15 years ago

Using OpenVZ (resources are more than enough), Apache 2.2.9, PostgreSQL 8.2, mod_python 2.4.5 and Trac 0.11b2.

Hardware is 2 x Xeon 2.4GHz, 2GB RAM.

Same issue here. I've done some experimenting with this issue and noticed the following: latencies are better with Apache's "keepalive off". With keepalive activated, Apache2 processes constantly eat 5-10 times more CPU even when there are no active requests. I am not saying that you should not use keepalive; I only mention this in case it helps someone track down the issue.
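For reference, the setting being toggled here is Apache's KeepAlive directive, e.g. in apache2.conf:

    KeepAlive Off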

The main wiki page (simple content) typically takes 2-5 seconds to generate with only one user; I can't imagine why such a simple page is so slow to generate, even though it comes from PostgreSQL.

The timeline takes a minimum of 6-15 seconds to generate, but there are plenty of rows to process and display. Still, it's pretty darn slow when we are under single-user load only.

I am afraid of what will happen when we get 100 concurrent users within one minute due to a "slashdot" effect. I'd really like to know whether Trac's wiki can be made a lot faster. I understand that Trac is mainly used for project development, but Trac is so cool that we would like to use it for our community wiki pages as well!

Another aspect:

  • with keepalive on we got constant 80% CPU utilization on our server
  • with keepalive off we got constant 15% CPU utilization on our server
    • CPU load comes from Apache processes

(in our case load was two requests per minute)

Actual site is http://dev.itmill.com

comment:14 by Remy Blank, 15 years ago

I've been looking at page load times on the site above using Firebug, and it seems that for both WikiStart and the timeline, the page is served within about 250 ms. What takes an awfully long time to load are the associated resources like CSS and images. For example, in the timeline there are many small images that each take about 1.8 s to load.

You could try serving Trac's static content directly from Apache instead of going through Trac. The idea is explained in TracCgi#MappingStaticResources, and sketched below.
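The shape of that configuration, mirroring the one already shown in comment:2 (the egg path is installation-specific and must be adjusted):

    Alias /trac/chrome/common /usr/lib/python2.4/site-packages/Trac-0.11-py2.4.egg/trac/htdocs
    <Directory /usr/lib/python2.4/site-packages/Trac-0.11-py2.4.egg/trac/htdocs>
        Order allow,deny
        Allow from all
    </Directory>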

comment:15 by jani.laakso@…, 15 years ago

The actual question is this: after I set "keepalive off" in apache2.conf, both http://dev.itmill.com and the Trac ticket pages now load a lot faster; the worst latency is now 0.1-0.5 seconds, which is very good.

Does anyone have an idea why the mod_python/Apache/Trac combination acts this way in our installation?

----

Timeline load time (only for the HTML, no images fetched) is 5-7 seconds more than 90% of the time, with a minimum of 4 seconds. Network latency can be ignored; it is a few milliseconds.

----

Load times are now decent; I made some changes to the Apache configuration. A day ago this was not the case; you can check the statistics graphs here: http://jani.virtuallypreinstalled.com/stats/

Latency statistics explained:

  • each graph shows latency per service, in seconds
  • a blue dot means the latency poller is online (working)
  • a grey dot means the latency poller is offline (down)
  • one horizontal pixel shows the maximum latency within a 10-minute window

The tool generates roughly one session/HTTP GET per minute per service.

(Never mind the spikes at night; they happen because of I/O from the backup scripts. This will be fixed later.)

----

Yes, I've been thinking of setting up a simple caching proxy in Apache for page requests. This would handle a "slashdot" effect in a robust way. But I have not yet figured out how to refresh the cache when someone edits a wiki page, a ticket, etc. Most likely we would cache only the main page and the timeline, and refresh the cache once per minute.

MappingStaticResources is a nice idea too; I will look into it, thanks.

in reply to:  14 comment:16 by anonymous, 15 years ago

Replying to rblank:

I've been looking at page load times on the site above using Firebug, and it seems that for both WikiStart and the timeline, the page is served within about 250 ms. What takes an awfully long time to load are the associated resources like CSS and images. For example, in the timeline there are many small images that each take about 1.8 s to load.

Do not benchmark this with Firebug; the "Net" tab ignores the time spent on the server (mod_python/Trac/SQL/PostgreSQL).

Here is a simple testcase:

dev@jani:~/test$ ping -c 1 dev.itmill.com
PING dev.itmill.com (194.100.97.95) 56(84) bytes of data.
64 bytes from 194.100.97.95: icmp_seq=1 ttl=63 time=0.089 ms

--- dev.itmill.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms
dev@jani:~/test$ time wget http://dev.itmill.com/timeline
--08:48:27--  http://dev.itmill.com/timeline
           => `timeline'
Resolving dev.itmill.com... 194.100.97.95
Connecting to dev.itmill.com|194.100.97.95|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 294,590 (288K) [text/html]

100%[=================================================================>] 294,590       --.--K/s             

08:48:33 (9.49 MB/s) - `timeline' saved [294590/294590]


real	0m5.468s
user	0m0.004s
sys	0m0.000s
dev@jani:~/test$  

ping demonstrates that network round-trip latency is negligible.

wget fetches only the HTML (no images/CSS/JS/…) and always takes at least 5 seconds.

----

PS. Please note my question regarding "keepalive off": wiki and ticket pages are now faster, and I'd like to understand why this happens, in case I need to optimize this further.

in reply to:  13 ; comment:17 by Christian Boos, 15 years ago

Replying to jani.laakso@…:

Using openvz (resources are more than enough), Apache 2.2.9, postgresql 8.2, mod_python 2.4.5 and trac 0.11b2.

Also, use recent versions of your software packages: mod_python 3.3.1 and Trac 0.11.1 or 0.11-stable. The same goes for Genshi: use a recent version (milestone:0.5.1 or 0.5-stable).

in reply to:  17 comment:18 by Christian Boos, 15 years ago

And btw, you could also reduce your load on /timeline by setting [timeline] default_daysback = 10 or some value lower than the default 30 (see the snippet below).
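In trac.ini that would look like this (10 being the example value suggested above):

    [timeline]
    default_daysback = 10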

[OT] if you're going to upgrade Trac to some released version, you'll need to adapt your front page to use a #!div based layout. Check WikiHtml. I'll attach a sample conversion to get you started.

by Christian Boos, 15 years ago

Attachment: WikiStart-ItMill.txt added

ItMill - converting to 0.11 style layout (as discussed in comment:18)

comment:19 by jani.laakso@…, 15 years ago

Thanks a lot for the comments, all feedback is more than welcome!

We'll receive new hardware (8-core CPU, 16GB memory, 4 SAS disks…) in a week or so. First we will move the current Trac directly to that beast and run some scalability/robustness tests against Trac with Apache JMeter and ab. If the test results are not good enough, the next step might be to upgrade Trac (/mod_python/Genshi/…).

I'll get back to this when the tests have been run; we have some indication that the current setup might have an I/O issue for some reason. In any case, it will be interesting to see how Trac scales.

comment:20 by Remy Blank, 15 years ago

Keywords: needinfo added

comment:21 by stef@…, 15 years ago

I'm experiencing the same problem. I upgraded a 0.10.3 site to 0.11.2 and the slow down is massive.

The WikiStart page takes 5 seconds to load, for example, verified using wget from the same LAN that Trac is installed on. This start page has nothing on it besides normal wiki formatting; no macros, page lists or anything fancy.

I'm using mod_python, but it's as slow as if I was using CGI.

mod_python 3.1.4, apache 2.2, FreeBSD 6.2, python 2.4.2

comment:22 by jheathco, 15 years ago

Same problem here as well

mod_python 3.3.1, Apache2, Python 2.4.3, 0.11.2

comment:23 by jheathco, 15 years ago

Priority: normal → high

Upping the priority on this; it seems a lot of users have this problem…

comment:24 by Christian Boos, 15 years ago

Including t.e.o. today … anything unusual going on, Jonas?

comment:25 by anonymous, 15 years ago

cboos, do you want access to my server to take a look? Shoot me an email at jheathco [at] gmail.com

comment:26 by jani.laakso@…, 15 years ago

Everything on our server is now fast (enough), but we got there simply by investing in more raw computing power (8 cores, lots of memory…). In any case, I hope my comments help make Trac fast for other users as well.

comment:27 by anonymous, 15 years ago

I've upgraded to Python 2.5.2 and the problem still persists. I have also turned KeepAlive off, with limited success. I have one gig of RAM on the server; could that be the issue? (Needing more than that seems like a lot for Trac.)

comment:28 by anonymous, 15 years ago

tracd performance is MUCH better - I can continue to use that for the time being…

comment:29 by anonymous, 15 years ago

I have the same problem: Trac 0.11.2, Apache 2.2.2, using mod_python. The CPU goes up to 80% while fetching a wiki page in Trac.

comment:30 by Christian Boos, 15 years ago

Could someone try the following: run Apache in debug mode (-X) from the debugger (as described in TracTroubleshooting#DebuggingSegmentationFaults), then trigger such a CPU spike by accessing a wiki page, and interrupt Apache (Control-C) while it's 100% busy. The backtrace obtained there (bt) should be informative.
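A rough outline of such a session (a sketch; the httpd binary name and path vary by distribution, and mod_python debug symbols make the backtrace far more useful):

    $ gdb /usr/sbin/apache2
    (gdb) run -X
    ... access a wiki page, wait for the CPU spike, then press Control-C ...
    (gdb) bt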

by dgollub, 15 years ago

Attachment: apache2_2.2.8_backtrace_nopython_symbols.txt added

Apache 2.2.8 backtrace on CPU peak - mod_python debug-symbols missing

comment:31 by Vieira, 15 years ago

+1 This is annoying.

comment:32 by sam@…, 15 years ago

Trac 0.11 is very slow, and more specifically very, very slow at serving requests for the timeline. This could be, and apparently already is, used to cause high load on an attacked server. Multiple consecutive and/or parallel requests to the timeline page with a large date range cause Trac to use 100% CPU all the time, with extreme load (> 20). Is there anything that can be done to prevent this? How does trac.edgewall.com protect itself from this problem, other than using high-end hardware to cope with the performance problems induced by the software?

I'm running Trac 0.11, SQLite backend, Python 2.5 on Debian 5.x (sid).

in reply to:  32 comment:33 by anonymous, 15 years ago

Replying to sam@…:

Trac 0.11 is very slow, and more specifically very, very slow at serving requests for the timeline. This could be, and apparently already is, used to cause high load on an attacked server. Multiple consecutive and/or parallel requests to the timeline page with a large date range cause Trac to use 100% CPU all the time, with extreme load (> 20). Is there anything that can be done to prevent this? How does trac.edgewall.com protect itself from this problem, other than using high-end hardware to cope with the performance problems induced by the software?

I'm running Trac 0.11, SQLite backend, Python 2.5 on Debian 5.x (sid).

Which version of Apache are you running?

Since Trac is performing really slowly, especially while assembling /timeline, Apache spawns more threads to serve further requests. I worked around the issue by reducing MaxClients for the Apache process to 15 (the default was 150; I'm not using Debian). But again, that's just a workaround to avoid such a doomsday load.
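The workaround described above corresponds to something like this in the Apache (prefork MPM) configuration, using the commenter's value of 15:

    <IfModule mpm_prefork_module>
        MaxClients 15
    </IfModule>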

Another workaround: reduce the timeline's default range to 2 days instead of 14 (I have not measured the impact of this yet).

Another workaround: prevent robots from indexing the timeline and other pages that expose an entire history (see the sketch below).
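A sketch of that robots workaround (effective only for well-behaved crawlers):

    # robots.txt at the site root
    User-agent: *
    Disallow: /timeline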

comment:34 by anonymous, 15 years ago

I'm seeing the same behavior as the reporter:

2009-01-14 13:08:42,031 Trac[main] DEBUG: Dispatching <Request "GET u'/chrome/site/your_project_logo.png'">
2009-01-14 13:08:42,141 Trac[chrome] WARNING: File your_project_logo.png not found in any of ['/var/trac/htdocs']
2009-01-14 13:08:42,155 Trac[session] DEBUG: Retrieving session for ID u'<yournamehere>'
2009-01-14 13:08:42,165 Trac[main] WARNING: 404 Not Found (File your_project_logo.png not found)
2009-01-14 13:08:43,038 Trac[chrome] DEBUG: Prepare chrome data for request
2009-01-14 13:08:43,066 Trac[api] DEBUG: action controllers for ticket workflow: ['ConfigurableTicketWorkflow']
2009-01-14 13:08:44,799 Trac[main] DEBUG: 9295 unreachable objects found.
2009-01-14 13:09:04,999 Trac[main] DEBUG: Dispatching <Request "GET u'/timeline'">
2009-01-14 13:09:05,028 Trac[svn_fs] DEBUG: Subversion bindings imported
2009-01-14 13:09:05,081 Trac[session] DEBUG: Retrieving session for ID u'<yournamehere>'
2009-01-14 13:09:05,598 Trac[chrome] DEBUG: Prepare chrome data for request
2009-01-14 13:09:10,140 Trac[api] DEBUG: Updating wiki page index
2009-01-14 13:09:15,241 Trac[api] DEBUG: Updating wiki page index
2009-01-14 13:09:18,585 Trac[loader] DEBUG: Loading trac.ticket.web_ui from /usr/lib/python2.5/site-packages/Trac-0.11.2.1-py2.5.egg
Linux 2.6.22-14-generic #1

Apache 2.2.4 Python 2.5 mod_python 3.3.1-2

comment:35 by jjmbcom@…, 15 years ago

Severity: normal → critical

Hello, I am new to Trac. I was able to follow the basic installation instructions and got all the software installed. I tried to initenv a new project, and CPU and memory went through the roof. I read the threads above and did not see an obvious fix for this… is that accurate?

Here is information about my environment:

tracd --version: tracd 0.11.2.1

python -V: Python 2.4.1

httpd -V:

Server version: Apache/2.0.54
Server built: May 23 2005 08:12:24
Server's Module Magic Number: 20020903:9
Architecture: 32-bit
Server compiled with…

-D APACHE_MPM_DIR="server/mpm/prefork"
-D APR_HAS_SENDFILE
-D APR_HAS_MMAP
-D APR_HAVE_IPV6 (IPv4-mapped addresses enabled)
-D APR_USE_SYSVSEM_SERIALIZE
-D APR_USE_PTHREAD_SERIALIZE
-D SINGLE_LISTEN_UNSERIALIZED_ACCEPT
-D APR_HAS_OTHER_CHILD
-D AP_HAVE_RELIABLE_PIPED_LOGS
-D HTTPD_ROOT="/etc/httpd"
-D SUEXEC_BIN="/usr/sbin/suexec"
-D DEFAULT_PIDLOG="logs/httpd.pid"
-D DEFAULT_SCOREBOARD="logs/apache_runtime_status"
-D DEFAULT_LOCKFILE="logs/accept.lock"
-D DEFAULT_ERRORLOG="logs/error_log"
-D AP_TYPES_CONFIG_FILE="conf/mime.types"
-D SERVER_CONFIG_FILE="conf/httpd.conf"

in reply to:  25 ; comment:36 by Christian Boos, 15 years ago

Replying to anonymous:

cboos, do you want access to my server to take a look?

Sorry, I didn't have the opportunity to follow up on this so far, but it looks like at least part of the speed penalty came from a failure to update the svn cache:

2008-12-11 13:00:25,568 Trac[cache] INFO: repos rev [324] != cached rev [265]

(repeated at each request)

So this looks like the problem recently discussed on Trac-dev, now recorded as #8067.
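For reference, a stale cache like this can be resynchronized by hand with trac-admin (standard Trac tooling; whether it resolves this particular report is not established here):

    trac-admin /path/to/projenv resync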

in reply to:  36 comment:37 by dgollub, 15 years ago

Replying to cboos:

Replying to anonymous:

cboos, do you want access to my server to take a look?

Sorry, didn't have the opportunity so far to follow-up on this, but it looks like that - at least part of - the speed penalty came from a failure to update the svn cache:

2008-12-11 13:00:25,568 Trac[cache] INFO: repos rev [324] != cached rev [265]

(repeated at each request)

So this looks like the problem recently discussed on Trac-dev, now recorded as #8067.

Btw, it seems that the issue doesn't appear when using other SCM plugins (e.g. Git). This observation might fit with #8067.

comment:38 by Eric Petit <eric.petit@…>, 15 years ago

Cc: eric.petit@… added

I also noticed the higher CPU usage after upgrading my Debian server from Etch to Lenny (moving from Trac 0.10.3 to 0.11.1); it runs a couple of Trac sites (with lighttpd/trac.fcgi). The average CPU usage on the server jumped from ~10% to ~30%, and trac.fcgi is consistently among the top 3 processes in top. It does not make the Trac sites noticeably slower, though.

comment:39 by Thijs Triemstra <info@…>, 15 years ago

Cc: info@… added

comment:40 by xtophe+trac@…, 15 years ago

Cc: xtophe+trac@… added

comment:41 by anonymous, 15 years ago

Same here,

mod_python 3.3.1 Trac 0.11.3 Apache 2.2.8 Python 2.5.2 mod_ssl 2.2.8

Trac installed from source, everything else from the Ubuntu 8.04-server repositories. Having the same issue with mod_python, WSGI and tracd. Trac's response is very, very slow: ~5-15 seconds per request, with a very small project and two developers.

My server is a KVM-virtualized Linux environment with 512MB. Maybe I'll have to downgrade to 0.10.4 from the repositories?

in reply to:  41 comment:42 by anonymous, 15 years ago

Replying to anonymous:

Same here,

mod_python 3.3.1 Trac 0.11.3 Apache 2.2.8 Python 2.5.2 mod_ssl 2.2.8

Trac installed from source, everything else from the Ubuntu 8.04-server repositories. Having the same issue with mod_python, WSGI and tracd. Trac's response is very, very slow: ~5-15 seconds per request, with a very small project and two developers.

My server is a KVM virtualized Linux environment, 512MB. Maybe I'll have to downgrade to 10.4 from repositories?

I found that I wasn't using the KVM hypervisor but QEMU. When I reinstalled the VM with KVM (hardware virtualization), Trac's speed was what I expected, so no problems here. I don't know whether it was due to virtualization as such or QEMU's poor performance.

comment:43 by anonymous, 15 years ago

Same here. Upgraded my Debian virtual server to Lenny, which also brought my Trac installations to 0.11. Since then I have noticeable slowdowns :(

comment:44 by dgollub, 15 years ago

For clarification: I'm seeing this issue on a bare-metal machine (not a VM).

comment:45 by anonymous, 15 years ago

I'm a new user of Trac and am getting terrible performance (~10 seconds to load the default welcome page on a brand new Trac env).

My setup

Box: Dual G5 (not a speed demon in today's terms, but definitely up to the task)
OS: Leopard 
trac 0.11.3
fastcgi on apache included w/ Leopard

I was thinking about swapping to mod_python but it doesn't sound like that will solve my issues. It looks like this is still an issue. =(

comment:46 by anonymous, 15 years ago

I should note that swapping to Trac 0.10.x solves my FastCGI performance problems. If I get a spare moment, I'll do some profiling and let you guys know what I find.

by anonymous, 15 years ago

Attachment: trac profile top-down view.txt added

This is a sampling of my system while trac was loading the default welcome page in a new trac env. This was taken by Shark on Leopard, Dual 2.0 GHz G5, 3.5 GB RAM.

by anonymous, 15 years ago

Attachment: trac profile bottom-up view.txt added

This is the same sampling, only in a bottom-up oriented view. Taken by Shark on Leopard, Dual 2.0 GHz G5, 3.5 GB RAM.

comment:47 by anonymous, 15 years ago

I got a chance to do some profiling of my system while Trac/Apache was serving the default welcome page. I've attached two views of the sampling to this ticket: the first is a top-down view and the second a bottom-up view of the same sampling.

I haven't done any Python (or Trac) profiling before, so I'm not sure whether the symbols we're seeing are from the Python interpreter or from Trac's Python code. Anyway, here's hoping this is helpful!

comment:48 by alphafighter1@…, 15 years ago

Cc: alphafighter1@… added

Adding me to the Cc please.

in reply to:  15 comment:49 by Christian Boos, 15 years ago

Component: general → web frontend
Milestone: 0.11.5
Owner: set to Christian Boos

Replying to jani.laakso@…:

The actual question is this: After I set "keepalive off" for apache2.conf, both http://dev.itmill.com and trac ticket pages are now loading a lot faster, worst latency is now 0.1-0.5seconds which is very good.

Does anyone have idea why mod_python/Apache/Trac combination acts this way in our installation?

When I first saw that, I thought it was some kind of local configuration issue. But now I think Trac has serious trouble with HTTP 1.1 and in particular with Keep-Alive… See #8020.

in reply to:  47 comment:50 by Christian Boos, 15 years ago

Replying to anonymous:

I got a chance to do some profiling of my system while trac/apache was serving up the default welcome page.

Well, no, it doesn't help much in isolation; it would eventually be useful together with the timings seen in the trac.log at debug level and in the access.log… Essentially the point is to see where the CPU time and the delays go, whether it's Trac at all, etc. (see comment:15 - changing the connection mode makes CPU usage go down a lot).

Inside Trac code, most of the real processing time will be spent in Genshi template rendering, this we already know… But that should be fast enough on a modern machine outside a VM. What we need to know is where the additional delays come from. One possible source is the repository resync check - this is being dealt with in the MultipleRepositorySupport branch and the possibility to trigger the notify changeset_added from within the post-commit hooks. Other causes of delays still need to be identified.

One easy way to help, for all of you who are experiencing the Trac 0.11 really slow, high cpu usage issue, is to set up an alternative test server on the same machine using tracd, pointing at the same environment(s). It's very easy to set up: just pick another port (e.g. -p 8910, as in the example below) and check whether the slowdown and CPU usage are comparable to your problematic production installation.
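For instance (a sketch; substitute your own environment path and a free port):

    tracd -p 8910 /path/to/projenv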

comment:51 by dgollub@…, 15 years ago

Cc: dgollub@… added

Note: Apache's server-tuning.conf was adjusted to spawn no more than 15 processes (the default was 150, which killed the host in the past).

:~> time w3m -dump http://127.0.0.1:9988/trac/timeline > /dev/null

real    0m1.935s
user    0m0.032s
sys     0m0.008s
:~> time w3m -dump http://host/timeline > /dev/null

real    0m2.340s
user    0m0.028s
sys     0m0.012s
:~> time for n in `seq 1 10`; do w3m -dump http://127.0.0.1:9988/trac/timeline > /dev/null; done

real    0m19.987s
user    0m0.264s
sys     0m0.064s

---

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND           
16861 wwwrun    20   0  693m  53m 7236 S 51.9  5.3   0:10.92 httpd2-prefork     
15853 wwwrun    20   0  696m  57m 7672 S 23.3  5.7   0:25.96 httpd2-prefork     
17118 wwwrun    20   0  687m  45m 6544 R 10.0  4.6   0:00.88 httpd2-prefork     
17119 wwwrun    20   0  685m  45m 6816 D  2.7  4.5   0:00.66 httpd2-prefork  

:~> time for n in `seq 1 10`; do w3m -dump http://host/timeline > /dev/null; done

real    0m20.970s
user    0m0.288s
sys     0m0.084s

---

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND           
15029 user      20   0  218m  29m 5936 S 84.2  2.9   1:01.88 tracd 

I don't have experience with tracd's typical load, but I imagine it's not supposed to get this high. Another major difference here is that Apache spawns several processes, whereas tracd runs in just one. IIRC Apache spawns them to serve further requests - but also because Trac's rendering might not be fast enough, so with more processes the problem just gets multiplied.

in reply to:  51 comment:52 by dgollub@…, 15 years ago

Replying to dgollub@…:

Note: apache server-tuning.conf got adjusted to not spawn then 15 Processes (default was 150 which killed the host in the past).

Sorry, I mixed up the stats: the tracd load is on port 9988, and the Apache load is on port 80.

comment:53 by Daniel Gollub <dgollub@…>, 15 years ago

(small) Trac instance 0.11.3, with SVN repository, approx. 1000 commits:

:~> time for n in `seq 1 10`; do w3m -dump http://host2/timeline > /dev/null; done

real    0m11.102s
user    0m0.216s
sys     0m0.040s

(approx. 3-4 apache processes with a CPU% load over 10% - in sum approx. 80% cpu utilization)

Same Trac instance, repository_dir unset in trac.ini

:~> time for n in `seq 1 10`; do w3m -dump http://host2/timeline > /dev/null; done

real    0m5.064s
user    0m0.168s
sys     0m0.072s

(none of the Apache processes uses more than 25% CPU)

Hope that helps. Again, the same host worked fine with 0.11.x Trac and the Git plugin. The load problems started when the SVN Trac instance was migrated to this host (the SVN Trac instance's previous host also had load problems). Not quite sure if this is related to #8067 or not.

I'm happy to test more things to find the root cause…

in reply to:  53 ; comment:54 by Christian Boos, 15 years ago

Replying to Daniel Gollub <dgollub@…>:

Again, the same host worked fine with 0.11.x with Trac with Git plugin. … problems started with SVN Trac instanced got migrated on this host.

Which SVN version? Although this involves some work on your side, it would be helpful to test with some 1.4.x vs. 1.5.x and see if there's any difference.

Not quite sure if this is related to #8067 or not.

Well, it's simple: is the sync working or not? Check the log to see whether there are entries like rev [xyz] != [abc]. Also, how much time does the sync check take? (You may have to add some time measurement around the call to repos.sync() in trac/versioncontrol/api.py; see the sketch below.)
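A minimal sketch of such a measurement around the existing call (the logging attribute is an assumption based on the surrounding component code):

    import time

    start = time.time()
    repos.sync()  # the existing call in trac/versioncontrol/api.py
    self.log.debug("repos.sync() took %.3f seconds", time.time() - start)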

Also, is there anything special with your setup? (repository_dir pointing to a non-local disk for example)

in reply to:  54 comment:55 by Daniel Gollub <dgollub@…>, 15 years ago

Replying to cboos:

Replying to Daniel Gollub <dgollub@…>:

Again, the same host worked fine with 0.11.x with Trac with Git plugin. … problems started with SVN Trac instanced got migrated on this host.

Which SVN version? Although this involves some work on your side, it would be helpful to test with some 1.4.x vs. 1.5.x and see if there's any difference.

It's currently running on Subversion 1.5.2. I'm going to update to 1.5.6 to see if there is any difference, and if there is none, downgrade to 1.4 (if possible).

Btw, the only trace of repository-related debug messages I found is:

2009-03-31 16:25:02,177 Trac[svn_fs] DEBUG: Subversion bindings imported
2009-03-31 16:25:04,042 Trac[svn_fs] DEBUG: Subversion bindings imported
2009-03-31 16:25:11,990 Trac[svn_fs] DEBUG: Subversion bindings imported
2009-03-31 16:25:24,330 Trac[svn_fs] DEBUG: Subversion bindings imported
2009-03-31 16:25:38,926 Trac[svn_fs] DEBUG: Subversion bindings imported
Not quite sure if this is of any help.

Not quite sure if this is related to #8067 or not.

Well, it's simple: is the sync working or not? Check the log to see whether there are entries like rev [xyz] != [abc]. Also, how much time does the sync check take?

I couldn't find any trace in the log file like those described in #8067 and in the googlegroups posting.

Also, is there anything special with your setup? (repository_dir pointing to a non-local disk for example)

The repository is on a software RAID, but I had the same issue on the previous host, which had no software RAID. It's on a local filesystem (ext3).

comment:56 by Daniel Gollub <dgollub@…>, 15 years ago

Subversion 1.5.6 didn't bring any improvement. I changed MaxClients/ServerLimit in Apache's server-tuning.conf back to 150 (the distro default) and ran five parallel /timeline GETs in an infinite loop. Within a few minutes the system's average load was about 8.

I'll try to reproduce this locally on my workstation, so I don't have to touch the production environment.

comment:57 by cpeterson@…, 15 years ago

I am also seeing this 90-100% CPU spike; in my case Trac is hosted on a small Amazon EC2 instance with the SVN / Trac data on S3 storage. Environment details:

Trac: 0.11.4
Python: 2.5.4 (r254:67916, Dec 23 2008, 15:10:54) [MSC v.1310 32 bit (Intel)]
setuptools: 0.6c9
SQLite: 3.5.2
pysqlite: 2.4.0
Genshi: 0.5.1
mod_python: 3.3.1
Pygments: 1.0
Subversion: 1.5.4 (r33841)
jQuery: 1.1.3.1

This is a new install with data migrated from the legacy server, hosted on Windows 2003 Enterprise server.

I have many lines like the following in my Trac log with debug-level logging enabled (the number of objects is all over the place):

DEBUG: 523 unreachable objects found.
DEBUG: 11575 unreachable objects found.

CPU usage during SVN actions is minimal, yet any page viewed in Trac seems to spike the CPU and take longer than it should. This high CPU usage happens even right after an Apache service restart: the CPU spikes on the first and all subsequent requests.

I set the logging level back to ERROR (I had it set to DEBUG) and my CPU usage has dropped substantially. I wonder if the logging module itself is causing this issue? I'm not saying I have *no* CPU load now - I still peak very briefly at 90%+ CPU - but my usage graph is much calmer now that DEBUG logging is disabled.
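In trac.ini, the knob being changed here lives in the [logging] section; a sketch of the calmer configuration (log_type and log_file depend on your setup):

    [logging]
    log_type = file
    log_level = ERROR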

Just tossing out info to see if it helps!

comment:58 by Christian Boos, 15 years ago

Thanks for the hint about the debug level. Also, do you have gdb on that platform? If yes, please see comment:30.

comment:59 by Nate Schindler <nathan.schindler@…>, 15 years ago

Cc: nathan.schindler@… added

comment:60 by Rakshasa, 15 years ago

I was having some really weird issues with pages returning only blanks, and getting segfaults. Turning off logging to a file and setting log_level to ERROR made the problem disappear.

comment:61 by Camilo Perez <seth@…>, 15 years ago

Cc: seth@… added

adding me as cc on this

comment:62 by mattl@…, 15 years ago

Having the same problems.

comment:63 by Christian Boos, 15 years ago

Is anyone using mod_deflate? A recent mail on Trac-users (googlegroups:trac-users:ab070d251f5a0d11) hints that this could be a cause of such slowdowns.

Besides, does it make a difference when using 0.11.5dev (revision ≥ r8215)?

comment:64 by Wellington Castellani <wellington@…>, 15 years ago

Cc: wellington@… added

I'm having the same problems.

I have tried fcgi and am now running wsgi.

wsgi:

w3m -dump https://pluto/trac/teste/  0,06s user 0,02s system 2% cpu 3,828 total

fcgi:

w3m -dump https://pluto/trac_fcgi/teste/  0,04s user 0,02s system 1% cpu 4,446 total

mod_deflate:

SetOutputFilter DEFLATE
SetEnvIfNoCase Request_URI \
        \.(?:css|html)$ no-gzip dont-vary

w3m -dump https://pluto/trac/teste/  0,06s user 0,00s system 1% cpu 6,164 total


Some info (System Information):

Trac:	0.11
Python:	2.4.4 (#2, Oct 22 2008, 19:52:44) [GCC 4.1.2 20061115 (prerelease) (Debian 4.1.1-21)]
setuptools:	0.6c9
SQLite:	3.3.8
pysqlite:	2.3.2
Genshi:	0.5.1
Agilo:	0.7.4.1-r1624-20090611
Subversion:	1.4.2 (r22196)
RPC:	1.0.2
jQuery:	1.2.3

in reply to:  64 ; comment:65 by Christian Boos, 15 years ago

Replying to Wellington Castellani <wellington@…>:

Agilo: 0.7.4.1-r1624-20090611
RPC: 1.0.2

When troubleshooting performance issues, please always test against Trac core, i.e. without any plugins. If the performance issue persists, then it's something we can do something about.

Even then, note that until now no Trac developer has been able to reproduce the issue, so if someone feels like giving me ssh access to some box where I could try to debug the problem "live", don't hesitate to contact me privately (that's how at least one report here was "fixed", see comment:36).

in reply to:  65 comment:66 by Wellington Castellani <wellington@…>, 15 years ago

Replying to cboos: I'm trying to solve my issue with the bit of knowledge I have. I will try without any plugins. Thank you for helping.

comment:67 by mail@…, 15 years ago

Cc: mail@… added

Adding me to CC.

comment:68 by Christian Boos, 15 years ago

Milestone: 0.11.6 → 0.11.5

Keeping it on the radar for 0.11.5 - I somehow have the feeling that with 0.11.5 this issue could finally go away.

I'd be very interested in feedback from anyone who had this issue and retries with the latest 0.11-stable (≥ r8287). See TracDownload#Tracstable, or wait until this week-end for 0.11.5rc1.

by lloyd@…, 15 years ago

Attachment: cProfile-stats.txt added

Profile of gateway.run(dispatch_request)

comment:69 by lloyd@…, 15 years ago

I've generated a cProfile/pstats set of statistics for a bunch of requests on my system and attached a simple report from one of the data files. It mostly shows that the render_template function in chrome.py takes a long time to run.
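For anyone who wants to produce a comparable report, a sketch using only the standard library (gateway.run(dispatch_request) is the call named in the attachment description; the surrounding names come from Trac's WSGI glue and are assumptions here):

    import cProfile
    import pstats

    # profile a single request dispatch and dump the raw stats to a file
    cProfile.runctx('gateway.run(dispatch_request)', globals(), locals(),
                    'trac-request.prof')

    # print the 20 most expensive calls by cumulative time
    stats = pstats.Stats('trac-request.prof')
    stats.sort_stats('cumulative').print_stats(20)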

I have a NetBSD 5.0 server running apache-2.2.11nb4, ap22-py25-python-3.3.1, trac-0.11.4, py25-genshi-0.5.1, py25-babel-0.9.4 and python25-2.5.4nb1. I'm using mod_python.

comment:70 by Christian Boos, 15 years ago

Also, what about the restrict_owner setting? Is it enabled in those problematic installations? (see #4245, #8034, #8212 …)

comment:71 by cmarschalek, 15 years ago

I had similar problems (see #7785) and after upgrading to 0.11.5stable-r8335 changesets load a lot faster. Performance is still not optimal though (but that could change if/when we update the server).

comment:72 by sebastian@…, 15 years ago

Cc: sebastian@… added

Adding me to cc

comment:73 by ThurnerRupert, 15 years ago

Did anybody try Psyco to speed up Genshi? http://psyco.sourceforge.net/psycoguide/node9.html
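For reference, enabling Psyco is a two-liner (a sketch; Psyco only works on 32-bit x86 CPython, so whether it actually speeds up Genshi here is exactly the open question):

    import psyco
    psyco.full()  # JIT-compile as much pure-Python code as possible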

comment:74 by Andreas Schneider <mail@…>, 15 years ago

I got back to normal Trac speed after configuring Apache to deliver the static content. I think the problem lies in Genshi.

comment:75 by curtnelson@…, 15 years ago

Cc: curtnelson@… added

Adding me to CC.

comment:76 by Sebastian Krysmanski <sebastian@…>, 15 years ago

Just to add some numbers here: I've done some profiling and will post the results. Note that this was only one request, but it should give some insight into the matter (or at least a basis for discussion).

The summary looks like this (I've attached the full report):

Profiling of: /query?status=accepted&status=assigned&status=new&status=reopened&group=severity&col=id&col=summary&col=status&col=owner&col=type&col=component&col=complexity&order=status
Tickets: 45
No plugins enabled (though installed)

Trac: 0.11.4
Python: 2.6
Genshi: 0.5.1
Backend: sqlite
Access: mod_wsgi
Using Psyco 1.6.1
Processor: Intel Atom 230 (1.6 GHz)
RAM: 2GB

Legend:

 * num        : Number of different functions
 * ncalls     : Number of calls (function calls/primitive calls -- non-primitive calls = function calls - primitive calls)
 * tottime    : Overall time spent in this function (without time spent in child functions) - in seconds
 * percall(1) : Time per single call (without time spent in child functions) - i.e. tottime/ncalls
 * cumtime    : Overall time spent in this function (including time spent in child functions) - in seconds
 * percall(2) : Time per single call (including time spent in child functions) - i.e. cumtime/ncalls
 * .../10%    : ... among the first 10% (sorted by tottime)
 * grep       : Expression used in the grep command for this type

Type                              num         %   ncalls        %  tottime        %  num/10%        %  ncalls/10%        %  tottime/10%        %  grep
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Trac functions:                    791   26.12%   30,588    4.55%    0.723    8.22%       41   13.53%      21,823    3.47%        0.522    6.41%  'Trac-0.11.4-py2.6.egg'
Genshi functions:                  357   11.79%  386,008   57.48%    3.587   40.78%       62   20.46%     382,147   60.75%        3.511   43.09%  'genshi'
Python 2.6 compiler functions:     351   11.59%  148,332   22.09%    2.503   28.46%      106   34.98%     138,696   22.05%        2.440   29.95%  'compiler'
Python SRE functions:               47    1.55%   58,568    8.72%    0.777    8.83%       20    6.60%      53,415    8.49%        0.756    9.28%  'sre_'
SVN Python functions:              139    4.59%      389    0.06%    0.093    1.06%        7    2.31%          12    0.00%        0.091    1.12%  'svn-python'
Setuptools functions:              162    5.35%   18,400    2.74%    0.559    6.36%       28    9.24%      13,393    2.13%        0.480    5.89%  'setuptools-0.6c9'
Other functions:                 1,181   39.01%   29,291    4.36%    0.553    6.29%       39   12.88%      19,542    3.11%        0.348    4.26%
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Overall functions:               3,028  100.00%  671,576  100.00%    8.795  100.00%      303  100.00%     629,028  100.00%        8.148  100.00%

by Sebastian Krysmanski <sebastian@…>, 15 years ago

Attachment: profile.txt.gz added

profiling report

comment:77 by anonymous, 15 years ago

Resolution: fixed
Status: new → closed

comment:78 by Sebastian Krysmanski <sebastian@…>, 15 years ago

Resolution: fixed
Status: closed → reopened

I doubt that "anonymous" should close this ticket without leaving a comment.

comment:79 by anonymous, 15 years ago

I doubt that anonymous should have right to do that in the first place.

comment:80 by anonymous, 15 years ago

If you're interested, I solved this problem in my own way: I started a standalone tracd and set up nginx as a proxy in front of it (see the sketch below).
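A minimal sketch of that setup (assuming tracd listens on 127.0.0.1:8000; the port and server name are placeholders):

    server {
        listen 80;
        server_name trac.example.org;
        location / {
            proxy_pass http://127.0.0.1:8000;
        }
    }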

comment:82 by anonymous, 15 years ago

Idem:

Approx. 5-10 seconds to view a ticket, the worklog, the timeline, or a wiki page…

Network:

100Mb Ethernet

Server (dedicated):

2GiB ram/ 2 core / 4xSAS HD

Software:

FreeBSD 6.1, trac-0.11.5rc (idem with previous trac-0.11.3), apache 2.2.2, python 2.5.3 (mod_python-3.3.1_2), database on postgresql-server-8.3.6

svn server: version 1.5.5 (r34862). At the same machine.

static files served by apache (2-3 seconds speedup per page):
Alias /static/ /home/trac/tracXXX/static/htdocs/common/
AliasMatch ^/.*/chrome(.*) /home/trac/tracXXX/static/htdocs$1
<Location /static/>
  SetHandler None
</Location>
<LocationMatch ^/.*/chrome.*>
  SetHandler None
</LocationMatch>
<Directory /home/trac/tracXXX/static/htdocs/>
  Order allow,deny
  Allow from all
</Directory>

Page:

The CPU is only 20-25% busy while viewing a ticket page, and 100% idle the rest of the time.

Ticket view (6 seconds):

      tty           ipsd0            ipsd1            ipsd2             cpu
  tin tout  KB/t tps  MB/s   KB/t tps  MB/s   KB/t tps  MB/s  us ni sy in id
   0   78 19.00   4  0.07   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
   0   78 14.00   2  0.03   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
   0   78 14.00   6  0.08   0.00   0  0.00   0.00   0  0.00   2  0  1  0 97
   0   77 14.00   2  0.03   0.00   0  0.00   0.00   0  0.00  20  0  5  0 75
   0   77 13.56   9  0.12   0.00   0  0.00   0.00   0  0.00  24  0  3  0 73
   0   77 30.14  14  0.41   0.00   0  0.00   0.00   0  0.00  25  0  3  0 71
   0   77 22.18  11  0.24   0.00   0  0.00   0.00   0  0.00  25  0  1  0 74
   0  677 15.00   2  0.03   0.00   0  0.00   0.00   0  0.00  25  0  3  0 71
   0   77 15.00   4  0.06   0.00   0  0.00   0.00   0  0.00   2  0  0  0 98
   0   77  0.00   0  0.00   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
   0   78  0.00   0  0.00   0.00   0  0.00   0.00   0  0.00   0  0  1  0 99
   0   77  0.00   0  0.00   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
   0   78  0.00   0  0.00   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100

View timeline (only 10 days back): 10 seconds

   0   78 16.00   4  0.06   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
   0   78 16.00   2  0.03   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
   0   78 13.20  15  0.19   0.00   0  0.00   0.00   0  0.00   4  0  4  0 92
   0  677  0.00   0  0.00   0.00   0  0.00   0.00   0  0.00  19  0  9  0 72
   0   77 27.71   7  0.19   0.00   0  0.00   0.00   0  0.00  24  0  2  0 74
   0   77 20.77  47  0.95   0.00   0  0.00   0.00   0  0.00  24  0  5  0 72
   0   77 128.00   1  0.12   0.00   0  0.00   0.00   0  0.00  25  0  1  0 74
   0   78 86.00   3  0.25   0.00   0  0.00   0.00   0  0.00  24  0  2  0 73
   0   77 128.00   2  0.25   0.00   0  0.00   0.00   0  0.00  24  0  3  0 73
   0   78 48.67   6  0.28   0.00   0  0.00   0.00   0  0.00  24  0  2  0 74
   0   77 48.00   7  0.33   0.00   0  0.00   0.00   0  0.00  19  0  2  0 79
   0   77  9.00   2  0.02   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
   0   78  0.00   0  0.00   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
   0   78  0.00   0  0.00   0.00   0  0.00   0.00   0  0.00   0  0  0  0 99
   0   77  9.00   6  0.05   0.00   0  0.00   0.00   0  0.00   0  0  0  0 99
      tty           ipsd0            ipsd1            ipsd2             cpu
 tin tout  KB/t tps  MB/s   KB/t tps  MB/s   KB/t tps  MB/s  us ni sy in id
   0   77  9.00   2  0.02   0.00   0  0.00   0.00   0  0.00   1  0  0  0 99
   0  231  9.00   2  0.02   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100

comment:83 by Eric Petit <eric@…>, 15 years ago

I profiled a tracd instance running on my server:

   ncalls        tottime  percall  cumtime  percall filename:lineno(function)
      394       1118.734    2.839 1118.734    2.839 {method 'accept' of '_socket.socket' objects}
60792/60791       25.505    0.000   28.628    0.000 {apply}
    19733         14.328    0.001   14.328    0.001 /usr/local/lib/python2.5/site-packages/Trac-0.11.5-py2.5.egg/trac/db/sqlite_backend.py:48(_rollback_on_error)
8910392/1572503   14.319    0.000  105.688    0.000 build/bdist.linux-i686/egg/genshi/template/base.py:489(_eval)
      393         11.233    0.029   11.245    0.029 {gc.collect}
  2892261          8.750    0.000   10.060    0.000 build/bdist.linux-i686/egg/genshi/path.py:188(_test)
2854776/1432243    8.523    0.000   92.344    0.000 build/bdist.linux-i686/egg/genshi/path.py:132(_generate)
8422885/1586609    8.448    0.000   96.839    0.000 build/bdist.linux-i686/egg/genshi/template/base.py:549(_flatten)
1049624/1045823    8.303    0.000   13.078    0.000 {built-in method sub}
1308091/1306850    7.681    0.000  139.430    0.000 build/bdist.linux-i686/egg/genshi/output.py:686(__call__)
8348387/1657423    7.645    0.000  128.082    0.000 build/bdist.linux-i686/egg/genshi/core.py:264(_ensure)
1298695/1285095    5.940    0.000  150.721    0.000 build/bdist.linux-i686/egg/genshi/output.py:302(__call__)
  1427012          5.759    0.000    5.759    0.000 {method 'join' of 'unicode' objects}
      443          5.297    0.012    5.297    0.012 {method 'commit' of 'sqlite3.Connection' objects}
4425466/1564431    5.114    0.000  113.814    0.000 build/bdist.linux-i686/egg/genshi/template/base.py:563(_include)
4423691/1571793    4.808    0.000  107.304    0.000 build/bdist.linux-i686/egg/genshi/template/base.py:541(_exec)
2856415/1433887    4.527    0.000   80.780    0.000 build/bdist.linux-i686/egg/genshi/template/markup.py:231(_strip)
    20149          4.323    0.000    4.989    0.000 /usr/local/lib/python2.5/site-packages/Trac-0.11.5-py2.5.egg/trac/db/sqlite_backend.py:166(__init__)
4583800/1564929    4.300    0.000  111.926    0.000 build/bdist.linux-i686/egg/genshi/template/markup.py:224(_match)
1319938/1303642    3.973    0.000  143.477    0.000 build/bdist.linux-i686/egg/genshi/output.py:573(__call__)

The full profile is available at http://eric.lapsus.org/trac/Trac_0.11.5_20090729.cProfile

As I mentioned in an earlier comment, Trac isn't perceptibly slow for me, but CPU usage roughly tripled when upgrading from 0.10.x to 0.11.x.

Trac: 0.11.5
Python: 2.5.2 (r252:60911, Jan 4 2009, 17:40:26) [GCC 4.3.2]
setuptools: 0.6c8
SQLite: 3.5.9
pysqlite: 2.3.2
Genshi: 0.5.1
Subversion: 1.5.1 (r32289)
jQuery: 1.2.6

comment:84 by mattmadia@…, 15 years ago

Cc: mattmadia@… added

in reply to:  83 comment:85 by anonymous, 15 years ago

I profiled a tracd instance running on my server:

       393         11.233    0.029   11.245    0.029 {gc.collect}

Trac creates many cyclic references, and gc sweeps them. Cyclic references and gc collection sometimes make response times slow.
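One way to check whether garbage collection is implicated (a diagnostic sketch using the standard gc module; the number printed corresponds to the "N unreachable objects found" lines in the Trac log):

    import gc
    import time

    gc.set_debug(gc.DEBUG_STATS)  # print per-collection statistics to stderr

    start = time.time()
    unreachable = gc.collect()  # returns the number of unreachable objects found
    print "gc.collect(): %d unreachable objects in %.3f seconds" % (
        unreachable, time.time() - start)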

comment:86 by matthieu@…, 15 years ago

We have a major performance issue on our active Trac and will have to upgrade the server to a more powerful one because of this bug. Is anyone from Trac listening? It would be great to have feedback from the devs and to know when this is going to be addressed. Thank you!

comment:87 by Christian Boos, 15 years ago

Well, as you can see this issue is stalled, mainly because:

  • none of the developers is apparently able to reproduce the issue, and nobody having the issue has offered a reproduction recipe or given ssh access to a box displaying the issue,
  • there's no feedback on the various requests for additional information made above (in particular, nobody said whether the mod_deflate "fix" was confirmed, whether the problem persists with Trac 0.11.5, or whether restrict_owner was in use when people experienced slow response times for tickets, etc.; just go through the ticket)

So please, before someone feels like adding another "me too", put yourself in the perspective of a Trac developer for a minute and provide the information which can really make things move forward.

I also think it would be a good idea to make a summary of all the possible causes of slowdown (a lot of them have already been identified and fixed, again go through this ticket), which would also include the information provided by Shane Caraveo on Trac-dev (googlegroups:trac-dev:a48473cd1dcfbd7c). I'll start a TracPerformance page, to get that ball rolling.

comment:88 by rlpowell@…, 15 years ago

OK. I'm yet another person with this issue, but I'm going to do my best to give you everything I can to fix it.

Let's start with:

Linux dev 2.6.22.18 #1 Thu Mar 19 14:46:22 IST 2009 armv5tejl GNU/Linux

(yep, it's an ARM box; it's a http://www.marvell.com/products/embedded_processors/developer/kirkwood/sheevaplug.jsp )

It's not a VM of any kind, but it is tiny:

rlpowell@dev /etc/apache2 $ cat /proc/cpuinfo
Processor       : ARM926EJ-S rev 1 (v5l)
BogoMIPS        : 1192.75
Features        : swp half thumb fastmult edsp
CPU implementer : 0x56
CPU architecture: 5TE
CPU variant     : 0x2
CPU part        : 0x131
CPU revision    : 1
Cache type      : write-back
Cache clean     : cp15 c7 ops
Cache lockdown  : format C
Cache format    : Harvard
I size          : 16384
I assoc         : 4
I line length   : 32
I sets          : 128
D size          : 16384
D assoc         : 4
D line length   : 32
D sets          : 128
Hardware        : Feroceon-KW
Revision        : 0000
Serial          : 0000000000000000

rlpowell@dev /etc/apache2 $ cat /proc/meminfo
MemTotal:       515636 kB
MemFree:         98144 kB
Buffers:             0 kB
Cached:         318352 kB
SwapCached:          0 kB
Active:         183756 kB
Inactive:       162208 kB
SwapTotal:           0 kB
SwapFree:            0 kB
Dirty:               0 kB
Writeback:           0 kB
AnonPages:       27640 kB
Mapped:          13772 kB
Slab:            67460 kB
SReclaimable:    27708 kB
SUnreclaim:      39752 kB
PageTables:        924 kB
NFS_Unstable:        0 kB
Bounce:              0 kB
CommitLimit:    257816 kB
Committed_AS:   294256 kB
VmallocTotal:   122880 kB
VmallocUsed:       564 kB
VmallocChunk:   122296 kB

Ubuntu Jaunty.

Lots of logs of the problem at http://teddyb.org/~rlpowell/public_media/ldg/

Using mod_python, but I'll try any other method out you like.

KeepAlive Off doesn't help. Turning off mod_deflate doesn't help.

Note the vmstat in particular: it's not a memory problem in any normal sense. Just suddenly 5+ apache procs start taking all the CPU they can grab, and then after a while they stop.

I have strace as well, lots of them. Haven't posted that because it's not my machine; the owner says:

We're willing to allow shell access if needed, but there's confidential information on the server (including in Trac), so it'll have to be under an NDA. Odds are good you won't even see anything we care about, so it's mostly a pro forma requirement. Sorry for the hassle.

So, if you're up for that, email fm@…

Or, ask me (rlpowell@…) for any more information you want.

-Robin

comment:89 by anonymous, 15 years ago

The system seems to think it's very important to squash email addresses. Ask Google for "Robin Lee Powell", or use rlpowell at digitalkingdom dot org

As I said, I'll help figure this out any way I can.

-Robin

in reply to:  88 ; comment:90 by Christian Boos, 15 years ago

Replying to rlpowell@…: Thanks for the info! However, you didn't specify:

  • which version of Trac you were using (if not 0.11.5, could you retry with this one?)
  • what kind of requests trigger the problem (any, only timeline, only browser or changeset, always/sometimes etc.). What would be a reliable way to trigger the issue?

Lots of logs of the problem at http://teddyb.org/~rlpowell/public_media/ldg/

Ok, I see that you're using several plugins, including one of yours which could do anything and the TH:GitPlugin. Could you try to reproduce the issue without any plugin enabled?

Using mod_python, but I'll try any other method out you like.

Is the problem reproducible with tracd?

We're willing to allow shell access if needed

Yep, I'll contact you privately about that, if there's indeed a problem with Trac 0.11.5 without plugins.

in reply to:  90 ; comment:91 by anonymous, 15 years ago

Replying to cboos:

Replying to rlpowell@…: Thanks for the info! However, you didn't specify:

  • which version of Trac you were using (if not 0.11.5, could you retry with this one?)

Duh. Sorry.

It was probably installed originally as an ubuntu package:

ii trac 0.11.1-2.1 Enhanced wiki and issue tracking system for

but since then I've updated:

rlpowell@dev ~ $ sudo easy_install --upgrade Trac
[sudo] password for rlpowell:
Searching for Trac
Reading http://pypi.python.org/simple/Trac/
Reading http://trac.edgewall.com/
Reading http://trac.edgewall.org/wiki/TracDownload
Reading http://projects.edgewall.com/trac
Reading http://projects.edgewall.com/trac/wiki/TracDownload
Best match: Trac 0.11.5
Processing Trac-0.11.5-py2.6.egg
Trac 0.11.5 is already the active version in easy-install.pth
Installing trac-admin script to /usr/local/bin
Installing tracd script to /usr/local/bin
Using /usr/local/lib/python2.6/dist-packages/Trac-0.11.5-py2.6.egg
Processing dependencies for Trac
Finished processing dependencies for Trac

That's what I was running for my tests.

  • what kind of requests trigger the problem (any, only timeline, only browser or changeset, always/sometimes etc.). What would be a reliable way to trigger the issue?

*Anything*. The test I was using was the main wiki page, which doesn't appear to have been altered at all; it's the standard Trac 0.11.2 welcome page.

Lots of logs of the problem at http://teddyb.org/~rlpowell/public_media/ldg/

Ok, I see that you're using several plugins, including one of yours which could do anything and the TH:GitPlugin. Could you try to reproduce the issue without any plugin enabled?

I moved the plugins directory out of the way. It seems to have done the trick.

I've updated my files; the vmstat and trac.log with the -2 in them are with the plugins moved.

Using mod_python, but I'll try any other method out you like.

Is the problem reproducible with tracd?

Haven't tried it yet.

We're willing to allow shell access if needed

Yep, I'll contact you privately about that, if there's indeed a problem with Trac 0.11.5 without plugins.

'k.

-Robin

comment:92 by anonymous, 15 years ago

tracd seems to be noticeably better:

rlpowell@dev ~ $ vmstat 1 procs —————-memory————— —-swap— ——-io—— -system— ——cpu——

r b swpd free buff cache si so bi bo in cs us sy id wa 0 0 0 106056 0 298192 0 0 0 0 0 1 1 0 99 0 0 0 0 106056 0 298192 0 0 0 0 118 46 0 0 100 0 0 0 0 106056 0 298192 0 0 0 0 115 43 0 0 100 0 0 0 0 106056 0 298192 0 0 0 0 114 44 0 0 100 0 0 0 0 106056 0 298192 0 0 0 0 114 43 0 0 100 0 0 0 0 106056 0 298192 0 0 0 0 102 17 0 0 100 0 0 0 0 106056 0 298192 0 0 0 0 114 41 0 0 100 0 1 0 0 105924 0 298200 0 0 0 0 116 63 50 4 47 0 0 0 0 105932 0 298200 0 0 0 0 114 47 88 0 12 0 2 0 0 105908 0 298200 0 0 0 0 143 95 60 0 40 0 4 0 0 105884 0 298200 0 0 0 0 139 105 47 2 51 0 0 0 0 106172 0 298200 0 0 0 0 126 83 86 0 14 0 0 0 0 106172 0 298200 0 0 0 0 110 15 0 0 100 0 0 0 0 106172 0 298200 0 0 0 0 114 43 0 0 100 0 0 0 0 106172 0 298200 0 0 0 0 113 43 0 0 100 0

(note that that's vmstat 1, the others have been vmstat 5).

However, that's without SSL, so I'm not sure how much it means. I'll see about getting it working with Apache mod_proxy.

-Robin

comment:93 by anonymous, 15 years ago

Oh, there's no way to edit the lovely mess I just left. Lovely.

Here it is again:

rlpowell@dev ~ $ vmstat 1
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 0  0      0 106056      0 298192    0    0     0     0    0    1  1  0 99  0
 0  0      0 106056      0 298192    0    0     0     0  118   46  0  0 100  0
 0  0      0 106056      0 298192    0    0     0     0  115   43  0  0 100  0
 0  0      0 106056      0 298192    0    0     0     0  114   44  0  0 100  0
 0  0      0 106056      0 298192    0    0     0     0  114   43  0  0 100  0
 0  0      0 106056      0 298192    0    0     0     0  102   17  0  0 100  0
 0  0      0 106056      0 298192    0    0     0     0  114   41  0  0 100  0
 1  0      0 105924      0 298200    0    0     0     0  116   63 50  4 47  0
 0  0      0 105932      0 298200    0    0     0     0  114   47 88  0 12  0
 2  0      0 105908      0 298200    0    0     0     0  143   95 60  0 40  0
 4  0      0 105884      0 298200    0    0     0     0  139  105 47  2 51  0
 0  0      0 106172      0 298200    0    0     0     0  126   83 86  0 14  0
 0  0      0 106172      0 298200    0    0     0     0  110   15  0  0 100  0
 0  0      0 106172      0 298200    0    0     0     0  114   43  0  0 100  0
 0  0      0 106172      0 298200    0    0     0     0  113   43  0  0 100  0

in reply to:  91 ; comment:94 by Christian Boos, 15 years ago

Replying to anonymous:

Replying to cboos: … Could you try to reproduce the issue without any plugin enabled?

I moved the plugins directory out of the way. It seems to have done the trick.

I've updated my files; the vmstat and trac.log with the -2 in them are with the plugins moved.

Thanks for the feedback.

The file http://teddyb.org/~rlpowell/public_media/ldg/trac.log-2.txt still contains messages about PyGIT throughout; only the (apparently custom-made) /var/lib/trac/projects/armageddon_city/plugins/trac_ldg_msg-0.0.2-py2.6.egg plugin seems to no longer be loaded, in the last one-sixth of the file.

This seems to indicate that the TH:GitPlugin is not to blame here, rather your trac_ldg_msg plugin, which btw also has unicode issues (look in the log for "Traceback").

My suspicion is that you're abusing the IRequestFilter interface to do some heavy-handed operation in pre_process_request, without taking care to skip that code for requests that don't need it (a sketch of the guarded pattern follows the checklist below). That problem, combined with the lack of mapping for static resources (as http://teddyb.org/~rlpowell/public_media/ldg/httpd.conf.txt shows), which means that Trac itself serves every secondary request, could well explain the slowness.

Please check:

  • TracCgi#MappingStaticResources for the missing part in httpd.conf
  • TracDev/Performance/0.11.5 for additional caveats when writing your plugin (as the problem could also be in the ITemplateStreamFilter interface and filter_stream method instead or in addition to place already mentioned)
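
To illustrate the guarded pattern, a minimal sketch against the Trac 0.11 plugin API; the path test and the do_expensive_setup() helper are hypothetical placeholders, not code from the plugin in question:

# Sketch of a guarded IRequestFilter (Trac 0.11 API).
from trac.core import Component, implements
from trac.web.api import IRequestFilter

class GuardedFilter(Component):
    implements(IRequestFilter)

    # IRequestFilter methods

    def pre_process_request(self, req, handler):
        # Only do the heavy work for requests that actually need it,
        # never for /chrome/* and other secondary resource requests.
        if req.path_info.startswith('/ticket'):
            self.do_expensive_setup(req)
        return handler  # always hand the request back unchanged

    def post_process_request(self, req, template, data, content_type):
        return template, data, content_type

    def do_expensive_setup(self, req):
        pass  # placeholder for the plugin's real work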

in reply to:  94 ; comment:95 by anonymous, 15 years ago

Replying to cboos:

Replying to anonymous:

Replying to cboos: … Could you try to reproduce the issue without any plugin enabled?

I moved the plugins directory out of the way. It seems to have done the trick.

I've updated my files; the vmstat and trac.log with the -2 in them are with the plugins moved.

Thanks for the feedback.

The file http://teddyb.org/~rlpowell/public_media/ldg/trac.log-2.txt still contains messages about PyGIT throughout; only the (apparently custom-made) /var/lib/trac/projects/armageddon_city/plugins/trac_ldg_msg-0.0.2-py2.6.egg plugin seems to no longer be loaded, in the last one-sixth of the file.

This seems to indicate that the TH:GitPlugin is not to blame here, rather your trac_ldg_msg plugin, which btw also has unicode issues (look in the log for "Traceback").

I'm confused: moving the plugins *did not fix the problem*. tracd seems to help, but it's still not great, and that's without SSL.

I'm not sure how to get the other plugin(s) gone; suggestions welcome.

-Robin

in reply to:  95 comment:96 by Christian Boos, 15 years ago

Replying to anonymous:

I'm confused: moving the plugins *did not fix the problem*. tracd seems to help, but it's still not great, and that's without SSL.

Ok, my mistake, I took "that did the trick" for "that fixed the performance issue", when you only meant "that removed the plugins".

Still, make sure you fixed the mapping of static resources, as hinted above.

I'm not sure how to get the other plugin(s) gone; suggestions welcome.

You can simply disable them, see TracIni#components-section or TH:GitPlugin#Configuration.
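
For example, in trac.ini (the component name patterns below are illustrative; match them to the plugins you actually have installed):

[components]
tracext.git.* = disabled
trac_ldg_msg.* = disabled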

comment:97 by Valentin Voigt <ValentinVoigt@…>, 15 years ago

Cc: ValentinVoigt@… added

comment:98 by tmcintos@…, 15 years ago

FWIW, I upgraded my SGI Challenge S system from Trac 0.9.6 running under IRIX 6.5.22 to Trac 0.11.2 running under NetBSD 5.0 and noticed a huge slowdown. CPU usage, not memory usage, seems to be the issue. I don't have time to investigate further or try upgrading to 0.11.5 at the moment.

Trac version: 0.11.2 running tracd behind Apache proxy.

% uname -a
NetBSD hoth.astro.net 5.0 NetBSD 5.0 (GENERIC32_IP2x) #0: Mon Apr 27 06:08:08 UTC 2009 builds@…:/home/builds/ab/netbsd-5-0-RELEASE/sgimips/200904260229Z-obj/home/builds/ab/netbsd-5-0-RELEASE/src/sys/arch/sgimips/compile/GENERIC32_IP2x sgimips

% python -V
Python 2.4.5

% svn --version
svn, version 1.5.4 (r33841)
   compiled Apr 25 2009, 23:53:20

avail memory = 245 MB
cpu0 at mainbus0: MIPS R4400 CPU (0x460) Rev. 6.0 with MIPS R4010 FPC Rev. 0.0
cpu0: 16KB/16B direct-mapped L1 Instruction cache, 48 TLB entries
cpu0: 16KB/16B direct-mapped write-back L1 Data cache
cpu0: 1024KB/128B direct-mapped write-back L2 Unified cache
int0: bus 100MHz, CPU 200MHz

comment:99 by thereisnocowlevel@…, 15 years ago

I got several noticeable performance boosts by doing the following on my 0.11.5 server:

  • Switch from an SQLite database to a MySQL one.
  • Switch from mod_python to mod_wsgi.
  • Reduce the log level from DEBUG to ERROR. DEBUG seems to be the default.

Reducing the log level seems to help with that initial lag spike that I get when I access the server.
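
For reference, the log level lives in the [logging] section of trac.ini; a minimal sketch (paths are illustrative):

[logging]
log_type = file
log_file = trac.log
log_level = ERROR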

comment:100 by anonymous, 15 years ago

Have a little bit of info here. I enabled KeepAlive (it was disabled in my CentOS VM) and now it's sometimes fast, sometimes slow. The culprit seems to be:

2009-09-08 22:43:38,239 Trac[loader] DEBUG: Adding plugin SvnAuthzAdminPlugin 0.1.2 from /opt/globant/projects/data/trac/glb-projects/plugins/SvnAuthzAdminPlugin-0.1.2-0.11.egg
2009-09-08 22:43:38,246 Trac[loader] DEBUG: Adding plugin tracsuperuser 0.2 from /opt/globant/projects/data/trac/glb-projects/plugins/tracsuperuser-0.2-py2.4.egg
2009-09-08 22:43:38,248 Trac[loader] DEBUG: Adding plugin TracAccountManager 0.2.1dev from /opt/globant/projects/data/trac/glb-projects/plugins/TracAccountManager-0.2.1dev-py2.4.egg
2009-09-08 22:43:38,275 Trac[loader] DEBUG: Loading trac.ticket.web_ui from /usr/lib/python2.4/site-packages/Trac-0.11.5-py2.4.egg
2009-09-08 22:43:38,306 Trac[loader] DEBUG: Loading trac.mimeview.php from /usr/lib/python2.4/site-packages/Trac-0.11.5-py2.4.egg
2009-09-08 22:43:38,308 Trac[loader] DEBUG: Loading trac.ticket.query from /usr/lib/python2.4/site-packages/Trac-0.11.5-py2.4.egg
2009-09-08 22:43:38,314 Trac[loader] DEBUG: Loading trac.attachment from /usr/lib/python2.4/site-packages/Trac-0.11.5-py2.4.egg
2009-09-08 22:43:38,317 Trac[loader] DEBUG: Loading trac.versioncontrol.svn_fs from /usr/lib/python2.4/site-packages/Trac-0.11.5-py2.4.egg

And a big bunch of other Adding/Loading entries. Some requests do not include these entries in the trac log and are much faster. This seems to be related to the KeepAlive timeout.

Tracd is way faster.

Thank you all

comment:101 by anonymous, 15 years ago

Notes on previous comment:

I finally managed to install wsgi + daemon mode. The problem is solved, at least for me. I noticed no difference with wsgi in embedded mode.

The problem was very strange: it looks like all the Trac initialization is done on every request, and daemon mode avoids that.

It works for me now.

  • Pablo

comment:102 by anonymous, 15 years ago

Other comment here (related to previous comment):

with:

WSGIDaemonProcess trac threads=25 processes=1 maximum-requests=10000 display-name=httpd-trac

It works well enough.

But if I use two processes:

WSGIDaemonProcess trac threads=25 processes=2 maximum-requests=10000 display-name=httpd-trac

It is again very slow (cpu 100% for 3 to 5 seconds).

I noticed that in the second case, trac logs a line of the form:

2009-09-09 09:22:38,622 Trac[env] INFO: Reloading environment due to configuration change

And with just one wsgi process this reloading never happens. Does this give anyone a clue?

  • Pablo

in reply to:  102 ; comment:103 by osimons, 15 years ago

Replying to anonymous:

I noticed that in the second case, trac logs a line of the form:

2009-09-09 09:22:38,622 Trac[env] INFO: Reloading environment due to configuration change

And with just one wsgi process this reloading never happens. Does this give anyone a clue?

I have a feeling one of your plugins writes to the config file when loading, so each time a request alternates between processes it will reload - touching the config anew, which in turn causes the other process to reload its environment.

I'd look at the plugins to see if they override __init__() or call env.config.set() somehow as part of loading or running. Try disabling plugins selectively to identify the offending one.
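
The anti-pattern to look for is something of this shape (a hypothetical plugin, not code from any named one):

# Hypothetical anti-pattern: a component that writes trac.ini while loading.
from trac.core import Component

class MisbehavingPlugin(Component):

    def __init__(self):
        # BAD: this runs every time the component is instantiated; saving
        # the config here makes every other process see a changed trac.ini
        # and reload its environment on its next request.
        self.config.set('myplugin', 'initialized', 'true')
        self.config.save()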

I'm quite certain your issue is a PluginIssue, and not an issue for the Trac project itself. Please don't add more comments to this ticket, which is really about something else.

in reply to:  103 ; comment:104 by Christian Boos, 15 years ago

Replying to osimons:

Quite certain your issue is a PluginIssue, and not an issue for the Trac project itself. Don't add more comments to this ticket which really is about something else.

Well, I don't know what this ticket is really about… Fact is that there are many factors that can eventually make Trac appear to be slow. I think that this ticket is nothing more than the focus point of people finding Trac slow for a variety of different reasons, and ending up here after searching for trac+slow, via #7029.

As I noted in a previous comment, I tried to make a user-oriented summary of those issues in the TracPerformance page, in the form of a "check-list", and also a developer-oriented summary of points that could be enhanced in Trac to get some speed-ups, in the TracDev/Performance page.

I find contributions like comment:102 to be actually helpful in this perspective.

in reply to:  104 comment:105 by anonymous, 15 years ago

Replying to cboos:

I follow up in the hope that someone finds this helpful. osimons was right in comment:103: it was a plugin issue. In this case it was the agilo plugin. This was fixed in agilo-0.8.2.0.2. I now have wsgi running multiple processes.

The easiest way to diagnose this is to ls --full-time trac.ini and watch for changes to the last modification date.

I found that running wsgi in daemon mode with just 1 process can be used as a workaround for misbehaving plugins that constantly touch trac.ini.

I guess that logging each touch of trac.ini would make this easier to diagnose, and could even help identify which plugin is the culprit.
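
Until Trac logs that itself, a stand-alone watcher (stdlib only, Python 2; the path is an example) can timestamp each touch of trac.ini so you can correlate it with trac.log:

# Stand-alone sketch: report whenever trac.ini's modification time changes.
import os
import time

def watch(path, interval=1.0):
    last = os.stat(path).st_mtime
    while True:
        time.sleep(interval)
        mtime = os.stat(path).st_mtime
        if mtime != last:
            print "%s touched at %s" % (path, time.ctime(mtime))
            last = mtime

if __name__ == '__main__':
    watch('/var/lib/trac/myproject/conf/trac.ini')  # adjust to your environment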

Thanks cboos for your support.

in reply to:  82 ; comment:106 by anonymous, 15 years ago

Replying to anonymous:

Same here:

Approx. 5-10 seconds to view a ticket, the worklog, the timeline, a wiki page…

Network:

100Mb Ethernet

Server (dedicated):

2GiB ram/ 2 core / 4xSAS HD

Software:

FreeBSD 6.1, trac-0.11.5rc (same with the previous trac-0.11.3), apache 2.2.2, python 2.5.3 (mod_python-3.3.1_2), database on postgresql-server-8.3.6

svn server: version 1.5.5 (r34862). At the same machine.

static files served by apache (2-3 seconds speed-up per page):
Alias /static/ /home/trac/tracXXX/static/htdocs/common/
AliasMatch ^/.*/chrome(.*) /home/trac/tracXXX/static/htdocs$1
<Location /static/>
  SetHandler None
</Location>
<LocationMatch ^/.*/chrome.*>
  SetHandler None
</LocationMatch>
<Directory /home/trac/tracXXX/static/htdocs/>
  Order allow,deny
  Allow from all
</Directory>

Page:

CPU only 20-25% busy while viewing a ticket page; 100% idle the rest of the time.

Ticket view (6 seconds):

      tty           ipsd0            ipsd1            ipsd2             cpu
  tin tout  KB/t tps  MB/s   KB/t tps  MB/s   KB/t tps  MB/s  us ni sy in id
   0   78 19.00   4  0.07   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
   0   78 14.00   2  0.03   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
   0   78 14.00   6  0.08   0.00   0  0.00   0.00   0  0.00   2  0  1  0 97
   0   77 14.00   2  0.03   0.00   0  0.00   0.00   0  0.00  20  0  5  0 75
   0   77 13.56   9  0.12   0.00   0  0.00   0.00   0  0.00  24  0  3  0 73
   0   77 30.14  14  0.41   0.00   0  0.00   0.00   0  0.00  25  0  3  0 71
   0   77 22.18  11  0.24   0.00   0  0.00   0.00   0  0.00  25  0  1  0 74
   0  677 15.00   2  0.03   0.00   0  0.00   0.00   0  0.00  25  0  3  0 71
   0   77 15.00   4  0.06   0.00   0  0.00   0.00   0  0.00   2  0  0  0 98
   0   77  0.00   0  0.00   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
   0   78  0.00   0  0.00   0.00   0  0.00   0.00   0  0.00   0  0  1  0 99
   0   77  0.00   0  0.00   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
   0   78  0.00   0  0.00   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100

View timeline (only 10 days back): 10 seconds

   0   78 16.00   4  0.06   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
   0   78 16.00   2  0.03   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
   0   78 13.20  15  0.19   0.00   0  0.00   0.00   0  0.00   4  0  4  0 92
   0  677  0.00   0  0.00   0.00   0  0.00   0.00   0  0.00  19  0  9  0 72
   0   77 27.71   7  0.19   0.00   0  0.00   0.00   0  0.00  24  0  2  0 74
   0   77 20.77  47  0.95   0.00   0  0.00   0.00   0  0.00  24  0  5  0 72
   0   77 128.00   1  0.12   0.00   0  0.00   0.00   0  0.00  25  0  1  0 74
   0   78 86.00   3  0.25   0.00   0  0.00   0.00   0  0.00  24  0  2  0 73
   0   77 128.00   2  0.25   0.00   0  0.00   0.00   0  0.00  24  0  3  0 73
   0   78 48.67   6  0.28   0.00   0  0.00   0.00   0  0.00  24  0  2  0 74
   0   77 48.00   7  0.33   0.00   0  0.00   0.00   0  0.00  19  0  2  0 79
   0   77  9.00   2  0.02   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
   0   78  0.00   0  0.00   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100
   0   78  0.00   0  0.00   0.00   0  0.00   0.00   0  0.00   0  0  0  0 99
   0   77  9.00   6  0.05   0.00   0  0.00   0.00   0  0.00   0  0  0  0 99
      tty           ipsd0            ipsd1            ipsd2             cpu
 tin tout  KB/t tps  MB/s   KB/t tps  MB/s   KB/t tps  MB/s  us ni sy in id
   0   77  9.00   2  0.02   0.00   0  0.00   0.00   0  0.00   1  0  0  0 99
   0  231  9.00   2  0.02   0.00   0  0.00   0.00   0  0.00   0  0  0  0 100

I profiled this instance:

         935179 function calls (910454 primitive calls) in 6.699 CPU seconds

   Ordered by: cumulative time

   ncalls  tottime  percall  cumtime  percall filename:lineno(function)
        1    0.000    0.000    6.699    6.699 /usr/local/lib/python2.5/site-packages/Trac-0.11.5rc2-py2.5.egg/trac/web/trac_profile.py:155(inspected)
        1    0.000    0.000    6.699    6.699 /usr/local/lib/python2.5/site-packages/Trac-0.11.5rc2-py2.5.egg/trac/web/wsgi.py:90(run)
        1    0.000    0.000    6.699    6.699 /usr/local/lib/python2.5/site-packages/Trac-0.11.5rc2-py2.5.egg/trac/web/main.py:314(dispatch_request)
        1    0.000    0.000    4.782    4.782 /usr/local/lib/python2.5/site-packages/Trac-0.11.5rc2-py2.5.egg/trac/web/main.py:432(_dispatch_request)
        1    0.000    0.000    4.782    4.782 /usr/local/lib/python2.5/site-packages/Trac-0.11.5rc2-py2.5.egg/trac/web/main.py:139(dispatch)
    43/41    0.003    0.000    2.894    0.071 /usr/local/lib/python2.5/site-packages/Trac-0.11.5rc2-py2.5.egg/trac/core.py:65(extensions)
  396/382    0.006    0.000    2.891    0.008 /usr/local/lib/python2.5/site-packages/Trac-0.11.5rc2-py2.5.egg/trac/core.py:194(__getitem__)
      105    0.157    0.001    2.786    0.027 /usr/local/lib/python2.5/site-packages/Trac-0.11.5rc2-py2.5.egg/trac/env.py:222(is_component_enabled)
        1    0.000    0.000    2.290    2.290 /usr/local/lib/python2.5/site-packages/Trac-0.11.5rc2-py2.5.egg/trac/web/chrome.py:706(render_template)
      2/1    0.000    0.000    1.867    1.867 /usr/local/lib/python2.5/site-packages/Trac-0.11.5rc2-py2.5.egg/trac/env.py:546(open_environment)
      6/2    0.001    0.000    1.733    0.867 /usr/local/lib/python2.5/site-packages/Genshi-0.5.1-py2.5-freebsd-6.1-RELEASE-i386.egg/genshi/template/loader.py:134(load)
      4/2    0.000    0.000    1.732    0.866 /usr/local/lib/python2.5/site-packages/Genshi-0.5.1-py2.5-freebsd-6.1-RELEASE-i386.egg/genshi/template/loader.py:242(_instantiate)
      4/2    0.001    0.000    1.732    0.866 /usr/local/lib/python2.5/site-packages/Genshi-0.5.1-py2.5-freebsd-6.1-RELEASE-i386.egg/genshi/template/base.py:345(__init__)
        1    0.000    0.000    1.476    1.476 /usr/local/lib/python2.5/site-packages/Trac-0.11.5rc2-py2.5.egg/trac/web/chrome.py:689(load_template)
    11699    0.098    0.000    1.423    0.000 /usr/local/lib/python2.5/site-packages/Trac-0.11.5rc2-py2.5.egg/trac/config.py:394(options)
        1    0.000    0.000    1.390    1.390 /usr/local/lib/python2.5/site-packages/Trac-0.11.5rc2-py2.5.egg/trac/env.py:175(__init__)
      189    0.007    0.000    1.349    0.007 /usr/local/lib/python2.5/site-packages/Genshi-0.5.1-py2.5-freebsd-6.1-RELEASE-i386.egg/genshi/template/eval.py:63(__init__)
        1    0.001    0.001    1.316    1.316 /usr/local/lib/python2.5/site-packages/Trac-0.11.5rc2-py2.5.egg/trac/loader.py:101(load_components)
        1    0.004    0.004    1.312    1.312 /usr/local/lib/python2.5/site-packages/Trac-0.11.5rc2-py2.5.egg/trac/loader.py:38(_load_eggs)
    11722    0.249    0.000    1.141    0.000 /usr/local/lib/python2.5/site-packages/Trac-0.11.5rc2-py2.5.egg/trac/config.py:296(get)
      382    0.163    0.000    1.102    0.003 {method 'sort' of 'list' objects}
      189    0.012    0.000    1.010    0.005 /usr/local/lib/python2.5/site-packages/Genshi-0.5.1-py2.5-freebsd-6.1-RELEASE-i386.egg/genshi/template/eval.py:423(_compile)
   806/67    0.010    0.000    1.006    0.015 /usr/local/lib/python2.5/site-packages/Genshi-0.5.1-py2.5-freebsd-6.1-RELEASE-i386.egg/genshi/template/base.py:412(_prepare)
        4    0.025    0.006    1.004    0.251 /usr/local/lib/python2.5/site-packages/Genshi-0.5.1-py2.5-freebsd-6.1-RELEASE-i386.egg/genshi/template/markup.py:72(_parse)
        1    0.000    0.000    0.995    0.995 /usr/local/lib/python2.5/site-packages/Trac-0.11.5rc2-py2.5.egg/trac/ticket/report.py:81(process_request)
       67    0.001    0.000    0.988    0.015 build/bdist.freebsd-6.1-RELEASE-i386/egg/pkg_resources.py:1911(load)
    63000    0.572    0.000    0.936    0.000 /usr/local/lib/python2.5/site-packages/Trac-0.11.5rc2-py2.5.egg/trac/env.py:236(<lambda>)
 2748/830    0.013    0.000    0.793    0.001 /usr/local/lib/python2.5/site-packages/Genshi-0.5.1-py2.5-freebsd-6.1-RELEASE-i386.egg/genshi/core.py:264(_ensure)
      940    0.020    0.000    0.762    0.001 /usr/local/lib/python2.5/site-packages/Genshi-0.5.1-py2.5-freebsd-6.1-RELEASE-i386.egg/genshi/template/interpolation.py:39(interpolate)
       99    0.001    0.000    0.705    0.007 /usr/local/lib/python2.5/site-packages/Genshi-0.5.1-py2.5-freebsd-6.1-RELEASE-i386.egg/genshi/template/directives.py:69(attach)
        1    0.000    0.000    0.674    0.674 /usr/local/lib/python2.5/site-packages/Genshi-0.5.1-py2.5-freebsd-6.1-RELEASE-i386.egg/genshi/core.py:152(render)
        1    0.002    0.002    0.674    0.674 /usr/local/lib/python2.5/site-packages/Genshi-0.5.1-py2.5-freebsd-6.1-RELEASE-i386.egg/genshi/output.py:33(encode)
      231    0.004    0.000    0.667    0.003 /usr/local/lib/python2.5/site-packages/Genshi-0.5.1-py2.5-freebsd-6.1-RELEASE-i386.egg/genshi/output.py:302(__call__)
      234    0.001    0.000    0.662    0.003 /usr/local/lib/python2.5/site-packages/Genshi-0.5.1-py2.5-freebsd-6.1-RELEASE-i386.egg/genshi/output.py:751(__call__)

in reply to:  106 comment:107 by Christian Boos, 15 years ago

Thanks for the feedback, but please, next time, use an attachment; this ticket is already big enough…

This profile really looks like a startup profile, but I assume it's not, as you're saying that you get 5-10 seconds for every request. So it looks like a bug where every request ends up triggering an environment reload. As written in the TracPerformance#Log checklist, do you see a lot of Reloading environment due to configuration change messages in your trac.log?

I also noticed you were talking about "view worklog"; isn't that a module from AgiloForScrum? They acknowledged having had such a bug in one of their plugins in some 0.8.x version, so maybe you should upgrade.

comment:108 by anonymous, 14 years ago

If I access trac via Firefox 3.0 it takes 15 to 20 seconds to render a page.

If I access trac with Opera 9.something it takes approx 1 second.

Hence, is this a browser issue?

Until I tried it in Opera I thought it was a server issue as it appeared really slow, exactly as described often in earlier posts.

15 to 20 seconds to render any page. (Ubuntu 8.04 server, Apache2.2, trac 0.10.4). I have no plugins other than tracpygments.

I'm thinking keepalive?

cheers,

Bartolomeo da Firenze

comment:109 by anonymous, 14 years ago

Actually, forget keepalive; it has everything to do with local caching. Opera is set up to keep a local cache of all the static elements of the page. D'oh. That's it for me. BTW, the server is an el-cheapo virtual private server with ca. 300MB RAM and a 2.13GHz quad-core Xeon. Hope this helps someone a little.

Bart.

comment:110 by mattmadia@…, 14 years ago

Cc: mattmadia@… removed

comment:111 by igor.poboiko@…, 14 years ago

Cc: igor.poboiko@… added

comment:112 by m.moeller@…, 14 years ago

Do you have many entries in the permission table? We have about 1000 entries there, and it looks like Trac is reading the whole table some 10 times before displaying a ticket.

comment:113 by m.moeller@…, 14 years ago

Well, it has to do with restrict_owner and the number of permissions in your system.

From trac/perm.py:

def get_users_with_permissions(self, permissions):
    # get_user_permissions() takes care of the magic 'authenticated' group.
    # The optimized loop we had before didn't. This is very inefficient,
    # but it works.
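
Roughly, the shape of the cost (a paraphrase of the 0.11 logic, not the verbatim source) is a loop that expands the full permission set of every known user, so a ticket page with restrict_owner enabled pays O(users × permission rows) each time:

# Paraphrase of the 0.11 behaviour (not the verbatim source):
def get_users_with_permissions(self, permissions):
    result = set()
    for username, name, email in self.env.get_known_users():
        # Expands groups and meta-permissions, re-reading the
        # permission table for each user in turn.
        userperms = self.get_user_permissions(username)
        for permission in permissions:
            if permission in userperms:
                result.add(username)
    return list(result)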

comment:114 by glc <glc.nwl@…>, 14 years ago

I installed Trac 0.11.5 for the first time 2 weeks ago with Apache 2 / mod_python, and my Trac installation was VERY slow too (with all the Trac environments I set up): about 10-15 seconds to display each page :-(

After spending a few days full time on this problem, I found the solution; hope this can help:

I had initially set the plugin cache in my httpd.conf as stated here, but I suspected something was wrong, as the directory I made for the egg cache remained empty.

I finally discovered that mod_python DOES NOT use SetEnv directives, so I used:

PythonOption PYTHON_EGG_CACHE /path/to/my-plugin-cache

instead of :

SetEnv PYTHON_EGG_CACHE /path/to/my-plugin-cache

as indicated in the last two lines of this post

And now all the pages of my Trac environments display in less than 1 second. It seems that among the Trac plug-ins I installed, TracNav needed to write a css file: the only folder now created automatically in my plugin cache directory is a TracNav directory containing just a directory tree leading to one file (tracnav.css).

I guess that similar problems can happen with other plug-ins: Trac becomes very slow when a plug-in cannot be cached as expected because of some misconfiguration.

So now my Trac installation works perfectly, with good and normal response times, despite the following strange (but temporary) server configuration:

  • Ubuntu server 9.04 (headless) within a VirtualBox VM on Windows XP (with only 750 MB RAM allocated to this VM)
  • Pentium 4 - 1.3 GHz / 1.5 GB RAM

So, this is proof that:

  1. When plugin cache issues are solved, Trac 0.11.5 can work fast even on unoptimized hardware configurations.
  2. A single css file (that is not cached by a plug-in as it is expected to be) can slow down your whole Trac installation.

;-)

in reply to:  114 comment:115 by glc <glecoz@…>, 14 years ago

Replying to glc <glc.nwl@…>:

A single css file (that is not cached by a plug-in as it is expected to be) can slow down your whole Trac installation.


… just because, in my case, plug-ins are shared by all Trac projects via the following directives in my trac.ini:

[inherit]
plugins_dir = /var/lib/trac/projects-common/plugins
templates_dir = /var/lib/trac/projects-common/templates

comment:116 by Christian Boos, 14 years ago

Thanks a lot for your detailed analysis.

I wonder if / how we could detect this situation and show a WARNING about it. Ideas/patches welcomed.

comment:117 by Christian Boos, 14 years ago

(something like catching the ExtractionError exception and printing a more precise error than the "set the PYTHON_EGG_CACHE environment variable" hint given by the setuptools exception itself)
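
Something along these lines, perhaps (a sketch against setuptools' pkg_resources; where exactly to hook it into the loader is left open, and load_egg_with_hint is a hypothetical helper):

# Sketch: turn setuptools' ExtractionError into a clearer Trac warning.
from pkg_resources import ExtractionError

def load_egg_with_hint(env, load):
    try:
        return load()  # whatever the plugin loader does today
    except ExtractionError, e:
        env.log.warning("Can't extract plugin resources: %s -- set "
                        "PYTHON_EGG_CACHE (PythonOption PYTHON_EGG_CACHE "
                        "under mod_python) to a directory writable by the "
                        "web server", e)
        raise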

comment:118 by Remy Blank, 14 years ago

[8724] fixed the [trac] auto_reload option so that disabling auto-reloading of templates actually works. It seems that it's been enabled by default for quite a long time. I have no idea what the performance impact was, but every little bit helps ;-)
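
For anyone wanting to apply this, the setting presumably ends up looking like this in trac.ini once [8724] is deployed:

[trac]
auto_reload = false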

comment:119 by stef@…, 14 years ago

I've had this problem since upgrading to 0.11.x. I'm now running 0.11.5 and finally got tired of it. Running in WSGI Daemon mode, as described at the link below, fixed this problem for me.

http://code.google.com/p/modwsgi/wiki/IntegrationWithTrac

comment:120 by anonymous, 14 years ago

Bumping this: also having performance issues. Default 0.11.5, CentOS, no extra plugins. Followed the install guide on the site. Turned off KeepAlive and raised the log level.

Still quite slow/high cpu usage.

comment:121 by Christian Boos, 14 years ago

Description: modified (diff)

Please go through the TracPerformance checklists.

comment:122 by cmarschalek, 14 years ago

Don't know if it is any help, but since I reported this in the first place, and we haven't upgraded the server yet (not that easy…):

My issues seem to be related to opening rather large changesets, or to opening a few in separate tabs. But it's really hard to pinpoint… Here's what top showed while the tabs were loading:

top - 18:04:22 up 388 days, 12:58,  1 user,  load average: 4.86, 1.71, 0.61
Tasks:  68 total,   1 running,  67 sleeping,   0 stopped,   0 zombie
Cpu(s):  2.0% us,  1.0% sy,  0.0% ni,  0.0% id, 96.7% wa,  0.3% hi,  0.0% si
Mem:    507792k total,   502728k used,     5064k free,     7456k buffers
Swap:  1020088k total,   103036k used,   917052k free,    73688k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 6557 wwwrun    15   0  106m  66m 8880 D  0.7 13.5   2:31.24 httpd2-prefork
14225 wwwrun    15   0 89224  57m 8672 D  0.7 11.5   1:00.90 httpd2-prefork
14443 wwwrun    15   0  101m  66m 8672 D  0.3 13.3   1:16.92 httpd2-prefork
20251 wwwrun    15   0 87636  56m 8932 D  0.3 11.4   1:05.53 httpd2-prefork
27381 wwwrun    18   0 71996  43m 8648 D  0.3  8.7   0:08.97 httpd2-prefork
27382 wwwrun    18   0 70744  41m 8628 D  0.3  8.4   0:04.22 httpd2-prefork

What's interesting is the percentage of time the CPU spends waiting on I/O (96.7% wa)…

comment:123 by cmarschalek, 14 years ago

Sorry this should've gone to #7785

comment:124 by Christian Boos, 14 years ago

Description: modified (diff)
Resolution: fixed
Status: reopenedclosed

Let's summarize and close the ticket, which has become too "big" to be of any further use at this point.

The outcomes of the various contributions to this ticket are numerous; thanks to all who have given feedback.

  • a lot of possible installation and configuration issues have been identified; refer to the TracPerformance checklist,
  • for Trac 0.11.6, a few improvements have been committed, mainly r8644:8645, which should help to reduce startup time, and [8795] which should help detect excessive environment restarts

Please leave this ticket closed; rather, follow up on #8813 - Trac 0.11 really slow, high cpu usage (continued).

comment:125 by Sebastian Krysmanski <sebastian@…>, 9 years ago

Cc: sebastian@… removed
