Edgewall Software

Opened 14 years ago

Closed 14 years ago

Last modified 13 years ago

#4885 closed defect (worksforme)

Possible memory leak (after update to 0.11dev-r4930)

Reported by: ilias@…
Owned by: Christopher Lenz
Priority: normal
Milestone: (none)
Component: web frontend/mod_python
Version: devel
Severity: critical
Keywords: memory


The update from r43xx to r4930 worked fine.

One problem:

The VPS on which I run Trac runs out of resources after a few hours.

Has anyone experienced similar problems?

Is there anything I can do to isolate the problem cause?

(apache mod_python)

Attachments (0)

Change History (12)

comment:1 by ilias@…, 14 years ago

Severity: normal → critical

Can I please have some information on this? I've updated to the latest (r4937), and again have the same problem.

comment:2 by Christian Boos, 14 years ago

Component: general → mod_python frontend
Owner: changed from Jonas Borgström to Christopher Lenz

Are you sure that it is mod_python related? i.e. it doesn't happen with tracd?

If yes:

  • What version of mod_python are you using?
  • What happens when you run apache httpd in debug mode (one process / one thread, using httpd -X)?
  • What kinds of requests are involved, if any?

Also, can you trace the problem more precisely? "Between r43xx and r4930" is a bit vague… in particular, does it also happen for revisions before r4819 (the setuptools merge)?

I've looked at the memory usage of my apache threads, and for me, they seem to reach a peak (around 180M) and then stop growing, which doesn't suggest there's a leak. If you give more information, I might be able to make more specific tests.
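The single-process approach suggested above can be sketched roughly as follows: run the server in the foreground (`httpd -X`) and sample its resident set size (RSS) from another shell while issuing requests. This is a hypothetical sketch; a background `sleep` stands in for the httpd process here so the snippet is self-contained.

```shell
# Sketch: sample a process's RSS over time with ps.
# In practice, start the server with `httpd -X` and substitute its PID
# for the stand-in `sleep` process used below (an assumption for
# self-containment, not part of the original advice).
sleep 3 &
PID=$!
while kill -0 "$PID" 2>/dev/null; do
  rss=$(ps -o rss= -p "$PID" 2>/dev/null) || break
  echo "PID ${PID} RSS: ${rss} KB"
  sleep 1
done
```

An RSS that keeps growing across identical repeated requests would suggest a leak; an RSS that plateaus after warm-up would not.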

comment:3 by ilias@…, 14 years ago

I am not sure whether it is an Apache/mod_python issue. I'll invest some time tracing the problem and will report back here.

in reply to:  2 comment:4 by ilias@…, 14 years ago

Replying to cboos:

Are you sure that it is mod_python related? i.e. it doesn't happen with tracd?

My local installation is on Windows with tracd. Should I watch the memory consumption of the Python process to detect whether there's a leak?

As for my remote server: I've lost the way back, so I'm no longer able to return to the original r4353 installation, which worked fine.

Perhaps the system information below tells you something in the meantime (I'm still trying to switch back):

System Information
Trac: 	0.11dev-r4939
Python: 	2.4 (#1, Mar 22 2005, 21:42:42) [GCC 3.3.5 20050117 (prerelease) (SUSE Linux)]
setuptools: 	0.6c2
SQLite: 	3.3.4
pysqlite: 	2.1.3
Genshi: 	0.4dev-r505
Pygments: 	0.7.1
Subversion: 	1.1.3 (r12730)

comment:5 by ilias@…, 14 years ago

I have started the tracd server on the remote Linux VPS with

 tracd -p 8000 -e ./

The resource consumption seems much better and stable. So this is an indication that the problem is with mod_python (although I am not sure).

comment:6 by ilias@…, 14 years ago

I've switched back to r4353 (although the dependencies are not exactly recreated; I don't know the revision numbers that were used, e.g. for Genshi).

The behaviour is now OK again; here's the memory usage of the apache processes:

PID 	%CPU 	%MEM 	Command 	Nice 	Pri 	RSS 	Stat 	Time 	User
21794 	0.0 	0.6 	/usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf 	0 	23 	25896 	S 	00:00:05 	0 	
13838 	1.8 	1.5 	/usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf 	0 	24 	60648 	S 	00:00:24 	30 	
13940 	1.5 	1.6 	/usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf 	0 	23 	65216 	S 	00:00:19 	30 	
15959 	1.0 	1.3 	/usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf 	0 	23 	56440 	S 	00:00:11 	30 	
23736 	1.1 	1.0 	/usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf 	0 	24 	44296 	S 	00:00:05 	30 	
26461 	3.0 	0.9 	/usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf 	0 	24 	38776 	S 	00:00:03 	30
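To gauge the overall footprint, the RSS column (in KB) of such a listing can simply be summed. A minimal awk sketch, fed the six RSS values from the table above:

```shell
# Sum the RSS values (KB) from the process listing above and print the total.
# awk accumulates column 1 and prints the sum once at end of input.
printf '25896\n60648\n65216\n56440\n44296\n38776\n' |
  awk '{sum += $1} END {printf "total RSS: %d KB (~%d MB)\n", sum, sum/1024}'
# → total RSS: 291272 KB (~284 MB)
```

Against a live system one could instead pipe `ps -e -o rss=,cmd=` through `awk '/[h]ttpd/ {sum += $1} END {print sum}'` (the `[h]` keeps grep-style matching from counting the awk process's own command line). Note that RSS totals overstate real usage somewhat, since prefork children share copy-on-write pages.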

comment:7 by ilias@…, 14 years ago

Can someone post the memory usage of their apache processes after a few hours of work? I have the feeling that there's possibly no memory leak, just normal consumption that is hitting the limits of the VPS.

comment:8 by Christian Boos, 14 years ago

For example (with mod_python 3.3.1, subversion 1.4.3):

# ps -e -o vsize,size,rss,cmd | grep httpd
 75468 16528  5752 /opt/apache-2.0.59/bin/httpd -k start
189276 77668  1920 /opt/apache-2.0.59/bin/httpd -k start
157580 52276 41148 /opt/apache-2.0.59/bin/httpd -k start
155888 50584 39348 /opt/apache-2.0.59/bin/httpd -k start
156424 51120 39912 /opt/apache-2.0.59/bin/httpd -k start
154976 49672 38176 /opt/apache-2.0.59/bin/httpd -k start
154132 48828 37676 /opt/apache-2.0.59/bin/httpd -k start
155908 50604 39524 /opt/apache-2.0.59/bin/httpd -k start
155288 49984 38948 /opt/apache-2.0.59/bin/httpd -k start
153500 48196 36948 /opt/apache-2.0.59/bin/httpd -k start
144684 39380 27996 /opt/apache-2.0.59/bin/httpd -k start
153084 47780 36624 /opt/apache-2.0.59/bin/httpd -k start

comment:9 by ilias@…, 14 years ago

With mod_python 3.1.3-42, Subversion 1.1.3 (r12730), and the other versions as above:

# ps -e -o vsize,size,rss,cmd | grep httpd
 32824  8216 15508 /usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf
 17156  6748  6164 /usr/sbin/fcgi-pm        -f /etc/apache2/httpd.conf
 81872 55244 61260 /usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf
 92544 65916 72040 /usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf
 77156 50528 56596 /usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf
 83284 56656 63220 /usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf
 58828 32200 38228 /usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf

The VPS is at 90% of resources (close to 'collapse').

But my interpretation is that this memory consumption is OK.

So I guess that Trac's memory consumption has grown a little, and that's why I'm hitting my VPS resource limits.

Or am I wrong?

comment:10 by ilias@…, 14 years ago

Clarification: the consumption above is with Trac r4955. Here are the numbers at 100% resource consumption with r4955 (I cannot log in via SSH, so this is copied from the web management tool):

Command 	RSS 	Stat 	Time 	User 	
/usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf 	80412 	S 	00:01:41 	30 	
/usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf 	73636 	S 	00:01:36 	30 	
/usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf 	71752 	S 	00:01:42 	30 	
/usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf 	68500 	S 	00:01:38 	30 	
/usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf 	68544 	S 	00:00:38 	30 	
/usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf 	14288 	S 	00:00:00 	0

comment:11 by ilias@…, 14 years ago

Resolution: worksforme
Status: new → closed

I've upgraded my VPS server at my hosting provider.

Everything is fine now.

It looks like the memory usage of trac or of one of its dependencies has increased.

This should possibly be stated somewhere in the docs, so that VPS/mod_python users are aware of it (the system would NOT work on my provider's smallest VPS; I had the mid-sized one, and now I have the biggest).

Closing as worksforme, because there's no memory leak.

comment:12 by ilias@…, 13 years ago

Keywords: memory added

Note: See TracTickets for help on using tickets.