Opened 18 years ago
Closed 17 years ago
#4081 closed defect (worksforme)
trac server leaks memory/objects
| Reported by: | | Owned by: | Jonas Borgström |
|---|---|---|---|
| Priority: | normal | Milestone: | |
| Component: | general | Version: | devel |
| Severity: | major | Keywords: | memory |
| Cc: | | Branch: | |
| Release Notes: | | | |
| API Changes: | | | |
| Internal Changes: | | | |
Description
Since updating to r4045 with this modification:
trac/db/sqlite_backend.py

```diff
     """Connection wrapper for SQLite."""

     __slots__ = ['_active_cursors']
-    poolable = have_pysqlite and sqlite_version >= 30301
+    poolable = False  # have_pysqlite and sqlite_version >= 30301

     def __init__(self, path, params={}):
         assert have_pysqlite > 0
```
my server stays up for more than a day at a time, apparently giving it time to leak a significant amount of memory or object references. It is currently up to 252m virtual / 166m resident. Browsing the site will now occasionally give tracebacks like this one:
```
Traceback (most recent call last):
  File "/home/trac/Projects/trac/trunk/trac/web/api.py", line 382, in send_error
    'text/html')
  File "/home/trac/Projects/trac/trunk/trac/web/chrome.py", line 475, in render_template
    return stream.render(method, doctype=doctype)
  File "/home/trac/Projects/genshi/trunk/genshi/core.py", line 146, in render
    output = u''.join(list(generator))
  File "/home/trac/Projects/genshi/trunk/genshi/output.py", line 200, in __call__
    for kind, data, pos in stream:
  File "/home/trac/Projects/genshi/trunk/genshi/output.py", line 486, in __call__
    for kind, data, pos in chain(stream, [(None, None, None)]):
  File "/home/trac/Projects/genshi/trunk/genshi/output.py", line 436, in __call__
    for kind, data, pos in stream:
  File "/home/trac/Projects/genshi/trunk/genshi/core.py", line 207, in _ensure
    for event in stream:
  File "/home/trac/Projects/genshi/trunk/genshi/core.py", line 207, in _ensure
    for event in stream:
  File "/home/trac/Projects/trac/trunk/trac/web/chrome.py", line 478, in _strip_accesskeys
    for kind, data, pos in stream:
  File "/home/trac/Projects/genshi/trunk/genshi/filters.py", line 313, in __call__
    for kind, data, pos in stream:
  File "/home/trac/Projects/genshi/trunk/genshi/template.py", line 1145, in _match
    content = list(content)
  File "/home/trac/Projects/genshi/trunk/genshi/filters.py", line 313, in __call__
    for kind, data, pos in stream:
  File "/home/trac/Projects/genshi/trunk/genshi/template.py", line 1120, in _match
    for event in stream:
  File "/home/trac/Projects/genshi/trunk/genshi/template.py", line 1109, in _strip
    event = stream.next()
  File "/home/trac/Projects/genshi/trunk/genshi/template.py", line 930, in _eval
    result = data.evaluate(ctxt)
  File "/home/trac/Projects/genshi/trunk/genshi/eval.py", line 101, in evaluate
    {'data': data})
  File "/home/trac/Projects/trac/trunk/templates/error.html", line 148, in <Expression u"shorten_line(repr(value))">
    <td><code>${shorten_line(repr(value))}</code></td>
MemoryError
```
This renders the site more or less unusable and puts quite a load on the server.
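For context on the workaround above: the poolable flag decides whether the connection pool keeps a database connection alive between requests. A minimal sketch of that general pattern, with hypothetical names rather than Trac's actual trac/db/pool.py:

```python
# Minimal sketch of what a "poolable" flag typically controls in a
# connection pool (hypothetical names, not Trac's actual pool code).
class SimpleConnectionPool(object):
    def __init__(self, connector):
        self._connector = connector  # callable that opens a new connection
        self._idle = []              # connections kept around for reuse

    def get(self):
        # Hand out an idle connection if one exists, otherwise open a new one.
        if self._idle:
            return self._idle.pop()
        return self._connector()

    def put(self, cnx):
        # A poolable connection is kept and reused on the next request;
        # a non-poolable one (poolable = False, as in the diff above) is
        # closed immediately, so nothing accumulates on it between requests.
        if getattr(cnx, 'poolable', False):
            self._idle.append(cnx)
        else:
            cnx.close()
```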
Attachments (0)
Change History (10)
comment:1 by , 18 years ago
comment:3 by , 18 years ago
| Milestone: | → none |
|---|---|
| Severity: | normal → major |
This is definitely a valid issue that needs more investigation, but it's probably related to PySqlite, so I'm moving this to the none milestone.
comment:4 by , 18 years ago
| Keywords: | memory added |
|---|---|
comment:5 by , 18 years ago
FWIW, I am experiencing memory leaks with SQLite too. After a day or two of runtime, tracd consumes all the RAM and all the swap, grinding the system to a halt.
comment:6 by , 17 years ago (follow-up: 8)
I'm experiencing a memory leak with tracd and the PostgreSQL backend.
After several days of running, it grows to more than half a gig…
comment:7 by , 17 years ago
| Keywords: | needinfo added |
|---|---|
| Milestone: | not applicable → 0.11 |
Please mention the exact version of Trac you're using (e.g. if it's 0.11dev, which revision).
comment:8 by , 17 years ago
Replying to tomasz.sterna@sensisoft.com:
I'm experiencing a memory leak with tracd and the PostgreSQL backend.
After several days of running, it grows to more than half a gig…
Are you using a scoped repository? If so, it may be related to #5213.
comment:9 by , 17 years ago
| Keywords: | needinfo removed |
|---|---|
With the latest Genshi [G781] and with #5213 now fixed, it seems we no longer have critical memory leaks.
The memory usage can still be high, but (in my testing) it always stabilizes after a while for a given set of pages visited.
If you happen to find a specific usage pattern that clearly demonstrates a leak, please reopen and document it here (e.g. "I repeatedly query /log/…?… and each request increases the memory usage of … and it never stops increasing").
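A minimal sketch of that kind of reproduction, assuming a Linux host (it reads /proc), a hand-supplied tracd PID, and a placeholder URL; the names here are illustrative, not part of Trac:

```python
# Repeatedly request one URL and sample the server process's resident memory,
# which is the pattern the comment above asks reporters to document.
# Assumes Linux (/proc/<pid>/status) and the Python standard library only.
import urllib.request

def rss_kb(pid):
    """Resident set size of a process in kB, read from /proc/<pid>/status."""
    with open("/proc/%d/status" % pid) as status:
        for line in status:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])
    return 0

def hammer(url, pid, requests=500, report_every=50):
    for i in range(1, requests + 1):
        with urllib.request.urlopen(url) as resp:
            resp.read()
        if i % report_every == 0:
            print("%5d requests  VmRSS %d kB" % (i, rss_kb(pid)))

# Example invocation (hypothetical values):
# hammer("http://localhost:8000/project/log/trunk?verbose=on", pid=12345)
```

If VmRSS keeps climbing without ever leveling off for a fixed set of pages, that is the kind of evidence worth attaching when reopening.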
comment:10 by , 17 years ago
| Milestone: | 0.11 → |
|---|---|
| Resolution: | → worksforme |
| Status: | new → closed |
So, I actually wanted to close it for now…
Can you check what the GC output says:
trac/web/main.py
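The snippet that accompanied this comment is not preserved in this copy of the ticket. A minimal sketch of a GC check one could drop into trac/web/main.py, using Python's standard gc module (the placement around request dispatching is an assumption, not the original patch):

```python
# Minimal sketch (not the original patch): force a collection after a request
# and log whatever the collector could not free.  Under Python 2.x, objects
# in reference cycles that define __del__ end up in gc.garbage instead of
# being reclaimed, so a non-empty list here points at a real leak source.
import gc

def log_uncollectable(log):
    unreachable = gc.collect()
    log.debug("gc: %d unreachable objects, uncollectable garbage: %r",
              unreachable, gc.garbage)
```

Calling something like this at the end of each request is what would produce the "empty garbage lists" mentioned below.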
In my tests with tracd, I always end up with empty garbage lists.
Using SQLite, it seems there is still some memory leaking (I haven't checked with the latest PySqlite yet), but with PostgreSQL the memory seems to stay fairly constant.