#3446 closed defect (fixed)
Rate of `database is locked` errors is too high
Reported by: | Christian Boos | Owned by: | Christian Boos |
---|---|---|---|
Priority: | high | Milestone: | 0.11.6 |
Component: | general | Version: | devel |
Severity: | critical | Keywords: | database lock pysqlite |
Cc: | trac@…, carlos@… | Branch: | |
Release Notes: | |||
API Changes: | |||
Internal Changes: |
Description
I still get way too many database is locked errors. This morning, I simply tried to log in and the server was not loaded (approx. half of the threads were "_" Waiting for connection).
{{{
Traceback (most recent call last):
  File "/usr/lib/python2.3/site-packages/trac/web/main.py", line 314, in dispatch_request
    dispatcher.dispatch(req)
  File "/usr/lib/python2.3/site-packages/trac/web/main.py", line 186, in dispatch
    req.session = Session(self.env, req)
  File "/usr/lib/python2.3/site-packages/trac/web/session.py", line 52, in __init__
    self.promote_session(sid)
  File "/usr/lib/python2.3/site-packages/trac/web/session.py", line 125, in promote_session
    "AND authenticated=0", (sid,))
  File "/usr/lib/python2.3/site-packages/trac/db/util.py", line 47, in execute
    return self.cursor.execute(sql_escape_percent(sql), args)
  File "/usr/lib/python2.3/site-packages/trac/db/sqlite_backend.py", line 44, in execute
    args or [])
  File "/usr/lib/python2.3/site-packages/trac/db/sqlite_backend.py", line 36, in _rollback_on_error
    return function(self, *args, **kwargs)
OperationalError: database is locked
}}}
Jonas/Christopher, can one of you attach a recent gzipped dump of the t.e.o Trac log here? I'd like to gather some stats about the lock frequency. Ideally I'd like to understand why this still happens so frequently and find ways to improve the situation…
Besides locks, I'm also frequently seeing blank pages when going to the Markup Trac. But that's probably a different issue.
Attachments (1)
Change History (43)
comment:1 by , 18 years ago
comment:2 by , 18 years ago
Some random documents about SQLite and concurrent writes, for reference:
- Proposal for Improving Concurrency in SQLite
- Synchronized Locking in SQLite
- SQLite MultiThreading
- Improving Concurrency In SQLite, in particular section 4.0 Shorter transactions
- the Shadow Pager idea, now that would be nice ;)
Also, don't forget about locks triggered by concurrent writes, which we should now handle correctly (#2170 and #ps126).
comment:3 by , 18 years ago
… and we also shouldn't write into the db when it's not necessary: see r3616.
It's probably worth upgrading t.e.o in order to see how much this improves the situation.
comment:4 by , 18 years ago
Owner: | changed from | to
---|
Well, despite r3616 we still have a high rate of "database is locked" errors.
See #3503 for a short-term solution (a nice TracError explaining what happened, instead of a backtrace).
Long-term, I think we should do the following:
- Shorten our transactions. In particular, a main reason for the problem seems to be the long periods of time during which SHARED locks are held. The common programming idiom in Trac is to iterate over cursors, performing some operation for each of the returned rows. While this may sound like a good approach that lowers memory consumption, it lengthens the period during which a SHARED lock is held and during which no write can be done. We should instead read everything into memory first, then process the data (see the first sketch after this list). Of course, if there are some very straightforward tasks (like filtering) that can be done while reading the data, they could still be done by iterating over the cursor.
- Also, if there are situations where not all the rows from a cursor are fetched, the cursor should be closed, so that the SHARED lock can be released as soon as possible.
- Repeatable transactions: instead of writing into the database as soon as we feel like doing so, we could pass a callback to the database layer which will be used for writing all the information. The advantage would be the ability to transparently retry the write sequence if a database is locked exception is detected (see the second sketch after this list). This should also play well with the journalling approach described in TracDev/Proposals/Journaling.
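To make the first and last points concrete, here are two minimal sketches. Names like cursor, render and db_transaction are hypothetical illustrations, not Trac's actual API:
{{{#!python
# Iterating over the cursor keeps the SHARED lock for the whole loop:
cursor.execute("SELECT id, summary FROM ticket")
for id, summary in cursor:    # lock held until the cursor is exhausted
    render(id, summary)       # possibly slow; render() is hypothetical

# Reading everything first releases the lock as soon as the data is fetched:
cursor.execute("SELECT id, summary FROM ticket")
rows = cursor.fetchall()      # SHARED lock can be released from here on
for id, summary in rows:
    render(id, summary)
}}}
And a sketch of the repeatable-transactions idea, retrying a write callback whenever a lock is hit:
{{{#!python
import time
from pysqlite2 import dbapi2 as sqlite

def db_transaction(cnx, write_callback, retries=5, delay=0.1):
    """Run write_callback(cursor) and transparently retry it when the
    database is locked (hypothetical helper, not Trac's actual API)."""
    for attempt in range(retries):
        cursor = cnx.cursor()
        try:
            write_callback(cursor)
            cnx.commit()
            return
        except sqlite.OperationalError, e:
            cnx.rollback()
            if 'database is locked' not in str(e):
                raise
            time.sleep(delay * (attempt + 1))   # back off and retry
    raise sqlite.OperationalError('database is locked')
}}}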
follow-up: 6 comment:5 by , 18 years ago
Did you guys try setting the busy_timeout on the SQLite connection? It will cause a process to wait for a lock (up to the timeout) before throwing the database is locked exception. The wait is usually only a few milliseconds. It is a super easy way to get your "Repeatable transactions".
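For reference, a minimal sketch of what this looks like with pysqlite (the file name is illustrative; the timeout argument maps to sqlite3_busy_timeout under the hood):
{{{#!python
from pysqlite2 import dbapi2 as sqlite

# Wait up to 10 seconds for a competing lock to clear instead of
# failing immediately with "database is locked".
cnx = sqlite.connect('trac.db', timeout=10.0)
}}}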
I just submitted a patch to Rails (http://dev.rubyonrails.org/ticket/6126/) for the same functionality. :)
— Will Reese
comment:6 by , 18 years ago
Replying to wreese@gmail.com:
Did you guys try setting the busy_timeout on the SQLite connection?
The timeout is set; that helps a bit, but it's by no means a miracle solution to the issue… it can even introduce its own problems, like deadlocks on concurrent writes. If you wish, you can have a look at the various documents I've given pointers to in comment:2.
comment:8 by , 18 years ago
I have noticed that even this site has the same problem. It seems to be happening more and more.
follow-up: 10 comment:9 by , 18 years ago
Milestone: | → 2.0 |
---|
What should perhaps be made clear is that this issue is only seen with the PySqlite DatabaseBackend. If the locks are becoming an issue, an option is to convert the database to another backend (e.g. th:SqliteToPgScript). If you know beforehand that you will get high traffic and have a significant number of developers using your Trac, you can also directly create the Trac environment with one of the alternative database backends.
SQLite has by design been created to allow multiple processes to independently access the same database file, in theory even by processes running on different machines. This requires the use of OS-level file locking facilities (flock), and basically you can't have more than one thread or process writing to the database at the same time; a write can happen only when there's no longer any thread or process reading the database.
We are nevertheless interested in getting the situation improved for SQLite. Besides shortening the read and write periods as outlined above, I also plan to check whether we can gain some performance by sharing connections between threads, which is possible with SQLite ≥ 3.3.1 (see SQLite:MultiThreading). OTOH, the shared cache mode doesn't seem to be able to help here, as this is only about sharing a cache between different connections in the same thread and we don't have more than one connection per thread.
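A minimal sketch of what sharing connections across threads enables with pysqlite; check_same_thread=False merely turns off pysqlite's per-thread ownership check, and the pooling logic itself is omitted here:
{{{#!python
from pysqlite2 import dbapi2 as sqlite

# With SQLite >= 3.3.1, a connection created in one thread can safely
# be used from another, which makes connection pooling possible.
cnx = sqlite.connect('trac.db', timeout=10.0, check_same_thread=False)
}}}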
Finally, there's also a hint about making use of temporary tables in the above-mentioned wiki page:
When you use temporary tables, the main database is not locked (…) By creating a temporary table containing the results of a large query for processing, rather than processing it directly out of the main database, you greatly reduce lock contentions.
Using in-memory temporary tables should be fast.
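A sketch of the temporary table idea; the table name, the since variable and the process() call are made up for illustration:
{{{#!python
cursor.execute("PRAGMA temp_store = MEMORY")
# Snapshot the large result set into a temp table: the main database
# is only locked while the temp table is being filled.
cursor.execute("CREATE TEMP TABLE recent AS "
               "SELECT rev, time, author FROM revision WHERE time > ?",
               (since,))
cursor.execute("SELECT rev, time, author FROM recent")
for rev, time, author in cursor.fetchall():
    process(rev, time, author)   # process() is hypothetical
}}}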
comment:10 by , 18 years ago
Replying to myself:
I also plan to check whether we can gain some performance by sharing connections between threads, which is possible with SQLite ≥ 3.3.1 (see SQLite:MultiThreading).
Implemented in r3830. Not only does it seem to be faster and still reliable, but I've not been able to trigger a single lock so far! More testing welcome.
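For readers unfamiliar with the idea, here is a toy sketch of connection pooling; it is nothing like the actual r3830 implementation in trac/db/pool.py, just the core principle:
{{{#!python
import Queue
from pysqlite2 import dbapi2 as sqlite

class ToyPool(object):
    """Keep a fixed set of connections open and hand them out to
    request threads instead of reconnecting on every request."""
    def __init__(self, path, size=5):
        self._queue = Queue.Queue()
        for _ in range(size):
            self._queue.put(sqlite.connect(path, timeout=10.0,
                                           check_same_thread=False))

    def get(self):
        return self._queue.get()   # blocks until a connection is free

    def put(self, cnx):
        self._queue.put(cnx)
}}}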
comment:11 by , 18 years ago
Milestone: | 2.0 → 0.10.1 |
---|---|
Status: | new → assigned |
I've seen only good things so far after r3830, so I propose to backport this to [0.10.1]. Any objection?
comment:13 by , 18 years ago
r3830 is really a big plus for avoiding locks; given a set of test requests that I launch simultaneously (i.e. tabs in Firefox → reload all tabs), most of them doing session updates, some others simply doing time-consuming queries, I get:
- repeatably 1 or 2 "Database is Locked" errors without r3830, among all the loaded tabs, at each Reload All action
- not a single "Database is Locked" error so far with r3830 (and I tried hard ;)
My current theory is that SQLite has a very high likelihood of getting locks while creating connections for the first time. Given the complexity of the hot journal checks (see http://www.sqlite.org/lockingv3.html), that would not be that surprising, and I guess there are optimizations in this procedure when the connection is not closed.
Now all seems good with pooling Connection objects, except that it's perhaps the cause of crashes in SQLite itself, see #3790, in particular the last two reports. We'll see…
comment:14 by , 18 years ago
Keywords: | needinfo added |
---|---|
Milestone: | 0.10.3 → 0.10.1 |
follow-up: 16 comment:15 by , 18 years ago
Indeed the database is locked error can no longer be reproduced. I tried hard too, with the help of a script: http://home.wanadoo.nl/italy4ever/trac/r3446.py. It can put any load of requests on the trac server. The findings are exactly as you described. No more locks. Only the IntegrityError: columns sid, authenticated are not unique remains. The script reproduces this if you add a sleep time of some seconds between the requests. May be an academic case. The good news is that I did not see any segfaults or errors like that on Windows 2000. Only once, a python message ended up in the Apache access.log, which is weird:
{{{
Exception <ProgrammingError instance at 0x020E8558> in <bound method PooledConnection.__del__ of <trac.db.pool.PooledConnection object at 0x0194B4B8>> ignored
}}}
If you still wish to explain why the locking errors no longer occur, I may have a theory. The main difference in the SQL processing is, as I see it, that by reusing PySQLite database connections, the PySQLite statement cache is kept intact. Therefore the sqlite prepare phase is skipped most of the time. This is not the case otherwise, as Trac constantly disconnects and reconnects and hence destroys the statement cache.
This has the following effect on the locking phases. First, here is the old situation:
- begin statement (by PySQLite) — still UNLOCKED
- sqlite3_prepare — SHARED
- sqlite3_step — RESERVED
- commit — PENDING/EXCLUSIVE/UNLOCKED
If two users happen to be in the prepare phase at the same time, only one can promote his lock from SHARED to RESERVED. The other will get a deadlock type of database is locked error. The chance of two users being in the prepare phase at the same time is small, considering the fraction of a millisecond that it takes. But I suspect there is a dependency.
In the new situation, the sqlite3_prepare step is mostly skipped, so a lock is promoted from UNLOCKED to RESERVED. This seems not vulnerable to deadlock.
follow-up: 17 comment:16 by , 18 years ago
Replying to ed.pasma@wanadoo.nl:
Indeed the database is locked error can no longer be reproduced.
Glad to hear that!
Exception <ProgrammingError instance at 0x020E8558> in <bound method PooledConnection.__del__ of <trac.db.pool.PooledConnection object at 0x0194B4B8>> ignored
Hm, this probably comes from close being called twice on the same connection, from within the destructor. Which revision were you using?
If you still wish to explain why the locking errors no longer occur, I may have a theory. The main difference in the sql processing is, as I see it, that by reusing PySQLite databasae connections, the PySQLite statement cache is kept intact.
Yes, this may play a big role.
In the new situation, the sqlite3_prepare step is mostly skipped. So a lock is promoted from UNLOCKED to RESERVED. This seems not vulnarable to deadlock.
I think the SHARED lock must always be acquired (see http://www.sqlite.org/lockingv3.html 5.0).
comment:17 by , 18 years ago
Exception <ProgrammingError instance at 0x020E8558> in <bound method PooledConnection.__del__ of <trac.db.pool.PooledConnection object at 0x0194B4B8>> ignored
Hm, this probably comes from close being called twice on the same connection, from within the destructor. Which revision were you using?
It was Trac version 0.10.2. But maybe it is less important: I see from the logs that this occurred while stopping the Apache server while it was busy.
In the new situation, the sqlite3_prepare step is mostly skipped. So a lock is promoted from UNLOCKED to RESERVED. This seems not vulnarable to deadlock.
I think the SHARED lock must always be acquired (see http://www.sqlite.org/lockingv3.html 5.0).
You are right here and I was wrong. But what about the whole theory?
follow-up: 19 comment:18 by , 18 years ago
Milestone: | 0.10.1 → 0.10.4 |
---|---|
Priority: | normal → high |
Unfortunately, it looks like r3830 is causing crashes on Linux (64-bits, Python 2.4.2, sqlite-3.3.8, PySqlite-2.3.2).
I think I'll disable this feature, except on Windows where it's working great.
OTOH, even without r3830 on Linux, it still seems difficult to trigger a database is locked error.
comment:19 by , 18 years ago
Hello Christian, if the locking issue turns up again after the connection pooling gets disabled, there is no need to worry. I have a very simple patch to pysqlite that definitely fixes it. It is already in the tracker, see http://initd.org/tracker/pysqlite/ticket/184. Maybe we should raise its priority. Thanks, Edzard Pasma
comment:20 by , 18 years ago
Milestone: | 0.10.4 → none |
---|
Thanks for the suggestion, Edzard. I indeed saw pysqlite:ticket:184, but didn't realize that you did your testing with Trac!
I think this issue is more or less solved on the Trac side. We simply need to wait until we can revert r4493, i.e. when pysqlite:ticket:187 gets fixed, or, as you said, if/when your patch in #184 gets integrated.
So I'm now postponing the ticket, waiting for external resolution.
Finally, while testing 2.3.2 on Linux (with the pooling enabled), I noticed for the first time that I got stuck at the get_db lock's timeout. I'm not sure whether this is a bug in the pool code (again…) or a side-effect of the instabilities related to the connection pooling.
follow-up: 22 comment:21 by , 18 years ago
Yes, even with pooled connections you may run into database locks. I would hope that this occurred when you ran the test right after starting the server. Then it can be explained by the fact that no pooled connections are available yet (whatever the further theory is).
comment:22 by , 18 years ago
Replying to ed.pasma@wanadoo.nl:
… I would hope that this occurred when you ran the test right after starting the server. Then it can be explained by the fact that no pooled connections are available yet (whatever the further theory is).
Right, this is what is happening, when I do a Reload All Tabs in the web browser after I just started tracd.
comment:23 by , 18 years ago
If you wish to try the pysqlite patch for this, I uploaded it to http://home.wanadoo.nl/italy4ever/trac/cursor.2.3.2%23184.c.
comment:24 by , 18 years ago
Cc: | added |
---|
I've been able to reliably repeat the database lock error on Trac 0.10.2 and WebAdmin r4240.
I go to the admin page → Permissions. Whatever alteration I do there results in a traceback similar or equal to the one below.
I don't exactly know what triggered this behaviour, it was working fine before.
{{{
2006-12-28 14:51:37,862 Trac[main] ERROR: database is locked
Traceback (most recent call last):
  File "/usr/lib/python2.4/site-packages/trac/web/main.py", line 387, in dispatch_request
    dispatcher.dispatch(req)
  File "/usr/lib/python2.4/site-packages/trac/web/main.py", line 238, in dispatch
    resp = chosen_handler.process_request(req)
  File "/usr/lib/python2.4/site-packages/TracWebAdmin-0.1.2dev_r4240-py2.4.egg/webadmin/web_ui.py", line 109, in process_request
    path_info)
  File "/usr/lib/python2.4/site-packages/TracWebAdmin-0.1.2dev_r4240-py2.4.egg/webadmin/perm.py", line 60, in process_admin_request
    perm.revoke_permission(subject, action)
  File "/usr/lib/python2.4/site-packages/trac/perm.py", line 198, in revoke_permission
    self.store.revoke_permission(username, action)
  File "/usr/lib/python2.4/site-packages/trac/perm.py", line 152, in revoke_permission
    (username, action))
  File "/usr/lib/python2.4/site-packages/trac/db/util.py", line 50, in execute
    return self.cursor.execute(sql_escape_percent(sql), args)
  File "/usr/src/build/539311-i386/install//usr/lib/python2.4/site-packages/sqlite/main.py", line 255, in execute
OperationalError: database is locked
}}}
This is another traceback I get less often (this stops on commit, instead of execute):
{{{
Traceback (most recent call last):
  File "/usr/lib/python2.4/site-packages/trac/web/main.py", line 387, in dispatch_request
    dispatcher.dispatch(req)
  File "/usr/lib/python2.4/site-packages/trac/web/main.py", line 238, in dispatch
    resp = chosen_handler.process_request(req)
  File "/usr/lib/python2.4/site-packages/TracWebAdmin-0.1.2dev_r4240-py2.4.egg/webadmin/web_ui.py", line 109, in process_request
    path_info)
  File "/usr/lib/python2.4/site-packages/TracWebAdmin-0.1.2dev_r4240-py2.4.egg/webadmin/perm.py", line 60, in process_admin_request
    perm.revoke_permission(subject, action)
  File "/usr/lib/python2.4/site-packages/trac/perm.py", line 198, in revoke_permission
    self.store.revoke_permission(username, action)
  File "/usr/lib/python2.4/site-packages/trac/perm.py", line 154, in revoke_permission
    db.commit()
  File "/usr/src/build/539311-i386/install//usr/lib/python2.4/site-packages/sqlite/main.py", line 539, in commit
OperationalError: database is locked
}}}
I just can't use WebAdmin Permissions management anymore. As a note, I made sure there was no one else using Trac at the time.
Would upgrading/patching pysqlite solve it?
Well, I can't wait until SqlAlchemy goes to trunk so I can use Oracle.
comment:25 by , 18 years ago
Keywords: | needinfo removed |
---|
Note: this ticket is for discussing long-term plans for improving the behavior of Trac using PySqlite in the presence of concurrent access, not for reporting once again the OperationalError: database is locked issue…
comment:27 by , 16 years ago
comment:28 by , 15 years ago
Milestone: | not applicable → 0.11.6 |
---|---|
Severity: | major → critical |
As the recent activity on #8468 led me to work on this topic again, I decided to give the EagerCursor approach a try.
Well, I must say the results are a bit surprising. A very good thing is that I couldn't reproduce the Database is locked error again, no matter how hard I tried. Contrary to what I expected, the memory usage doesn't seem to be affected, but the speed is quite a bit worse (60-80% more time needed for big timeline requests).
I'll attach the patch I have for now and will try to improve upon that.
EagerCursor is enabled by default, but can be turned off by adding ?cursor= to the database connection string.
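For the curious, the gist of the approach (a simplified sketch, not the attached patch) is to fetch the complete result set at execute() time, so the SHARED lock is released immediately instead of being held while the caller iterates:
{{{#!python
class EagerCursor(object):
    """Wraps a real cursor and materializes results eagerly (sketch)."""
    def __init__(self, cursor):
        self.cursor = cursor
        self.rows = []
        self.pos = 0

    def execute(self, sql, args=()):
        self.cursor.execute(sql, args)
        self.rows = self.cursor.fetchall()   # read lock released here
        self.pos = 0

    def fetchone(self):
        if self.pos < len(self.rows):
            row = self.rows[self.pos]
            self.pos += 1
            return row
        return None

    def __iter__(self):
        row = self.fetchone()
        while row is not None:
            yield row
            row = self.fetchone()
}}}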
by , 15 years ago
Attachment: | t3446-EagerCursor-r8488.diff added |
---|
use the EagerCursor (see comment:27)
comment:29 by , 15 years ago
Well, more testing on my side shows that the bad speed penalty was only that dramatic when the Repository checkins provider was enabled… and that probably has something to do with the fact that I was testing 0.11-stable on a multirepos Trac DB, which contained two entries for each changeset number in the revision table (different svn repositories).
So… twice as much data retrieved, a bit less than twice the time, seems fine to me now ;-)
Of course, with such a change the performance becomes dependent on the quality of the SQL queries: if we're selecting too broadly or retrieving too much data, we can no longer cope with that by bailing out of the iteration over the result set early.
comment:30 by , 15 years ago
When there is a choice between better and worse performance, does Trac always take the decision for worse? This strategy has brought Trac close to "being not usable any more", and to phrases like "Redmine is Trac done right".
The database is locked error has not been a problem for year(s) now, btw. Why make something which is working slower ("if we're selecting too broadly, we can't cope…")?
follow-up: 33 comment:31 by , 15 years ago
I've got a setup for the session save problem as described in #8468 that, without the session.__setitem__() patch, triggers Database is locked consistently and with ease, as Trac tries to save a new session for every Timeline reload ('Reload All Tabs' in FF).
Using the eager-cursor patch this changes. I can now add my ~10 Timeline tabs to my ~25 Bitten tabs and get all tabs to reload fine quite often - without the patch, even 2 Timeline tabs cause locking issues for up to half the tabs. However, this is not consistent and only works on about half of my attempts. For the failing attempts, my highly informal testing has some interesting properties:
- Success rate is ~50% for reloading of all tabs without issues, compared to 100% failure for just two /timeline tabs without patch.
- Timeline tabs now actually load quite well even when needing a save, and errors are now actually more frequent in regular select-and-display tabs (but they do sometimes also happen in the timeline)
- Full tabset reload takes noticeably more time to complete - especially with failures. However, regular page browsing does not feel slower at least, so perhaps this is something that gets worse as load rises?
- Frequently, errors are triggered by the early request session & auth fetching - may be a symptom of auth+session, or perhaps really just a consequence of the new request thread not being able to get a database connection assigned, seeing that the session queries are always the first db selects done.
- Massive amount of test failures:
{{{
Ran 873 tests in 182.325s
FAILED (failures=92, errors=13)
}}}
That's what I got so far.
comment:32 by , 15 years ago
I'll add logging of the number of results when [trac] debug_sql = yes is set; that should be enough to troubleshoot the few queries in plugins that take longer than necessary.
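In trac.ini that would look like:
{{{
[trac]
debug_sql = yes
}}}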
Does anyone foresee a particular problem with this, or is it OK to commit and see what feedback we have?
follow-up: 34 comment:33 by , 15 years ago
Replying to osimons:
I've got a setup for the session save problem as described in #8468 that, without the session.__setitem__() patch, triggers Database is locked consistently and with ease, as Trac tries to save a new session for every Timeline reload ('Reload All Tabs' in FF).
Why is there a need to write the session on reloading the timeline?
comment:34 by , 15 years ago
Replying to anonymous:
Replying to osimons:
I've got a setup for the session save problem as described in #8468 that, without the session.__setitem__() patch, triggers Database is locked consistently and with ease, as Trac tries to save a new session for every Timeline reload ('Reload All Tabs' in FF).
Why is there a need to write the session on reloading the timeline?
There isn't, of course - which is the whole point of my exercise and of the fix now applied for ticket #8468 that I mentioned.
comment:35 by , 15 years ago
There was a bunch of test failures because of the missing __iter__ on EagerCursor, as it gets used directly by the tests instead of through the IterableCursor wrapper.
With this fixed, all tests pass, so I'm going to commit it.
follow-up: 37 comment:36 by , 15 years ago
Since there's suddenly activity on this ticket again, I'm reminded to mention that I think r4493 can long since be reverted.
I have a fairly high activity server running relatively recent versions of SQLite and pysqlite. When I removed this, it seemed to cut down the frequency of 'database is locked' significantly.
comment:37 by , 15 years ago
Replying to ebray:
Since there's suddenly activity on this ticket again, I'm reminded to mention that I think r4493 can long since be reverted.
I have a fairly high activity server running relatively recent versions of SQLite and pysqlite. When I removed this, it seemed to cut down the frequency of 'database is locked' significantly.
Thanks for the reminder. Any idea which PySqlite or SQLite versions I should pick for enabling pooling on Linux? Otherwise, I think I'll go for the very recent ones only (SQLite ≥ 3.6.10, PySqlite ≥ 2.5.5).
Also, the reason I haven't committed the change yet is that I didn't test it on Linux. Could you give the patch a try, with some heavy stress testing? (The change mentioned in comment:35 is only relevant for the tests, not for real-life use.)
follow-up: 40 comment:38 by , 15 years ago
Ok, did some testing on Linux this morning and I couldn't get a single lock or any other issue (mod_python 3.3.1, SQLite 3.3.8, PySqlite 2.5.0), when using the eager cursor patch and enabling the pool.
As I don't have the time to pinpoint the versions for which the pool started to work (in reference to r4493), I'll use the versions above as the minimal requirements. Reports of success with older versions could help lower the requirements.
comment:39 by , 15 years ago
Resolution: | → fixed |
---|---|
Status: | assigned → closed |
eager cursor patch committed as r8589, with a follow-up in r8591 for the test support.
Besides, the revert of r4493 was committed as r8590, which reintroduces connection pooling for SQLite connections on non-Windows platforms, as discussed above.
Starting with 0.11.6, PySqlite should hold its own against the PostgreSQL and MySQL backends in highly demanding setups ;-)
If anyone is still seeing the Database is locked error from now on, please:
- double check you have a recent version of Trac (Trac ≥ 0.11.6)
- make sure you have recent versions of SQLite and the bindings (SQLite ≥ 3.3.8 and PySqlite ≥ 2.5.0)
- verify whether the problem can also happen without plugins (if not, mention the plugin that introduces the problem)
If all the above conditions are met, then open a new ticket, don't reopen this one, thanks!
comment:40 by , 15 years ago
Replying to cboos:
Ok, did some testing on Linux this morning and I couldn't get a single lock or any other issue (mod_python 3.3.1, SQLite 3.3.8, PySqlite 2.5.0), when using the eager cursor patch and enabling the pool.
As I don't have the time for pinpointing for which versions the pool started to work (in reference to r4493), I'll use the versions above as the minimal requirements. Report of success for older versions could help lower the requirements.
Sorry for the lack of response on this—I don't appear to be getting notifications on tickets where I haven't explicitly added myself to the CC list. At any rate, the versions you went with work for me.
comment:41 by , 15 years ago
Thanks for the feedback. Yes, notifications are a bit in trouble on t.e.o…
As for the changes, if you or anybody else have success with lower versions, please notify me and we could adapt the checks.
comment:42 by , 15 years ago
I've just upgraded to 0.11.6 and am now intermittently getting the 'database is locked' error. This is on a RHEL 5 machine with SQLite 3.3.6 and pysqlite 2.3.3. The trac.db is clean. Let me guess what y'all will suggest: upgrade sqlite and pysqlite to newer versions? Before undertaking that I thought I'd ask: have the versions I'm running been known to cause this issue? What else might I pursue?
{{{
Trac detected an internal error:
OperationalError: database is locked

Most recent call last:
  File "/usr/lib/python2.4/site-packages/trac/web/main.py", line 450, in _dispatch_request
    dispatcher.dispatch(req)
  File "/usr/lib/python2.4/site-packages/trac/web/main.py", line 176, in dispatch
    chosen_handler)
  File "/usr/lib/python2.4/site-packages/trac/web/main.py", line 296, in _pre_process_request
    chosen_handler = filter_.pre_process_request(req, chosen_handler)
  File "/usr/lib/python2.4/site-packages/trac/versioncontrol/api.py", line 86, in pre_process_request
    self.get_repository(req.authname).sync()
  File "/usr/lib/python2.4/site-packages/trac/web/api.py", line 195, in __getattr__
    value = self.callbacks[name](self)
  File "/usr/lib/python2.4/site-packages/trac/web/main.py", line 134, in authenticate
    authname = authenticator.authenticate(req)
  File "build/bdist.linux-x86_64/egg/acct_mgr/web_ui.py", line 389, in wrap
  File "build/bdist.linux-x86_64/egg/acct_mgr/web_ui.py", line 400, in authenticate
  File "/usr/lib/python2.4/site-packages/trac/web/auth.py", line 70, in authenticate
    authname = self._get_name_for_cookie(req, req.incookie['trac_auth'])
  File "build/bdist.linux-x86_64/egg/acct_mgr/web_ui.py", line 454, in _get_name_for_cookie
  File "/usr/lib/python2.4/site-packages/trac/db/util.py", line 64, in execute
    return self.cursor.execute(sql_escape_percent(sql), args)
  File "/usr/lib/python2.4/site-packages/trac/db/sqlite_backend.py", line 80, in execute
    PyFormatCursor.execute(self, *args)
  File "/usr/lib/python2.4/site-packages/trac/db/sqlite_backend.py", line 59, in execute
    args or [])
  File "/usr/lib/python2.4/site-packages/trac/db/sqlite_backend.py", line 51, in _rollback_on_error
    return function(self, *args, **kwargs)

Trac:       0.11.6
Python:     2.4.3 (#1, Jun 11 2009, 14:09:37) [GCC 4.1.2 20080704 (Red Hat 4.1.2-44)]
setuptools: 0.6c9
SQLite:     3.3.6
pysqlite:   2.3.3
Genshi:     0.5.1
Pygments:   1.0
}}}
#3536 marked as duplicate.
Quite obviously, this is not only an issue with Trac's Trac, but a more general one due to our database access patterns… which currently limit our concurrency to at most one write access every few seconds (if nobody starts a long read query in the meantime, of course…).
This has been filed as #3455.