#3503 closed defect (fixed)
OperationalError: database is locked
Reported by: | anonymous | Owned by: | Christian Boos |
---|---|---|---|
Priority: | highest | Milestone: | 0.10.1 |
Component: | general | Version: | devel |
Severity: | normal | Keywords: | session pysqlite database lock |
Cc: | exarkun@… | Branch: | |
Release Notes: | | | |
API Changes: | | | |
Internal Changes: | | | |
Description
Recent trunk: trac/web/session.py, line 210, in save, attempts to commit a transaction without performing proper error handling. If an OperationalError is raised here, it propagates all the way up to the top of the application without being handled.
A stacktrace recently logged on my server:
File "/home/trac/Projects/Twisted/trunk/twisted/web2/wsgi.py", line 138, in run result = self.application(self.environment, self.startWSGIResponse) File "/home/trac/Run/trac/external.py", line 74, in __call__ return self.application(environ, start_response) File "/home/trac/Run/trac/external.py", line 115, in tracApplication return trac.web.main.dispatch_request(environ, start_response) File "/home/trac/Projects/trac/trunk/trac/web/main.py", line 367, in dispatch_request req.send_error(sys.exc_info(), status=500) File "/home/trac/Projects/trac/trunk/trac/web/api.py", line 365, in send_error exc_info) File "/home/trac/Projects/trac/trunk/trac/web/main.py", line 335, in dispatch_request dispatcher.dispatch(req) File "/home/trac/Projects/trac/trunk/trac/web/main.py", line 236, in dispatch req.session.save() File "/home/trac/Projects/trac/trunk/trac/web/session.py", line 210, in save db.commit() pysqlite2.dbapi2.OperationalError: database is locked
Attachments (0)
Change History (45)
comment:1 by , 18 years ago
Cc: | added |
---|---|
comment:2 by , 18 years ago
comment:5 by , 18 years ago
Milestone: | → 0.10 |
---|---|
Owner: | changed from | to
We should probably transform this into a TracError …
comment:6 by , 18 years ago
For two weeks now (still with 0.9.6) we have been experiencing "database locked" errors quite often, maybe 3 or 4 times per week. What can I do to help debug this? Until now I only have the error message which appears in Trac. Somebody does something, then something goes wrong, and maybe the first user after that event gets a "database locked". How can I find out more?
Alex
follow-up: 17 comment:7 by , 18 years ago
Unfortunately, "database locked" errors are somewhat inherent the to default architecture used by Trac (see also #3446 for enhancement proposals).
This ticket is about the suggestion of transforming the "database locked" error into a more "friendly" error message, if that's possible.
The only situations where the "database locked" errors are indeed problematic are the ones in which the database remains locked for all subsequent requests. Fortunately, we haven't seen that for a while now.
comment:8 by , 18 years ago
Just as a note: we are running Trac in FastCGI mode. When the lock happens, it doesn't go away. We have to log in to the server and kill the trac.fcgi processes with killall. At least one of them can't be killed gracefully; SIGKILL has to be used. Then we have to restart Apache. We are running Trac 0.9.4 and SQLite etc. from March or April.
Alex
comment:9 by , 18 years ago
Similarly, I'm using WSGI and if the lock happens, I also have to SIGKILL the server process to get it to exit.
SQLite, Subversion, and the Python bindings for both are the versions packaged in Ubuntu Breezy.
follow-up: 12 comment:10 by , 18 years ago
At this point we're not in need of more feedback from 0.9.x versions, since there have been improvements in the trunk since then. So, if you're interested in providing more testing, please upgrade to 0.10b1 or the trunk.
cboos: do you have a plan for addressing this in 0.10, or can we push it to 0.10.1 since there doesn't seem to be sufficient information to debug it?
comment:11 by , 18 years ago
I should have mentioned that the version of Trac I am using is a fairly recent trunk, r3585 (0.9.x had no WSGI support, did it?).
comment:12 by , 18 years ago
Milestone: | 0.10 → 0.10.1 |
---|---|
Replying to mgood:
cboos: do you have a plan for addressing this in 0.10, or can we push it to 0.10.1 since there doesn't seem to be sufficient information to debug it?
For 0.10, I only planned to implement the suggestion that we should convert the "database is locked" error to a more "friendly" TracError, as the lock error seems to be inevitable at this point, but this shouldn't block 0.10 and can well be postponed to 0.10.1.
However, the fact that there are apparently still serious persistent locks around would probably justify reopening #1661, which was closed due to lack of feedback and because I believed people had stopped being affected by this, which is apparently wrong…
follow-up: 15 comment:13 by , 18 years ago
Unfortunately I have been hitting "database is locked" since switching to the current trunk. It seems something changed in the last ~20 commits that causes the database to be locked rather often.
follow-up: 16 comment:15 by , 18 years ago
Replying to kre:
Unfortunately I have been hitting "database is locked" since switching to the current trunk. It seems something changed in the last ~20 commits that causes the database to be locked rather often.
Does this mean it happens more often with 0.10.0 than with previous versions? If that's the case, we can't upgrade to 0.10.0, because this would block us even more. (I understand that reports about 0.9.x don't help you in any way anymore.)
Alex
comment:16 by , 18 years ago
Replying to anonymous:
Does this mean it happens more often with 0.10.0 than with previous versions?
I don't think so. In addition, with r3830, the locks seem to be much harder to reproduce. At least, I couldn't get one anymore using tracd. I think I'll backport this to 0.10.1, but more user feedback is needed first, so I'll be glad to hear from you whether 0.10 + r3830 helps.
If that's the case, we can't upgrade to 0.10.0, because this would block us even more. (I understand that reports about 0.9.x don't help you in any way anymore.)
No, 0.9.x in general will not get any more improvements, only security fixes. Your best move would be to take 0.10 for a test drive as suggested above: this will help us make a more robust 0.10.1 that you'll also benefit from.
follow-up: 18 comment:17 by , 18 years ago
Replying to cboos:
Unfortunately, "database locked" errors are somewhat inherent the to default architecture used by Trac (see also #3446 for enhancement proposals).
This ticket is about the suggestion of transforming the "database locked" error into a more "friendly" error message, if that's possible.
The only situations where the "database locked" errors are indeed problematic are the ones in which the database remains locked for all subsequent requests. Fortunately, we haven't seen that for a while now.
Unfortunately, we had it today, with v0.10.
comment:18 by , 18 years ago
Replying to ThurnerRupert:
Replying to cboos:
Unfortunately, "database locked" errors are somewhat inherent the to default architecture used by Trac (see also #3446 for enhancement proposals).
This ticket is about the suggestion of transforming the "database locked" error into a more "friendly" error message, if that's possible.
The only situations where the "database locked" errors are indeed problematic are the ones in which the database remains locked for all subsequent requests. Fortunately, we haven't seen that for a while now.
Unfortunately, we had it today, with v0.10.
Read #1661 … we did not wait just a couple of minutes, but 3-4 times as long as with the normal (frequent) locking errors.
comment:19 by , 18 years ago
Resolution: | → duplicate |
---|---|
Status: | new → closed |
Seems to be a duplicate of #3446; there is a solution. We'll try it and report back.
comment:20 by , 18 years ago
Keywords: | session pysqlite database lock added |
---|---|
Priority: | high → highest |
Resolution: | duplicate |
Status: | closed → reopened |
No, it's not the same. The error reported in this ticket is specific to the req.session.save() done after the response was sent. The problem here is that we attempt to do a req.send_error(), which will necessarily fail (as a response was already sent).
Also, it doesn't seem to be a good idea to persist the session changes if the request actually failed.
It would be possible to catch the session save failure and present it to the user if that is done before sending the response. However, this might increase the visible rate of database locks, which I think would nevertheless be OK given the recent progress on this front.
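A rough sketch of the reordering described above (illustrative only, with made-up function names; this is not the actual change that was committed): save the session while the response can still be replaced, so a lock failure can be reported to the user instead of failing after the response has gone out.

```python
def dispatch(req, handle_request, send_response, send_error):
    """Hypothetical dispatch flow: persist the session before anything is sent."""
    response = handle_request(req)
    try:
        req.session.save()  # may raise OperationalError: database is locked
    except Exception as save_error:
        # Nothing has been sent yet, so an error page can still be delivered.
        return send_error(req, save_error)
    return send_response(req, response)
```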
follow-up: 22 comment:21 by , 18 years ago
Above changes implemented in r4048.
I should note that I still have a hard time triggering database locks, even with that change, probably thanks to r3830 (see #3446). However, I was able to see the problem with the TracTimeline just after a commit (i.e. #2902).
Will backport this later.
comment:22 by , 18 years ago
comment:23 by , 18 years ago
comment:24 by , 18 years ago
follow-up: 27 comment:25 by , 18 years ago
Resolution: | → fixed |
---|---|
Status: | reopened → closed |
We upgraded recently and we do not experience lock errors any more. Before, there was one lock error every 5-10 minutes, caused by the company search engine spidering the Trac instance.
comment:26 by , 18 years ago
Resolution: | fixed |
---|---|
Status: | closed → reopened |
Thanks very much for the feedback, but I've left the issue open as a reminder for me to backport it to 0.10-stable, as currently this is only fixed in trunk.
comment:27 by , 18 years ago
Resolution: | → fixed |
---|---|
Status: | reopened → closed |
r4048 was adapted for 0.10-stable in r4171.
Replying to ThurnerRupert <rupert.thurner@gmail.com>:
We upgraded recently and we do not experience lock errors any more. Before, there was one lock error every 5-10 minutes, caused by the company search engine spidering the Trac instance.
comment:28 by , 16 years ago
We just experienced the problem with version 0.11.1 - it went away after a few minutes.
follow-up: 30 comment:29 by , 16 years ago
Resolution: | fixed |
---|---|
Status: | closed → reopened |
We have started experiencing this in 0.10.4 as of a few days ago. What gives?
comment:30 by , 16 years ago
Resolution: | → fixed |
---|---|
Status: | reopened → closed |
Replying to anonymous:
We have started experiencing this in 0.10.4 as of a few days ago. What gives?
Options are:
- move to 0.11.2,
- move to fastcgi, instead of mod_python,
- move to PostgreSQL,
and read MostFrequentDuplicates
comment:31 by , 15 years ago
We just experienced the problem with version 0.11.5. Can anyone suggest a way to release the lock on the database? If the lock is not released, will Trac work well without any problems?
follow-up: 33 comment:32 by , 15 years ago
The "database is locked" issue is nearly always transient. If so, the problem is believed to be fixed now (#3446), so you could upgrade to 0.11.6dev (see TracDownload#Tracstable).
But are you saying that you have a persistent lock, i.e. that the database stays locked, even after a restart of the server?
comment:33 by , 15 years ago
Replying to cboos:
The "database is locked" issue is nearly always transient. If so, the problem is believed to be fixed now (#3446), so you could upgrade to 0.11.6dev (see TracDownload#Tracstable).
But are you saying that you have a persistent lock, i.e. that the database stays locked, even after a restart of the server?
Thank you for replying. It was a transient issue and now it works well again, but I will upgrade Trac when possible.
follow-up: 38 comment:34 by , 15 years ago
I got this error running Trac 0.12multirepos-r8178 on a virtual machine (VirtualBox 3.1.4) where the trac db was mounted (cifs) from a machine running Vista.
Solved the problem by copying the db into the virtual machine's filesystem.
comment:36 by , 14 years ago
4 years and counting. This bug is so annoying.
Why not just work around it by quietly retrying the request?
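For what that workaround might look like, here is a minimal retry-on-lock sketch (purely illustrative, not something Trac does; the function name, attempt count and delays are invented):

```python
import sqlite3
import time

def commit_with_retry(db, attempts=5, delay=0.2):
    """Retry a commit a few times when SQLite reports a transient lock."""
    for attempt in range(attempts):
        try:
            db.commit()
            return
        except sqlite3.OperationalError as e:
            if 'database is locked' not in str(e) or attempt == attempts - 1:
                raise
            time.sleep(delay * (attempt + 1))  # simple linear backoff before retrying
```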
follow-up: 39 comment:37 by , 14 years ago
Well, this exception has become very … exceptional, these days. If you're seeing that frequently, then I'm pretty confident you're not running a recent version of Trac, PySqlite or SQLite.
comment:38 by , 14 years ago
Replying to patrick232@…:
I got this error running Trac 0.12multirepos-r8178 on a virtual machine (VirtualBox 3.1.4) where the trac db was mounted (cifs) from a machine running Vista.
Solved the problem by copying the db into the virtual machine's filesystem.
It appears that using the nobrl option in mount.cifs solves the problem.
comment:39 by , 14 years ago
Replying to cboos:
Well, this exception has become very … exceptional, these days. If you're seeing that frequently, then I'm pretty confident you're not running a recent version of Trac, PySqlite or SQLite.
Hm, not sure. I am using quite recent versions (over apache2 and mod_python):
```
[ ~ ] alexp@forge:0<!534,j0>$ dpkg -l | grep trac
ii  trac  0.11.7-1  Enhanced wiki and issue tracking system for
[ ~ ] alexp@forge:0<!535,j0>$ cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=10.04
DISTRIB_CODENAME=lucid
DISTRIB_DESCRIPTION="Ubuntu 10.04.1 LTS"
```
and
```
Python 2.6.5 (r265:79063, Apr 16 2010, 13:09:56)
[GCC 4.4.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import trac.db.sqlite_backend as test
>>> test._ver
(3, 6, 22)
>>> test.have_pysqlite
2
>>> test.sqlite.version
'2.5.5'
>>>
```
Accessing the login page consistently gives this (even across apache2 restarts):
```
Traceback (most recent call last):
  File "/usr/lib/python2.6/dist-packages/trac/web/api.py", line 376, in send_error
    'text/html')
  File "/usr/lib/python2.6/dist-packages/trac/web/chrome.py", line 733, in render_template
    message = req.session.pop('chrome.%s.%d' % (type_, i))
  File "/usr/lib/python2.6/dist-packages/trac/web/api.py", line 195, in __getattr__
    value = self.callbacks[name](self)
  File "/usr/lib/python2.6/dist-packages/trac/web/main.py", line 265, in _get_session
    return Session(self.env, req)
  File "/usr/lib/python2.6/dist-packages/trac/web/session.py", line 161, in __init__
    self.promote_session(sid)
  File "/usr/lib/python2.6/dist-packages/trac/web/session.py", line 248, in promote_session
    db.commit()
OperationalError: database is locked
```
Same in the logs. The only thing that comes to mind is that the Trac environments live on an NFS share…
Thoughts? -Alexander
comment:40 by , 14 years ago
It still occurs every day, even after upgrading to the latest version (Trac 0.13dev-r10406) :(
```
2011-01-06 17:35:09,572 Trac[main] ERROR: Internal Server Error:
Traceback (most recent call last):
  File "build\bdist.win32\egg\trac\web\main.py", line 447, in _dispatch_request
    dispatcher.dispatch(req)
  File "build\bdist.win32\egg\trac\web\main.py", line 201, in dispatch
    req.session.save()
  File "build\bdist.win32\egg\trac\web\session.py", line 140, in save
    """, (mintime,))
  File "build\bdist.win32\egg\trac\db\api.py", line 140, in __exit__
    self.db.commit()
OperationalError: database is locked
```
Why not keep the session in memory? The main Trac use case is a small team, I think.
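Purely as an illustration of that suggestion (not Trac code, all names invented): a per-process, in-memory session store would avoid the database entirely, but sessions would no longer survive restarts or be shared between the multiple worker processes that mod_python/FastCGI setups use, which is presumably why Trac persists them.

```python
import threading

class InMemorySessionStore:
    """Toy session store kept in process memory instead of the database."""

    def __init__(self):
        self._lock = threading.Lock()
        self._sessions = {}  # sid -> dict of session variables

    def load(self, sid):
        with self._lock:
            return dict(self._sessions.get(sid, {}))

    def save(self, sid, data):
        with self._lock:
            self._sessions[sid] = dict(data)
```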
follow-ups: 42 45 comment:41 by , 14 years ago
Milestone: | 0.10.1 → 0.13 |
---|---|
Resolution: | fixed |
Status: | closed → reopened |
It still occurs every day, even after upgrading to the latest version (Trac 0.13dev-r10406) :(
Is it mission impossible?
comment:42 by , 14 years ago
Replying to anonymous:
It still occurs every day, even after upgrading to the latest version (Trac 0.13dev-r10406) :(
Is it mission impossible?
It all depends on you… and whether you're responsive to our requests for feedback or not. Let's see.
Please give us at least the following information:
- version information for Python, the PySqlite bindings and SQLite itself
- which platform, which web front-end (+ version information when relevant)
- on what kind of filesystem is the <tracenv>/db/trac.db file located? Same question for /tmp (if you're on Unix)
- are you using any Trac plugins? Does the problem also happen when you try without any plugins enabled?
And also:
- how many users? what is the estimate of the number of concurrent requests when a lock happens? (you can add timing information in the Trac log, if you can't gather this information from the web server logs, see TracLogging)
- post a typical backtrace
- does the problem happen for any kind of request, or just for some actions?
- do you have a reproduction scenario?
comment:44 by , 14 years ago
We started seeing this problem recently, once we moved Trac from a NetApp to a Hitachi SAN. That was literally the only difference.
Hitachi deals with file locks in a very different way than NetApp, which the Trac db doesn't seem to like.
If you can properly lock the file on writes, or leave it on local disk, you should not run into these problems.
tsete