#3645 closed defect (fixed)
MySQL connections don't reconnect after idle timeout
Reported by: | | Owned by: | Jonas Borgström
---|---|---|---|
Priority: | high | Milestone: | 0.11.2 |
Component: | general | Version: | 0.10 |
Severity: | major | Keywords: | mysql timeout |
Cc: | | Branch: |
Release Notes: | | |
API Changes: | | |
Internal Changes: | | |
Description
I tried to set up a new Trac using 0.10b1 and the MySQL backend. Installation went well and the install came up and worked exactly as expected. However, the next day, I'm receiving error 2006 (MySQL server has gone away) from the MySQLdb driver.
As far as I can tell, this is due to the idle timeout MySQL uses for client connections (8 hours by default, see http://dev.mysql.com/doc/refman/5.0/en/gone-away.html). Restarting the web server resurrects the connections. I can even refresh the browser view a few times and it will come back to life. I imagine this is because the handler now sees an invalid connection and creates a new one.
Here is the Python traceback:
Traceback (most recent call last):
  File "/usr/lib/python2.4/site-packages/trac/web/main.py", line 335, in dispatch_request
    dispatcher.dispatch(req)
  File "/usr/lib/python2.4/site-packages/trac/web/main.py", line 192, in dispatch
    req.perm = PermissionCache(self.env, req.authname)
  File "/usr/lib/python2.4/site-packages/trac/perm.py", line 267, in __init__
    self.perms = PermissionSystem(env).get_user_permissions(username)
  File "/usr/lib/python2.4/site-packages/trac/perm.py", line 231, in get_user_permissions
    for perm in self.store.get_user_permissions(username):
  File "/usr/lib/python2.4/site-packages/trac/perm.py", line 111, in get_user_permissions
    cursor.execute("SELECT username,action FROM permission")
  File "/usr/lib/python2.4/site-packages/trac/db/util.py", line 48, in execute
    return self.cursor.execute(sql)
  File "/usr/lib/python2.4/site-packages/trac/db/util.py", line 48, in execute
    return self.cursor.execute(sql)
  File "/usr/lib/python2.4/site-packages/MySQLdb/cursors.py", line 163, in execute
    self.errorhandler(self, exc, value)
  File "/usr/lib/python2.4/site-packages/MySQLdb/connections.py", line 35, in defaulterrorhandler
    raise errorclass, errorvalue
OperationalError: (2006, 'MySQL server has gone away')
The suggested fix would be to trap error 2006 and immediately attempt to reconnect to the database. If the error is returned again, go ahead and fail.
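That suggestion could look roughly like this in Python. This is an illustrative sketch only, not Trac's code: `execute_with_reconnect` and the local `OperationalError` stand-in are invented names, and `connect` stands for any zero-argument factory returning a DB-API-style connection (real MySQLdb raises `OperationalError` with args `(2006, 'MySQL server has gone away')`).

```python
class OperationalError(Exception):
    """Stand-in for MySQLdb.OperationalError; args = (code, message)."""

def execute_with_reconnect(connect, sql):
    """Run `sql` on a connection from `connect()`; on error 2006
    ("MySQL server has gone away") reconnect once and retry.
    A second failure propagates, as the ticket suggests."""
    cnx = connect()
    try:
        return cnx.execute(sql)
    except OperationalError as e:
        if e.args[0] != 2006:
            raise                      # some other error: don't mask it
        cnx = connect()                # one reconnect attempt
        return cnx.execute(sql)        # a second 2006 propagates
```

A real implementation would reuse MySQLdb's own OperationalError and reopen the connection through Trac's pool rather than a bare factory.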
Attachments (1)
Change History (52)
comment:1 by , 18 years ago
follow-up: 12 comment:2 by , 18 years ago
The wiki formatting messed me up above.
In /etc/my.cnf
[mysqld]
wait_timeout=259200
interactive_timeout=259200
comment:3 by , 18 years ago
Here's an (untested) patch that uses DBUtils, if it's installed, to create a SolidConnection that will automatically reconnect if the connection is lost. Based on the DBUtils code, it looks like this is not as simple a problem to solve as it may seem. The DBUtils "SolidDB.py" file seems to be standalone, so it would be nice to include it with Trac, but based on the Open Software License I'm not sure if that would be possible.
trac/db/mysql_backend.py

 from trac.core import *
 from trac.db.api import IDatabaseConnector
-from trac.db.util import ConnectionWrapper
+from trac.db.util import ConnectionWrapper, solid_connect

 _like_escape_re = re.compile(r'([/_%])')
…
         # some point, this hack should be removed, and a strict requirement
         # on 1.2.1 made. -dilinger
         if (self._mysqldb_gt_or_eq((1, 2, 1))):
-            cnx = MySQLdb.connect(db=path, user=user, passwd=password,
+            cnx = connect(MySQLdb, db=path, user=user, passwd=password,
                           host=host, port=port, charset='utf8')
         else:
-            cnx = MySQLdb.connect(db=path, user=user, passwd=password,
+            cnx = connect(MySQLdb, db=path, user=user, passwd=password,
                           host=host, port=port, use_unicode=True)
         self._set_character_set(cnx, 'utf8')
         ConnectionWrapper.__init__(self, cnx)
trac/db/util.py
     def cursor(self):
         return IterableCursor(self.cnx.cursor())
+
+try:
+    from DBUtils.SolidDB import connect as connect
+except ImportError:
+    def connect(dbapi, maxusage=0, setsession=None, *args, **kwargs):
+        return dbapi.connect(*args, **kwargs)
comment:4 by , 18 years ago
Milestone: | 0.10 → 0.10.1 |
---|
Probably this can wait for 0.10.1, as MySQL support is in any case deemed "experimental" for 0.10…
comment:5 by , 18 years ago
So long as it doesn't keep getting bumped, lest "experimental" become a synonym for "unsupported"… ;-)
comment:7 by , 18 years ago
Why does Trac use a persistent connection in the first place? Why not use a connection per query or transaction?
follow-ups: 35 50 comment:8 by , 18 years ago
Using a pool of database connections is a common way to limit the amount of resources used (never use more than the max specified number of concurrent connections) and at the same time to be more reactive (most of the time, there's already an open connection waiting to be used).
That being said, you can experiment with having a connection per query, and if this proves more convenient in the case of MySQL, we could make that configurable.
Index: trac/db/mysql_backend.py
===================================================================
--- trac/db/mysql_backend.py	(revision 3968)
+++ trac/db/mysql_backend.py	(working copy)
@@ -99,7 +99,7 @@
 class MySQLConnection(ConnectionWrapper):
     """Connection wrapper for MySQL."""

-    poolable = True
+    poolable = False

     def _mysqldb_gt_or_eq(self, v):
         """This function checks whether the version of python-mysqldb
comment:9 by , 18 years ago
Ah, and I forgot to say that the problem reported here is probably solved in trunk, but not yet ported to 0.10-stable (I'm waiting for feedback).
You could try to apply the corresponding patch on 0.10:
comment:10 by , 18 years ago
Summary: | Trac 0.10b1 MySQL connections don't reconnect after idle timeout → MySQL connections don't reconnect after idle timeout |
---|
tweaking title…
comment:12 by , 18 years ago
Replying to jnanney@gmail.com:
The wiki formatting messed me up above.
In /etc/my.cnf
[mysqld]
wait_timeout=259200
interactive_timeout=259200
After adding this to my my.cnf the error occurs more often for me.
comment:13 by , 18 years ago
Keywords: | needinfo added |
---|---|
Owner: | changed from | to
I'd be very interested to know if this problem still happens with 0.10.2
follow-up: 15 comment:14 by , 18 years ago
I applied 0.10.2 two days ago and I still have the error "Mysql has gone away" every morning. I also experimented with the timeout parameter in trac.ini which seems to have no effect.
comment:15 by , 18 years ago
Replying to bennet:
I applied 0.10.2 two days ago and I still have the error "Mysql has gone away" every morning. I also experimented with the timeout parameter in trac.ini which seems to have no effect.
A workaround for me is to restart the httpd service every morning through a cron job at 6 ;-)
comment:16 by , 18 years ago
Can you please post a backtrace for this issue happening with either trunk, or the 0.10.2 or 0.10.3dev code?
comment:17 by , 18 years ago
MySQL has something built into its API to check for inactive connections: it's called ping, and MySQLdb provides a wrapper for it.
The following patch calls the ping method before handing out a DB connection from the pool, though this will only make a difference for people who have a short wait_timeout or a fairly inactive Trac installation.
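The idea behind the patch, as a hedged sketch (the helper name is invented, not part of Trac or MySQLdb; MySQLdb connection objects really do expose a ping() wrapper over mysql_ping()):

```python
def is_connection_alive(cnx):
    """True if the connection still answers a ping().
    MySQLdb's ping() raises (e.g. OperationalError 2006) when the
    server has already closed the connection."""
    try:
        cnx.ping()
        return True
    except Exception:
        return False
```

A pool can call such a check before allocating a connection and open a fresh one when the check fails.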
by , 18 years ago
Attachment: | mysql_ping.patch added |
---|
Patch to ping a MySQL connection from a pool before using it.
comment:18 by , 18 years ago
Version: | devel → 0.10 |
---|
Thanks for the patch. Maybe you also have an idea why the current code doesn't work… Just being curious: why doesn't the cnx.rollback() at line 58 fail if the connection has stalled?
Also, it would be useful to know which are the MySQL versions and MySQLdb versions supporting this.
comment:19 by , 18 years ago
The ping API call has been available since MySQL 3.22.1, so that's about 8 years. I didn't know off the top of my head when it was implemented in MySQLdb, but according to their SVN browser it's been there since 0.9.0b1.
I'm pretty sure the reason that rollback doesn't catch this is the table type / autocommit. If MyISAM (the default type) is used, there is no transaction support, so rollback is a no-op; the same goes if auto_commit is enabled.
cnx.rollback() wraps the MySQL API call mysql_rollback(), which may do some automagic checking for all of these cases rather than issuing a plain SQL ROLLBACK statement that would trigger the error.
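The table-type/autocommit explanation above can be modeled in a few lines (all names invented; this is a toy, not MySQLdb code): when no transaction is open, rollback() has nothing to send, so the dead socket is never exercised and the stale connection goes undetected.

```python
class OperationalError(Exception):
    """Stand-in for MySQLdb.OperationalError; args = (code, message)."""

class FakeMySQLConnection:
    """Toy connection whose server has already dropped the link."""
    def __init__(self, in_transaction):
        self.in_transaction = in_transaction

    def rollback(self):
        # With autocommit on (or a non-transactional MyISAM table),
        # there is no open transaction: nothing is sent to the server,
        # so a dead connection raises no error here.
        if not self.in_transaction:
            return
        raise OperationalError(2006, 'MySQL server has gone away')
```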
comment:20 by , 18 years ago
Status: | new → assigned |
---|
comment:21 by , 18 years ago
Unfortunately this patch [4363] did not work for me. When I restart the httpd service the error is gone. Running FC5, mysql 5.0.22, httpd 2.2.2, mod_python.
Please find the following:
Traceback (most recent call last):
  File "/usr/lib/python2.4/site-packages/trac/web/main.py", line 387, in dispatch_request
    dispatcher.dispatch(req)
  File "/usr/lib/python2.4/site-packages/trac/web/main.py", line 191, in dispatch
    chosen_handler = self._pre_process_request(req, chosen_handler)
  File "/usr/lib/python2.4/site-packages/trac/web/main.py", line 263, in _pre_process_request
    chosen_handler = f.pre_process_request(req, chosen_handler)
  File "/usr/lib/python2.4/site-packages/trac/versioncontrol/api.py", line 73, in pre_process_request
    self.get_repository(req.authname) # triggers a sync if applicable
  File "/usr/lib/python2.4/site-packages/trac/versioncontrol/api.py", line 101, in get_repository
    repos = self._connector.get_repository(rtype, rdir, authname)
  File "/usr/lib/python2.4/site-packages/trac/versioncontrol/svn_fs.py", line 260, in get_repository
    crepos = CachedRepository(self.env.get_db_cnx(), repos, None, self.log)
  File "/usr/lib/python2.4/site-packages/trac/versioncontrol/cache.py", line 34, in __init__
    self.sync()
  File "/usr/lib/python2.4/site-packages/trac/versioncontrol/cache.py", line 56, in sync
    cursor.execute("SELECT value FROM system WHERE name='repository_dir'")
  File "/usr/lib/python2.4/site-packages/trac/db/util.py", line 51, in execute
    return self.cursor.execute(sql)
  File "/usr/lib/python2.4/site-packages/trac/db/util.py", line 51, in execute
    return self.cursor.execute(sql)
  File "/usr/lib/python2.4/site-packages/MySQLdb/cursors.py", line 137, in execute
    self.errorhandler(self, exc, value)
  File "/usr/lib/python2.4/site-packages/MySQLdb/connections.py", line 33, in defaulterrorhandler
    raise errorclass, errorvalue
ReferenceError: weakly-referenced object no longer exists
follow-up: 23 comment:22 by , 18 years ago
And what is the version of the MySQLdb python bindings? If not the latest, you should upgrade and check if that error is still there in their latest package.
comment:23 by , 18 years ago
Replying to cboos:
And what is the version of the MySQLdb python bindings? If not the latest, you should upgrade and check if that error is still there in their latest package.
MySQL-python 1.2.0-3.2.2, which is the latest version in the Fedora Core repository. I am going to upgrade to 1.2.2 and will let you know whether it worked; sorry if this is the issue.
follow-up: 29 comment:24 by , 18 years ago
I upgraded to MySQL-python 1.2.1_p2 yesterday and Trac was still alive this morning.
Note: MySQL-python 1.2.2b2 did not work at all! When entering unicode chars into a wiki page (e.g. ö) and submitting the page, it truncates the whole page from that char on without giving an error message.
comment:25 by , 18 years ago
Resolution: | → worksforme |
---|---|
Status: | assigned → closed |
Running this patch for more than a week now … never had this timeout issue again.
comment:26 by , 18 years ago
Keywords: | needinfo removed |
---|---|
Resolution: | worksforme |
Status: | closed → reopened |
Great, so the fix provided by Scott MacVicar works.
reopening for changing the resolution
comment:27 by , 18 years ago
Keywords: | timeout added |
---|---|
Resolution: | → fixed |
Status: | reopened → closed |
So the issue was fixed by r4341 (trunk) and r4342 (0.10-stable).
I think we should start documenting the specifics of the MySQLdb Python bindings for MySQL, in particular the version requirements reported by bennet in comment:24 and 25.
comment:28 by , 18 years ago
It's a thread-safety issue with MySQLdb; see http://wolfram.kriesing.de/blog/index.php/2006/multithreading-with-mysqldb-and-weakrefs for more information on the problem.
comment:29 by , 16 years ago
Replying to bennet:
I upgraded to MySQL-python 1.2.1_p2 yesterday and Trac was still alive this morning.
Note: MySQL-python 1.2.2b2 did not work at all! When entering unicode chars into a wiki page (e.g. ö) and submitting the page, it truncates the whole page from that char on without giving an error message.
I know this issue is marked as closed but I am seeing this problem 2 years later. I am using:
Trac: 0.11
Python: 2.5.1 (r251:54863, Sep 21 2007, 22:46:31) [GCC 4.2.1 (SUSE Linux)]
setuptools: 0.6c8
MySQL: server: "5.0.45", client: "5.0.45", thread-safe: 0
MySQLdb: 1.2.2
Genshi: 0.5
mod_python: 3.3.1
Subversion: 1.4.4 (r25188)
jQuery: 1.2.3
Here is the error I receive when updating a wiki page and submitting the changes. The changes are stored, but this page is displayed.
Traceback (most recent call last):
  File "/usr/local/lib64/python2.5/site-packages/Trac-0.11-py2.5.egg/trac/web/api.py", line 339, in send_error
    'text/html')
  File "/usr/local/lib64/python2.5/site-packages/Trac-0.11-py2.5.egg/trac/web/chrome.py", line 708, in render_template
    if not req.session or not int(req.session.get('accesskeys', 0)):
  File "/usr/local/lib64/python2.5/site-packages/Trac-0.11-py2.5.egg/trac/web/api.py", line 168, in __getattr__
    value = self.callbacks[name](self)
  File "/usr/local/lib64/python2.5/site-packages/Trac-0.11-py2.5.egg/trac/web/main.py", line 257, in _get_session
    return Session(self.env, req)
  File "/usr/local/lib64/python2.5/site-packages/Trac-0.11-py2.5.egg/trac/web/session.py", line 142, in __init__
    self.get_session(req.authname, authenticated=True)
  File "/usr/local/lib64/python2.5/site-packages/Trac-0.11-py2.5.egg/trac/web/session.py", line 156, in get_session
    super(Session, self).get_session(sid, authenticated)
  File "/usr/local/lib64/python2.5/site-packages/Trac-0.11-py2.5.egg/trac/web/session.py", line 56, in get_session
    (sid, int(authenticated)))
  File "/usr/local/lib64/python2.5/site-packages/Trac-0.11-py2.5.egg/trac/db/util.py", line 50, in execute
    return self.cursor.execute(sql_escape_percent(sql), args)
  File "/usr/local/lib64/python2.5/site-packages/Trac-0.11-py2.5.egg/trac/db/util.py", line 50, in execute
    return self.cursor.execute(sql_escape_percent(sql), args)
  File "build/bdist.linux-x86_64/egg/MySQLdb/cursors.py", line 166, in execute
    self.errorhandler(self, exc, value)
  File "build/bdist.linux-x86_64/egg/MySQLdb/connections.py", line 35, in defaulterrorhandler
    raise errorclass, errorvalue
OperationalError: (2006, 'MySQL server has gone away')
I am not going to change the settings on this issue at this time. I want to research other issues first.
comment:30 by , 16 years ago
Resolution: | fixed |
---|---|
Status: | closed → reopened |
This issue must be reopened, as I have the same problem with 0.11.1 …
mysql_ping.patch by Scott can be applied and fixes the problem … please patch trunk + 0.11.1 …
comment:32 by , 16 years ago
http://trac.edgewall.org/ticket/3645#comment:8 is a workaround, but another patch should be provided …
comment:33 by , 16 years ago
Milestone: | 0.10.3 → 0.11.2 |
---|
I'm moving this to 0.11.2 since I guess this is quite annoying. But until then, the obvious workaround is to configure the MySQL server not to automatically close open connections.
comment:34 by , 16 years ago
I see this in trunk very often. I can't find info on how to reconfigure MySQL either.
comment:35 by , 16 years ago
Replying to cboos:
Using a pool of database connections is a common way to limit the amount of resources used (never use more than the max specified number of concurrent connections) and at the same time to be more reactive (most of the time, there's already an open connection waiting to be used).
That being said, you can experiment with having a connection per query, and if this proves more convenient in the case of MySQL, we could make that configurable.
Index: trac/db/mysql_backend.py
===================================================================
--- trac/db/mysql_backend.py	(revision 3968)
+++ trac/db/mysql_backend.py	(working copy)
@@ -99,7 +99,7 @@
 class MySQLConnection(ConnectionWrapper):
     """Connection wrapper for MySQL."""

-    poolable = True
+    poolable = False

     def _mysqldb_gt_or_eq(self, v):
         """This function checks whether the version of python-mysqldb
That doesn't do anything; the connection still drops.
comment:36 by , 16 years ago
Can someone describe what the problem is here? MySQL is so common that it seems very surprising that the MySQL backend of Trac is completely broken.
follow-up: 41 comment:38 by , 16 years ago
I've seen this today on the jQuery Trac.
Looking back at this issue (also at #5783), it seems that we miss a call to ping() when the connection is retrieved from the pool. So far we only do a ping() before the rollback(), which happens just before the connection is returned to the pool. But after that, the connection could stay in the pool for a while and time out. At the next use (source:trunk/trac/db/pool.py@7479#L92), it would then raise the "MySQL server has gone away" error.
Quick untested hack to verify this theory (patch on 0.11-stable, r7551):
trac/db/pool.py

             self._pool_key.pop(idx)
             self._pool_time.pop(idx)
             cnx = self._pool.pop(idx)
+            if hasattr(cnx.cnx, 'ping'):
+                cnx.cnx.ping()
         # Third best option: Create a new connection
         elif len(self._active) + len(self._pool) < self._maxsize:
             cnx = connector.get_connection(**kwargs)
Could someone having the issue test this approach?
follow-up: 40 comment:39 by , 16 years ago
Maybe this will help.
I get the following error if I edit the startup wiki page and submit the change:
Traceback (most recent call last):
  File "/usr/local/lib64/python2.5/site-packages/Trac-0.11.1-py2.5.egg/trac/web/api.py", line 339, in send_error
    'text/html')
  File "/usr/local/lib64/python2.5/site-packages/Trac-0.11.1-py2.5.egg/trac/web/chrome.py", line 702, in render_template
    if not req.session or not int(req.session.get('accesskeys', 0)):
  File "/usr/local/lib64/python2.5/site-packages/Trac-0.11.1-py2.5.egg/trac/web/api.py", line 169, in __getattr__
    value = self.callbacks[name](self)
  File "/usr/local/lib64/python2.5/site-packages/Trac-0.11.1-py2.5.egg/trac/web/main.py", line 257, in _get_session
    return Session(self.env, req)
  File "/usr/local/lib64/python2.5/site-packages/Trac-0.11.1-py2.5.egg/trac/web/session.py", line 150, in __init__
    self.get_session(req.authname, authenticated=True)
  File "/usr/local/lib64/python2.5/site-packages/Trac-0.11.1-py2.5.egg/trac/web/session.py", line 164, in get_session
    super(Session, self).get_session(sid, authenticated)
  File "/usr/local/lib64/python2.5/site-packages/Trac-0.11.1-py2.5.egg/trac/web/session.py", line 56, in get_session
    (sid, int(authenticated)))
  File "/usr/local/lib64/python2.5/site-packages/Trac-0.11.1-py2.5.egg/trac/db/util.py", line 50, in execute
    return self.cursor.execute(sql_escape_percent(sql), args)
  File "build/bdist.linux-x86_64/egg/MySQLdb/cursors.py", line 166, in execute
    self.errorhandler(self, exc, value)
  File "build/bdist.linux-x86_64/egg/MySQLdb/connections.py", line 35, in defaulterrorhandler
    raise errorclass, errorvalue
OperationalError: (2006, 'MySQL server has gone away')
The change is stored but I get the above.
My configuration is:
System Information
Trac: 0.11.1
Python: 2.5.1 (r251:54863, Sep 21 2007, 22:46:31) [GCC 4.2.1 (SUSE Linux)]
setuptools: 0.6c8
MySQL: server: "5.0.45", client: "5.0.45", thread-safe: 0
MySQLdb: 1.2.2
Genshi: 0.5.1
mod_python: 3.3.1
Subversion: 1.5.2 (r32768)
jQuery: 1.2.6
Here is "uname -a" of the processor I am using:
Linux mistichp 2.6.22.5-31-default #1 SMP 2007/09/21 22:29:00 UTC x86_64 x86_64 x86_64 GNU/Linux
This is what was appended to the trac log file:
2008-09-22 13:49:25,242 Trac[main] ERROR: (2013, 'Lost connection to MySQL server during query')
Traceback (most recent call last):
  File "/usr/local/lib64/python2.5/site-packages/Trac-0.11.1-py2.5.egg/trac/web/main.py", line 423, in _dispatch_request
    dispatcher.dispatch(req)
  File "/usr/local/lib64/python2.5/site-packages/Trac-0.11.1-py2.5.egg/trac/web/main.py", line 197, in dispatch
    resp = chosen_handler.process_request(req)
  File "/usr/local/lib64/python2.5/site-packages/Trac-0.11.1-py2.5.egg/trac/wiki/web_ui.py", line 162, in process_request
    return self._render_view(req, versioned_page)
  File "/usr/local/lib64/python2.5/site-packages/Trac-0.11.1-py2.5.egg/trac/wiki/web_ui.py", line 487, in _render_view
    for hist in latest_page.get_history():
  File "/usr/local/lib64/python2.5/site-packages/Trac-0.11.1-py2.5.egg/trac/wiki/model.py", line 167, in get_history
    "ORDER BY version DESC", (self.name, self.version))
  File "/usr/local/lib64/python2.5/site-packages/Trac-0.11.1-py2.5.egg/trac/db/util.py", line 50, in execute
    return self.cursor.execute(sql_escape_percent(sql), args)
  File "build/bdist.linux-x86_64/egg/MySQLdb/cursors.py", line 166, in execute
    self.errorhandler(self, exc, value)
  File "build/bdist.linux-x86_64/egg/MySQLdb/connections.py", line 35, in defaulterrorhandler
    raise errorclass, errorvalue
OperationalError: (2013, 'Lost connection to MySQL server during query')
This is what was appended to the mysqld.log file:
080922 13:49:25 - mysqld got signal 11;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help diagnose
the problem, but since we have already crashed, something is definitely wrong
and this may fail.
key_buffer_size=16777216
read_buffer_size=258048
max_used_connections=12
max_connections=100
threads_connected=10
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_connections = 92783 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
thd=0x236cb90
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
Cannot determine thread, fp=0x440c9fe0, backtrace may not be correct.
Stack range sanity check OK, backtrace follows:
(nil)
New value of fp=0x236cb90 failed sanity check, terminating stack trace!
Please read http://dev.mysql.com/doc/mysql/en/using-stack-trace.html and
follow instructions on how to resolve the stack trace. Resolved stack trace
is much more helpful in diagnosing the problem, so please do resolve it
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort...
thd->query at 0x2372d00 = SELECT version,time,author,comment,ipnr FROM wiki WHERE name='WikiStart' AND version<=28 ORDER BY version DESC
thd->thread_id=914
The manual page at http://www.mysql.com/doc/en/Crashing.html contains
information that should help you find out what is causing the crash.
Then a new mysqld.log file is started with:
Number of processes running now: 0
080922 13:49:25  mysqld restarted
080922 13:49:25  InnoDB: Started; log sequence number 0 2056477
080922 13:49:25 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.0.45'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  SUSE MySQL RPM
We have experienced several other occurrences causing MySQL to restart, but they are seldom. Submitting a change to the wiki page does it every time for me. I have looked at permissions in the mysql area and everything looks good.
This is the only problem I have seen with Trac with our configuration. Hope this is helpful. Thank you.
follow-up: 43 comment:40 by , 16 years ago
Replying to dale.miller@…:
MySQL: server: "5.0.45", client: "5.0.45", thread-safe: 0
thread-safe: 0 seems to be a good hint here. Looks like MySQLdb was built in non-thread-safe mode.
What happens when you execute the following query at the mysql prompt?
SELECT version,time,author,comment,ipnr FROM wiki WHERE name='WikiStart' AND version<=28 ORDER BY version DESC
Does that also trigger a crash? If not, then it's definitely the thread-safety issue. Please rebuild your MySQLdb python bindings with thread-safety support enabled.
From the MySQL-python README:
Building and installing
-----------------------
...
edit the [options] section of site.cfg:

threadsafe
    thread-safe client library (libmysqlclient_r) if True (default);
    otherwise use non-thread-safe (libmysqlclient). You should always
    use the thread-safe library if you have the option; otherwise you
    *may* have problems.
Indeed ;-)
Now that I think about it, I see that comment:29 had the same thread-safe: 0 info, and the same goes for the jQuery Trac (#3130).
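One quick way to check how the installed bindings report themselves, without rebuilding anything: DB-API 2.0 (PEP 249) modules expose a module-level threadsafety attribute, and MySQLdb follows that spec. The helper below is illustrative, not part of Trac:

```python
import importlib

def threadsafety_of(module_name):
    """Return the DB-API 2.0 `threadsafety` level of a database module,
    or None if the module is not installed or does not declare one.
    Level 0 means threads may not share the module at all; level 1
    means they may share the module but not connections."""
    try:
        mod = importlib.import_module(module_name)
    except ImportError:
        return None                    # bindings not installed
    return getattr(mod, "threadsafety", None)
```

Running `threadsafety_of("MySQLdb")` on the affected machine would show the level the installed bindings declare.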
comment:41 by , 16 years ago
Owner: | changed from | to
---|---|
Status: | reopened → new |
Replying to cboos:
Quick untested hack to verify this theory (patch on 0.11-stable, r7551):
*snip*
Could someone having the issue test this approach?
I've tested this patch now and it doesn't fix the problem, but it's almost there. cnx.ping() seems to raise an exception if the connection is no longer available, so we'll just need to catch that exception and discard the connection if that happens. I'll have a working patch and will commit it as soon as I've done some further testing.
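The fix described in this comment can be sketched as a minimal toy pool (invented names, not Trac's actual pool.py): ping() is called on checkout, and a connection whose ping() raises is discarded in favor of the next candidate or a fresh connection.

```python
class TinyPool:
    """Toy connection pool that validates connections with ping()."""

    def __init__(self, open_connection):
        self._open = open_connection   # factory for new connections
        self._idle = []                # connections waiting for reuse

    def get(self):
        while self._idle:
            cnx = self._idle.pop()
            try:
                cnx.ping()             # raises if the server closed it
                return cnx             # still alive: reuse it
            except Exception:
                pass                   # stale: discard, try the next
        return self._open()            # none left: open a fresh one

    def put(self, cnx):
        self._idle.append(cnx)
```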
follow-ups: 44 45 comment:43 by , 16 years ago
Replying to cboos:
Replying to dale.miller@…:
MySQL: server: "5.0.45", client: "5.0.45", thread-safe: 0
thread-safe: 0
seems to be a good hint here. Looks like MySQLdb was built in non-thread-safe mode. What happens when you execute the following query at the mysql prompt?
SELECT version,time,author,comment,ipnr FROM wiki WHERE name='WikiStart' AND version<=28 ORDER BY version DESC
Does that also trigger a crash? If not, then it's definitely the thread-safety issue. Please rebuild your MySQLdb python bindings with thread-safety support enabled.
Regarding your last statement, "Does that also trigger a crash? If not, then it's definitely the thread-safety issue":
I ran the statement you suggested, and it did crash.
using mysql I did the following:
mysql> use trac;
Database changed
mysql> select version, time, author,comment,ipnr from wiki where name='WikeStart' and version<=28 order by version desc;
Which gave this error:
ERROR 2013 (HY000): Lost connection to MySQL server during query
I looked at the /var/lib/mysql and it contained:
tail -200 /var/lib/mysql/mysqld.log-20081004

Number of processes running now: 0
081003 15:17:40  mysqld restarted
081003 15:17:41  InnoDB: Started; log sequence number 0 2056477
081003 15:17:41 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.0.45'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  SUSE MySQL RPM
081003 15:20:33 - mysqld got signal 11;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help diagnose
the problem, but since we have already crashed, something is definitely wrong
and this may fail.
key_buffer_size=16777216
read_buffer_size=258048
max_used_connections=5
max_connections=100
threads_connected=5
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_connections = 92783 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
thd=0x2262730
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
Cannot determine thread, fp=0x4418cfe0, backtrace may not be correct.
Stack range sanity check OK, backtrace follows:
(nil)
New value of fp=0x2262730 failed sanity check, terminating stack trace!
Please read http://dev.mysql.com/doc/mysql/en/using-stack-trace.html and
follow instructions on how to resolve the stack trace. Resolved stack trace
is much more helpful in diagnosing the problem, so please do resolve it
Trying to get some variables.
thd->query at 0x226a140 = SELECT version,time,author,comment,ipnr FROM wiki WHERE name='WikiStart' AND version<=33 ORDER BY version DESC
thd->thread_id=5
The manual page at http://www.mysql.com/doc/en/Crashing.html contains
information that should help you find out what is causing the crash.

Number of processes running now: 0
081003 15:20:33  mysqld restarted
081003 15:20:34  InnoDB: Started; log sequence number 0 2056477
081003 15:20:34 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.0.45'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  SUSE MySQL RPM
081003 15:21:07 - mysqld got signal 7;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help diagnose
the problem, but since we have already crashed, something is definitely wrong
and this may fail.
key_buffer_size=16777216
read_buffer_size=258048
max_used_connections=1
max_connections=100
threads_connected=1
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_connections = 92783 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
thd=0x21ccdc0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
Cannot determine thread, fp=0x44088fe0, backtrace may not be correct.
Stack range sanity check OK, backtrace follows:
(nil)
New value of fp=0x21ccdc0 failed sanity check, terminating stack trace!
Please read http://dev.mysql.com/doc/mysql/en/using-stack-trace.html and
follow instructions on how to resolve the stack trace. Resolved stack trace
is much more helpful in diagnosing the problem, so please do resolve it
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort...
thd->query at 0x22063b0 = SELECT version,time,author,comment,ipnr FROM wiki WHERE name='WikiStart' AND version<=33 ORDER BY version DESC
thd->thread_id=1
The manual page at http://www.mysql.com/doc/en/Crashing.html contains
information that should help you find out what is causing the crash.

Number of processes running now: 0
081003 15:21:07  mysqld restarted
081003 15:21:07  InnoDB: Started; log sequence number 0 2056477
081003 15:21:07 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.0.45'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  SUSE MySQL RPM
081007  8:26:07 - mysqld got signal 11;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help diagnose
the problem, but since we have already crashed, something is definitely wrong
and this may fail.
key_buffer_size=16777216
read_buffer_size=258048
max_used_connections=12
max_connections=100
threads_connected=10
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_connections = 92783 K bytes of memory
Hope that's ok; if not, decrease some variables in the equation.
thd=0x2324bd0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
Cannot determine thread, fp=0x442d1fe0, backtrace may not be correct.
Stack range sanity check OK, backtrace follows:
(nil)
New value of fp=0x2324bd0 failed sanity check, terminating stack trace!
Please read http://dev.mysql.com/doc/mysql/en/using-stack-trace.html and
follow instructions on how to resolve the stack trace. Resolved stack trace
is much more helpful in diagnosing the problem, so please do resolve it
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort...
thd->query at 0x232ad40 = select version, time, author,comment,ipnr from wiki where name='WikeStart' and version<=28 order by version desc
thd->thread_id=195
The manual page at http://www.mysql.com/doc/en/Crashing.html contains
information that should help you find out what is causing the crash.
It definitely looks like a thread issue. I installed MySQL-python-1.2.2 in May when converting from Mantis to Trac and my site.cfg had:
[options]
# embedded: link against the embedded server library
# threadsafe: use the threadsafe client
# static: link against a static library (probably required for embedded)
embedded = False
threadsafe = True
static = False
So I am not sure why it reports thread-safe: 0.
comment:44 by , 16 years ago
Replying to anonymous:
Replying to cboos:
Replying to dale.miller@…:
MySQL: server: "5.0.45", client: "5.0.45", thread-safe: 0
thread-safe: 0
seems to be a good hint here. Looks like MySQLdb was built in non-thread-safe mode. What happens when you execute the following query at the mysql prompt?
I had a typo in what I tested
SELECT version, time, author, comment, ipnr FROM wiki WHERE name='WikeStart' AND version<=28 ORDER BY version DESC;
Note the "WikeStart"
When I correct the typo the command errors with
ERROR 1105 (HY000): Unknown error
However, if I use "ORDER BY version ASC", it works.
When I tail the mysqld.log file following the "ORDER BY version DESC" error it contains:
Number of processes running now: 0
081007 11:13:21  mysqld restarted
081007 11:13:21  InnoDB: Started; log sequence number 0 2056477
081007 11:13:21 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.0.45'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  SUSE MySQL RPM
081007 11:28:07 - mysqld got signal 7;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help diagnose
the problem, but since we have already crashed, something is definitely wrong
and this may fail.

key_buffer_size=16777216
read_buffer_size=258048
max_used_connections=10
max_connections=100
threads_connected=10
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_connections = 92783 K
bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

thd=0x22a9fb0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
Cannot determine thread, fp=0x442d1fe0, backtrace may not be correct.
Stack range sanity check OK, backtrace follows:
(nil)
New value of fp=0x22a9fb0 failed sanity check, terminating stack trace!
Please read http://dev.mysql.com/doc/mysql/en/using-stack-trace.html
and follow instructions on how to resolve the stack trace.
Resolved stack trace is much more helpful in diagnosing the problem,
so please do resolve it
Trying to get some variables.
Some pointers may be invalid and cause the dump to abort...
thd->query at 0x22b0120 = select version,time,author,comment,ipnr from wiki where name='WikiStart' and version <=28 order by version desc
thd->thread_id=16
The manual page at http://www.mysql.com/doc/en/Crashing.html contains
information that should help you find out what is causing the crash.

Number of processes running now: 0
081007 11:28:07  mysqld restarted
081007 11:28:07  InnoDB: Started; log sequence number 0 2056477
081007 11:28:07 [Note] /usr/sbin/mysqld: ready for connections.
Version: '5.0.45'  socket: '/var/lib/mysql/mysql.sock'  port: 3306  SUSE MySQL RPM
I am not convinced that it is actually running without thread support; "About Trac" may simply be reporting thread-safe: 0 incorrectly.
MySQL: server: "5.0.45", client: "5.0.45", thread-safe: 0
There are some reports of the DESC problem with MySQL 5.0.45 which this test may be running into.
What does "About Trac" show on your system?
comment:45 by , 16 years ago
Replying to dale:
using mysql I did the following:
mysql> use trac;
Database changed
mysql> select version, time, author,comment,ipnr from wiki where name='WikeStart' and version<=28 order by version desc;
<crash>
Sorry, it's probably even worse than just a broken MySQLdb; it looks like you've hit a bug in your version of MySQL (maybe this DESC problem you mention).
I couldn't reproduce the issue, but at least I verified that when MySQLdb is correctly set up (i.e. threadsafe = True in the [options] section of site.cfg), the version info does indeed show that thread-safe support is on:
MySQL: server: "5.0.67-community-nt", client: "5.0.67", thread-safe: 1 MySQLdb: 1.2.2
To summarize:
- the fact that this SELECT query also crashes when executed at the mysql command line shows that there is a quite serious issue in your MySQL server
- your MySQLdb is definitely not thread-safe, rebuild it yourself to be sure
comment:46 by , 16 years ago
PS: just to be really clear, setting the flag in site.cfg is not enough; the bindings need to be rebuilt (e.g. python setup.py install).
follow-up: 48 comment:47 by , 16 years ago
If after a rebuild and reinstallation of your MySQLdb bindings you still see thread-safe: 0, then chances are that you end up using another MySQL library at run-time.
Be sure to read: http://code.google.com/p/modwsgi/wiki/ApplicationIssues#MySQL_Shared_Library_Conflicts
comment:48 by , 16 years ago
Replying to cboos:
If after a rebuild and reinstallation of your MySQLdb bindings you still see thread-safe: 0, then chances are that you end up using another MySQL library at run-time. Be sure to read: http://code.google.com/p/modwsgi/wiki/ApplicationIssues#MySQL_Shared_Library_Conflicts
I restarted apache2 with php not in the startup and now have thread-safe on:
MySQL: server: "5.0.67-community", client: "5.0.67", thread-safe: 1
However, I still could not look at the history of a wiki page, and in mysql itself, I tried "ORDER BY version DESC" and it returned an error:
mysql> SELECT version, time, author, comment, ipnr FROM wiki WHERE name='WikiStart' AND version<=5 ORDER BY version DESC;
ERROR 2013 (HY000): Lost connection to MySQL server during query
I finally found bug 36639 on the MySQL web site, which says the problem I am seeing is unique to the 64-bit Pentium 4 under openSUSE 10.3 with gcc-4.2.1; it is not reproducible on any other machine (32/64-bit, Intel/AMD, SUSE or other distributions). I installed MySQL 5.0.77 and the problem is gone. I also converted my Trac MySQL tables from MyISAM to InnoDB and have seen no problems. I have Trac 0.11.3 installed at this time.
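For anyone repeating the storage-engine conversion mentioned above: it is done per table. A sketch using the wiki table from the crashing query as an example (run once per Trac table, and back up the database first):

```sql
-- Convert one table from MyISAM to InnoDB
ALTER TABLE wiki ENGINE=InnoDB;

-- List any tables in the current database still not on InnoDB
SHOW TABLE STATUS WHERE Engine <> 'InnoDB';
```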
comment:50 by , 15 years ago
Thanks for the workaround. Nothing else has worked for me:
Trac 0.11.5
MySQL server: 5.1.37
MySQL client: I don't know
Thread safe: I don't know
(how can I find this out?)
comment:51 by , 9 years ago
This error still occurs with:
Package   Version
Trac      1.0
MySQL     server: "5.5.41-MariaDB", client: "5.5.41-MariaDB", thread-safe: 1
MySQLdb   1.2.3
Python    2.7.5 (default, Jun 24 2015, 00:41:19) [GCC 4.8.3 20140911 (Red Hat 4.8.3-9)]
Every morning we are greeted with "MySQL server has gone away". I'm hesitant to just increase the timeout on persistent connections. Shouldn't the connection instead simply reconnect if the current persistent connection is stale?
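A connection pool can do exactly that: before handing out a cached connection, ping it and transparently replace it if the server has dropped it. The sketch below uses a stub connection in place of a real MySQLdb one so it is self-contained; ReconnectingPool, FakeConnection and StaleConnectionError are illustrative names, not Trac's actual API.

```python
# Sketch of "ping before use": hand out a pooled connection only after
# verifying it is still alive, reconnecting if it is not.  A stub
# connection stands in for a real MySQLdb connection; all names here
# are illustrative, not Trac's actual pooling code.

class StaleConnectionError(Exception):
    """Stands in for MySQLdb.OperationalError (2006, 'server has gone away')."""

class FakeConnection:
    """Minimal stand-in for a DB-API connection with a ping() method."""
    def __init__(self):
        self.stale = False          # flip to True to simulate an idle timeout
    def ping(self):
        if self.stale:
            raise StaleConnectionError("server has gone away")

class ReconnectingPool:
    """Keep one cached connection; replace it when the ping fails."""
    def __init__(self, factory):
        self._factory = factory
        self._conn = None
    def get(self):
        if self._conn is not None:
            try:
                self._conn.ping()       # cheap liveness check before reuse
                return self._conn
            except StaleConnectionError:
                self._conn = None       # drop the dead connection
        self._conn = self._factory()    # open a fresh one
        return self._conn

pool = ReconnectingPool(FakeConnection)
conn = pool.get()
conn.stale = True                       # simulate the 8-hour idle timeout
fresh = pool.get()                      # pool notices and reconnects
assert fresh is not conn
```

With the real driver, the liveness check would be the MySQLdb connection's ping() and the exception caught would be MySQLdb.OperationalError with error code 2006; the cost is one extra round-trip per checkout, which is usually negligible next to the query itself.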
While I don't have a fix, this is a workaround I use and it works for me.
See http://dev.mysql.com/doc/refman/4.1/en/server-system-variables.html and http://dev.mysql.com/doc/refman/4.1/en/using-system-variables.html for explanations of the options below.
in /etc/my.cnf you can add this…
# 3 day timeout in seconds
[mysqld]
wait_timeout=259200
interactive_timeout=259200
The default value for wait_timeout is 28800 seconds, which is 8 hours. wait_timeout applies to non-interactive client connections; interactive_timeout applies to interactive sessions such as the mysql command-line client.
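If editing /etc/my.cnf and restarting the server is inconvenient, the same variables can be inspected and (with sufficient privileges) changed at runtime from the mysql prompt. Note that SET GLOBAL only affects connections opened afterwards, and the change is lost on server restart:

```sql
-- Show the current idle-timeout settings
SHOW VARIABLES LIKE '%_timeout';

-- Raise them for the running server (new connections only)
SET GLOBAL wait_timeout = 259200;
SET GLOBAL interactive_timeout = 259200;
```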
I also found this.
http://sourceforge.net/tracker/index.php?func=detail&aid=1483074&group_id=22307&atid=374934
It is a patch against python-mysqldb that allows reconnect=1 to be passed to the connect method.
This may be an option to add into mysql_backend.py once the patch above is added to python-mysqldb.