Stopped sync in 0.11.2.1
Reported by: Christian Boos
Owned by: Christian Boos
Severity: major
Keywords: cache, postgresql, mysql, pysqlite
Cc: eric.petit@…, techtonik@…, dale.miller@…
Branch: (none)
As reported by Anatoly in googlegroups:trac-dev:401827e96364f901, there are situations where the versioncontrol cache tables in the database are left in an inconsistent state, from which it is impossible to resume synchronization.
… It turned out to be a locked cache after a timeout. At first I suspected a race condition, but that was wrong. The cache can easily be jammed by a long resync from the repository on a busy server: it took about 40 seconds to sync 200 revisions before the timeout occurred. Take a look at sync() in cache.py: on line 159 we check whether a revision is currently being entered into the DB by looking at whether the next revision number is already present in the revision table. If it is, another thread has made the INSERT on line 184, and the cache is now locked. To unlock it, that other thread must update the youngest revision on line 221.
So the problem occurs when a timeout strikes during execution of lines 202:221 - there is an update to the node_change table that can take long enough to hit a timeout (especially when the server is under heavy load). The cset.get_changes() and self.repos.next_rev(next_youngest) calls can also contribute to the issue. I haven't found any way to recover other than manually patching the DB.
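To make the failure mode concrete, here is a minimal sketch of the locking protocol described above (hypothetical names and a simplified in-memory model, not the real trac/versioncontrol/cache.py code): the "lock" is implicit in the presence of a revision row, and the "unlock" is the youngest-revision update, so a timeout between the two jams the cache permanently.

```python
class CacheModel:
    """Toy model of the sync() locking protocol described in the ticket."""

    def __init__(self):
        self.revision = set()   # stands in for the cached "revision" table
        self.youngest_rev = 0   # stands in for the stored youngest synced rev

    def sync_one(self, next_youngest, crash_before_commit=False):
        if next_youngest in self.revision:
            # ~line 159: the row is already there, so some thread is assumed
            # to be mid-sync -> the cache is considered locked, we bail out.
            return "locked"
        # ~line 184: INSERT the revision row; this is what "locks" the cache.
        self.revision.add(next_youngest)
        if crash_before_commit:
            # A DB timeout inside lines 202:221 leaves the row in place
            # while youngest_rev stays stale: every later attempt sees
            # "locked" and the cache never recovers on its own.
            return "timeout"
        # ~line 221: updating the youngest revision "unlocks" the cache.
        self.youngest_rev = next_youngest
        return "synced"
```

Running one sync that times out and then retrying shows the jam: the retry reports "locked" even though no thread is actually working, and youngest_rev never advances.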
Also, with a synchronization attempt on every request, this can lead to a serious slowdown (#7490).
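The manual DB patch mentioned in the report amounts to deleting the half-synced rows so that the presence check stops reporting a lock. A hedged sketch for an SQLite-backed environment (table and column names follow Trac's schema: revision, node_change, and the youngest_rev entry in the system table; the function name and the integer cast are assumptions, adapt both to your setup and back up the DB first):

```python
import sqlite3

def unjam_cache(db_path):
    """Delete cached rows newer than the recorded youngest_rev so that
    the next sync() attempt no longer sees a phantom in-progress sync."""
    con = sqlite3.connect(db_path)
    cur = con.cursor()
    cur.execute("SELECT value FROM system WHERE name = 'youngest_rev'")
    row = cur.fetchone()
    youngest = int(row[0]) if row and row[0] else 0
    # Drop the changeset rows left behind by the timed-out sync; rev is
    # stored as text in Trac's schema, hence the CAST for the comparison.
    cur.execute("DELETE FROM revision WHERE CAST(rev AS INTEGER) > ?",
                (youngest,))
    cur.execute("DELETE FROM node_change WHERE CAST(rev AS INTEGER) > ?",
                (youngest,))
    con.commit()
    con.close()
    return youngest
```

This mirrors the manual recovery only for purely numeric revision ids; repositories with non-numeric revisions would need a different comparison.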
Change History (20)
follow-up: 19

comment:12, 13 years ago:
Keywords: postgresql, mysql, pysqlite added
Milestone: 0.11.4 → not applicable
Priority: high → normal
comment:17, 7 years ago:
Keywords: cache postgresql mysql pysqlite → cache, postgresql, mysql, pysqlite