Opened 16 years ago
Last modified 14 months ago
#7975 new defect
Viewing changesets is very slow
Priority: normal
Milestone: next-stable-1.6.x
Component: version control/changeset view
Version: 0.11.2.1
Severity: major
Description
Running Trac 0.11.2.1 with a Postgres 8.1.11 backend. We have approximately 20k changesets. It takes several minutes to render a changeset, and the system appears to be doing heavy I/O against the database during rendering. We have tried adjusting max_diff_bytes and max_diff_files to various settings with no effect.
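For reference, the two options the reporter tuned live in the [changeset] section of trac.ini. A sketch (the values shown here are illustrative, not recommendations):

```ini
[changeset]
; Cap on the total size, in bytes, of the diffs rendered inline.
max_diff_bytes = 10000000
; Cap on the number of files for which an inline diff is rendered
; (0 means no limit).
max_diff_files = 0
```

As the report notes, tuning these had no effect here, which already hints that the cost is not in diff rendering itself.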
Attachments (0)
Change History (14)
comment:1 by , 16 years ago
Component: general → version control/changeset view
Milestone: set to 0.13
comment:3 by , 15 years ago
Owner: set
Severity: normal → major
An alternative idea to try out is to bypass the Genshi match filters by pre-rendering the diff sections.
comment:4 by , 15 years ago
#8530 closed as duplicate - doing some rendering optimizations will also hopefully reduce overall memory consumption.
comment:6 by , 15 years ago (follow-up: comment:7)
The problem seems to be related to the number of changesets. We have a Trac environment with around 15K changesets and viewing a changeset that contains just one changed file has become unbearably slow. Another Trac environment with around 2K changesets does not have the problem.
We are using the following configuration:
- Mac OS X 10.5.9
- Trac 0.11.6
- Python 2.5.1
- SQLite 3.4.0
- pysqlite 2.3.2
- Pygments 1.1.1
- Subversion 1.6.6
- Genshi 0.5.1
comment:7 by , 15 years ago (follow-up: comment:8)
Replying to Sascha Kratky <kratky@…>:
> The problem seems to be related to the number of changesets. We have a Trac environment with around 15K changesets and viewing a changeset that contains just one changed file has become unbearably slow. Another Trac environment with around 2K changesets does not have the problem.
> We are using the following configuration:
> - Mac OS X 10.5.9
> - Trac 0.11.6
> - Python 2.5.1
> - SQLite 3.4.0
> - pysqlite 2.3.2
> - Pygments 1.1.1
> - Subversion 1.6.6
> - Genshi 0.5.1
I have run some tests on the Trac environment and found out that Trac's CachedRepository is very inefficient in combination with SQLite. Upon viewing a changeset the CachedRepository method previous_rev is called. The method runs the following query to determine the predecessor revision number:
2009-12-13 16:12:33,686 Trac[cache] INFO: SELECT rev FROM node_change WHERE CAST(rev AS int) < %s ORDER BY CAST(rev AS int) DESC LIMIT 1
2009-12-13 16:13:00,294 Trac[cache] INFO: 14636 : 14635
This query takes almost half a minute on our repository with 15K changesets. Actually upon further inspection it turned out that CachedRepository contains a lot of queries which become very slow once the Subversion repository has more than a couple of thousand changesets.
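The slowness of that query can be reproduced outside Trac. The sketch below uses a minimal stand-in for the node_change table (the real table has more columns); it shows, via EXPLAIN QUERY PLAN, that the CAST(rev AS int) predicate forces SQLite into a full table scan, whereas a plain comparison on the indexed column can use the index (assuming revision strings that sort in numeric order, e.g. zero-padded, which the real schema did not guarantee):

```python
import sqlite3

# Illustrative stand-in for Trac's node_change table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE node_change (rev TEXT, path TEXT)")
con.execute("CREATE INDEX node_change_rev_idx ON node_change (rev)")
con.executemany(
    "INSERT INTO node_change VALUES (?, ?)",
    ((str(i), "trunk/file.c") for i in range(10000)),
)

# With the CAST, SQLite must evaluate the expression for every row,
# so the index on rev is useless: the plan is a full table scan.
cast_plan = " ".join(
    row[-1] for row in con.execute(
        "EXPLAIN QUERY PLAN SELECT rev FROM node_change "
        "WHERE CAST(rev AS int) < ? "
        "ORDER BY CAST(rev AS int) DESC LIMIT 1", (14636,))
)
print(cast_plan)  # contains 'SCAN'

# A plain comparison on the indexed column walks the index instead.
index_plan = " ".join(
    row[-1] for row in con.execute(
        "EXPLAIN QUERY PLAN SELECT rev FROM node_change "
        "WHERE rev < ? ORDER BY rev DESC LIMIT 1", ("0014636",))
)
print(index_plan)  # mentions the index
```

This is consistent with the fix noted later in the thread (r9224): avoiding per-row expression evaluation lets the database answer the predecessor lookup from the index.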
To work around the problem I disabled caching for the Subversion repository by changing the repository type in the environment's trac.ini file from svn to direct-svnfs. This bypasses the generation of a CachedRepository and directly uses SubversionRepository instead.
repository_type = direct-svnfs
comment:8 by , 15 years ago
Replying to Sascha Kratky <kratky@…>:
> …
> Upon viewing a changeset the CachedRepository method previous_rev is called. The method runs the following query to determine the predecessor revision number:
> 2009-12-13 16:12:33,686 Trac[cache] INFO: SELECT rev FROM node_change WHERE CAST(rev AS int) < %s ORDER BY CAST(rev AS int) DESC LIMIT 1
> 2009-12-13 16:13:00,294 Trac[cache] INFO: 14636 : 14635
> This query takes almost half a minute on our repository with 15K changesets. Actually upon further inspection it turned out that CachedRepository contains a lot of queries which become very slow once the Subversion repository has more than a couple of thousand changesets.
This has been fixed in r9224.
> To work around the problem I disabled caching for the Subversion repository by changing the repository type in the environment's trac.ini file from svn to direct-svnfs. This bypasses the generation of a CachedRepository and directly uses SubversionRepository instead.
> repository_type = direct-svnfs
Don't: for determining the predecessor revision number, direct-svnfs is, for now, even slower by orders of magnitude (see ticket:8813#comment:1).
comment:9 by , 14 years ago
We ran into this problem using the MySQL backend, with 2,268,912 rows in node_change. We did a MySQL dump of the table and changed its schema to this:
CREATE TABLE `node_change` (
  `repos` int(11) NOT NULL DEFAULT '0',
  `rev` char(10) NOT NULL,
  `path` text COLLATE utf8_bin NOT NULL,
  `path_sha1` char(40) NOT NULL,
  `node_type` char(1),
  `change_type` char(1) NOT NULL,
  `base_path` text COLLATE utf8_bin,
  `base_rev` char(10),
  PRIMARY KEY (`repos`,`rev`,`path_sha1`,`change_type`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 COLLATE=utf8_bin;
This made the query run in 0.01 s rather than minutes.
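The key trick in that schema is putting a SHA-1 hash of the path, rather than the path itself, into the primary key: path is an unbounded TEXT column, which MySQL cannot fully index, while a 40-character hex digest gives a short fixed-width key. A minimal sketch of how such a key could be derived (the helper name and key layout are hypothetical, not part of Trac's API):

```python
import hashlib

def node_change_key(repos, rev, path, change_type):
    """Hypothetical helper: build the fixed-width primary-key tuple
    for the modified node_change schema. Hashing the unbounded path
    to a 40-char SHA-1 hex digest keeps the composite key short
    enough for InnoDB to index efficiently."""
    path_sha1 = hashlib.sha1(path.encode("utf-8")).hexdigest()
    return (repos, rev, path_sha1, change_type)

key = node_change_key(1, "14636", "trunk/src/main.c", "E")
print(key)  # the third element is a 40-char hex digest
```

The trade-off is that prefix lookups on path (e.g. "everything under trunk/") can no longer use this key, which is acceptable here because changeset rendering looks up exact paths.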
comment:10 by , 10 years ago
Milestone: next-minor-0.12.x → next-stable-1.0.x
comment:11 by , 9 years ago
Owner: removed
comment:12 by , 8 years ago
Milestone: next-stable-1.0.x → next-stable-1.2.x
Moved ticket assigned to next-stable-1.0.x since maintenance of 1.0.x is coming to a close. Please move the ticket back if it's critical to fix on 1.0.x.
comment:13 by , 5 years ago
Milestone: next-stable-1.2.x → next-stable-1.4.x
Good, we were just discussing on Trac-dev what we could do besides max_diff_bytes and max_diff_files to guarantee a faster response time; see googlegroups:trac-dev:26e351fbb8941f69.