#4347 closed defect (fixed)
postgres connection limit exceeded
| Reported by: | | Owned by: | Jonas Borgström |
|---|---|---|---|
| Priority: | high | Milestone: | 0.11.1 |
| Component: | general | Version: | 0.10.2 |
| Severity: | critical | Keywords: | postgresql, mysql, schema, connection, pool |
| Cc: | pacopablo@…, stephan.zeissler@… | Branch: | |
| Release Notes: | | | |
| API Changes: | | | |
| Internal Changes: | | | |
Description
I realise there seem to be loads of articles from the beginning of the year on this subject, but nothing recently, and we are experiencing the problem. We are running Trac 0.10.2, Postgres 8.1.5-1, Apache 2.2 and Python 2.4.
Basically, with Apache restarted there are no connections to Postgres, but once a few pages are visited the number goes up and stays there. I notice in trac admin that there is a connection timeout, which is set to 20 seconds (the default), but it seems to have no effect. After a while the number of open connections exceeds the maximum number of available Postgres connections and the sites fail. The only way to recover the situation is to restart Apache. I notice from earlier mailing list entries that the problem also existed in tracd, but nobody seems to have posted a solution :-(
We currently run something like 30 projects, with each one being a schema within one Postgres DB. Running "ps ax" gives:
```
postgres: <username> trac 127.0.0.1(36257) idle in transaction
postgres: <username> trac 127.0.0.1(36258) idle in transaction
postgres: <username> trac 127.0.0.1(36259) idle in transaction
postgres: <username> trac 127.0.0.1(36261) idle in transaction
postgres: <username> trac 127.0.0.1(36262) idle in transaction
postgres: <username> trac 127.0.0.1(36263) idle in transaction
postgres: <username> trac 127.0.0.1(36265) idle in transaction
postgres: <username> trac 127.0.0.1(36266) idle in transaction
...
```
If there's more info required please let me know - happy to assist as I really want to get a fix if possible.
Thanks
Attachments (0)
Change History (23)
follow-up: 2 comment:1 by cboos, 18 years ago
| Keywords: | postgresql schema connection pool added |
|---|---|
| Milestone: | → 0.10.4 |

There's one pool of (C=5) connections per environment (E=30), per server process (S=?). So after a while you end up with C*E*S connections…
A few ideas: … share the pool between environments that connect to the same database (i.e. when only the schema parameter is used).
comment:2 by , 18 years ago
Replying to cboos: Currently I have 44 connections open, which isn't divisible by 5 as your response implies. Also, I don't seem to get groups of 5 being added, but rather fours (though I've not tested this exhaustively; I also seem to get just one initially), which again doesn't line up with the above. To be honest, I think I possibly don't fully understand your reply.
follow-up: 4 comment:3 by cboos, 18 years ago
Well, 5 is a maximum per environment, as each environment has a pool of connections. So depending on the load, it's likely that not all of your 30 environments will end up having all their 5 connections used (and this, for each server process).
Actually, 44 is quite a low number in this model, indicating that probably no more than 1 or 2 connections are kept in each pool. But even this is probably already too much, and it shows the limitation of the current connection pool system when dealing with a high number of environments. There are not that many alternatives either, except in your special case (PostgreSQL, using a separate schema per environment instead of a different database). So this rather shows the limits of the TracMultipleProjects/MultipleEnvironments approach.
To summarize: I think it's worth considering having a pool that will be shared by all the environments which are using the same PostgreSQL connection but a different schema.
This will keep the number of connections under control by eliminating the E factor from the C*E*S equation…
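To make this concrete (taking, say, S=4 Apache server processes, a number not given anywhere in this ticket): the worst case is C × E × S = 5 × 30 × 4 = 600 connections, well above PostgreSQL's default max_connections of 100. A pool shared per database would instead be capped at C × S = 20, no matter how many environments there are.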
comment:4 by , 18 years ago
Replying to cboos: OK, fair enough. In which case I'm happy to give that a go, if you like, and see what the effect is. If some code is around for the change I can try that out, or alternatively I can have a crack at writing my own - let me know.
comment:5 by , 18 years ago
You could also increase the maximum number of connections in the PostgreSQL configuration file.
follow-ups: 7 9 comment:6 by pacopablo, 18 years ago
| Cc: | added |
|---|---|
I have run into a similar issue with both Apache and tracd. A simple solution was to turn off connection pooling: set poolable = False in trac.db.postgres_backend.
Are you using TRAC_ENV_PARENT_DIR by any chance?
cboos: FWIW, switching to schemas in a single database most likely won't solve the issue either.
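For anyone wanting to try this without patching Trac's sources directly, a minimal sketch of the same idea, assuming the connection wrapper class is named PostgresConnection (its name in the Trac sources of this era), placed in any site-specific module that runs before requests are served:

```python
# Minimal sketch: disable Trac's per-environment connection pooling for
# PostgreSQL by clearing the `poolable` flag, as suggested above.
# Assumes the wrapper is trac.db.postgres_backend.PostgresConnection;
# unpoolable connections are opened and closed per use instead of being
# kept in a pool, which is slower but bounds the connection count.
from trac.db import postgres_backend

postgres_backend.PostgresConnection.poolable = False
```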
follow-up: 8 comment:7 by cboos, 18 years ago
Replying to pacopablo:

> cboos: FWIW, switching to schemas in a single database most likely won't solve the issue either.
Well, he's already using schemas, so yes, this won't help. What I was talking about was a modification of the current code to bypass the per-environment pool and use a per-database pool that can be shared by different environments using different schemas in the same pg database. Or, if you did understand it like that and see a problem with this approach, can you be more explicit about why you think this couldn't work?
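A rough sketch of the shape of that idea, purely illustrative (none of these names are the actual Trac pool code, and psycopg2's bundled pool stands in for Trac's own):

```python
# Illustrative sketch of a per-database pool shared across environments:
# the pool key deliberately omits the schema, so 30 environments pointing
# at the same PostgreSQL database reuse one pool per server process.
import threading
import psycopg2.pool

_pools = {}
_pools_lock = threading.Lock()

def get_shared_pool(host, port, dbname, user, maxconn=5):
    # Key on the database address only; the schema is selected later,
    # per request, so it must not be part of the key.
    key = (host, port, dbname, user)
    _pools_lock.acquire()
    try:
        if key not in _pools:
            dsn = "host=%s port=%s dbname=%s user=%s" % (host, port, dbname, user)
            _pools[key] = psycopg2.pool.ThreadedConnectionPool(1, maxconn, dsn)
        return _pools[key]
    finally:
        _pools_lock.release()
```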
comment:8 by , 18 years ago
Replying to cboos: Yes, this worked fine. Though clearly a bit slower, it's a good short-term fix. As I said before, I'm happy to try other stuff out if somebody lets me know.
comment:9 by , 18 years ago
Replying to pacopablo:

> Are you using TRAC_ENV_PARENT_DIR by any chance?
Sorry, I forgot the second part of the question… yes, we are using this. It's being set in Apache, as per the install doc, to the directory above the various projects, which matches the suggested multi-project notes in the install guide, from memory.
comment:10 by , 18 years ago
This looks very strange. Even if a lot of connections are kept in the pool, "ps ax" should list them as "idle", not "idle in transaction". Only connections that are currently used by active requests should be listed as "in transaction".
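A quick way to check this from the database side rather than from ps, sketched for the PostgreSQL 8.x of this thread (column names are era-specific assumptions; newer releases renamed procpid/current_query to pid/query and added a state column, and the DSN here is made up):

```python
# Hedged diagnostic sketch: count idle-in-transaction backends via
# pg_stat_activity instead of `ps ax`. On PostgreSQL < 9.2 such backends
# report '<IDLE> in transaction' in current_query.
import psycopg2

conn = psycopg2.connect("dbname=trac user=trac")
cur = conn.cursor()
cur.execute("SELECT procpid, current_query FROM pg_stat_activity")
stuck = [pid for pid, query in cur.fetchall()
         if query == '<IDLE> in transaction']
print "%d backends idle in transaction" % len(stuck)
conn.close()
```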
comment:11 by , 18 years ago
comment:12 by , 18 years ago
| Keywords: | mysql added |
|---|---|
#5255 reports the problem for a MySQL setup, which was to be expected, as the main cause of the problem is the presence of a db connection pool for each environment, in each process (see comment:1).
In the PostgreSQL schema case there could be an optimization (see comment:7), but that wouldn't help in the general case.
comment:13 by , 17 years ago
I was having this same problem today (Trac 0.10.4, Apache 2.2, Python 2.5.1, Ubuntu 7.10) and I've discovered it was caused (or maybe worsened) by the ProjectMenu plugin at trac-hacks. I've posted a ticket there to let the author know: http://trac-hacks.org/ticket/2190
comment:14 by , 17 years ago
I'd like to have a solution for this too. Just increasing the connection limit isn't a good idea, because every connection eats server memory. I also have multiple schemas (a shared schema would be better, but that's more complicated, I assume), so a single connect with a common connection pool and setting the schema search path could help here. If someone knows how to build a shared connection pool, I could help with the glue.
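Something like the following is the shape of that idea, sketched with psycopg2's bundled pool (all names and the DSN are illustrative, not Trac API):

```python
# Illustrative sketch of "one shared pool + per-request search_path":
# every environment borrows from the same pool and selects its schema
# before running queries.
import psycopg2.pool

pool = psycopg2.pool.ThreadedConnectionPool(1, 10, "dbname=trac user=trac")

def fetch_in_schema(schema, sql, params=()):
    conn = pool.getconn()
    try:
        cur = conn.cursor()
        # Identifiers can't be passed as bind parameters, so `schema`
        # must be a trusted, validated name here.
        cur.execute('SET search_path TO "%s"' % schema)
        cur.execute(sql, params)
        rows = cur.fetchall()
        conn.commit()
        return rows
    finally:
        pool.putconn(conn)
```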
comment:15 by , 17 years ago
| Cc: | added |
|---|---|
We have this problem too. We have a PG database with a schema for every project.
comment:16 by , 17 years ago
| Milestone: | 0.10.5 → 0.11.1 |
|---|---|
comment:17 by , 17 years ago
Currently investigating pgbouncer (http://pgfoundry.org/frs/?group_id=1000258) to the rescue. I'll let you know if this is a workable short-term solution. Still interested in a fix inside Trac.
T.
comment:18 by , 16 years ago
comment:19 by , 16 years ago
| Milestone: | 0.11.2 → 0.11.1 |
|---|---|
comment:20 by , 16 years ago
| Resolution: | → fixed |
|---|---|
| Status: | new → closed |
In [7316], I backported Jonas' new pool implementation from trunk (r7190). This should fix this problem.
Note that if your web front-end setup involves a lot of processes, you might by default still end up with an excessive number of open connections. You can limit the number of open connections per process by setting the TRAC_DB_POOL_SIZE environment variable (the default is 10).
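For example, in a mod_wsgi deployment the variable could be set from the WSGI script before Trac initializes (a sketch; the paths and the value are illustrative, and the variable must be set before Trac creates its pool):

```python
# Hedged sketch: capping the per-process pool from a WSGI entry point.
# TRAC_DB_POOL_SIZE is read from the process environment by the new pool
# code, so it is exported here before Trac is imported.
import os
os.environ['TRAC_ENV_PARENT_DIR'] = '/var/trac/projects'  # as in this thread
os.environ['TRAC_DB_POOL_SIZE'] = '3'

import trac.web.main
application = trac.web.main.dispatch_request
```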
comment:21 by , 10 years ago
| Keywords: | postgresql mysql schema connection pool → postgresql, mysql, schema, connection, pool |
|---|---|
comment:23 by , 6 years ago
Replying to strk@…:

> TinoW how did pgbouncer go ? (10 years later…)

https://pgbouncer.github.io/ (Did you Google it? google:pgbouncer).