
Opened 17 years ago

Closed 16 years ago

Last modified 6 years ago

#4347 closed defect (fixed)

postgres connection limit exceeded

Reported by: nick.stone@…
Owned by: Jonas Borgström
Priority: high
Milestone: 0.11.1
Component: general
Version: 0.10.2
Severity: critical
Keywords: postgresql, mysql, schema, connection, pool
Cc: pacopablo@…, stephan.zeissler@…
Branch:
Release Notes:
API Changes:
Internal Changes:

Description

I realise there seem to be loads of articles from the beginning of the year on this subject, but nothing recently, and we are experiencing the problem. We are running Trac 0.10.2, Postgres 8.1.5-1, Apache 2.2 and Python 2.4.

Basically, after Apache is restarted there are no connections to Postgres, but once a few pages are visited the number goes up and stays there. I notice in trac admin that there is a connection timeout, set to 20 seconds (the default), which seems to have no effect. After a while the number of open connections exceeds the maximum number of available Postgres connections and the sites fail. The only way to recover is to restart Apache. I notice from earlier mailing list entries that the problem also existed in tracd, but nobody seemed to have posted a solution :-(

We currently run something like 30 projects, each one being a schema within one Postgres DB. Running "ps ax" gives:

postgres: <username> trac 127.0.0.1(36257) idle in transaction
postgres: <username> trac 127.0.0.1(36258) idle in transaction
postgres: <username> trac 127.0.0.1(36259) idle in transaction
postgres: <username> trac 127.0.0.1(36261) idle in transaction
postgres: <username> trac 127.0.0.1(36262) idle in transaction
postgres: <username> trac 127.0.0.1(36263) idle in transaction
postgres: <username> trac 127.0.0.1(36265) idle in transaction
postgres: <username> trac 127.0.0.1(36266) idle in transaction
...

If there's more info required, please let me know - I'm happy to assist, as I really want to get a fix if possible.

Thanks

Attachments (0)

Change History (23)

comment:1 by Christian Boos, 17 years ago

Keywords: postgresql schema connection pool added
Milestone: 0.10.4

There's one pool of (C=5) connections per environment (E=30), per server process (S=?).

So after a while you end up with C*E*S connections…

A few ideas:

  1. We could probably share the connection pool between environments in that specific case (i.e. when a schema parameter is used)
  2. We could add a configuration setting for C, the number of connections stored in the pool
  3. You can try to keep S to a low number, in apache's config
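
As a rough illustration of the C*E*S formula above (C and E are taken from this ticket; S is an assumption, since it depends on the Apache MPM settings):

    # Back-of-envelope worst case for the pool-per-environment model.
    C = 5    # connections kept per pool (per environment)
    E = 30   # environments (projects) served from the same parent dir
    S = 8    # Apache worker processes - an assumed value, depends on MPM settings
    print(C * E * S)   # 1200, far above PostgreSQL's default max_connections of 100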

in reply to:  1 comment:2 by nick.stone@…, 17 years ago

Replying to cboos: Currently I have 44 connections open, which isn't divisible by 5 as your response implies. Also, I do not seem to get groups of 5 being added but rather fours (though I've not tested this exhaustively - also I seem to get just one initially), which again doesn't seem to line up with the above. I think possibly I don't fully understand your reply, to be honest.

comment:3 by Christian Boos, 17 years ago

Well, 5 is a maximum per environment, as each environment has a pool of connections. So depending on the load, it's likely that not all of your 30 environments will end up having all their 5 connections used (and this, for each server process).

Actually 44 is quite a low number in this model, indicative that probably no more than 1 or 2 connections are kept in each pool. But this is probably already too much and this shows the limitation of the current connection pool system when dealing with a high number of environments. There are not that many alternative choices either, except in your special case (PostgreSQL, using a separate schema per environment instead of a different database). So this rather shows the limits of the TracMultipleProjects/MultipleEnvironments approach.

To summarize: I think it's worth considering having a pool that will be shared by all the environments which are using the same PostgreSQL connection but a different schema.

This will keep the number of connections under control by eliminating the E factor from the C*E*S equation…

in reply to:  3 comment:4 by nick.stone@…, 17 years ago

Replying to cboos: OK, fair enough. In that case I'm happy to give that a go if you like and see what the effect is. If some code is around for the change I can try it out, or alternatively I can have a crack at writing my own - let me know.

comment:5 by anonymous, 17 years ago

You could also increase the maximum number of connections in the PostgreSQL configuration file.

comment:6 by pacopablo, 17 years ago

Cc: pacopablo@… added

I have run into a similar issue with both Apache and tracd.

A simple solution was to turn off connection pooling:

poolable = False in trac.db.postgres_backend
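
A minimal sketch of that workaround, run once at startup (e.g. from the WSGI or mod_python handler script) instead of editing the Trac source; it assumes the backend exposes a PostgreSQLConnection class with a poolable class attribute, as in Trac 0.10/0.11:

    # Workaround sketch: stop the pool from keeping PostgreSQL connections around.
    # Assumption: trac.db.postgres_backend defines PostgreSQLConnection with a
    # 'poolable' class attribute (as in Trac 0.10/0.11).
    from trac.db import postgres_backend

    postgres_backend.PostgreSQLConnection.poolable = False

With pooling off, each connection is closed when its request finishes, trading some per-request connect overhead for a bounded connection count.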

Are you using TRAC_ENV_PARENT_DIR by any chance?

cboos: FWIW, switching to schemas in a single database most likely won't solve the issue either.

in reply to:  6 ; comment:7 by Christian Boos, 17 years ago

Replying to pacopablo:

cboos: FWIW, switching to schemas in a single database most likely won't solve the issue either.

Well, he's already using schemas, so yes this won't help; what I was talking about was a modification of the current code to bypass the per-environment pool and use a per-database pool that can be shared by different environments using different schemas in the same pg database. Or, if you understood it like that and see a problem with this approach, can you be more explicit about why you think this couldn't work?
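
A sketch of the shape of that idea - not Trac code, just an illustration with made-up names - where the pool registry is keyed by the database parameters with the schema stripped out, so all environments pointing at the same PostgreSQL database share one pool:

    # Illustrative only: a per-database pool registry. Dropping the 'schema'
    # parameter from the key removes the E factor from C*E*S.
    _pools = {}

    def _pool_key(params):
        # params might look like {'host': 'localhost', 'dbname': 'trac', 'schema': 'proj1'}
        return tuple(sorted((k, v) for k, v in params.items() if k != 'schema'))

    def get_shared_pool(params, create_pool):
        key = _pool_key(params)
        if key not in _pools:
            _pools[key] = create_pool(params)   # one pool per database, not per environment
        return _pools[key]

Each environment would then still have to select its own schema (e.g. via search_path) on the connection it checks out.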

in reply to:  7 comment:8 by nick.stone@…, 17 years ago

Replying to cboos: Yes, this worked fine - though clearly a bit slower, it's a good short-term fix. As I said before, I'm happy to try other stuff out if somebody lets me know.

in reply to:  6 comment:9 by nick.stone@…, 17 years ago

Replying to pacopablo:

Are you using TRAC_ENV_PARENT_DIR by any chance?

Sorry, I forgot the second part of the question… yes, we are using this. It's set in Apache, as per the install doc, to the directory above the various projects - which matches the suggested multi-project notes in the install guide, from memory.

comment:10 by Jonas Borgström, 17 years ago

This looks very strange. Even if a lot of connections are kept in the pool, "ps ax" should list them as "idle", not "idle in transaction". Only connections that are currently used by active requests should be listed as "in transaction".

comment:11 by Christian Boos, 17 years ago

The "idle in transaction" problem should be discussed in #4987.

This ticket should focus on the "limitation of the current connection pool system when dealing with a high number of environments" problem (comment:3).

comment:12 by Christian Boos, 17 years ago

Keywords: mysql added

#5255 reports the problem for a MySQL setup, which was to be expected, as the main cause of the problem is the presence of a db connection pool for each environment, in each process (see comment:1).

In the case of PostgreSQL schemas, there could be an optimization (see comment:7), but that wouldn't help in the general case.

comment:13 by Leandro Conde <leandor AT gmail>, 16 years ago

I was having this same problem today (Trac version 0.10.4, Apache 2.2, Python 2.5.1, Ubuntu 7.10) and I've discovered it was caused (or maybe worsened) by the ProjectMenu plugin at Trac-hacks. I've posted a ticket there to let the author know: http://trac-hacks.org/ticket/2190

comment:14 by TinoW, 16 years ago

I'd like to have a solution for this too. Just increasing the connection limit isn't a good idea, because every connection eats server memory. I also have multiple schemas (a shared schema would be better, but that is more complicated, I assume), so a single connection approach - a common connection pool plus setting the schema search path - could help here. If someone knows how to have a shared connection pool, I could help with the glue.
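
A small sketch of that approach with plain psycopg2 (outside Trac; the DSN, pool size and function names are only assumptions for illustration): one pool shared by all projects, with the project's schema selected per call via SET search_path:

    # Sketch: one shared psycopg2 pool; the schema is chosen per query.
    import psycopg2.pool

    POOL = psycopg2.pool.ThreadedConnectionPool(1, 10, dsn="dbname=trac user=trac")

    def query_in_schema(schema, sql, params=()):
        conn = POOL.getconn()
        try:
            cur = conn.cursor()
            cur.execute("SET search_path TO %s", (schema,))   # pick the project's schema
            cur.execute(sql, params)
            rows = cur.fetchall()
            cur.close()
            return rows
        finally:
            conn.rollback()      # don't leave the connection "idle in transaction"
            POOL.putconn(conn)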

comment:15 by stephan.zeissler@…, 16 years ago

Cc: stephan.zeissler@… added

We have this problem too. We have a PG Database with a schema for every project.

comment:16 by Christian Boos, 16 years ago

Milestone: 0.10.5 → 0.11.1

comment:17 by TinoW, 16 years ago

Currently investigating pgbouncer (http://pgfoundry.org/frs/?group_id=1000258) to the rescue. I'll let you know if this is a workable short-term solution. Still interested in a fix inside Trac.

T.

comment:18 by Christian Boos, 16 years ago

Jonas just committed a fix for this issue.

Testing needed: people experiencing this issue should have a go at Trac 0.12dev (r7190) or apply the patch corresponding to r7190 directly on their Trac source (it applies cleanly to 0.11 and probably to 0.10.4 as well).

comment:19 by Christian Boos, 16 years ago

Milestone: 0.11.2 → 0.11.1

comment:20 by Christian Boos, 16 years ago

Resolution: fixed
Status: new → closed

In [7316], I backported Jonas' new pool implementation from trunk (r7190). This should fix the problem.

Note that if your web front-end setup involves a lot of processes, you might still end up with an excessive number of open connections by default. You can limit the number of open connections per process by setting the TRAC_DB_POOL_SIZE environment variable (the default is 10).
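
For example (the value 5 and the placement are only illustrations - the variable just has to be set in the environment of the process that runs Trac, e.g. near the top of a trac.wsgi / mod_python handler script, or in the script that launches tracd):

    # Illustration: make sure TRAC_DB_POOL_SIZE is set before Trac creates its pools.
    import os
    os.environ.setdefault('TRAC_DB_POOL_SIZE', '5')   # 5 is an arbitrary example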

comment:21 by Ryan J Ollos, 9 years ago

Keywords: postgresql mysql schema connection pool → postgresql, mysql, schema, connection, pool

comment:22 by strk@…, 6 years ago

TinoW, how did pgbouncer go? (10 years later…)

in reply to:  22 comment:23 by Jun Omae, 6 years ago

Replying to strk@…:

TinoW, how did pgbouncer go? (10 years later…)

https://pgbouncer.github.io/ (Did you Google it? google:pgbouncer).
