#3833 closed enhancement (fixed)
Automatic reload of configuration and plugins
Reported by: | | Owned by: | Christopher Lenz
---|---|---|---|
Priority: | normal | Milestone: | 0.11 |
Component: | general | Version: | 0.10 |
Severity: | major | Keywords: | plugins logging configuration |
Cc: | Branch: | ||
Release Notes: | |||
API Changes: | |||
Internal Changes: |
Description
Trac can check the file timestamps of various files of interest, and reload them if they differ from the currently loaded versions. A timestamp check is very fast and will not cause a significant performance hit. This would save the bother of restarting Apache after a configuration change, and would also prevent weird behaviour caused by one instance of Apache running Trac with different plugins activated than the other instances (which can happen when activating a plugin from the admin interface).
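As a rough illustration of the idea (all names here are hypothetical, not Trac's API), an mtime-based check per request might look like:

```python
import os

class ReloadingFile:
    """Hypothetical sketch: reload a file only when its mtime changes.
    The class and method names are illustrative, not part of Trac."""

    def __init__(self, path):
        self.path = path
        self._mtime = None
        self._data = None

    def get(self):
        mtime = os.path.getmtime(self.path)   # cheap stat() per request
        if mtime != self._mtime:
            with open(self.path) as f:        # reload only on change
                self._data = f.read()
            self._mtime = mtime
        return self._data
```

The per-request cost is a single `stat()` call; the file is re-read only when the recorded mtime no longer matches.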
Attachments (5)
Change History (26)
comment:1 by , 18 years ago
comment:2 by , 18 years ago
Regular configuration data like the banner logo, project description and so on already works like this without a restart - essentially, values that operate as pure 'settings' and are only fetched by the relevant module reading `env.config`. Look at the `_open_environment()` environment cache in `trac.web.main`, which checks whether trac.ini has changed and reloads it if so. I use Apache with mod_python, and this works fine as far as I can tell.
However, the issue of reloading plugins is something I have been wondering about too (and am in need of). That does not work with mod_python, and I am also interested in finding some sort of 'deep environment reset' after changing trac.ini that also includes a full set of components.
The mentioned `Environment` cache contains an `env.config.parse_if_needed()` call that handles refreshing of the config settings, and somehow there should also be a way to reset the component manager. One idea is having a similar call to a new function `env.compmgr.reload_if_needed()` called after reading the new settings - having `env.config.parse_if_needed()` return true if the `[components]` config settings have changed, or maybe setting a flag like `env.compmgr.is_dirty` to true if the environment needs a components refresh.

Implementing a check as an overlay to all possible plugin load possibilities (plugin folder and so on) is not very feasible - it would likely be a time-consuming compare operation that affects performance when run for each and every request. However, just comparing the `[components]` config settings before and after a refresh is a quick operation.

As long as any plugin change is accompanied by a change in the trac.ini `[components]` section, it will trigger a reload accomplished by de-referencing the old components and loading a fresh set using `trac.loader.load_components()`.

Would such a scenario work?
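The "[components] before/after" comparison could be sketched like this, using the stdlib `configparser` purely for illustration (Trac has its own `Configuration` wrapper, and `components_changed` is a made-up name):

```python
import configparser

def components_changed(path, old_components):
    """Illustrative sketch, not Trac's code: re-parse trac.ini and
    report whether [components] differs from the cached copy."""
    parser = configparser.ConfigParser()
    parser.read(path)
    if parser.has_section('components'):
        new_components = dict(parser.items('components'))
    else:
        new_components = {}
    # A plain dict comparison is cheap; this is the "quick operation"
    # described above, run once per re-parse rather than per request.
    return new_components != old_components, new_components
```

When the first return value is true, the caller would de-reference the old components and load a fresh set.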
comment:3 by , 18 years ago
I would object to a deep configuration reload happening automatically. A configuration can be edited by an administrator, and some interim changes can easily get saved. This can lead to loading an invalid/incomplete configuration as a result.

Assuming the project is in use by a development team, this can easily lead to losing data. (Imagine having entered a two-page wiki entry or ticket, and the system crashing after pressing Preview.)
It is better to have configuration reload on demand, e.g. an administrator should explicitly ask Trac to reload its configs.
There is a wide range of methods for making the explicit config-reload request, e.g.:
- sending a signal to Trac (with support for this via a `/var/run` approach);
- creating an empty file in some predefined location that Trac deletes when it detects it;
- using some IPC mechanism for signalling the request.

Upon receiving the request, Trac can reload its configs at a proper time.
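The second option - an empty trigger file that Trac consumes - can be sketched roughly as follows. The flag path, the `env` interface and the function name are illustrative assumptions; only `parse_if_needed()` echoes a real Trac method name:

```python
import os

def maybe_reload(env, flag_path='/var/run/trac/reload'):
    """Sketch of the 'empty trigger file' idea. flag_path and the env
    interface are assumptions made for illustration only."""
    if os.path.exists(flag_path):
        env.config.parse_if_needed()   # re-read trac.ini on demand
        os.remove(flag_path)           # consume the trigger file
        return True
    return False
```

An administrator would `touch` the flag file; the next request that notices it triggers the reload and deletes the flag, so the reload happens exactly once.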
comment:4 by , 18 years ago
That isn't actually quite how things work in Trac, I think.

If you for instance don't use mod_python, but a simple CGI setup instead, the Environment gets completely reloaded for each request. Such a request essentially has no history - a Python process is started, modules are loaded, Trac processing runs, and when the request is done the whole Python process dies.
The reason this also works is that request-specific information (like your preview example) gets passed around as arguments to the request object - available inside the `req` context.
This discussion is really about trying to get the best of both worlds - the persistence and performance of mod_python, with the flexibility of making config changes on the fly. Actually, config changes are currently applied on the fly after each file save even when using mod_python; it is just that for `[components]` they don't have any effect.

Lastly, to clarify my previous comment: 'deep reload' was probably not the best choice of words - it is not meant in the Python `reload()` sense. Reloading modules and class definitions is a totally different challenge fraught with potential pitfalls… I really mean a refresh of the component lists for the environment, but using the already loaded Python code (or loading new modules/components on first use).
comment:5 by , 18 years ago
Keywords: | plugins logging configuration added |
---|---|
Milestone: | → 2.0 |
Taking into account logging configuration changes and component list changes without having to restart the server would be handy, but it's a somewhat low priority goal.
by , 18 years ago
Attachment: | env_reload_5342.diff added |
---|
Patch that handles on-demand reload of individual projects after updating trac.ini without a server restart (for servers that persist memory between requests). Patch is against 0.10-stable revision 5342.
comment:6 by , 18 years ago
Milestone: | 2.0 → 0.11 |
---|---|
Owner: | changed from | to
Severity: | normal → major |
Thanks for the patch!
comment:7 by , 17 years ago
by , 17 years ago
Attachment: | env_reset.diff added |
---|
Somewhat different approach, works a lot better for me
comment:8 by , 17 years ago
Milestone: | 0.11.1 → 0.11 |
---|---|
Owner: | changed from | to
Status: | new → assigned |
I've attached another patch that works pretty well for me.
This does a couple of things:
- `Configuration.parse_if_needed()` returns a boolean value indicating whether the reparsing was indeed necessary
- `trac.env.open_environment()` checks that return value, and if it's true, calls `Environment.reset()`
- Added a `reset()` method to `ComponentManager` and `Environment`: this basically clears the component cache, and reloads all the modules
- Added a `touch()` method to `Configuration`, which is called when a plugin gets installed or uninstalled via the plugins admin panel. This causes the process outlined above to run, even though there was no actual config change
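A minimal sketch of that flow, with stub classes standing in for Trac's real `Configuration` and `Environment` (the method names follow the patch description above, but the bodies are simplified assumptions):

```python
class Config:
    """Stub standing in for trac.config.Configuration (illustrative)."""
    def __init__(self):
        self._dirty = False

    def touch(self):
        # called e.g. when a plugin is (un)installed via the admin panel
        self._dirty = True

    def parse_if_needed(self):
        # returns True when a re-parse was actually necessary
        changed, self._dirty = self._dirty, False
        return changed

class Environment:
    """Stub sketching Environment.reset() from the patch description."""
    def __init__(self):
        self.config = Config()
        self._components = {'cached': object()}   # pretend component cache

    def reset(self):
        self._components.clear()   # drop cached components

_env_cache = {}

def open_environment(path):
    """Reuse the cached env, but reset it whenever the config
    was actually re-parsed."""
    env = _env_cache.get(path)
    if env is None:
        env = _env_cache[path] = Environment()
    elif env.config.parse_if_needed():
        env.reset()
    return env
```

The key point is that `reset()` only empties the component cache; the already-imported Python modules stay loaded, matching the "not a Python `reload()`" clarification earlier in the thread.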
follow-up: 11 comment:10 by , 17 years ago
I'm half-way there… If I enable a component in trac.ini it gets immediately picked up by all processes, and works as it should. That's very nice.

However, what it doesn't solve is the issue I originally raised in #5135 (closed due to the patch on this ticket) - it does not pick up on a deleted configuration item. So if I try to disable the component, it will not be disabled, as there is nothing in the config re-parser that knows how to actually remove a setting (i.e. make it disappear).

I see the same consequence of this using my custom field admin plugin - if I create and delete fields there, each refresh of the page will show a 'random' selection of fields depending on what changes each process has picked up. Only the process that did the delete will have the correct config - until another delete, which might happen through another process :-)
comment:11 by , 17 years ago
Replying to osimons <simon-code@bvnetwork.no>:
… it does not pick up on a deleted configuration item.
Just to be precise: I did not mean 'disabled' as in enabled/disabled; I actually meant that the component-enabling line was removed from the config.
follow-up: 13 comment:12 by , 17 years ago
Looks good. I didn't have the time to test the fix myself, but I'm OK with the changes.
To osimons: I see your points, but wouldn't that be fixed by implementing a kind of `reset()` on the Configuration object itself?
follow-up: 14 comment:13 by , 17 years ago
Replying to cboos:
To osimons: I see your points, but wouldn't that be fixed by implementing a kind of `reset()` on the Configuration object itself?
Oh, sure it can be fixed - and no doubt quite easily, as long as a config change actually reloads the configs fully, and doesn't just re-parse them into the same data structure. No doubt a few extra lines at suitable places, I'm sure.

I often comment out settings (such as in `[components]`), but this is above all a problem for the `[ticket-custom]` and `[ticket-workflow]` sections, which have a varying number of settings without any defaults. Remove a workflow status and it will not be noticed before a server restart.

If we are solving one aspect, we should also solve the other - if not, it will still be some sort of 'it depends…' as to whether a server restart is needed after a config change.
comment:14 by , 17 years ago
Replying to osimons <simon-code@bvnetwork.no>:
Oh, sure it can be fixed - and no doubt quite easily as long as a config change actually reloads the configs fully, and not just re-parses it into the same data structure. No doubt a few extra lines at suitable places, I'm sure.
Good point, I'll look into adding that to the patch.
comment:15 by , 17 years ago
Okay, I've added a slightly revised patch that also removes all existing configuration before doing the reparsing.
I couldn't really reproduce the problem with the custom fields admin plugin in the first place, so please let me know if this works for you or not. Thanks.
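The "remove all existing configuration before reparsing" idea can be sketched as follows - illustrative only, using the stdlib `configparser` rather than Trac's own `Configuration` class. Building a fresh parser instead of re-reading into the old one is what makes deleted options actually disappear:

```python
import configparser
import os

class Configuration:
    """Illustrative sketch (not Trac's code): a re-parse that first
    drops every cached section, so deleted options do not linger."""

    def __init__(self, path):
        self.path = path
        self._mtime = 0
        self.parser = configparser.ConfigParser()

    def parse_if_needed(self):
        mtime = os.path.getmtime(self.path)
        if mtime <= self._mtime:
            return False
        # Discard the old parser instead of re-reading into the same
        # data structure: options removed from the file vanish here too.
        self.parser = configparser.ConfigParser()
        self.parser.read(self.path)
        self._mtime = mtime
        return True
```

Without the fresh parser, `read()` would merge new values over the old ones and a removed `[components]` line (or custom field) would survive the re-parse, which is exactly the symptom described above.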
follow-up: 17 comment:16 by , 17 years ago
Thanks, cmlenz. The patch works with regard to various settings, component settings enabled/disabled/removed, custom fields and all that - changing both the local and global trac.ini. Tested on Apache 2.2.4/mod_python 3.3.1 on OS X, btw.

However, one more thing discovered (mentioned as a nice effect of the environment reload)… Logging is not affected - it is immune to any change of any kind to the config settings. That is obviously because the log object is created when the env is first initialised, and never re-checked for as long as the env lives.

Hmmm… We are targeting individual parts as we notice them while leaving the others as-is, and I can't help feeling that there should be a more general solution. It is bound to bite users/developers in the tail as various attributes and objects still cling on to the `env` object, while the circumstances for their creation and existence might have changed radically.

As detailed above, my approach was very simple: if settings for the environment have changed, just ditch it from further use. Call `shutdown()` (which my patch misses) and remove it from the environment cache so that a new, fresh environment is loaded on next use. If you are running CGI-style then it does not matter anyway - those should always be `run_once`, and will create a new env object each time.

One thing to keep in mind with this environment refresh is that it is really a very rare occurrence for a typical project - it could be months between the need for a change, so the 'cost' of doing a new creation is really very tiny.

If you are intent on letting the env object live but trying to refresh the important parts, then perhaps a more general way of doing it is to call `__init__()` again? That should set up new config, components and logging - and whatever else is added later. Well, I suppose it is not much different from having an explicit `reset()` that picks just the things we are interested in.
PS! Note that my original patch is for 0.10 - I have not looked at how things might have changed for 0.11 or why it does not work well for you.
comment:17 by , 17 years ago
Replying to osimons <simon-code@bvnetwork.no>:
However, one more thing discovered (mentioned as a nice effect of the environment reload)… Logging is not affected - it is immune to any change of any kind to the config settings. That is obviously because the log object is created when the env is first initialised, and never re-checked for as long as the env lives.
Yeah, I'm aware of the logging not getting reinitialized. There's a problem here: instead of the old logging handler getting replaced, we'd simply be adding another handler. So for N resets, you'll see every log message N times :P
There's probably some way around that, but AFAICT no clean one - i.e. we'd have to mess with the internals of the `logging` module. If anyone has a clean approach, I'd be interested in seeing it, of course.
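The duplication is easy to reproduce with the stdlib `logging` module alone - each reset that calls `addHandler()` without removing the previous handler leaves one more handler attached, so every record is emitted once per handler:

```python
import io
import logging

# Demonstration of the problem described above: re-running log setup
# with addHandler() and never removing the old handler leaves one
# handler per "reset", so each record is emitted N times.
buf = io.StringIO()
logger = logging.getLogger('dup-demo')
logger.setLevel(logging.INFO)
logger.propagate = False

for _ in range(3):                       # three environment "resets"
    logger.addHandler(logging.StreamHandler(buf))

logger.info('hello')
# buf now contains the message three times, once per attached handler
```

Calling `logger.removeHandler(old_handler)` before attaching the new one is what keeps the count at exactly one - but that requires keeping a reference to the old handler somewhere.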
Hmmm… We are targeting individual parts as we notice them while leaving the others as-is, and I can't help feeling that there should be a more general solution. It is bound to bite users/developers in the tail as various attributes and objects still cling on to the `env` object, while the circumstances for their creation and existence might have changed radically.

As detailed above, my approach was very simple: if settings for the environment have changed, just ditch it from further use. Call `shutdown()` (which my patch misses) and remove it from the environment cache so that a new, fresh environment is loaded on next use. If you are running CGI-style then it does not matter anyway - those should always be `run_once`, and will create a new env object each time.
Yeah, I tried that first, but it did not work. I'm not really sure why it didn't; in theory it should have. Alas, the `reset()` approach worked immediately, so I went with that :) I'll try to look into the problems I've been seeing more closely.
by , 17 years ago
Attachment: | env_reset.3.osimons.diff added |
---|
Another attempt at solving environment reload - a combination of earlier patches that also solves logging.
comment:18 by , 17 years ago
I've uploaded a new patch to solve this:
- It uses my original idea of popping the environment out of the cache to load a fresh one. All `refresh()` code is removed.
- I've used cmlenz's patch for `parse_if_needed()` and the new `touch()` + the web_ui changes. The only tiny change is that `check_only=False` is removed - it is a leftover from my original patch that is not needed (it never gets called that way).
- It also solves logging, as I've added a tiny change to the log setup where we also return the handler, so that when shutting down the env we can call `log.removeHandler()`. I suppose that particular call could actually be moved to `env.shutdown()` as the last call there (record until the very end…). Anyway, that call makes all the difference - no more than one copy in the log for a given request, and it works with enabling, disabling, switching log types and so on.
- I had to remove the test case testing component enable, reset and checking, following the removal of `reset()`. Some new test case should likely be provided.
This patch is working like a charm as far as I can tell. I do hope others can verify that…
comment:19 by , 17 years ago
Okay, this is looking good. Thanks a lot for that patch!
One problem is the API change to `log_factory`, which I'd rather avoid. This change also resulted in the breakage of a ton of unit tests, which is a hint ;)
I'm attaching a patch that simply stores a reference to the Trac handler as an attribute on the logger object. Everything's working really well at this point for me.
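That approach might look roughly like this (a sketch, not the committed code - the `_trac_handler` attribute name is an assumption). The handler is stashed on the logger object itself, so a later reset can remove exactly that handler without changing any API:

```python
import logging

def setup_log(logname='trac-demo'):
    """Sketch of the attribute-based approach: keep the Trac handler
    reachable via the logger object, so a reset removes the old one
    before attaching a fresh handler. Names are illustrative."""
    logger = logging.getLogger(logname)
    old = getattr(logger, '_trac_handler', None)
    if old is not None:
        logger.removeHandler(old)       # drop the previous handler
    handler = logging.StreamHandler()
    logger.addHandler(handler)
    logger._trac_handler = handler      # stash for the next reset
    return logger
```

Because the reference travels with the logger, no function signature has to change - which is presumably why this avoided the test breakage the `log_factory` API change caused.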
comment:20 by , 17 years ago
Resolution: | → fixed |
---|---|
Status: | assigned → closed |
Implemented in [6006]. Closing.
comment:21 by , 17 years ago
One afterthought: what happens to other requests currently being processed that are already using the same `env` for which `env.shutdown()` will be called in the new code?

It seems that for those requests, the worst that can happen is that a new database pool will be recreated and a new database connection retrieved the next time `env.get_db_cnx()` gets called. This should be harmless, as if we're in a write transaction we shouldn't normally be calling `env.get_db_cnx()` again anyway. But I thought it was worth drawing attention to this aspect.
Checking a timestamp is fast (to detect a file change), but how do we use the new version of the file without restarting the server?