#717 closed enhancement (fixed)
[PATCH] When downloading or viewing a file with svn:keywords property set the keywords are not expanded
Reported by: | Owned by: | Jun Omae | |
---|---|---|---|
Priority: | normal | Milestone: | 1.0.2 |
Component: | version control/browser | Version: | 0.7.1 |
Severity: | normal | Keywords: | svn:keywords patch review |
Cc: | alexs@…, joern.zaefferer@…, khali@…, roos@…, jsiirola@…, langenbach@… | Branch: | |
Release Notes: | Apply svn:keywords and svn:eol-style substitutions to content of files retrieved from Subversion repositories. |
API Changes: | |
Internal Changes: | |
Description (last modified by )
Not sure if this is a bug against Trac or against the Python bindings, but when a file has, say, svn:keywords Id set as a property, and a user tries to download or view the head revision, the keyword expansion doesn't happen.
It would be nice if the keyword expansion happened, so that the user gets a file with a version number in it.
Attachments (1)
Change History (62)
comment:1 by , 20 years ago
comment:2 by , 20 years ago
Milestone: | 0.7.1 → 1.0 |
---|
comment:3 by , 20 years ago
Milestone: | 1.0 → Someday |
---|
comment:4 by , 20 years ago
Resolution: | → wontfix |
---|---|
Status: | new → closed |
Almost certainly a subversion/pysvn issue.
comment:5 by , 19 years ago
Milestone: | → 1.0 |
---|---|
Resolution: | wontfix |
Status: | closed → reopened |
Type: | defect → enhancement |
It's not a Subversion issue: fs.file_contents retrieves the raw content of the file in the repository. It's up to the caller to transform it, if needed.
And then, why not do the keyword expansion, as requested here?
comment:7 by , 19 years ago
Description: | modified (diff) |
---|---|
Keywords: | svn:keywords added; svn:keyword removed |
Summary: | when downloading of viewing a file with svn:keyword properties set the keyword is not expanded → When downloading or viewing a file with svn:keywords property set the keywords are not expanded |
comment:8 by , 18 years ago
Cc: | added |
---|
I'd like to second this request.
jQuery provides lots of its plugins via the Trac svn browser, and it would help a lot if the files downloaded via the browser had the keywords expanded.
An example: I'm linking to this file in my plugin release page: http://dev.jquery.com/browser/trunk/plugins/treeview/jquery.treeview.js?format=txt
Any user downloading that file doesn't get the expanded property, so they can't tell me which revision they have when reporting an issue.
I'm willing to explore this problem in Trac's code, though some hints on where to look would be a great help, as I'm familiar with neither Trac nor Python.
Thanks!
comment:9 by , 18 years ago
Description: | modified (diff) |
---|
Note to self: it could be implemented using a custom content renderer for a file node, which would take over the Plain Text link.
To Jörn: there is a workaround. If the Trac Zip downloads were enabled at dev.jquery.com (see TracIni#browser-section), you could tell people to use the Zip Archive link at the bottom of the http://dev.jquery.com/browser/trunk/plugins/treeview/ page, which produces a Zip file with an explicit name (today, in this case, it would be trunk_plugins_treeview-r1392.zip).
comment:10 by , 16 years ago
Cc: | added |
---|
Same problem here. In my project there's a helper script which evolves frequently; I often have to direct users to the latest version, which they download from Trac's web interface to the source tree. Unfortunately the $Revision$ keyword is not substituted, so when they later report the output of the script, I don't know which exact version of the script they are running. If Trac substituted the keywords, it would help me a lot.
comment:11 by , 15 years ago
Any news on this issue? It's quite annoying that keywords are not expanded: you get no clue which version others have downloaded from Trac.
comment:12 by , 15 years ago
Cc: | added |
---|
comment:13 by , 15 years ago
I would just add that if anyone needs this really badly it could most likely be implemented as a plugin.
by , 14 years ago
Attachment: | render-keywords-r10541.patch added |
---|
Patch to expand the svn:keywords when browsing/exporting the repository
comment:16 by , 14 years ago
Cc: | added |
---|---|
Keywords: | patch added |
Summary: | When downloading or viewing a file with svn:keywords property set the keywords are not expanded → [PATCH] When downloading or viewing a file with svn:keywords property set the keywords are not expanded |
I have hit this issue as well (under an almost identical scenario to comment:10). I spent a long time thinking about how to implement this as a plugin, and in the end I feel it should be part of the core svn_fs for the following reasons:
- this issue is specific to SVN repositories: other repository formats have different keyword substitution schemes.
- to properly process the keywords, the processor needs somewhat intimate access to the file node (e.g. check for the `svn:keywords` property, parse the keywords, look up the information from the repository to correctly fill in the data, parse the file contents).
- (at least to me) there does not appear to be an extension point that can hook into the repository system between the actual repository and all renderers.

The attached patch against source:/trunk@10541 implements a lightweight wrapper around the SVN `core.Stream` class that overrides `read()` to find and replace the `svn:keywords` as they are read from the node. The patch also includes unit tests (added to source:/trunk/trac/versioncontrol/tests/svn_fs.py) and a slightly updated svnrepos.dump to provide a reference file with keywords to compare against.
This solution is attractive (at least to me) as the keywords are substituted for ALL consumers of the node (ZIP, Original Format, and general browsing). However, it does have one slightly unexpected side effect: as the keywords are always substituted, diffs rendered by Trac (both in revision history views and changeset views) will show differences for the keywords, even though the underlying file within SVN does not technically have those changes (although working copies would).
The patch also works for the 0.11 and 0.12 lines, with a minor change to track the difference in the `_to_svn()` API.
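The core idea, a stream wrapper whose `read()` substitutes keywords on the fly, can be sketched in a few lines of Python (an illustrative toy, not the actual patch code; `KeywordStream` and its keyword map are invented names):

```python
import io
import re

class KeywordStream:
    """Illustrative read-through wrapper: expands $Keyword$ markers in the
    data returned by an underlying stream's read(). Toy code only; it
    assumes a whole keyword marker always fits inside one read() chunk."""

    def __init__(self, stream, keywords):
        self.stream = stream      # any object with a read(size) method
        self.keywords = keywords  # e.g. {'Rev': '26', 'Author': 'jomae'}
        self.pattern = re.compile(
            r'\$(%s)\$' % '|'.join(map(re.escape, keywords)))

    def read(self, size=-1):
        data = self.stream.read(size)
        # Replace $Rev$ with $Rev: 26 $, etc., as svn does on checkout
        return self.pattern.sub(
            lambda m: '$%s: %s $' % (m.group(1), self.keywords[m.group(1)]),
            data)

raw = io.StringIO('revision = "$Rev$"\n')
expanded = KeywordStream(raw, {'Rev': '26'}).read()
# expanded == 'revision = "$Rev: 26 $"\n'
```

A real implementation also has to handle a keyword marker that straddles two `read()` chunks, which is one reason the patch wraps the SVN stream itself rather than post-processing fixed-size chunks.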
comment:17 by , 14 years ago
Keywords: | review added |
---|
comment:18 by , 14 years ago
I applied the patch, render-keywords-r10541.patch, and got a few errors. I manually added 2 missing imports from the .rej files.
After installing, I got the following error message:
AttributeError: 'SubversionRepository' object has no attribute 'get_path_url'
How do I fix it?
follow-up: 20 comment:19 by , 14 years ago
Off-topic: I just spotted the last notification; how can it get assigned comment:5? Is it a bug 6 months ago that started a new series (comment:2 a few sections up)?
follow-up: 21 comment:20 by , 14 years ago
Replying to osimons:
> Is it a bug 6 months ago that started a new series (comment:2 a few sections up)?

Yes, that seems to be the case. We did have a few issues with `cnum` assignment at some point, so this could have slipped in at the time. I'll fix the numbers in the database.
follow-up: 22 comment:21 by , 14 years ago
Replying to rblank:
> Replying to osimons:
>> Is it a bug 6 months ago that started a new series (comment:2 a few sections up)?
> Yes, that seems to be the case. We did have a few issues with `cnum` assignment at some point, so this could have slipped in at the time.

I'm not sure the problem was fixed; I think we don't even have a ticket for this issue yet. The problem happened when I deleted the milestone triaging. For all tickets in which the immediately previous change was the deletion of another milestone (like here in comment:14), the comment number became "2". In all other cases, the numbering was OK (e.g. ticket:8653#comment:7).
I already described that effect in another OT comment which I can't find anymore ;-) So we should create a ticket out of this before we forget again…
comment:22 by , 14 years ago
Replying to cboos:
> I already described that effect in another OT comment which I can't find anymore ;-) So we should create a ticket out of this before we forget again…

I didn't remember that this was related to deleting milestones. This should be easy to reproduce, then.
comment:23 by , 13 years ago
Cc: | added |
---|
comment:24 by , 13 years ago
Good afternoon all,
I've applied the patch to Trac 0.12.2 and it does its job as described, without any errors. I would really appreciate seeing this patch merged into the next release of Trac.
Kind regards
follow-up: 28 comment:27 by , 12 years ago
I think it would be simple to use `svn_client_cat2`, which expands `svn:keywords` in the file contents. See changeset:f5d21dc9/jomae.git.
follow-up: 29 comment:28 by , 12 years ago
Replying to jomae:
> I think it would be simple to use `svn_client_cat2`, which expands `svn:keywords` in the file contents. See changeset:f5d21dc9/jomae.git.

I'm a bit concerned about performance issues here:
- this uses the `client` API; it's probably slow to get that initialized
- with that, we would read the file in one go, as opposed to writing chunks obtained from a stream with fs.file_contents (in `_render_file` raw mode; of course if we pygmentize it, it's mostly the same in the end)

Did you do some timings and memory usage measurements with big files?
The alternative I always had in mind (but never got around to implementing, needless to say) was to expand the first CHUNK_SIZE of data ourselves, `svn:keywords` being pretty stable. And what about `$URL$` with your solution? We may have that info (from the repository URL, i.e. `Repository.get_path_url`), but I think the SVN API has no way to know it; `self.repos.ra_url_utf8` is likely not what one would expect here (it's a `file:///` URL).
follow-up: 30 comment:29 by , 12 years ago
Replying to cboos:
> […] timings and memory usage measurements with big files?

Or rather when creating a .zip from the repository, as this will involve many such context creations and full content retrieval and processing.
So I did the testing myself on an svn mirror of the Trac repository, creating a .zip from trunk:
- it takes 3s with the patch, 1.6s without
- the memory usage is roughly the same (checked on Windows with psutil, `psutil.Process(os.getpid()).get_ext_memory_info()`)
- I also noticed that with the patch, the `svn:eol-style native` files are exported in DOS format (on Windows); before, they were exported unix style (only the `svn:eol-style CRLF` ones were exported in DOS format)
- the substitution works as expected (i.e. fine for `$Rev: 10617 $`, but `$URL: file:///C:/Workspace/src/svn/mirrors/teo-trac-mirror/trunk/sample-plugins/Timestamp.py $`)

I think it's a good trade-off, as the performance penalty is not as bad as I imagined, so I guess I'm fine with the change even if the `$URL$` substitution is not optimal…
comment:30 by , 12 years ago
Replying to cboos:
> I also noticed that with the patch, the `svn:eol-style native` files are exported in DOS format (on Windows); before, they were exported unix style (only the `svn:eol-style CRLF` ones were exported in DOS format)

That's a bit of a bummer, but OK.

> the substitution works as expected (i.e. fine for `$Rev: 10617 $`, but `$URL: file:///C:/Workspace/src/svn/mirrors/teo-trac-mirror/trunk/sample-plugins/Timestamp.py $`)

So it exposes path information about the system running Trac? That's more of a concern.
follow-up: 32 comment:31 by , 12 years ago
OK. I understand: the performance issue, the large memory usage, and the leaking of the system's filesystem path in my proposal using `svn_client_cat2`.
comment:32 by , 12 years ago
Replying to jomae:
> OK. I understand: the performance issue, the large memory usage, and the leaking of the system's filesystem path in my proposal using `svn_client_cat2`.

I'm on my way to test repos:jomae.git:ticket717-subst/1.0.2dev. Looks great!
But I've got this:
AttributeError: 'int' object has no attribute 'encode' key 'LastChangedRevision' value 11493
Fix on the way, as well as tests.
Also, `LastChangedRev` and `Rev` and `Revision` are all synonyms, and setting one implies any of the other aliases are also expanded (look at a checkout of our trunk/sample-plugins/HelloWorld.py for example).
follow-up: 34 comment:33 by , 12 years ago
See repos:cboos.git:ticket717-subst/1.0.2dev. Actually the error came from `node.created_rev` only, as I figured out later, but as I've removed that anyway it doesn't matter. `LastChangedRevision` should really be just an alias for `Revision`. If you want to reintroduce support for `node.created_rev`, we need to introduce a new (non-standard) keyword, probably `CreatedRevision`.
follow-up: 37 comment:34 by , 12 years ago
Replying to cboos:
Thanks for the unit tests and the refactoring ;-)
> `LastChangedRevision` should really be just an alias for `Revision`. If you want to reintroduce support for `node.created_rev`, we need to introduce a new (non-standard) keyword, probably `CreatedRevision`.

Ah, I just misunderstood `LastChangedRevision` and `Revision`. The keyword substitutions are good as they are; I will not introduce any non-standard keywords.
follow-up: 36 comment:35 by , 12 years ago
One last thing: the download with `tracd --http10` doesn't work as expected, as the content itself contains the chunked-encoded data (i.e. contains the numbers).
Request:
GET /cblaptop-trac/browser/teo-mirror/trunk/sample-plugins/HelloWorld.py?format=txt HTTP/1.1
Host: localhost:9001
Connection: keep-alive
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.47 Safari/537.36
Referer: http://localhost:9001/cblaptop-trac/browser/teo-mirror/trunk/sample-plugins/HelloWorld.py
Accept-Encoding: gzip,deflate,sdch
Accept-Language: en-US,en;q=0.8
Cookie: trac_form_token=...
Response:
HTTP/1.0 200 OK
Server: tracd/1.0.2dev-r0 Python/2.7.3
Date: Sun, 28 Apr 2013 10:02:13 GMT
Content-Type: text/plain
Transfer-Encoding: chunked
Last-Modified: Sun, 13 Jan 2013 15:51:52 GMT
Pragma: no-cache
Cache-Control: no-cache
Expires: Fri, 01 Jan 1999 00:00:00 GMT
Content-Disposition: attachment

884
"""Example macro."""
revision = "$Rev: 11784 $"
url = "$URL: /trunk/sample-plugins/HelloWorld.py $"
... (the rest of the file) ...
0
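For context, chunked transfer encoding frames each chunk as a hexadecimal size line, the data, and a CRLF, terminated by a zero-size chunk; an HTTP/1.0 client that doesn't decode it sees the framing (the 884 and the trailing 0 above) mixed into the body. A minimal sketch of the framing, for illustration only:

```python
def encode_chunked(chunks):
    """Frame an iterable of byte chunks as HTTP chunked transfer encoding:
    hex size, CRLF, data, CRLF, then a terminating zero-size chunk.
    Illustrative sketch, not Trac's actual implementation."""
    out = b''
    for chunk in chunks:
        if chunk:  # zero-size chunks would terminate the body early
            out += b'%x\r\n' % len(chunk) + chunk + b'\r\n'
    out += b'0\r\n\r\n'
    return out

encoded = encode_chunked([b'"""Example macro."""\n'])
# The body starts with the hex length "15" (21 bytes) and ends with "0"
```

An HTTP/1.1 client strips the size lines back out; an HTTP/1.0 client, which doesn't know about chunked encoding, saves them into the file, which is exactly the breakage seen above.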
comment:36 by , 12 years ago
What do you think about this:
- [95887a40/cboos.git] #717: don't use chunked encoding when server-side protocol is limited to HTTP/1.0 (really tested this time!)

That only covers `tracd --http10`, of course, but if for some reason someone is using another server limited to HTTP/1.0, this gives the appropriate hook for setting this limitation in a wrapper script.
follow-up: 38 comment:37 by , 12 years ago
Replying to jomae:
> Replying to cboos:
> Thanks for the unit tests and the refactoring ;-)
>> `LastChangedRevision` should really be just an alias for `Revision`. If you want to reintroduce support for `node.created_rev`, we need to introduce a new (non-standard) keyword, probably `CreatedRevision`.
> Ah, I just misunderstood `LastChangedRevision` and `Revision`. The keyword substitutions are good as they are; I will not introduce any non-standard keywords.

OK, we both misunderstood, actually, but different parts ;-)
My error was that `node.created_rev` is indeed what should have been used, as it's the "last changed revision" of the file and not the revision at which the file was created.
So hopefully the last version is correct now (it also comes with a new test case): [340c39ff/cboos.git]
comment:38 by , 12 years ago
Replying to cboos:
> So hopefully the last version is correct now (it also comes with a new test case): [340c39ff/cboos.git]

My bad, I just noticed you already fixed that in [778c257f/jomae.git].
comment:39 by , 12 years ago
Milestone: | unscheduled → 1.0.2 |
---|---|
Owner: | changed from | to
Status: | reopened → assigned |
comment:40 by , 12 years ago
Owner: | changed from | to
---|---|
Release Notes: | modified (diff) |
Severity: | minor → normal |
follow-up: 42 comment:41 by , 12 years ago
Merged your fixes for the chunked encoding, Thanks! I verified on tracd
, tracd --http10
and mod_wsgi
. Works fine.
Another thing: the changes between the substituted contents is rendered in changeset view. However, svn
command shows the changes between the original contents.
Output of svn diff -c26 file:///var/svn/svnrepos.dump
:
- tête/Résumé.txt
  10 10  # $URL:: $ same, but truncated
  11 11  # $Header:: $ combination with URL
  12 12
     13  Overlapped keywords:
     14  # $Xxx$Rev$Xxx$
     15  # $Rev$Xxx$Rev$
     16  # $Rev$Rev$Rev$
     17
  13 18  En résumé ... ça marche.
Output of a unified diff of the same changeset:
- tête/Résumé.txt
   1  1  # Simple test for svn:keywords property substitution (#717)
   2     # $Rev: 25 $: Revision of last commit
   3     # $Author: cboos $: Author of last commit
   4     # $Date: 2013-04-27 14:43:15 +0000 (Sat, 27 Apr 2013) $: Date of last commit (now really substituted)
   5     # $Id: Résumé.txt 25 2013-04-27 14:43:15Z cboos $: Combination
      2  # $Rev: 26 $: Revision of last commit
      3  # $Author: jomae $: Author of last commit
      4  # $Date: 2013-04-28 05:36:06 +0000 (Sun, 28 Apr 2013) $: Date of last commit (now really substituted)
      5  # $Id: Résumé.txt 26 2013-04-28 05:36:06Z jomae $: Combination
   6  6
   7  7  Now with fixed width fields:
   …  …
   9  9  # $HeadURL:: //tête/Résumé.txt $ same
  10 10  # $URL:: //tête/Résum#$ same, but truncated
  11     # $Header:: //tête/Résumé.txt 25 2013-04-27 14:43:#$ combination with URL
     11  # $Header:: //tête/Résumé.txt 26 2013-04-28 05:36:#$ combination with URL
     12
     13  Overlapped keywords:
     14  # $Xxx$Rev: 26 $Xxx$
     15  # $Rev: 26 $Xxx$Rev: 26 $
     16  # $Rev: 26 $Rev$Rev: 26 $
  12 17
  13 18  En résumé ... ça marche.
follow-up: 43 comment:42 by , 12 years ago
Replying to jomae:
> Another thing: the changes between the substituted contents are rendered in the changeset view.

We should introduce some new `Repository` method to retrieve the diff(s) directly from the underlying system, avoiding going through our own diff of the two (sets of) files. In the case of svn, that would let us work around the above issue, and if implemented for other DVCSs it would provide an opportunity for huge performance savings (#10267).
comment:43 by , 12 years ago
… but that of course implies changing a lot of things about how the diffs are processed in order to be rendered as tabular diffs (using the `PatchRenderer`, or even ideally done client-side in JavaScript).
In the meantime, and to conclude with this ticket, I'd suggest simply adding a new method, `get_processed_content()`.
- [6eb9a11a/cboos.git] #717: fix a few eol tests on Windows
- [d68147fd/cboos.git] #717: introduce `get_processed_content()` with processing options.
comment:44 by , 12 years ago
Thanks for your suggestions and all the help! I've merged your changes along with some fixes of mine. I'll push it later.
- [12e067e39/jomae.git] #717: add unit tests for the failures fixed in [d68147fd/cboos.git] when `svn:eol-style` is set but `svn:keywords` is not
- [1e7d5387/jomae.git] #717: `Node.get_content()` doesn't substitute, with or without `svn:eol-style`; it retrieves the raw content
- [798f16a7/jomae.git] #717: use `to_unicode` for the `svn:author` property, which may contain invalid UTF-8 bytes
- [0806d333/jomae.git] #717: the option's documentation is Trac wiki markup
comment:45 by , 12 years ago
Release Notes: | modified (diff) |
---|---|
Resolution: | → fixed |
Status: | assigned → closed |
Committed in [11797].
comment:46 by , 12 years ago
Resolution: | fixed |
---|---|
Status: | closed → reopened |
I'm really sorry to have to reopen this one, but it looks like the chunked transfer we do "manually" doesn't play well with Apache2 / mod_wsgi (3.3)… I've upgraded our 1.0 demo instance to r11804.
Have a look at demo-1.0:source:trunk/sample-plugins/revision_links.py, try the Plain Text format.
Chrome says:
Error 355 (net::ERR_INCOMPLETE_CHUNKED_ENCODING): The server unexpectedly closed the connection.
Similar trouble with other browsers (e.g. FF says the download of 841 bytes is finished after what seems to be a timeout, but the file is empty).
comment:47 by , 12 years ago
OK, it's not really mod_wsgi that interferes, but rather the combination with gzip encoding.
A GET without the `Accept-Encoding: gzip` header succeeds, whereas after a request with that header, we get:
HTTP/1.1 200 Ok
Date: Wed, 15 May 2013 18:26:29 GMT
Server: Apache/2.2.16 (Debian)
Pragma: no-cache
Cache-Control: no-cache
Expires: Fri, 01 Jan 1999 00:00:00 GMT
Content-Disposition: attachment
Transfer-Encoding: chunked
Last-Modified: Sun, 13 Jan 2013 15:51:52 GMT
Vary: Accept-Encoding
Content-Encoding: gzip
Content-Length: 841
Content-Type: text/plain

and no content (timeout).
follow-up: 49 comment:48 by , 12 years ago
I think we should deal with the absence of the `Content-Length` header at a lower level, in `trac.web.wsgi` … and hope that the other web front-ends will also deal correctly with that. I think there are actually good chances of that, as I made the Content-Length mandatory just for the sake of TracStandalone `--http11` (in r8215). Before that, it worked quite well for the other backends without the content length.
So the following changes are somewhat experimental; I have yet to test them with Apache / mod_wsgi (and the gzip Content-Encoding):
- [614e4a70/cboos.git] #717: deal with lack of Content-Length header at the wsgi level.

If the above is not enough, then maybe the following will do better:
- [5a6a8770/cboos.git] #717: experiment with returning an iterator (as advised in PEP:0333)

If this approach proves to be robust, then we might even consider reverting parts of r8215 and, in some cases, do away with the need to accumulate the whole response in memory. This could help reduce the memory usage a lot in some situations (big CSV ticket reports, anyone?).
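The iterator idea can be illustrated with a toy WSGI application (a sketch of the PEP 333 style, not Trac's actual code): the application returns an iterator of body chunks and sets no Content-Length, leaving the transfer framing, chunked encoding or connection close, to the front-end server:

```python
def app(environ, start_response):
    """Toy WSGI app: streams the response as an iterator instead of one
    accumulated body, so no Content-Length header is set and the server
    decides how to frame the transfer."""
    start_response('200 OK', [('Content-Type', 'text/plain')])

    def chunks():
        # In Trac this would be content.read(CHUNK_SIZE) in a loop
        for part in (b'first ', b'second ', b'third'):
            yield part
    return chunks()

# Minimal driver, standing in for the WSGI server
captured = []
body = b''.join(app({}, lambda status, headers: captured.append(status)))
# body == b'first second third'
```

Since the whole body is never accumulated, memory usage stays bounded by the chunk size, which is the saving hinted at for big CSV reports and Zip archives.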
follow-up: 50 comment:49 by , 12 years ago
Hmm, reproduced on Apache 2.2.15 with mod_wsgi and mod_deflate. Thanks for catching this.
It seems that your approach is robust. I also tried your branches in the following environment:
- python-2.6.6-36.el6.x86_64
- subversion-1.6.11-9.el6_4.x86_64
- httpd-2.2.15-26.el6.centos.x86_64
- mod_wsgi-3.2-3.el6.x86_64
- Firefox 21.0 on Windows XP

| SetOutputFilter DEFLATE | No SetOutputFilter DEFLATE |
---|---|---|
r11804 | chunked encoding is broken | works (chunked and no gzip) |
[614e4a70/cboos.git] | works (no chunked, gzip with Content-Length) | works (chunked, no gzip and no Content-Length) |
[5a6a8770/cboos.git] | works (no chunked, gzip with Content-Length) | works (chunked, no gzip and no Content-Length) |

> This could help reduce the memory usage a lot in some situations (big CSV ticket reports, anyone?)

Zip archives from the repository browser and the attachment page, too.
follow-up: 51 comment:50 by , 12 years ago
Replying to jomae:
Thanks for testing! I've now upgraded demo-1.0 as well, and it also works fine there.
> [614e4a70/cboos.git] […] works (chunked, no gzip and no Content-Length)

Great, so Apache is doing the right thing in that case. Btw, it's httpd and not mod_wsgi which must add the `Transfer-Encoding: chunked` response header and do the chunked encoding, as the only things about "chunked" in mod_wsgi.c are about chunked request support (`WSGIChunkedRequest`, see 3.0 Features, point 7, and this has nothing to do with our stuff).
I'm a bit unsure whether every other server out there would also do the right thing in this situation, or whether we will run into trouble. I guess we'll have to see and adapt if that happens.
Any preference for the first or second approach? The second looks a bit more hackish but is closer to what PEP:0333 mandates.
follow-up: 52 comment:51 by , 12 years ago
Replying to cboos:
> Great, so Apache is doing the right thing in that case. Btw, it's httpd and not mod_wsgi which must add the `Transfer-Encoding: chunked` response header and do the chunked encoding, as the only things about "chunked" in mod_wsgi.c are about chunked request support (`WSGIChunkedRequest`, see 3.0 Features, point 7, and this has nothing to do with our stuff).

In Apache, if both server and client use HTTP/1.1, chunked encoding is applied automatically by Apache; see modules/http/http_protocol.c.

> I'm a bit unsure whether every other server out there would also do the right thing in this situation, or whether we will run into trouble. I guess we'll have to see and adapt if that happens.

I've run more tests of [5a6a8770/cboos.git] and it works fine on the following servers:
- Apache 2.2.15 with mod_fcgid 2.3.7
- Lighttpd 1.4.31 with mod_fastcgi
- Nginx 1.0.15 with fastcgi

> Any preference for the first or second approach? The second looks a bit more hackish but is closer to what PEP:0333 mandates.

The second approach is fine with me. In addition, it would be nice to add a `close` method to the iterator to release the resources, e.g. `FileContentStream.pool`.
Also, we should add a `close()` method to the content returned by `Node.get_content()` in order to release the resources. I don't think GC should release them.
follow-up: 53 comment:52 by , 12 years ago
Replying to jomae:
> I've run more tests of [5a6a8770/cboos.git] and it works fine on the following servers.

Perfect! We're nearly ready to go, then.

>> Any preference for the first or second approach? The second looks a bit more hackish but is closer to what PEP:0333 mandates.
> The second approach is fine with me. In addition, it would be nice to add a `close` method to the iterator to release the resources, e.g. `FileContentStream.pool`.
> Also, we should add a `close()` method to the content returned by `Node.get_content()` in order to release the resources. I don't think GC should release them.

Not completely sure what you had in mind, something like this?

- trac/versioncontrol/web_ui/browser.py
  diff --git a/trac/versioncontrol/web_ui/browser.py b/trac/versioncontrol/web_ui/browser.py
  index 5bd9488..e5b5ac4 100644
  a b class BrowserModule(Component):
   682 682              while c:
   683 683                  yield c
   684 684                  c = content.read(CHUNK_SIZE)
       685              content.close()
   685 686          raise RequestDone(chunks())
   686 687      else:
   687 688          # The changeset corresponding to the last change on `node`

- tracopt/versioncontrol/svn/svn_fs.py
  diff --git a/tracopt/versioncontrol/svn/svn_fs.py b/tracopt/versioncontrol/svn/svn_fs.py
  index caeedab..1b7945f 100644
  a b class FileContentStream(object):
  1163 1163          self.close()
  1164 1164
  1165 1165      def close(self):
       1166          if self.pool:
       1167              self.pool.destroy()
       1168              self.pool = None
  1166 1169          self.stream = None
  1167 1170          self.fs_ptr = None

For PyPy, I suppose? Any news on that front? ;-)
comment:53 by , 11 years ago
Replying to cboos:
> Not completely sure what you had in mind, something like this?

I had misunderstood. I thought it would be good to add `__iter__`, `next` and `close` methods to the file-like object from `node.get_content()`; then the object could be passed directly to `RequestDone`, and the WSGI framework would call its `close` method. But that would be an API change.

> For PyPy, I suppose? Any news on that front? ;-)

Yes and no. Also, no news ;-)
However, I want to remove `gc.collect()` from `trac/web/main.py`, or cut it down in the future. The method is currently called on every request and makes Trac slow…
Anyway, I've fixed two minor issues from your branch, log:jomae.git:ticket717/rework-chunked, and re-checked with Apache, lighttpd and nginx. I think we've got everything done for this issue.
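The file-like object described above can be sketched like this (a hypothetical illustration; `ContentStream` is an invented name, not the actual API change): iterable in fixed-size chunks and explicitly closable, so a WSGI gateway can release backing resources such as an svn memory pool deterministically instead of relying on GC:

```python
class ContentStream:
    """Hypothetical sketch of an iterable, closable content object that
    could be handed directly to the WSGI layer, which calls close() when
    the response is done."""

    CHUNK_SIZE = 4  # tiny, for demonstration only

    def __init__(self, data):
        self.data = data
        self.pos = 0
        self.closed = False

    def __iter__(self):
        return self

    def __next__(self):
        chunk = self.data[self.pos:self.pos + self.CHUNK_SIZE]
        if not chunk:
            raise StopIteration
        self.pos += len(chunk)
        return chunk

    def close(self):
        # Release backing resources here (e.g. destroy an APR pool)
        self.closed = True

stream = ContentStream(b'0123456789')
parts = list(stream)   # [b'0123', b'4567', b'89']
stream.close()
```

PEP 333 requires the server to call `close()` on the returned iterable if it has one, which is exactly the deterministic cleanup hook being discussed.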
follow-up: 56 comment:55 by , 11 years ago
I've noticed the same. The notifications seem to have stopped working around the time of the t.e.o maintenance.
comment:56 by , 11 years ago
Replying to Ryan J Ollos <ryan.j.ollos@…>:
> I've noticed the same. The notifications seem to have stopped working around the time of the t.e.o maintenance.

The Trac log shows that the mails were transmitted to the MTA. The MTA, however, isn't happy, and prevents relaying even from localhost. Debugging…
comment:57 by , 11 years ago
The issue should be fixed now. It looks like Trac now connects locally through IPv6 on [::1], which was missing from mynetworks.
comment:58 by , 11 years ago
Pushed in [11818]. Sorry about the "chunked encoding" being broken for so long. Thanks for all the support, Christian!
comment:59 by , 11 years ago
Resolution: | → fixed |
---|---|
Status: | reopened → closed |
comment:60 by , 11 years ago
I would like to thank everyone who worked on this bug, this is very appreciated.
comment:61 by , 10 years ago
See follow-up ticket #11802, which attempts to generalize the use of the chunked encoding transfer mode.