Opened 19 years ago
Closed 17 years ago
#2118 closed defect (fixed)
Proper use of HTTP status codes
Reported by: | | Owned by: | Jonas Borgström
---|---|---|---
Priority: | normal | Milestone: | 0.11
Component: | general | Version: | 0.8.4
Severity: | normal | Keywords: | 404
Cc: | | Branch: |
Release Notes: | | |
API Changes: | | |
Internal Changes: | | |
Description
This is mainly a robot issue. If an automatic link created by the use of a wiki word goes to a wiki page signaling that the page does not exist, the response header is 200 - OK rather than 404 - Not Found. As a result, this empty wiki page gets indexed by Google and other robots. This can happen in even weirder situations, where the link is created outside of the wiki, from a Subversion log message for example. If you need a real-world sample, search google.com for "YearDay site:www.softec.st"!
Denis Gervalle http://www.softec.st
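For context, the behaviour reported here is easy to observe from the outside by requesting a missing page and inspecting the status code. A minimal sketch in Python; the URL is a hypothetical placeholder, not one from the ticket:

```python
# Sketch (not part of the original report): fetch a missing wiki page
# and print the HTTP status code the server actually returns.
from urllib.request import urlopen
from urllib.error import HTTPError

url = "http://example.org/wiki/SomeMissingPage"  # hypothetical URL
try:
    with urlopen(url) as resp:
        # The ticket reports 200 here, which lets robots index the page.
        print(resp.getcode())
except HTTPError as err:
    # A server following the ticket's suggestion would answer 404.
    print(err.code)
```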
Attachments (0)
Change History (11)
comment:1 by , 19 years ago
Resolution: | → worksforme |
---|---|
Status: | new → closed |
0.9 has the attribute rel set to "nofollow" for the links pointing to missing wiki pages. This solves the problem described above.
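Roughly, that mitigation amounts to marking missing-page links so that well-behaved crawlers neither follow nor index them. A sketch with a hypothetical helper, not Trac 0.9's actual formatter:

```python
# Sketch of the rel="nofollow" mitigation. wiki_link() is a
# hypothetical helper, not Trac 0.9's real wiki formatter.
from html import escape

def wiki_link(name, exists):
    href = "/wiki/" + escape(name, quote=True)
    if exists:
        return f'<a href="{href}">{escape(name)}</a>'
    # Missing page: mark the link so robots skip it.
    return f'<a href="{href}" rel="nofollow">{escape(name)}?</a>'

print(wiki_link("YearDay", exists=False))
# <a href="/wiki/YearDay" rel="nofollow">YearDay?</a>
```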
comment:2 by , 19 years ago
Component: | wiki → general |
---|---|
Milestone: | → 1.0 |
Resolution: | worksforme |
Status: | closed → reopened |
Summary: | Please return a 404 - Not Found for missing wiki page ! → Proper use of HTTP status codes |
That's true, but returning a proper status code would still be the Right Thing. I don't think there's a ticket about this, so I'll reopen this one (there are a ton of FIXME comments in the code related to HTTP status codes, though).
This includes:
- Returning 404 Not Found when objects (wiki pages, tickets, changesets, repository dirs/files, etc.) cannot be found
- Returning 403 Forbidden when the user lacks privileges to view something
- Probably a couple more that I don't remember right now
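A rough sketch of the policy described in this list, as a self-contained WSGI application; PAGES and CAN_VIEW are hypothetical stand-ins for Trac's model and permission system, not Trac's actual code:

```python
# Sketch of the requested status-code policy: 404 for missing objects,
# 403 when the viewer lacks privileges, 200 otherwise.
from wsgiref.simple_server import make_server

PAGES = {"/wiki/WikiStart": b"Welcome to the wiki!"}   # hypothetical model
CAN_VIEW = {"/wiki/WikiStart": True}                   # hypothetical perms

def application(environ, start_response):
    path = environ.get("PATH_INFO", "/")
    if path not in PAGES:
        start_response("404 Not Found", [("Content-Type", "text/plain")])
        return [b"No such page."]
    if not CAN_VIEW.get(path):
        start_response("403 Forbidden", [("Content-Type", "text/plain")])
        return [b"You are not allowed to view this page."]
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [PAGES[path]]

# make_server("localhost", 8000, application).serve_forever()
```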
comment:3 by , 19 years ago
But following a missing wiki link leads to the wiki page creation. Would it be correct to provide, at the same time, that valid page and a 404 for it? Or am I missing something?
comment:4 by , 19 years ago
I really hope that you missed something. I cannot imagine that a robot crawling a site could at the same time create many new empty wiki pages! I definitely agree that a correct usage of the status code is the way to go, and following a link to a non-existing wiki page should not create it and should definitely return a 404 error.
comment:5 by , 19 years ago
Well, following the missing link doesn't directly create the page. There is a button on the page that says "Create this Page", which will take you to the editing form. I think it's still proper to use a 404 page, since the page isn't found. Just because it's a 404 response doesn't mean there can't be more on the page.
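The point that a 404 status and a useful page body are not mutually exclusive can be sketched like this; the handler is hypothetical, not Trac's rendering code:

```python
# Sketch: serve a 404 status whose body still offers a "Create this
# Page" form, as the comment suggests.
from http.server import BaseHTTPRequestHandler, HTTPServer

class MissingPageHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Hypothetical: every path is treated as a missing wiki page.
        body = (b"<html><body><p>This page does not exist.</p>"
                b"<form action='/edit' method='get'>"
                b"<button>Create this Page</button></form>"
                b"</body></html>")
        self.send_response(404)  # robots see "not found"...
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)   # ...humans still get the create form

# HTTPServer(("localhost", 8000), MissingPageHandler).serve_forever()
```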
follow-up: 8 comment:6 by , 19 years ago
Looking at the current code in wiki/web_ui.py, I see that we now return a 404 only when the user does not have the WIKI_CREATE permission. Is this the final word on this topic?
comment:8 by , 17 years ago
Replying to cboos:
Looking at the current code in wiki/web_ui.py, I see that we now return a 404 only when the user does not have the WIKI_CREATE permission.
Current wiki/web_ui.py (0.11b1+) actually uses a ResourceNotFound for this, which is just a marker for a TracError. The response code is then back to 200.
follow-up: 10 comment:9 by , 17 years ago
A ResourceNotFound exception should translate to a 404 (see source:trunk/trac/web/main.py@6420#L239). Isn't that the case?
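A simplified sketch of the translation being discussed; ResourceNotFound really is a TracError subclass in Trac, but the dispatch() function below is a hypothetical stand-in, not the code in trac/web/main.py:

```python
# Sketch of mapping a marker exception to an HTTP status. The class
# relationship mirrors Trac; the dispatcher is simplified.
class TracError(Exception):
    pass

class ResourceNotFound(TracError):
    """Marker subclass: rendered like TracError, but with status 404."""

def dispatch(handler):
    try:
        return 200, handler()
    except ResourceNotFound as exc:
        return 404, str(exc)   # the special case comment 9 refers to
    except TracError as exc:
        return 200, str(exc)   # without it, comment 8 observed a 200

def missing_page():
    raise ResourceNotFound("Page SomePage not found")

print(dispatch(missing_page))  # -> (404, 'Page SomePage not found')
```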
comment:10 by , 17 years ago
Replying to cboos:
A ResourceNotFound exception should translate to a 404 (see source:trunk/trac/web/main.py@6420#L239). Isn't that the case?
*Cough*. What, me? Did I say that? Did I not notice that I had WIKI_CREATE permission when testing? Is it the final word on this topic after all? Can we perhaps just close the ticket? :-)
comment:11 by , 17 years ago
Milestone: | 1.0 → 0.11 |
---|---|
Resolution: | → fixed |
Status: | reopened → closed |
Yes, I think it's fixed now (with r5554).