#9612 closed enhancement (fixed)
`WikiPage` doesn't need the `ipnr` attribute
Reported by: | | Owned by: | Ryan J Ollos |
---|---|---|---|
Priority: | low | Milestone: | 1.3.1 |
Component: | wiki system | Version: | 0.12-stable |
Severity: | minor | Keywords: | WikiPage model ipnr |
Cc: | Thijs Triemstra | Branch: | |
Release Notes: | Remove
API Changes: | Added
Internal Changes: |
Description
Just saw this while changing th:WatchlistPlugin from its own SQL code to the trac.wiki.model.WikiPage class:
Instances of the WikiPage class do not provide ipnr as an attribute, as they do for all other wiki page values (version, author, etc.). In fact, the _fetch method doesn't read this value from the DB.
If this is not intentional, it should be fixed for consistency. Some plugins would probably like to read this information; e.g. I would have given users the option to show it in their watchlist.
I would be willing to provide a patch.
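For context, here is a minimal sketch of the asymmetry described above, assuming an existing env object (describe_page is a hypothetical helper, not part of any plugin):

from trac.wiki.model import WikiPage

def describe_page(env, name):
    # Load the latest version of the given wiki page.
    page = WikiPage(env, name)
    # version, time, author and comment are exposed as attributes of the
    # model, but there is no corresponding page.ipnr, even though the
    # wiki table stores an ipnr column.
    return {'version': page.version, 'time': page.time,
            'author': page.author, 'comment': page.comment}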
Attachments (1)
Change History (60)
follow-up: 3 comment:1 by , 14 years ago
Well, why do you need the ipnr? I always thought nobody was actually interested in that and thought that we should even remove it in later releases.
comment:2 by , 14 years ago
We removed the display of that information some time ago in 0.11-stable IIRC.
comment:3 by , 14 years ago
Replying to cboos:
Well, why do you need the ipnr? I always thought nobody was actually interested in that and thought that we should even remove it in later releases.
As I said, I'm writing a plugin which shows the wiki data to the user as a table and I just wanted to be as comprehensive as possible. If you think about this that way I will just drop it. It's not an important thing for the plugin.
follow-up: 5 comment:4 by , 14 years ago
Milestone: | → next-major-0.1X |
---|---|
Summary: | `WikiPage` instances lack the `ipnr` attribute → `WikiPage` doesn't need the `ipnr` attribute |
All clear then. Let's remember to remove the ipnr information from the wiki table, at some later point.
comment:5 by , 13 years ago
Cc: | added |
---|
Replying to cboos:
All clear then. Let's remember to remove the ipnr information from the wiki table, at some later point.
What version do you have in mind?
comment:6 by , 13 years ago
0.14 of course. I don't think we should do any more schema changes at this point for 0.13, as we should slowly try to finalize this version.
comment:7 by , 13 years ago
Milestone: | next-major-0.1X → 0.14 |
---|
comment:8 by , 10 years ago
I was thinking we should add some notices to the documentation about the pending removal: log:rjollos.git:t9612-schedule-ipnr-removal.
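Roughly the kind of notice meant here, sketched as a docstring note on the model (illustrative wording and placement only, not the actual changeset):

class WikiPage(object):
    """Represents a wiki page (new or existing).

    Note for plugin authors: the ipnr value stored in the wiki table is
    not exposed on this model, and the column is scheduled for removal
    from the database schema in a future release.
    """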
comment:9 by , 10 years ago
comment:11 by , 10 years ago
Should we also remove ipnr from the Attachment class?
Yeah. We should consistently remove attachment.ipnr if wiki.ipnr will be removed.
comment:12 by , 10 years ago
Milestone: | next-dev-1.1.x → next-major-releases |
---|
Thanks. I'll add notices to the attachment module about the deprecation of ipnr and pending removal, as in comment:8. Changing milestone since work will be done in milestone:1.3.1.
comment:13 by , 10 years ago
Related question: should we also remove the [trac] show_ip_addresses option in 1.3.1?
comment:14 by , 10 years ago
Yeah, it looks like that should be removed. The only uses I see, aside from adding the value to the Chrome data dictionary, are:
- tags/trac-1.0.3/trac/templates/diff_view.html@:65#L65
- tags/trac-1.0.3/trac/templates/history_view.html@:58#L58
Side note: I didn't realize it was possible to link to a specific option and highlight it. I guess that works with any wiki word or phrase though. Very cool!
comment:15 by , 10 years ago
Deprecated the ipnr attribute of the attachment class in [13729:13730]. Deprecated show_ip_addresses in [13731:13732].
comment:16 by , 9 years ago
Milestone: | next-major-releases → 1.3.1 |
---|
comment:17 by , 9 years ago
What about a hacked Trac, where people upload unwanted attachments or alter wiki pages in ways we cannot imagine? Wouldn't it be nice to at least have the IP address from which the alteration or malicious upload came?
comment:18 by , 9 years ago
For a publicly-accessible site, you probably want to be running SpamFilter. See also #12398.
For the specific case in which a site was hacked and you want to know the IP address, there are probably better ways to obtain that information such as logging from the web server.
comment:19 by , 8 years ago
Owner: | set to |
---|---|
Status: | new → assigned |
follow-up: 22 comment:21 by , 8 years ago
LGTM, plus 2 remarks:
- we should also remove show_ip_addresses in chrome.py
- db42.py makes me think we could start to introduce DDL helper functions, like upgrade_table(old_Table, new_Table) and a Table.remove_columns that would create a new Table instance, so that one could write upgrade_table(attachment_table.remove_columns('ipnr'), attachment_table)
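To make the second remark concrete, a rough sketch of what such a helper could look like (illustrative only; remove_columns is not an existing API, and the Table attributes used are assumptions based on trac.db.schema):

from trac.db.schema import Table

def remove_columns(table, *names):
    # Hypothetical helper: build a new Table identical to `table`, minus
    # the named columns, so it can be fed to an upgrade_table()-style
    # helper.
    new_table = Table(table.name, key=table.key)
    new_table.columns = [col for col in table.columns
                         if col.name not in names]
    new_table.indices = list(table.indices)
    return new_table

# e.g. remove_columns(attachment_table, 'ipnr')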
comment:22 by , 8 years ago
Replying to Christian Boos:
- we should also remove show_ip_addresses in chrome.py
show_ip_addresses was removed in r14909. That change should probably have been committed with these changes.
- db42.py makes me think we could start to introduce DDL helper functions, like upgrade_table(old_Table, new_Table) and a Table.remove_columns that would create a new Table instance, so that one could write upgrade_table(attachment_table.remove_columns('ipnr'), attachment_table)
Similar aim in #11872. I'll look into it.
comment:23 by , 8 years ago
Proposed changes in log:rjollos.git:t9612.2. Some of the changes will be committed to 1.2-stable to address #11872.
follow-up: 25 comment:24 by , 8 years ago
I think the upgrade_tables method should call update_sequence if the table has an auto-increment primary key.
follow-up: 26 comment:25 by , 8 years ago
Replying to Jun Omae:
I think the upgrade_tables method should call update_sequence if the table has an auto-increment primary key.
Is this only needed if the column with auto_increment has changed, or has been removed? If the column with auto_increment has not changed, then that column will have the same data in the new table as in the old.
comment:26 by , 8 years ago
Replying to Ryan J Ollos:
Replying to Jun Omae:
I think the upgrade_tables method should call update_sequence if the table has an auto-increment primary key.
Is this only needed if the column with auto_increment has changed, or has been removed? If the column with auto_increment has not changed, then that column will have the same data in the new table as in the old.
The upgrade_tables method re-creates the tables. An auto-increment column uses a sequence object in PostgreSQL, so the sequence object would be re-created and start at 1. Inserting the old records wouldn't advance the sequence.
See also #8575.
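To illustrate the PostgreSQL point, this is roughly the kind of statement the update_sequence step has to issue after the table is re-created (a sketch assuming the default <table>_<column>_seq naming and a non-empty table; reset_sequence is an illustrative name, not Trac's API):

def reset_sequence(cursor, table, column):
    # Point the implicit sequence at the current maximum value so the
    # next auto-generated id does not collide with the rows copied back
    # from the temporary table.
    cursor.execute("SELECT setval('%s_%s_seq', (SELECT MAX(%s) FROM %s))"
                   % (table, column, column, table))

# e.g. reset_sequence(cursor, 'ticket', 'id')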
comment:27 by , 8 years ago
[7ea7e2d5/rjollos.git] calls update_sequence. I'll propose additional changes after #11872 is implemented.
comment:28 by , 8 years ago
API Changes: | modified (diff) |
---|
comment:29 by , 8 years ago
API Changes: | modified (diff) |
---|
comment:30 by , 8 years ago
API Changes: | modified (diff) |
---|---|
Release Notes: | modified (diff) |
Resolution: | → fixed |
Status: | assigned → closed |
comment:31 by , 8 years ago
I was dumb and didn't run tests before committing r15037. To fix the broken build, I guess I need to create a temp directory:
trac/upgrades/tests/db42.py

  # history and logs, available at http://trac.edgewall.org/.

  import copy
+ import tempfile
  import unittest

  from trac.db.api import DatabaseManager
  …
  class UpgradeTestCase(unittest.TestCase):

      def setUp(self):
-         self.env = EnvironmentStub()
+         self.env = EnvironmentStub(path=tempfile.mkdtemp())
          self.dbm = DatabaseManager(self.env)
          with self.env.db_transaction:
              self.dbm.drop_tables(new_schema)
Or instead just call self.env.reset_db() followed by self.env.shutdown()?
follow-up: 39 comment:32 by , 8 years ago
The test failure I introduced in r15037 was fixed in r15038. However, tests are still failing with on-disk SQLite. I suspect the pathway through destroy_db in db42.py has exposed an existing bug.
comment:33 by , 8 years ago
I think we should close connections before calling destroy_db
trac/test.py

diff --git a/trac/test.py b/trac/test.py
index ba8c7bb..93e809c 100755
class EnvironmentStub(Environment):
              tables = dbm.reset_tables()
          else:
              # different version or version unknown, drop the tables
+             if self.dburi.startswith('sqlite:') and \
+                     self.dburi != 'sqlite::memory:':
+                 # closes connections to avoid self.destroy_db() raises
+                 # "WindowsError: [Error 32]" when deleting on-disk SQLite
+                 dbm.shutdown()
              self.destroy_db()

          if not tables:
comment:34 by , 8 years ago
Yeah, that looks good, I was working on a similar patch. I was hoping to keep the db-specific information out of EnvironmentStub, but I don't see a way.
comment:35 by , 8 years ago
It might be worth considering putting the logic in DatabaseManager.destroy_db(). I suppose a test would be useful to prove the case, but from looking at the code, wouldn't the logic be needed if there was a reason to destroy an on-disk SQLite database outside of EnvironmentStub?
follow-up: 41 comment:38 by , 8 years ago
I added db_exists to DatabaseManager and made the changes to destroy_db: log:rjollos.git:t9612_destroydb. I haven't tested with PostgreSQL 8.x. Tests are passing on Travis CI and AppVeyor.
comment:39 by , 8 years ago
Replying to Ryan J Ollos:
The test failure I introduced in r15037 was fixed in r15038.
Running unit tests with r15037 while investigating the failure, EnvironmentStub.reset_db_and_disk() accidentally removed trac in my source directory. EnvironmentStub.path is set to os.path.dirname(trac.__file__) if the path argument is empty at tags/trac-1.0.12/trac/test.py@:360-365#L332.
>>> env = EnvironmentStub()
>>> env.path
'/src/tracdev/git/trac'
I think we should check whether self.path is inside the current directory before removing the self.path directory, to avoid this accident.
trac/test.py

diff --git a/trac/test.py b/trac/test.py
index 9a7a30e7e..9ee756cc5 100755
class EnvironmentStub(Environment):
          """
          self.reset_db()
          self.shutdown()  # really closes the db connections
-         shutil.rmtree(self.path)
+         cwd = os.getcwd()
+         # avoid removing source directory
+         if os.path.commonprefix([cwd, self.path]) != cwd:
+             shutil.rmtree(self.path)
          if self._old_registry is not None:
              self.restore_component_registry()
follow-ups: 43 49 comment:40 by , 8 years ago
I've encountered that problem as well. It might not be an issue, but an environment created in the root of the source directory would not be removed.
The reason I started thinking about that was remembering that the functional tests, by default, create the environment in <path to source>/testenv/trac: trunk/trac/tests/functional/__init__.py@14794:133-138#L122. The functional tests don't call reset_db_and_disk, so your patch doesn't create a problem in this case.
It's probably bad practice to create an environment directory in the source tree, so I think the patch is okay. I have two thoughts on the functional test cases though:
- Should we destroy the environment at the end of the test run?
- Should we put the environment in a temp directory? If putting in a temp directory, we might want to randomize the directory name.
Also, by adding a mkdir function in trac.test, we could ensure use of the same prefix for all temporary directories.
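For illustration, such a helper might be little more than a thin wrapper around tempfile.mkdtemp (a sketch; the prefix string and the idea of placing it in trac.test are assumptions):

import os
import tempfile

def mkdtemp():
    # Create a temporary directory under the system temp dir, using a
    # common prefix so leftover test directories are easy to recognize
    # and clean up.
    return os.path.realpath(tempfile.mkdtemp(prefix='trac-testdir-'))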
follow-up: 42 comment:41 by , 8 years ago
Replying to Ryan J Ollos:
I added db_exists to DatabaseManager and made the changes to destroy_db: log:rjollos.git:t9612_destroydb. I haven't tested with PostgreSQL 8.x. Tests are passing on Travis CI and AppVeyor.
Reconsidering, we could simply call self.shutdown() before connector.destroy_db(), because destroy_db connects to the database directly, without the ConnectionPool: [bbe878e89/jomae.git]
follow-up: 48 comment:42 by , 8 years ago
Replying to Jun Omae:
Reconsidering, we could simply call self.shutdown() before connector.destroy_db(), because destroy_db connects to the database directly, without the ConnectionPool: [bbe878e89/jomae.git]
That is much cleaner. I didn't even consider that it might work, but it's definitely the better solution.
comment:43 by , 8 years ago
Replying to Ryan J Ollos:
I have two thoughts on the functional test cases though:
- Should we destroy the environment at the end of the test run?
- Should we put the environment in a temp directory? If putting in a temp directory, we might want to randomize the directory name.
When an exception is raised from twill, the html is saved as <test-case-name>.html in the log directory of the test environment. I think it would be better to keep the environment if tests are failing, at least.
comment:44 by , 8 years ago
Good point. I added additional changes to consider:
- Create temporary dir for functional test run in system temp directory, but don't delete it at the end of a test run: [8788908e/rjollos.git].
- Use the system temp directory as the default for EnvironmentStub.path, which would make the comment:39 change unnecessary: [808e806f/rjollos.git].
comment:45 by , 8 years ago
I think the following code in [808e806f/rjollos.git] would remove all owned files in the temporary directory if the path argument of EnvironmentStub.__init__ is None and reset_db_and_disk() is wrongly called in tearDown().
self.path = tempfile.gettempdir()
$ mkdir /dev/shm/t9612
$ touch /dev/shm/t9612/test.txt
$ ls -al /dev/shm/t9612/
total 0
drwxr-xr-x  2 jun66j5 jun66j5  60 Aug  5 22:22 .
drwxrwxrwt 20 root    root    440 Aug  5 22:20 ..
-rw-r--r--  1 jun66j5 jun66j5   0 Aug  5 22:22 test.txt
$ TMP=/dev/shm/t9612 python
Python 2.7.6 (default, Jun 22 2015, 17:58:13)
[GCC 4.8.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> import tempfile
>>> from trac.test import EnvironmentStub
>>> tempfile.gettempdir()
'/dev/shm/t9612'
>>> env = EnvironmentStub()
>>> env.path
'/dev/shm/t9612'
>>> env.reset_db_and_disk()
>>> os.listdir(env.path)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
OSError: [Errno 2] No such file or directory: '/dev/shm/t9612'
comment:47 by , 8 years ago
If unit tests are run with an on-disk SQLite file, we generally use TRAC_TEST_DB_URI=sqlite:test.db. With the branch, /tmp/test.db would be created (currently, trac/test.db is created in the source tree). Then only one unit test run could execute at a time on the system.
comment:48 by , 8 years ago
Replying to Ryan J Ollos:
Replying to Jun Omae:
Reconsidering, we could simply call self.shutdown() before connector.destroy_db(), because destroy_db connects to the database directly, without the ConnectionPool: [bbe878e89/jomae.git]
That is much cleaner. I didn't even consider that it might work, but it's definitely the better solution.
Committed to 1.2-stable in r15042, merged to trunk in r15043. I tested with PostgreSQL 8.4.20 on CentOS 6.8.
comment:49 by , 8 years ago
Replying to Ryan J Ollos:
Also, by adding a mkdir function in trac.test, we could ensure use of the same prefix for all temporary directories.
Committed to trunk in r15050 (the commit message should say mkdtemp rather than mkdir).
comment:50 by , 8 years ago
I think DatabaseManager.upgrade_tables should quote the table name and column names using db.quote().
trac/db/api.py

diff --git a/trac/db/api.py b/trac/db/api.py
index e08b912bc..6c6e98ac0 100644
class DatabaseManager(Component):
          """
          for new_table in new_schema:
              temp_table_name = new_table.name + '_old'
-             old_column_names = set(self.get_column_names(new_table))
-             new_column_names = set(col.name for col in new_table.columns)
-             cols_to_copy = ','.join(old_column_names & new_column_names)
+             column_names = (set(self.get_column_names(new_table)) &
+                             set(col.name for col in new_table.columns))
              with self.env.db_transaction as db:
+                 cols_to_copy = ','.join(db.quote(name)
+                                         for name in column_names)
                  cursor = db.cursor()
                  cursor.execute("""
                      CREATE TEMPORARY TABLE %s AS SELECT * FROM %s
-                     """ % (temp_table_name, new_table.name))
+                     """ % (db.quote(temp_table_name),
+                            db.quote(new_table.name)))
                  self.drop_tables((new_table,))
                  self.create_tables((new_table,))
                  cursor.execute("""
                      INSERT INTO %s (%s) SELECT %s FROM %s
-                     """ % (new_table.name, cols_to_copy,
-                            cols_to_copy, temp_table_name))
+                     """ % (db.quote(new_table.name), cols_to_copy,
+                            cols_to_copy, db.quote(temp_table_name)))
                  for col in new_table.columns:
                      if col.auto_increment:
                          db.update_sequence(cursor, new_table.name, col.name)
follow-up: 55 comment:51 by , 8 years ago
Minor issue: if there are no common columns between the old and new tables, INSERT INTO %s (%s) SELECT %s FROM %s would fail to execute.
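For illustration only (not the attached fix), a tiny standalone reproduction of that failure mode using an in-memory sqlite3 database and made-up table names:

import sqlite3

# With no common columns, cols_to_copy is empty and the generated
# statement degenerates to "INSERT INTO new_table () SELECT  FROM
# old_table", which is a syntax error.
cnx = sqlite3.connect(':memory:')
cnx.execute("CREATE TABLE new_table (a TEXT)")
cnx.execute("CREATE TABLE old_table (b TEXT)")
cols_to_copy = ','.join([])
try:
    cnx.execute("INSERT INTO new_table (%s) SELECT %s FROM old_table"
                % (cols_to_copy, cols_to_copy))
except sqlite3.OperationalError as e:
    print("fails as expected: %s" % e)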
comment:52 by , 8 years ago
API Changes: | modified (diff) |
---|
comment:54 by , 8 years ago
Replying to Ryan J Ollos:
Proposed changes look good to me.
Thanks for reviewing. Committed in [15317] and merged in [15318].
by , 8 years ago
Attachment: | t9612-no-common-columns.diff added |
---|
comment:55 by , 8 years ago
Replying to Jun Omae:
Minor issue: if there are no common columns between the old and new tables, INSERT INTO %s (%s) SELECT %s FROM %s would fail to execute.
Proposed changes in t9612-no-common-columns.diff.
comment:56 by , 8 years ago
I noticed that upgrade_tables creates a transaction per table. Would it be better to create one transaction per call of upgrade_tables?
trac/db/api.py

          :since: version 1.2
          """
-         for new_table in new_schema:
-             temp_table_name = new_table.name + '_old'
-             old_column_names = set(self.get_column_names(new_table))
-             new_column_names = set(col.name for col in new_table.columns)
-             column_names = old_column_names & new_column_names
-             with self.env.db_transaction as db:
+         with self.env.db_transaction as db:
+             cursor = db.cursor()
+             for new_table in new_schema:
+                 temp_table_name = new_table.name + '_old'
+                 old_column_names = set(self.get_column_names(new_table))
+                 new_column_names = set(col.name for col in new_table.columns)
+                 column_names = old_column_names & new_column_names
                  cols_to_copy = ','.join(db.quote(name)
                                          for name in column_names)
-                 cursor = db.cursor()
                  cursor.execute("""
                      CREATE TEMPORARY TABLE %s AS SELECT * FROM %s
                      """ % (db.quote(temp_table_name),