#5241 closed defect (wontfix)
Conversion of MediaWiki database to Trac Wiki database.
Reported by: | Owned by: Jonas Borgström
Priority: normal | Milestone:
Component: general | Version:
Severity: normal | Keywords: trac faq wiki mediawiki
Cc: rganz@…, dclark@… | Branch:
Release Notes:
API Changes:
Internal Changes:
Description
The script provided on the wiki:TracFaq page doesn't work for MediaWiki 1.5 since the database structure has been changed.
Attached is the script I used to export the pages from Mediawiki and then import them into Trac.
Maybe someone can put this on the FAQ for me, since I don't know how to do that myself?
Thanks
Attachments (5)
Change History (18)
by , 18 years ago
Attachment: mediawiki2trac.ph added
comment:2 by , 17 years ago
Thanks for this script. This is actually useful to me as I have an existing Mediawiki installation that folks want as part of Trac now.
Question: This code doesn't appear to address attachments. Is this a known issue or does trac-admin import mystically take care of this somehow?
comment:3 by , 17 years ago
Cc: added
comment:4 by , 17 years ago
Owner: changed
Status: new → assigned
OK. Well I've done some research about attachments.
MediaWiki handles attached files in a completely different way than Trac does. In Trac, attached files are associated with a given page, whereas in MediaWiki files are uploaded globally and it's up to the wiki editors to create links to the uploaded documents.
So, the process for dealing with "attachments" depends on a couple of factors.
If you want your downloaded documents to stay unique in Trac the way they are in MediaWiki, that's a problem. AFAIK, uniqueness in MediaWiki is based on the filename alone, whereas in Trac it's the combination of the filename and the wiki page the attachment is associated with.
So if you have a file that is linked from multiple places in your MediaWiki pages, then to get the same effect you'd either have to set up an independent web location for these attachments (so everything links to the same file), or give up on the notion that the file is unique and accept that you have attachments that are copies of the file in question.
Another interesting bit is the hash-based directory layout MediaWiki can use when it stores the actual files.
I haven't looked into this supremely closely, but it appears that storing the files can be toggled between two methods. The method that is enabled on our MediaWiki instance is based on the following:
Attached files are stored in an images/ folder (and possibly a media/ folder; we don't have one, but our version is 1.6 and only the images/ folder is used for uploads).
The subfolder an attachment actually lands in is derived from the first one and first two characters of the MD5 hash of the filename. You can derive the file path with the following SQL query:
select img_name, concat(left(md5(img_name),1), '/', left(md5(img_name),2), '/', img_name) as path from image;
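For scripting the file copies outside the database, the same derivation is easy to reproduce; here is a minimal Python sketch, assuming the name is passed exactly as it appears in `image.img_name` (underscores rather than spaces):

```python
import hashlib

def mediawiki_image_path(img_name):
    """Return the hashed subpath under images/ where MediaWiki stores an upload."""
    digest = hashlib.md5(img_name.encode('utf-8')).hexdigest()
    return '%s/%s/%s' % (digest[:1], digest[:2], img_name)

# e.g. images/<x>/<xy>/Example_diagram.png, where <x> and <xy> come from the hash
print(mediawiki_image_path('Example_diagram.png'))
```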
So at this point, I'm going to modify the script you have here to include the possibility of bringing attachments along, moving the downloaded files to the appropriate trac directory and updating the trac database accordingly.
See you back here in a few minutes.
comment:5 by , 17 years ago
Owner: changed
Status: assigned → new
comment:6 by , 17 years ago
Cc: added
by , 16 years ago
Attachment: mediawiki2trac.py added
This script handles revisions, `User:` pages and `Talk:` pages. It exports to SQL.
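As a rough illustration of what "exports to SQL" means here, the generated statements target Trac's `wiki` table; the sketch below assumes the classic column set (name, version, time, author, ipnr, text, comment, readonly) and is not necessarily the attached script's exact output:

```python
def wiki_insert(name, version, timestamp, author, text, comment=''):
    """Build one INSERT for a single revision of a single wiki page."""
    def q(value):
        # naive single-quote escaping, for illustration only
        return "'" + str(value).replace("'", "''") + "'"
    return ("INSERT INTO wiki (name, version, time, author, ipnr, text, comment, readonly) "
            "VALUES (%s, %d, %d, %s, '127.0.0.1', %s, %s, 0);"
            % (q(name), version, timestamp, q(author), q(text), q(comment)))

print(wiki_insert('SandBox', 1, 1199145600, 'importer', 'Imported from MediaWiki'))
```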
by , 16 years ago
Attachment: mediawiki2trac.3.py added
This one handles links a little more nicely.
comment:7 by , 16 years ago
Summary: Modified mediawiki 2 trac script → Image grabbing `sh` code.
I found out how to get the images and put them in an "Image page" that is kind of like the MediaWiki image page. I'll post the Python for generating the database part of that in a minute. In the meantime, here is the `sh` I used to gather and organize the images (the `sed` expression turns each found path into a `mkdir`-and-`cp` command that copies each upload into an `Image/<name>/<name>` layout):
    :; find <your mediawiki installation>/images -type f > image-like-files
    :; egrep -v 'archive|README' image-like-files > f
    :; cat f | sed -r "s|^.+/([^/]+)$|mkdir 'Image/\1' \&\& cp '&' 'Image/\1/\1'|" | sh
by , 16 years ago
Grabs image data and puts it into the Trac tables. Links are more nicely formatted.
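For context, "puts it into the Trac tables" boils down to one row per image in Trac's attachment table. A minimal sketch, assuming a SQLite-backed Trac with the classic `attachment` columns (type, id, filename, size, time, description, author, ipnr) and the `Image/<name>` page convention from the shell snippet in comment:7:

```python
import sqlite3
import time

def register_image(trac_db_path, img_name, img_size, author='mediawiki-import'):
    """Record one imported file as an attachment of its Image/<name> page."""
    con = sqlite3.connect(trac_db_path)
    with con:  # commits on success, rolls back on error
        con.execute(
            "INSERT INTO attachment "
            "(type, id, filename, size, time, description, author, ipnr) "
            "VALUES ('wiki', ?, ?, ?, ?, ?, ?, '127.0.0.1')",
            ('Image/' + img_name, img_name, img_size, int(time.time()),
             'imported from MediaWiki', author))
    con.close()
```

The file itself still has to land in the Trac environment's attachments directory for the matching page, which is what the `mkdir`/`cp` commands above prepare.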
comment:8 by , 16 years ago
Cc: removed
comment:9 by , 16 years ago
Summary: Image grabbing `sh` code. → Conversion of MediaWiki database to Trac Wiki database.
Still not really done, alas. The data store in MediaWiki is designed to be used, not transformed. What I've worked out so far:
- Recovery of `User:`, `Talk:` and `Image:` pages.
- Recovery of image metadata.
- Recovery of revision history.
- Reformatting of links, so they are both functional and attractive (see the sketch at the end of this comment).
What I have not done:
- The `User_talk:` pages are omitted.
- Any HTML code in the MediaWiki documents is simply ignored.
- Page moves and archived images are ignored.
- Anonymous users, with just an IP address, are ignored.
If you ever find yourself afflicted with this task, you have my blessing.
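For anyone skimming the thread, the link reformatting mentioned above boils down to a regex substitution. A stripped-down illustration follows; the pattern and the page-name normalization here are simplified assumptions, not the attached script's exact behaviour:

```python
import re

# [[Target]] or [[Target|label]]
LINK = re.compile(r'\[\[([^|\]]+)(\|([^\]]+))?\]\]')

def rewrite(match):
    target, label = match.group(1), match.group(3)
    if target.startswith('Image:'):
        name = target[len('Image:'):]
        return '[[Image(wiki:Image/%s:%s)]]' % (name, name)
    # MediaWiki capitalizes the first letter and displays underscores as spaces
    page = (target[0].upper() + target[1:]).replace('_', ' ')
    return '[wiki:"%s" %s]' % (page, label or target)

print(LINK.sub(rewrite, 'See [[main_page|the main page]] and [[Image:Logo.png]].'))
# -> See [wiki:"Main page" the main page] and [[Image(wiki:Image/Logo.png:Logo.png)]].
```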
comment:10 by , 16 years ago
I was getting:
    Traceback (most recent call last):
      File "./mw2tw.py", line 256, in <module>
        db.query(query)
    _mysql_exceptions.OperationalError: (1054, "Unknown column 'cs_page.page_title' in 'on clause'")
with mysqld 5.0.51a. Putting the comma-joined tables in the FROM clause inside parentheses seems to have fixed the problem (MySQL 5.0.12 and later give JOIN higher precedence than the comma operator, so without the parentheses the LEFT JOIN's ON clause can no longer see columns from the comma-joined tables):
- mw2tw.py

@@ -166,11 +166,11 @@
     ${p}image.img_size,
     ${p}page.page_namespace,
     ${p}revision.rev_page
-FROM
+FROM (
     ${p}page,
     ${p}revision,
     ${p}user,
-    ${p}text
+    ${p}text )
 LEFT JOIN ${p}image ON
     ${p}page.page_title = ${p}image.img_name
 WHERE
comment:11 by , 16 years ago
Cc: added
Some more patches (includes the above patch). They translate some more syntax; make MediaWiki headings like `=heading=` (no spaces) work; handle the MediaWiki convention that the first letter of a page name is always capitalized; and use spaces instead of underscores in wiki links.
- mw2tw.py (Trac noted "this hunk was shorter than expected" when rendering the patch)

@@ -33,9 +34,23 @@
 pairs = [
     ("\n***","\n *"),
     ("\n**", "\n *"),
     ("\n*", "\n *"),
+    ("\n#", "\n 1."),
     ("<br>","[[BR]]"),
     ("\n:","\n "),
+    ("<pre>","{{{"),
+    ("</pre>","}}}"),
+    ("<code>","{{{"),
+    ("</code>","}}}"),
+]
+
+repairs = [
+    (r"(\=)([^\=]+)\=(\n)",r"\1 \2 \1\3"),
+    (r"(\=\=)([^\=]+)\=\=(\n)",r"\1 \2 \1\3"),
+    (r"(\=\=\=)([^\=]+)\=\=\=(\n)",r"\1 \2 \1\3"),
+    (r"(\=\=\=\=)([^\=]+)\=\=\=\=(\n)",r"\1 \2 \1\3"),
+    (r"(\=\=\=\=\=)([^\=]+)\=\=\=\=\=(\n)",r"\1 \2 \1\3"),
+    (r"(\=\=\=\=\=\=)([^\=]+)\=\=\=\=\=\=(\n)",r"\1 \2 \1\3"),
 ]

 wiki_link_catcher = re.compile(r"""
@@ -62,7 +81,7 @@
 def link_rewriter(match):
     (link, label) = match.group(1, 3)
     def wrap(a, b=()):
-        return '[wiki: ' + a.replace(' ', '_') + ' ' + (b or a) + ']'
+        return '[wiki:"' + (a[0].upper() + a[1:]).replace('_', ' ') + '" ' + (b or a) + ']'
     if link.startswith("Image:"):
         return '[[Image(wiki:Image/' + link[6:] + ':' + link[6:] + ')]]'
     return wrap(link, label)
@@ -71,9 +90,14 @@
     """ convert from mediawiki text to trac text """
     for (mw, tw) in pairs:
         mw_text = mw_text.replace(mw, tw)
+    for (mw, tw) in repairs:
+        #print >> sys.stderr, mw_text
+        mw_text = re.sub(mw, tw, mw_text)
     return q(wiki_link_catcher.sub(link_rewriter, mw_text))

 def title_fixer(namespace, title):
+    title = (title[0].upper() + title[1:]).replace('_', ' ')
+    #print title
     if namespace is 0:
         return q(title)
     if namespace is 1:
@@ -165,11 +189,11 @@
     ${p}image.img_size,
     ${p}page.page_namespace,
     ${p}revision.rev_page
-FROM
+FROM (
     ${p}page,
     ${p}revision,
     ${p}user,
-    ${p}text
+    ${p}text )
 LEFT JOIN ${p}image ON
     ${p}page.page_title = ${p}image.img_name
 WHERE
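As a quick sanity check on the heading `repairs`, the six patterns can be generated in a loop (equivalent to the escaped versions in the patch) and applied to a made-up sample; the sample text is illustrative, not taken from the ticket:

```python
import re

# One pattern per heading level: turn MediaWiki's "=heading=" into "= heading ="
repairs = [(r"(%s)([^=]+)%s(\n)" % ("=" * n, "=" * n), r"\1 \2 \1\3")
           for n in range(1, 7)]

text = "==Installation==\nSome text.\n===Step one===\n"
for pattern, repl in repairs:
    text = re.sub(pattern, repl, text)
print(text)
# == Installation ==
# Some text.
# === Step one ===
```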
comment:12 by , 15 years ago
Milestone: not applicable
Resolution: → wontfix
Status: new → closed
Maybe time to create a NewHack from this script.
comment:13 by , 15 years ago
In case someone has any use for this: I refactored the above into PHP, adding the importing features I missed from the Python version. See the features and install/usage guide at:
Modified MediaWiki 1.5 to Trac script (original from wiki:TracFaq)