Note: Most of the actions on this page require administrative privileges. They won't work unless you have set an admin username and password in the PhpWiki config file.
The SandBox is very easy to clean. Here you can restore it to pristine condition by loading the default from pgsrc; a sketch of the underlying request follows below.
The /Remove subpage lets you remove multiple pages at once.
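As a rough illustration (the query parameters here follow the classic PhpWiki action URLs and may vary between versions), restoring the SandBox from pgsrc boils down to a request like:

    /index.php?action=loadfile&overwrite=1&source=pgsrc/SandBox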
These links lead to zip files, generated on the fly, which contain all the pages in your Wiki. The zip file will be downloaded to your local computer.
This ZIP Snapshot contains only the latest version of each page, while this ZIP Dump contains all archived versions.
If the PhpWiki is configured to allow it, anyone can download a zip file.
If your PHP has zlib support, the files in the archive will be compressed; otherwise they will just be stored.
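PhpWiki builds its zips with its own zip-writing library, but the store-versus-deflate distinction can be sketched with PHP's stock ZipArchive class (requires the zip extension; everything below is illustrative, not PhpWiki's actual code):

    <?php
    // Illustrative sketch only: deflate entries when zlib is present,
    // otherwise just store the raw bytes.
    $zip = new ZipArchive();
    $zip->open('wikidump.zip', ZipArchive::CREATE | ZipArchive::OVERWRITE);
    foreach (glob('pages/*') as $file) {   // 'pages/' is a hypothetical dump dir
        $entry = basename($file);
        $zip->addFile($file, $entry);
        if (!extension_loaded('zlib')) {
            // Without zlib there is nothing to deflate with: store uncompressed.
            $zip->setCompressionName($entry, ZipArchive::CM_STORE);
        }
    }
    $zip->close();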
Here you can dump pages of your Wiki into a directory of your choice.
The most recent version of each page will be written out to the directory, one page per file. Your server must have write permission on that directory!
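The effect of a directory dump can be sketched like this (the $pages array and target path are made up; PhpWiki's real code also encodes special characters in page names, for which rawurlencode stands in here):

    <?php
    // Minimal sketch of a directory dump: latest version of each page,
    // one page per file.
    $dir = '/tmp/wikidump';
    if (!is_dir($dir) && !mkdir($dir, 0755, true)) {
        die("cannot create $dir\n");
    }
    if (!is_writable($dir)) {
        die("the server has no write permission on $dir\n");
    }
    $pages = ['HomePage' => "Welcome!\n", 'SandBox/Scratch' => "Scribble.\n"];
    foreach ($pages as $name => $text) {
        // rawurlencode keeps subpage slashes out of the file system.
        file_put_contents($dir . '/' . rawurlencode($name), $text);
    }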
If you have dumped a set of pages from PhpWiki, you can reload them here. Note that pages in your database will be overwritten: if you dumped your HomePage and then load it from this form, it will overwrite the one currently in your database. If you want to be selective, just delete the pages you don't want to load from the directory (or zip file).
Here you can upload ZIP archives, or individual files from your (client) machine.
Here you can load ZIP archives, individual files or entire directories. The file or directory must be local to the http server. You can also use this form to load from an http: or ftp: URL.
Currently the pages are stored, one per file, as MIME (RFC:2045) e-mail (RFC:822) messages. The content-type application/x-phpwiki is used, and page meta-data is encoded in the content-type parameters. (If the file contains several versions of a page, it will have type multipart/mixed, and contain several sub-parts, each with type application/x-phpwiki.) The message body contains the page text.
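For instance, a single-version dump file might look roughly like this (the header and parameter names are illustrative, reconstructed from the description above rather than copied from a real dump):

    Date: Tue, 01 Jan 2002 00:00:00 +0000
    Mime-Version: 1.0 (Produced by PhpWiki)
    Content-Type: application/x-phpwiki;
      pagename=HomePage;
      flags="";
      author=127.0.0.1;
      version=3;
      lastmodified=1009843200
    Content-Transfer-Encoding: binary

    The wiki-text of the page goes here.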
Serialized Files
The dump-to-directory command used to write the pages out as PHP serialize()d strings. For humans, this made the files very hard to read and nearly impossible to edit.
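To see why, compare a page record with its serialize()d form (a made-up record, shown only to illustrate the notation):

    <?php
    // A toy page record and what serialize() turns it into.
    $page = ['pagename' => 'HomePage', 'version' => 3, 'content' => 'Welcome!'];
    echo serialize($page);
    // a:3:{s:8:"pagename";s:8:"HomePage";s:7:"version";i:3;s:7:"content";s:8:"Welcome!";}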
Plain Files
Before that, the page text was just dumped to a file, which meant that all page meta-data was lost. Note that when loading plain files, the page name is deduced from the file name.
The upload and load functions will automatically recognize each of these three file types and handle them accordingly.
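A minimal sketch of how such detection could work (this is a guess at the logic, not PhpWiki's actual loader, and the function name is invented):

    <?php
    // Hypothetical format sniffer for the three dump flavors described above.
    function guess_dump_format(string $raw): string
    {
        // MIME dumps carry RFC 822-style headers with an x-phpwiki content-type.
        if (preg_match('/^Content-Type:\s*(application\/x-phpwiki|multipart\/mixed)/mi', $raw)) {
            return 'mime';
        }
        // Serialized dumps are serialize() output, e.g. "a:3:{...".
        if (preg_match('/^[aOsid]:\d+[:;{]/', $raw)) {
            return 'serialized';
        }
        // Everything else is treated as plain page text.
        return 'plain';
    }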
This will generate a directory of static pages suitable for distribution on disk where no web server is available. The various links for page editing functions and navigation are removed from the pages.
The XHTML file collection can also be downloaded as an XHTML ZIP Snapshot.
These are here mostly for debugging purposes (at least, that's the hope). In normal use you shouldn't need them; then again, they shouldn't really do any harm.
Purge Markup Cache

If your wiki is so configured, the transformed (almost-HTML) content of the most recent version of each page is cached. This speeds up page rendering, since parsing of the wiki-text takes a fair amount of juice. Hitting this button will delete all cached transformed content. (Each page's content will be transformed and re-cached the next time someone views it.)

<?plugin WikiAdminUtils action=purge-cache label="Purge Cache" ?>

Clean WikiDB of Illegal Filenames

Page names beginning with the subpage separator (usually a slash, /) are not allowed. Sometimes, though, an errant plugin or the like might create one. This button will delete any pages with illegal page names.

<?plugin WikiAdminUtils action=purge-bad-pagenames label="Exorcise WikiDB" ?>
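For the curious, the check behind that button amounts to something like this (a sketch assuming SUBPAGE_SEPARATOR is the configured separator; the function name is invented):

    <?php
    // Illustrative check for illegal page names: names that begin with the
    // subpage separator. SUBPAGE_SEPARATOR mirrors PhpWiki's config setting.
    if (!defined('SUBPAGE_SEPARATOR')) {
        define('SUBPAGE_SEPARATOR', '/');
    }
    function has_illegal_pagename(string $name): bool
    {
        return $name === '' || strpos($name, SUBPAGE_SEPARATOR) === 0;
    }
    var_dump(has_illegal_pagename('/Orphan'));   // bool(true)  -- would be purged
    var_dump(has_illegal_pagename('HomePage'));  // bool(false) -- kept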