MySQL: see GoogleSoc2006.
Flexible authentication
The ability to use an external authentication system (Kerberos, PAM, etc.) and/or user directory (SQL, LDAP, etc.) is vital for integrating MoinMoin into larger web sites. See PhpWiki for an example of this ability.
Some of these methods can already be used by plugging modules into your web server (like mod_auth_kerb, mod_ldap, etc.); see their documentation. In that case MoinMoin just gets the username via well-defined CGI variables (typically REMOTE_USER).
This currently works using HTTP authentication. JeorgWendland offers a patch to get Kerberos Negotiate authentication working: http://moinmoin.wikiwikiweb.de/MoinMoinPatch#head-bad1e4e1fd1b4d52a4051784d6d7abb8cb2d581b The trouble is that the username obtained from CGI variables is usually not a valid Moin UserName (Kerberos principals contain @), so some sophisticated mangling may be necessary. Jeorg's patch is nice, but I think stackable user information backends will eventually be required: http://moinmoin.wikiwikiweb.de/RefactoringProposals#head-a75a386c181cfe74cfab8fac4d29fda4f1ed765f -- 207.6.195.230 2005-11-01 18:51:31
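For illustration, here is a minimal sketch of the kind of mangling meant above, assuming the web server exposes the authenticated principal via the standard REMOTE_USER CGI variable; the helper name is made up, not part of MoinMoin:
{{{
import os
import re

def moin_username_from_principal(principal):
    """Turn a Kerberos principal like 'john.doe@EXAMPLE.COM' into a
    name acceptable as a Moin UserName (no '@', no odd characters)."""
    local_part = principal.split('@', 1)[0]                   # drop the realm
    local_part = re.sub(r'[^a-zA-Z0-9._ -]', '', local_part)  # strip leftovers
    # CamelCase the remaining words so the result looks like a wiki name
    return ''.join(w.capitalize() for w in re.split(r'[._ -]+', local_part) if w)

principal = os.environ.get('REMOTE_USER', '')
if principal:
    username = moin_username_from_principal(principal)  # e.g. 'JohnDoe'
}}}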
- While we are at it, it would be good to get a clean separation of login, account creation, and user preferences. In particular, it would be good to be able to customize the new user creation form so as to collect site-specific information.
Questions by ThomasKalka
- Is there a plan to use an internal object representation for parsed pages (something like XIST) and to get rid of the HTML spread over the source modules?
- You may look at a wiki as a graph. I would like to have different kinds of pointers, representing a directed graph (an ontology). This would allow queries about links and summarized recent changes. For this purpose it would be nice to
- have a way to mark the direction and kind of a link, for example ->Link (page belongs to link), =>Link (page is a "link"); all others are (link belongs to page). See the sketch after this list.
- have some plugin that is called when a page is saved, to export linking information (to an ontology server)
- have some macro to query the ontology server for pages
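For illustration, a minimal sketch of how such link markers could be recognized, assuming the ->/=> syntax proposed above; the kind names and the classify_links helper are made up:
{{{
import re

LINK_RE = re.compile(r'(?P<marker>->|=>)?(?P<target>[A-Z][a-z]+(?:[A-Z][a-z]+)+)')

def classify_links(text):
    """Yield (kind, target) pairs for every CamelCase link in the text."""
    for m in LINK_RE.finditer(text):
        if m.group('marker') == '->':
            kind = 'page-belongs-to-link'
        elif m.group('marker') == '=>':
            kind = 'page-is-a'
        else:
            kind = 'link-belongs-to-page'
        yield kind, m.group('target')

# e.g. list(classify_links('->ParentTopic =>UserEntry SomePage'))
# -> [('page-belongs-to-link', 'ParentTopic'), ('page-is-a', 'UserEntry'),
#     ('link-belongs-to-page', 'SomePage')]
}}}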
- have pages carry structured attribute markup, for example
{{{
:N: Name :A: Adress Adress2 :C: Comment continued free text ---- =>UserEntry
}}}
which could automatically be rendered in table form. The initial set of attribute names could be cloned from UserEntry and updated if UserEntry changes. This would also be a nicer replacement for the ugly table markup: one or more (still somewhat ugly) templates for a table row, with :XXX: variables between the cells, followed by a nicely human-readable representation of the data.
-- ThomasKalka
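For illustration, a minimal sketch of how such attribute markup could be parsed, assuming ':X:' introduces an attribute and '----' separates the data from the template reference; parse_attributes is a made-up helper, not existing MoinMoin code:
{{{
import re

ATTR_RE = re.compile(r':([A-Z]+):')

def parse_attributes(line):
    """Split ':N: Name :A: Adress ... ---- =>UserEntry' into
    ({'N': 'Name', 'A': 'Adress ...'}, 'UserEntry')."""
    data_part, _, tail = line.partition('----')
    template = tail.strip().lstrip('=>').strip() or None
    attrs = {}
    # split() interleaves attribute keys and their values
    pieces = ATTR_RE.split(data_part)
    for key, value in zip(pieces[1::2], pieces[2::2]):
        attrs[key] = value.strip()
    return attrs, template

# Rendering the returned dict as an HTML table row is then straightforward.
}}}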
Making the parser more than RE
Currently WikiMarkup is a regular language: the wiki parser only does string replacement based on regular expressions. This has some disadvantages:
- no markup in headings
- bad and error-prone link syntax (this could partly be fixed with REs, but it gets complicated)
Things would be much simpler if we had some sort of recursion in our parser. (No real recursion is needed; one level would be sufficient for many things because e.g. headings can't be nested.)
One simple but powerful way of implementing this would be to extend the request class with two methods:
- keep_output: everything passed to request.write() is stored in the request object (using a StringIO object)
- get_output: return the buffered output and restore the normal behaviour of the write() method
These methods may be called several times, which leads to an internal stack of StringIO objects. keep_output could take an optional parameter to use in place of the new StringIO object.
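A minimal sketch of these two methods, assuming a request object whose write() normally sends text straight to the client; the class layout is illustrative, not MoinMoin's actual request class:
{{{
from io import StringIO  # in the Python of that era: from StringIO import StringIO

class Request:
    def __init__(self, out):
        self._out = out      # the real output stream
        self._buffers = []   # internal stack of StringIO objects

    def write(self, text):
        if self._buffers:
            self._buffers[-1].write(text)  # currently buffering
        else:
            self._out.write(text)          # normal behaviour

    def keep_output(self, buffer=None):
        """From now on, store everything passed to write()."""
        self._buffers.append(buffer if buffer is not None else StringIO())

    def get_output(self):
        """Return the buffered output and restore write()'s behaviour."""
        return self._buffers.pop().getvalue()
}}}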
If the wiki parser finds a begin-of-heading tag, it can set an internal flag and call request.keep_output(). If an end-of-heading tag is found, it can do
{{{
buff = request.get_output()
request.write(formatter.heading(depth, buff))
}}}
or
{{{
buff = request.get_output()
request.write(buff)
}}}
if no end-of-heading tag was found by the end of the line.
This would also give simple access to the content of a parsed and formatted page, which may be interesting in combination with nonstandard formatters:
- actions doing something with XML
- WASP
- a formatter which generates a parse tree of a page
Topics encountered while implementing FastCGI
While implementing FastCGI (see patch on MoinMoinPatch) I encountered some FastCgiMoinMoinProblems which might be of general interest. -- OliverGraf
Indexing WikiLinks in a database
Some wikis (like SnipSnap) index their links in a database. This makes backlinks more robust and opens up the possibility of powerful neighbourhood-searching capabilities. Using full-text search to find backlinks can be very unreliable.
My biggest gripe is that many macros generate links, and the current backlink-searching method will not find the referring pages whose links were generated. If links were indexed in a database, macros could update the link database every time they are called.
-- GoofRider 2004.03.08
How about adding a "makeLink(title, destURL)" method to the formatter? Then every macro could call this method to add a link to the output, and the formatter could collect all links in an extra file or at the end of the page. -- A. Digulla 2004-06-01 12:58:35
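A minimal sketch of that suggestion, recording every generated link in an SQLite table; the class and table layout are made up, nothing here is existing MoinMoin API:
{{{
import sqlite3

class LinkRecordingFormatter:
    def __init__(self, page_name, db_path='links.db'):
        self.page_name = page_name
        self.db = sqlite3.connect(db_path)
        self.db.execute('CREATE TABLE IF NOT EXISTS links '
                        '(source TEXT, title TEXT, dest TEXT)')

    def makeLink(self, title, dest_url):
        """Record the link, then return the markup for it."""
        self.db.execute('INSERT INTO links VALUES (?, ?, ?)',
                        (self.page_name, title, dest_url))
        self.db.commit()
        return '<a href="%s">%s</a>' % (dest_url, title)

# Backlinks then become a cheap query instead of a full-text search:
#   SELECT source FROM links WHERE dest = ?
}}}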
Optional macro execution before parsing
This would clean up a lot of code (the heading handler) and would make a few macros easier to implement (IncludePage doesn't work completely, CamelCase links don't work).
-- JohanViklund 2005.02.08
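A minimal sketch of what macro execution before parsing could look like, assuming the classic [[MacroName(args)]] syntax; expand_macros and the macros lookup table are made up for illustration:
{{{
import re

MACRO_RE = re.compile(r'\[\[(\w+)(?:\((.*?)\))?\]\]')

def expand_macros(raw_text, macros):
    """Replace every [[Macro(args)]] with the macro's output so that the
    parser afterwards sees plain markup (IncludePage content gets real
    headings, CamelCase links, etc.)."""
    def execute_macro(match):
        name, args = match.group(1), match.group(2) or ''
        if name in macros:
            return macros[name](args)
        return match.group(0)  # leave unknown macros untouched
    return MACRO_RE.sub(execute_macro, raw_text)

# e.g. expand_macros('[[IncludePage(OtherPage)]]', {'IncludePage': load_page})
# where load_page is whatever function returns the raw text of a page.
}}}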