The CC parser turns a block of text like the following:

{{{#pre TITLE: Personal panoramic perception AUTHOR: Boult,-T. AUTHOR AFFILIATION: Lab. of Vision & Software Technol., Lehigh Univ., Bethlehem, PA, USA EDITOR: Arabnia,-H.-R. SOURCE: International-Conference-on-Imaging-Science,-Systems,-and-Technology. 1999: 383-9 PUBLISHER: CSREA Press, Athens, GA, USA NUMBER OF PAGES: 595 COUNTRY OF PUBLICATION: USA RECORD TYPE: Conference-Paper CONFERENCE DETAILS: Proceedings of CISST'99: 1999 International Conference on Imaging Science, Systems, and Technology. 28 June-1 July 1999 Las Vegas, NV, USA LANGUAGE: English ABSTRACT: For a myriad of military and educational situations, video imagery provides an important view into a remote location. These situations range from remote vehicle operation, to mission rehearsal, to troop training, to route planning, to perimeter security. These situations require a large field of view and most would benefit from the ability to view in different directions. Research has led to the development of new technologies that may radically alter the way we view these situations. By combining a compact omni-directional imaging system and a body-worn display, we can provide a new window into the remote environment: personal panoramic perception (P/sup 3/). The main components of a P/sup 3/ system are the omni-directional camera, a body-worn display and, when appropriate, a computer for processing the video. The paper discusses levels of immersion and their associated display/interface "needs". It also looks at the capture system issues including resolution issues, and the associated computational demands. Throughout the discussion we report on details of and experiences from using our existing P/sup 3/ systems. AVAILABILITY: CISST99-Personal-panoramic-perception--Boult.pdf COMMENT: hardware platform used is dated. Interesting observations on the use of a spheric reflector with a telecentric lens (omnicam configuration, I think): if multiple cameras image the reflector, the optical center of the system is unique (and centered inside the sphere). This is difficult to achieve with configurations that use multiple cameras. Evalutation of user performance and usability is anedoctical. }}}

into something more readable like this:

* Boult,-T. Personal panoramic perception in International-Conference-on-Imaging-Science,-Systems,-and-Technology. 1999: 383-9
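Under the hood the idea is simple: recognise the uppercase field labels, slice the record into key/value pairs, and glue the interesting fields back together as a wiki bullet. The following is only a sketch of that idea in Python; the key list, the function names and the exact output format are illustrative, not the parser's actual code:

{{{#!python
import re

# Field labels recognised in the (extended) Current Contents format.
# This list is an assumption for illustration; the real parser may know more.
CC_KEYS = ("TITLE", "AUTHOR", "AUTHOR AFFILIATION", "EDITOR", "SOURCE",
           "PUBLISHER", "NUMBER OF PAGES", "COUNTRY OF PUBLICATION",
           "RECORD TYPE", "CONFERENCE DETAILS", "LANGUAGE", "ABSTRACT",
           "AVAILABILITY", "COMMENT")

def parse_cc(record):
    """Slice one CC record into a {label: value} dictionary."""
    # Longest labels first, so "AUTHOR AFFILIATION" is not eaten by "AUTHOR".
    labels = "|".join(re.escape(k) for k in sorted(CC_KEYS, key=len, reverse=True))
    pattern = r"(%s):\s*(.*?)\s*(?=(?:%s):|$)" % (labels, labels)
    return dict(re.findall(pattern, record, re.S))

def format_citation(fields):
    """Build the one-line wiki citation shown above."""
    return "* %s %s in %s" % (fields.get("AUTHOR", "?"),
                              fields.get("TITLE", "?"),
                              fields.get("SOURCE", "?"))
}}}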

The Current Contents format has been extended by adding a couple of extra fields, AVAILABILITY and COMMENT in the example above, so that a local copy of the paper and personal notes can be attached to a citation.

If you add the following (very lazy) lines to your CSS style sheet, the abstracts will fold and unfold themselves on mouse-over (provided your browser is CSS2 compliant):

{{{#pre p span span {display: none;}    /* hide the nested abstract span */
p span:hover span {display: inline;}    /* show it while the citation is hovered */
}}}
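These two rules only work if the generated markup nests the abstract's `<span>` inside the citation's `<span>` within the paragraph. I have not spelled out the parser's exact output here, but the structure it would have to produce is roughly this (render_citation is just an illustrative helper, not the parser's real code):

{{{#!python
def render_citation(citation, abstract):
    # Assumed markup, not necessarily byte-for-byte what the parser emits:
    # the outer <span> holds the citation line, the nested <span> holds the
    # abstract, so the CSS above hides the abstract until the line is hovered.
    return "<p><span>%s <span>%s</span></span></p>" % (citation, abstract)
}}}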

If you have more than one citation between the triple brackets, they will be turned into multiple citation lines, exactly as you would expect. Terminate a citation with a blank line. Extra blank lines should be ignored, but don't overdo it anyway. Additionally, citations in the BibTeX and CiteSeer formats will be treated properly, as far as I understand.
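The blank-line splitting is easy to picture; something along these lines, where split_records is an illustrative name rather than the parser's real function, and the BibTeX check is only a rough heuristic:

{{{#!python
import re

def split_records(block):
    """Split the text between the triple brackets into individual citations.

    A blank line terminates a citation; runs of extra blank lines collapse
    into a single separator.
    """
    return [rec.strip() for rec in re.split(r"\n\s*\n", block) if rec.strip()]

def looks_like_bibtex(record):
    # Rough heuristic: BibTeX entries start with "@", e.g. "@inproceedings{...".
    return record.lstrip().startswith("@")
}}}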

I realize that I have probably rewritten (poorly) something that either Knuth or Lamport had already done in 1987 with great elegance and beauty. But I needed it inside MoinMoin, and in 1987 Python wasn't even around.
