so…Norm's musings. Make of them what you will.
Norman Walsh

DocBook XSL: The Next Generation
https://so.nwalsh.com/2020/07/25-docbook-xsltng

A new implementation of transformations for DocBook in XSLT 3.0

Volume 4, Issue 37; 25 Jul 2020

Yesterday, I posted a short note about the new DocBook xslTNG Stylesheets. I wanted to get something out and to mark the 1.0.0 release day, but I didn’t have the time (or the foresight) to get this post ready for publication on the day.

The goal

Marginal note: The CSS that’s included delivers (I hope) a clean and clear presentation, but I am not a graphic designer by trade. I’d be delighted to see what actual designers can do (à la CSS Zen Garden). Improvements to the presentation, or entirely alternate styles, would be very much appreciated!

The goal of the stylesheets is to produce clean, semantically rich HTML(5) that can be beautifully rendered with CSS (and a dash or two of JavaScript, if you wish) in the browser. I’ve done my best in all cases to make sure that the presentations are accessible. If you find something that isn’t accessible, please report it.

I made the 1.0.0 release yesterday morning because I was scheduled to deliver a presentation about them for a Balisage “tech check” session. That presentation is online now if you want to see it (or the DocBook source). Whether it makes a lot of sense without the narration is a little hard to predict. Some highlights from the talk:

  1. The stylesheets support extended links (links with multiple targets).
  2. The stylesheets do this by supporting XLink extended links, including linkbases (a sketch of the general shape appears after this list).
  3. The extended links have a compact, unobtrusive presentation if JavaScript is available but fall back gracefully to an inline list in other environments, including print.
  4. The stylesheets support the notion of “local conventions” for markup, a last-in-a-series transformation that can turn your markup into proper DocBook just before the standard stylesheet processing begins. In this case, I have a simple(r) format for entering the extended links that’s transformed into full XLink for the stylesheets.
  5. I also showed annotations, which present as modal dialogs with JavaScript but also fall back gracefully. (The non-JavaScript online rendering needs a little work, but that’s the plan.)
  6. The stylesheets support an XInclude fragment identifier scheme when parse="text" that allows you to specify search strings for the beginning and the end of the text to include. That makes your document references much less brittle.
  7. The paged (or chunked) rendering of a document supports keyboard navigation and a “fly out” table of contents on every page (if you want one).
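
If you haven’t met XLink extended links before, here’s roughly what the general shape looks like, taken from the XLink spec rather than from these stylesheets: a wrapper element carries xlink:type="extended" and each target is a locator child. The element names below are arbitrary (XLink only constrains the xlink:* attributes), and the exact DocBook markup, or the simpler local convention mentioned above, is described in the stylesheets’ guide.

<extendedlink xmlns:xlink="http://www.w3.org/1999/xlink" xlink:type="extended">
  <!-- Element names here are illustrative; only the xlink:* attributes matter. -->
  <locator xlink:type="locator" xlink:href="spec.xml" xlink:title="The specification"/>
  <locator xlink:type="locator" xlink:href="https://example.org/tutorial" xlink:title="A tutorial"/>
</extendedlink>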

Those are all fun things. The stylesheets also do the mundane job of transforming 387 of the 416 elements in DocBook. The missing elements are related to assemblies, which I just haven’t implemented yet, and some new synopsis markup proposed for DocBook 5.2 that I don’t think is quite finished. There’s a coverage report that tracks progress towards 100%. That page also lists the test suite results. There are (at the time of this writing) 1,557 tests in the test suite, all passing.

I sort of assume that if you’ve got this far, you know what XSLT is for and if you care about transforming DocBook with these stylesheets, you can figure out what to do next. The guide tries to be helpful. (I hope it succeeds! Feedback most welcome.)

This is obviously begging to be implemented in an XProc pipeline, but I didn’t have time to finish my implementation and write these stylesheets and write the Balisage paper that motivated them all in July. Alas. I will go back and address that deficiency as quickly as possible.

Marginal note: Coming soon, I hope!

In the meantime, there are a few other options for running them; my thinking is that usability trumps most other concerns. No matter how cool they are, if you can’t get them to work, you won’t use them.

Chapter 2 describes four ways to run them:

  1. The distribution includes a jar file and the core dependencies. You can run the jar (java -jar …) directly. It takes the same command line arguments as Saxon, but will do convenient things for you like setting up the catalog, resolvers, and extension functions (see the example command after this list).
  2. The distribution includes a Python script that takes the same command line arguments and also sets up the environment. In addition, it works with Maven (which you must have installed) to download the required dependencies automatically and configure the class path.
  3. There’s a build file for a Docker container and an outline of instructions for how to use that. I don’t think that’s ready for prime time among folks who aren’t already familiar with Docker, but if you are, I’d be very interested in hearing your feedback.
  4. And, of course, if you’re already comfortable configuring your Java environment to run applications, you can just do that. I do highly recommend making sure that the jar file that ships with the stylesheets is on your classpath. The XInclude and image properties extension functions are required for some features.
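
To make the first option concrete, an invocation looks roughly like the line below. The jar filename is illustrative (use whatever actually ships in the 1.0.0 distribution); the -s: and -o: options are the familiar Saxon ones.

java -jar docbook-xslTNG-1.0.0.jar -s:mydocument.xml -o:mydocument.html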

I have decided to pursue print formatting (making paged media, i.e., PDFs) with HTML+CSS instead of XSL-FO. I appreciate that this may be disappointing, but I don’t have the time to do an entirely separate set of XSL-FO transformations right now. The project is open source, so if you feel like doing it (and its test suite, and its documentation), go for it!

I’ve published a PDF (formatted with Antenna House) of the presentation I gave yesterday as an example. I plan to publish a PDF of the guide soon.

Share and enjoy!

One more thing…
https://so.nwalsh.com/2018/03/10/notes

A few notes about using the DocBook stylesheets without Gradle and on catalogs in jar files.

Volume 2, Issue 9; 10 Mar 2018

Since I posted about the DocBook XSLT 2.0 Stylesheets a few days ago, I’ve made a few refinements to the stylesheets distribution.

That last post was about how easy it is to use the stylesheets in a Gradle project to format DocBook documents. That isn’t going to suit everyone. If you don’t want to use Gradle, or you can’t, or you’ve already got a build system in place and you don’t want to change it more than you have to, that’s fine.

One option is to simply download the zip file (e.g., docbook-xslt2-2.3.1.zip), unpack it where you like, and have at it. You’ll need to set up your favorite XProc processor to run the pipelines, or decompose the pipeline workflow into steps using some other tool, but that’ll work.

Another option, at least for simple workflows, is to download the “-app” zip file (e.g., docbook-xslt2-2.3.1-app.zip), unpack it where you like, and run the pipeline directly. Unpacking the zip file will create a directory containing a jar file and a subdirectory of “lib” jars. You can run the stylesheet jar directly from that location:

java -jar /path/to/docbook-xslt2-2.3.1.jar mydocument.xml

The “app” distribution includes all of the open source libraries required to run the pipeline. If you need additional libraries, for example, for print formatting, put them on your classpath, or simply copy them into the lib/ directory.

Making catalogs

The other thing that the Gradle task does is construct a catalog file for the resources in the jar file. It then makes sure that catalog is added to the resolver before running the pipeline.

If you’re incorporating the stylesheets into your own build system, you can get the catalog with a few simple lines of Java (or your favorite JVM language). Here’s a toy example:

import org.docbook.XSLT20;

public class TestJvm {
    public static void main(String[] args) {
        XSLT20 docBookXslt = new XSLT20();

        String catalog = docBookXslt.createCatalog();

        // This catalog is a temporary file and will be deleted automatically.
        System.out.println("Temp catalog: " + catalog);

        catalog = docBookXslt.createCatalog("/tmp/dbcat.xml");

        // This catalog is "permanent"; you have to delete it yourself.
        System.out.println("Perm catalog: " + catalog);
    }
}

If you’re running the build inside a single JVM, the first form will create a catalog file that automatically gets deleted when the process finishes. If you’re running it in separate calls to the JVM, from make, for example, then use the latter form to create a persistent catalog.

For those who are curious, the resulting catalog has entries that look like this:

<uri name="https://cdn.docbook.org/release/2.3.1/xslt/base/html/final-pass.xsl"
     uri="jar:file:/home/ndw/…/docbook-xslt2-2.3.1.jar!/xslt/base/html/final-pass.xsl"/>

The XMLResolver supports jar: scheme URIs and will return the final-pass.xsl file from the jar when referenced with the URI from the CDN. No network access is required.

One last note: it’s also possible to obtain the catalogs through reflection, if you don’t have, or don’t want to have, the DocBook XSLT classes as compile-time dependencies. Here’s how the DocBook Task loads the schemas catalog:

import java.lang.reflect.Method

String schemaCatalog = null
try {
    // Load the class reflectively so it isn't a compile-time dependency.
    Class klass = Class.forName("org.docbook.Schemas")
    Object schemas = klass.newInstance()
    Method method = schemas.getClass().getMethod("createCatalog")
    schemaCatalog = "file://" + method.invoke(schemas)
} catch (ClassNotFoundException cnfe) {
    // The schemas artifact isn't on the classpath; carry on without it.
}

Here’s hoping that’s useful to someone.

Make it easy to use
https://so.nwalsh.com/2018/03/05/easy

Very few things are easy to use in absolute terms, but relative improvements can have value as well. Relatively speaking, I think I’ve made processing DocBook documents easier.

Volume 2, Issue 8; 05 Mar 2018

Recently, I attempted to reformat some documents that I hadn’t touched in a while. What I discovered was that they were relying on out-of-date stylesheets and a build environment that no longer “just worked” on my system. I decided the quickest thing to do would be to copy a more recent build system from another project and tweak it.

My build system of choice these days is Gradle. Now don’t start, I’m sure there are lots of reasons why you might argue that that isn’t “easy”. I’m a pragmatist:

  • Most of my XML work that isn’t directly in MarkLogic server runs on the JVM. That’s where my XML Calabash implementation(s) are, that’s where Saxon is, etc. Gradle works well with the Java ecosystem.
  • Gradle is the first tool that made it practical for me to use Maven repositories.
  • Maven deals very successfully with managing the software dependencies of a project.
  • Gradle is cross-platform. I recently helped someone get a document build system set up on Windows. Gradle worked flawlessly out of the box except for the bits of my build where I’d lazily left in some make and perl and a stray “cp” command.
  • Gradle is extensible, even if I don’t especially like Groovy.

The build system that I copied was the one for the latest XProc spec. What I noticed as I was tweaking it was that it depended on the DocBook XSLT 2.0 Stylesheets artifact from Maven and downloaded the stylesheets from https://cdn.docbook.org/ to do the formatting. That shouldn’t be necessary, I thought. (I’ve been burned several times recently by this downloading step when attempting to build documents on trains and planes, so I was predisposed to investing a little time in fixing it.)

Marginal note: A Maven “artifact” is just a jar file. It’s a way of packaging up a software dependency and sticking it on the web where build tools can find it. The details are unimportant to you, the user, if all you care about is formatting documents.

[ Here we go. If you’re going to fall down a deep rat hole, make sure there’s lots of yak fur at the bottom to pad your fall. —ed]

Indeed, downloading the stylesheet artifact from Maven wasn’t accomplishing very much. It would be possible, I thought, to use the stylesheets directly from the jar file if I could get a catalog setup correctly.

I had written, and was using, an extension task that makes it easier to use XML Calabash in Gradle. Getting the catalog in place meant refactoring that task in significant ways.

But having that task automatically setup an XML Catalog for the DocBook stylesheets seemed wrong. It’s just about pipeline processing, not DocBook specifically.

So I wrote another extension task for formatting DocBook documents specifically. Logically, that task had to be an extension of the underlying XML Calabash task. That meant a bit more refactoring.

Along the way, I also corrected a problem with the underlying task: the documents used by the task didn’t automatically get counted as inputs and outputs, so Gradle couldn’t figure out which tasks needed to be run again when documents changed.

Now that I had a place to stand, I added a bit of code to the DocBook task so that it would construct a bespoke catalog for the stylesheets in the jar file and insert that into the XML Calabash runtime.

Threading all these needles was a little tricky. I ended up putting some debugging code in XML Calabash to help me out. Turns out I had to fix some bugs related to catalog handling. I also applied a bunch of pull requests and fixed a handful of unrelated bugs along the way. (Including the bug that was causing the p:validate-with-relax-ng step to swallow the message that described the actual validation error. I anticipate much rejoicing across the land. Or in my office, anyway.)

After all that, a Gradle script for formatting DocBook documents:

buildscript {
  repositories {
    mavenCentral()
    maven { url "http://maven.restlet.org" }
  }

  dependencies {
    classpath group: 'org.docbook', name: 'docbook-xslt2', version: '2.2.0'
    classpath group: 'com.xmlcalabash', name: 'xmlcalabash1-gradle', version: '1.3.2'
    classpath group: 'org.xmlresolver', name: 'xmlresolver', version: '0.13.1'
  }
}

import org.docbook.DocBookTask
import com.xmlcalabash.XMLCalabashTask

task myDocument(type: DocBookTask) {
  input("source", "document.xml")
  output("result", "output.html")
}

That may not look “easy”, especially if you aren’t a software developer. But if you install Gradle on your platform and run gradle myDocument, you’ll get a formatted document.

Marginal note: Assuming, that is, that your document is named document.xml. Replace that with the filename of your actual DocBook document. You can change output.html into something nicer as well while you’re at it. And the name of the task, if you wish.

If you’re curious:

  • The word buildscript and the block enclosed in curly braces that follows are just boilerplate. You don’t have to understand it; what it says is that this project requires the DocBook XSLT 2.0 stylesheets, the Gradle plugin for running XML Calabash, and my XML Catalog processor.
  • The import statements are also just boilerplate.
  • Finally, the task myDocument of type “DocBookTask” transforms the source document document.xml into the output document output.html.

If you’re tempted to say “so what”, at least consider briefly what happens when you run this through Gradle.

  1. It will download the artifacts necessary: the three named explicitly and all of the thirty or so dependencies that you didn’t even know about.
  2. It will cache them locally for you in some location you never have to worry about. And if you have multiple projects that use DocBook, it’ll share them across those projects.
  3. If you end up using different versions of the stylesheets in different projects, that’ll just work as well.
  4. It will arrange for XML Calabash to run with the DocBook pipeline to process your source document.
  5. It will use an XML Catalog that will find the stylesheets directly in the appropriate jar file.
  6. And it will just work on Linux, on the Mac, and on Windows!

If you’ve been processing XML documents since the last millennium or if you’re a software developer, none of those steps will seem difficult [except maybe the Windows thing —ed], but it’s still possible to appreciate that you don’t have to do any of them. If you aren’t a software developer, some of those steps probably read like complete gibberish. That’s ok, because you don’t have to do any of them!

Right. Having got this far, being just about in a position to go back and finish the job I started, it occurred to me that if the stylesheets can be processed this way, shouldn’t it be possible to process the schemas in the same way? [Can you say “displacement activity”? —ed]

Of course, the answer is yes: SMOP. Long story short, I took the DocBook schemas (4.5, 5.0, and 5.1) and packaged them up in a Maven artifact with a little Java shim to construct a bespoke catalog for them as well. Then I went back and extended the DocBookTask so that it will use them if they’re available.

Want to validate your DocBook document before you format it?

buildscript {
  repositories {
    mavenCentral()
    maven { url "http://maven.restlet.org" }
  }

  dependencies {
    classpath group: 'org.docbook', name: 'docbook-xslt2', version: '2.2.0'
    classpath group: 'com.xmlcalabash', name: 'xmlcalabash1-gradle', version: '1.3.2'
    classpath group: 'org.xmlresolver', name: 'xmlresolver', version: '0.13.1'
    // Add this line:
    classpath group: 'org.docbook', name: 'docbook-schemas', version: '5.1-1'
  }
}

import org.docbook.DocBookTask
import com.xmlcalabash.XMLCalabashTask

task myDocument(type: DocBookTask) {
  // And tell the pipeline to validate with the schema
  option("schema", "https://docbook.org/xml/5.1/rng/docbook.rng")
  input("source", "document.xml")
  output("result", "output.html")
}

If you want a custom stylesheet or a custom schema, that’s fine too. Simply import or include the stylesheets or schemas using the standard URIs; they will be resolved by the catalogs and no actual web access will be required.
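
For example, a customization layer might look something like the sketch below. The href is illustrative (import whichever stylesheet module you’re actually customizing; final-pass.xsl is simply the one shown earlier); because it’s the standard URI, the catalog resolves it from the jar.

<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                version="2.0">
  <!-- Illustrative href: the standard CDN URI is resolved from the jar
       by the catalog, so no web access is required. -->
  <xsl:import href="https://cdn.docbook.org/release/2.3.1/xslt/base/html/final-pass.xsl"/>

  <!-- Your template and parameter overrides go here. -->
</xsl:stylesheet>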

At the end of the day, whether you consider this easy or difficult is going to depend on a lot of factors. I haven’t taken the time to describe all of the options of the DocBookTask (e.g., how to make PDF instead of HTML), and if you’re doing more than just formatting a single XML file, you will probably need or want to learn a little bit more Gradle.

I’m pleased with the results, however. So what if it consumed most of a weekend and required updates to three projects and the construction of a fourth. I’ve made it easy for you, right? Isn’t that the important thing?

How?
https://so.nwalsh.com/2017/03/01/how

You did what now?

Volume 1, Issue 2; 01 Mar 2017

This post, like the hello world post, is a natural one to write. It will be of passing interest to the sorts of folks who care how website sausage is made. If that doesn’t include you, feel free to wander off and read something more interesting.

History · My previous weblog had three distinct implementations. It started out as a site of mostly static HTML pages shaken and stirred by a bunch of Perl and Python (and XSLT). The XSLT transformed DocBook sources into HTML, the Perl extracted some RDF, ran it through an inference engine, and then cobbled all the bits and pieces together.

The second iteration ran on MarkLogic, with all the RDF pulled out and replaced by indexed XML markup. (Because MarkLogic of that vintage didn’t have support for semantics.)

In the last iteration, I put most of the RDF back. I did it partly to get some practical experience with the semantics features of MarkLogic, but also because SPARQL queries are convenient.

For this reboot, I wanted to go in a different direction. I chose a new direction partly because this weblog is an excuse for me to play with new technologies, but mostly because it’s been almost 15 years since I started doing this and that’s a long time in web years.

The old SGMLers’ dream of arbitrary markup on the web has come and gone. If you’re going to publish on the web, you’re going to do it with HTML, CSS, and JavaScript. That’s HTML5, CSS3, and ES6, sort of.

These days, you’re likely to write in Markdown instead of HTML, use SASS or Less instead of CSS, and jQuery (or maybe TypeScript or CoffeeScript) instead of “plain” JavaScript. (In addition, everything seems to be buried in frameworks of such rococo complexity that they’d make Louis XV wince, but that’s a topic for another day.)

Today · So what am I doing today? Today, this weblog is authored in Markdown (specifically CommonMark), styled with CSS (I confess, I haven’t made the switch to SASS or the like), and tarted up here and there with a bit of jQuery.

But there’s no DocBook!? No, in fact, there isn’t. I still think DocBook is great, but I wanted to try something different. Using DocBook for a weblog is a bit like driving to the corner grocery store in a Ferrari. It’ll get you there, but you’re not exactly taking advantage of it, are you?

The heavy lifting is still all provided by MarkLogic, of course. In addition to the pages themselves, there’s extracted (and inferred) semantic metadata on each article, plus a taxonomy and a big bag of semantic data. These are combined with XQuery and SPARQL to provide formatted pages plus all of the various views. (I know it’s possible to build web sites with tools other than MarkLogic, but for the love of all things, why put yourself through that hell? You know MarkLogic is a free download, right?)

CommonMark · I chose CommonMark for a couple of reasons. First, it has a really good specification. It’s not terribly long; it’s clearly written, unambiguous, and filled with useful examples. Second, there’s a complete and conformant JavaScript implementation of a CommonMark interpreter that produces well-formed HTML.

I’ve poked about with a number of markup flavors. In daily use, I have an affinity for org-mode because…Emacs. I have also used AsciiDoc, which has good support for round-tripping to DocBook. But neither of them has a clear, concise specification and, while they may have JavaScript implementations, those implementations can’t be as complete and conformant as the CommonMark interpreter. They can’t be, because there’s no proper specification against which to write tests.

That matters because all of these less-than-XML markup formats have something in common: they make easy things easy. The less than easy things are…less than easy. The formats tend to introduce increasingly arbitrary punctuation to accomplish anything even moderately complicated. So knowing that there’s a bulletproof specification is what gives you confidence that you’ll never be surprised; in particular, that the interpretation of punctuation won’t drift over time. I want to have confidence that the characters I write today will have the same interpretation at the end of the unix epoch as they do now.

Actually, another point in CommonMark’s favor is the ruthlessly simple way that the specification deals with this dilemma: if it’s not easy, just stick in literal HTML markup. End of story. Literal HTML is a bit incongruous when you find it jutting out in the middle of your otherwise mostly markup-free prose, but it’s damned simple to understand.

CommonMark may be the bee’s knees for authoring, but I need to turn it into actual markup to make use of it. I need HTML to display and I need structured markup from which to derive indexes and semantic data. This is where the JavaScript interpreter comes in. Yes, I could write an interpreter for any of these formats if I wanted to, or call an external process, but the fact that the reference implementation just drops into MarkLogic is awfully sweet. Here it is:

var commonmark = require ('commonmark.sjs');

var reader = new commonmark.Parser();
var writer = new commonmark.HtmlRenderer();
var parsed = reader.parse(mdtext);
var result = writer.render(parsed);

result

Stick the source markup in mdtext, call that module, and I get good, structured HTML back almost instantly. Perfect. Almost perfect.

What about? · Yes, exactly! What about those things? What about bibliographic metadata? What about hierarchical document elements? What about syntactic shortcuts for my particular editorial needs?

One of the absolute advantages of XML (the feature that makes it superior to Markdown and to HTML and to JSON and and and…) is its extensibility. It is always possible to extend an XML format simply by adding new markup. And that extension is always both apparent to consumers and ignorable by consumers.

But I don’t have XML this time, so I cheat.

CommonMark++ · Within the overall design of this weblog, I have four requirements that are not directly satisfied by CommonMark without resorting to inline HTML. Since they occur in almost every posting, I decided I wanted to handle them specially:

  1. Arbitrary bibliographic metadata
  2. Abstracts
  3. Epigraphs
  4. Extensible inline markup

I address these by imposing additional constraints on the input. In particular, these posts are not formed from completely arbitrary Markdown. Each posting has (must have!) the following format:

# The post title

A “paragraph” of arbitrary bibliographic metadata (see below).

A paragraph that is taken to be the abstract for the posting.

> An optional
> epigraph.

The rest of the input is the body of the post, which is
ordinary Markdown except for the special interpretation
of a particular inline syntactic extension.

The bibliographic metadata is further encoded into keyword/value pairs like so:

:uri: /2017/03/01/how
:subject: SelfReference
:where: us-tx-austin
:anytoken: Any value

Without some sort of extension for metadata, I don’t see how to use Markdown in a publishing context without considerable inconvenience. Well, I suppose if you’re working in a system where the metadata can be tracked externally, you don’t need to put it in the documents.
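
To give a flavor of how little machinery the metadata convention needs, here is a minimal sketch (not the code this weblog actually runs) of turning that metadata “paragraph” into keyword/value pairs before the rest of the Markdown goes off to CommonMark:

// Minimal sketch, not the weblog's actual code: split the metadata
// paragraph into :keyword: value pairs.
function parseMetadata(paragraph) {
  var meta = {};
  paragraph.split("\n").forEach(function (line) {
    var match = /^:(\w+):\s*(.*)$/.exec(line);
    if (match) {
      meta[match[1]] = match[2];
    }
  });
  return meta;
}

// parseMetadata(":uri: /2017/03/01/how\n:subject: SelfReference")
//   => { uri: "/2017/03/01/how", subject: "SelfReference" }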

My inline syntactic extension is really just laziness. The markup could absolutely be inserted as HTML. But typing

<a href="https://en.wikipedia.org/Topic">Topic</a>

every time I want to refer to a Wikipedia page, or

<span class="person" data-person="Walsh,Norman">Norman Walsh</span>

every time I wanted to refer to a person, just seemed too tedious and intrusive. That’s a completely arbitrary value judgement and, given the amount of markup that I’ve happily typed in my life, may even be a bit hypocritical. But the fact remains that that’s what I decided.

For the use cases I have in mind, it’s sufficient to encode a keyword, a token, and a string. After a few minutes skimming the CommonMark spec, I concluded that I could hijack the sequence “{:”. In particular, that I could encode arbitrary inline metadata for my own purposes like so:

{:keyword:token “string”}

It makes the reference to a Wikipedia {:wiki:Topic} or personal name, like {:person:Walsh,Norman “Norman Walsh”}, easier to type and less intrusive to the flow of the paragraph (for the editor). And, naturally, once the mechanism existed, I found another half dozen uses for it.
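
For the curious, expanding those shortcuts doesn’t take much machinery either. Here’s a rough sketch (again, not the actual implementation, which runs as part of the post-processing described below) that rewrites the two forms shown above in the HTML produced by CommonMark:

// Rough sketch, not the actual implementation: expand {:wiki:Topic} and
// {:person:token "string"} in the rendered HTML. A real version would
// need to skip code spans, handle more keywords, and so on.
function expandShortcuts(html) {
  return html
    .replace(/\{:wiki:([^\s}]+)\}/g,
             '<a href="https://en.wikipedia.org/$1">$1</a>')
    .replace(/\{:person:([^\s}]+)\s+[“"]([^“”"]+)[”"]\}/g,
             '<span class="person" data-person="$1">$2</span>');
}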

Putting it all together · To write a post, I author it in Markdown according to my conventions. I usually do this in Emacs, but I can also do it in SimpleMDE. Regardless, the Markdown is eventually sent to the weblog via an HTTP POST.

The CommonMark JavaScript converts it to HTML. The HTML is post-processed according to my conventions. Semantic metadata is added, inference is performed (using ad hoc queries today, perhaps using MarkLogic inferencing in the future), and the result is stored in the database, ready to be served up.

(Well. Mostly ready. In fact, I do a little bit of additional processing in some cases. But it’s not especially interesting and the result can be cached so that responding to requests is nearly instantaneous.)


  • Yes, I know about web components. Yes. Maybe. But not yet and, frankly, I expect not ever.