Note: XML Prague also includes a very interesting pre-conference day, a traditional dinner, posters, sponsor announcements, meals, coffee breaks, discussions and walks, which I have not covered in this article for lack of time.
When I was a child, I used to say that I felt Dutch when I was in France and French when I was in the Netherlands. It was nice to feel slightly different, and I liked to analyze the differences between the Dutch, who seemed more adult and civilized, and the French, who seemed to me more spontaneous and fierce.
I rediscovered that old feeling of being torn between two different cultures very strongly this weekend at XML Prague. Of course, it was no longer between French and Dutch but between the XML and web communities.
The conference also reminded me of the old joke about the Parisian visiting Corsica and saying “Corsica would be so cool without Corsicans!”; for me the tag line could have been “the web would be so cool without web developers!”.
Jeni Tennison’s amazing opening keynote was of course more subtle than that!
She started by acknowledging that the web is split into no fewer than four major formats: HTML, JSON, XML and RDF.
Her presentation was a series of clever reflections on how we can deal with these different formats and cultures, concluding that we should accept that “the web is varied, complex, dynamic, beautiful”.
I then gave my talk, “XML, the eX Markup Language” (see also my post on this blog), in which I analyzed the reasons for XML's failure to become the one major web format and gave my view on where XML should be heading.
While Jeni had explained why “chimera are usually ugly, foolish or impossible fantasies”, my conclusion was that we should focus on the data model and extend or bridge it to embrace JSON and HTML, as the XPath 3.0 data model proposes to do.
I still think so, but what is such a data model if not a chimera? Is it ugly, foolish or impossible, then? There is a lot to think about beyond what Hans-Jürgen Rennau and David Lee proposed at Balisage 2011, and I think I'll submit a proposal on this topic for Balisage 2012!
Anne van Kesteren had chosen a provocative title for his talk: “What XML can learn from HTML; also known as XML5”. Working for Opera, Anne was probably the only real representative of the web community at this conference. Under that title, his presentation advocated relaxing the strictness of the XML parsing rules and defining an error recovery mechanism for XML like the one that exists in HTML5.
His talk was followed by a panel discussion on HTML/XML convergence, and this subject of error recovery monopolized the whole panel! Some of the panelists (Anne van Kesteren, Robin Berjon and myself) were less hostile, but the audience unanimously rejected the idea of changing anything in the well-formedness rules of the XML recommendation.
Speaking of errors may be part of the problem: errors have a bad connotation, and if a syntactical construct is allowed by the error recovery mechanism with a well-defined meaning, why should we still consider it an error?
However, a consensus emerged that it could be useful to specify an error recovery mechanism for applications that need to read non-well-formed XML documents found in the wild on the web. This consensus led to the creation of the W3C XML Error Recovery Community Group.
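To make the idea of a separate, opt-in recovery layer concrete, here is a minimal sketch in Python. The recovery rule is purely illustrative, my own invention rather than anything the Community Group has specified: try strict parsing first, and only when it fails apply a well-defined fixup before retrying.

```python
import re
import xml.etree.ElementTree as ET

def parse_with_recovery(text):
    """Try strict XML parsing first; fall back to a defined, naive fixup.

    The fixup rule below is purely illustrative, not a proposed standard.
    """
    try:
        return ET.fromstring(text)  # strict: the normal XML 1.0 rules apply
    except ET.ParseError:
        # Illustrative rule: escape bare ampersands that start no entity,
        # giving the "error" a well-defined meaning instead of a fatal stop.
        fixed = re.sub(r"&(?![a-zA-Z#][a-zA-Z0-9]*;)", "&amp;", text)
        return ET.fromstring(fixed)

# A well-formed document parses as usual...
ok = parse_with_recovery("<p>fish &amp; chips</p>")
# ...while a bare '&' (fatal under strict rules) is recovered deterministically.
recovered = parse_with_recovery("<p>fish & chips</p>")
print(recovered.text)  # → fish & chips
```

The point of the sketch is Michael Sperberg-McQueen's: the strict parser is untouched, and recovery lives in a separate add-on layer that applications opt into.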
The reaction of the room, which refused even to consider a discussion of what XML well-formedness means, seems rather irrational to me. Michael Sperberg-McQueen reinforced this feeling in his closing keynote when he pleaded for defining this as “a separate add-on rule rather than as a spec that changes the fundamental rules of XML”.
What can be so fundamental about the definition of XML well-formedness? These reactions made me feel as if we were discussing kashrut rules rather than parsing rules, and the debate often looked more religious than technical!
The next talk, XProc: Beyond application/xml by Vojtěch Toman, was again about bridging technologies but was less controversial, probably because the technologies to bridge with were not seen as XML competitors.
Looking at the workarounds used by XML pipelines to support non-XML data (either encoding the data or storing it outside the pipeline), Vojtěch proposed extending the data model flowing through the pipelines to support non-XML content directly. That kind of proposal looks so obvious and simple that you wonder why it hadn't been done before!
George Bina came next to present Understanding NVDL – the Anatomy of an Open Source XProc/XSLT implementation of NVDL. NVDL is a cool technology for bridging different schema languages, and it greatly facilitates the validation of compound XML documents.
Next was Jonathan Robie, presenting JSONiq: XQuery for JSON, JSON for XQuery. JSONiq is both a syntax and a set of extensions for querying JSON documents in an XQuery flavor that looks like JSON. The syntax and the extensions both look elegant and clever.
The room was usually very quiet during the talks, waiting for the Q&A sessions at the end to ask questions or offer comments, but as soon as Jonathan displayed the first example, Anne van Kesteren couldn't help gasping: “What? Arrays are not zero based!”
Proposing one-based arrays inside a JSONic syntax to web developers is like wearing a kippah to visit an Orthodox Jew and bringing him baked ham: if you want to be kosher, you need to be fully kosher!
Norman Walsh came back on stage to present Corona: Managing and querying XML and JSON via REST, a project to “expose the core MarkLogic functionality—the important things developers need—as a set of services callable from other languages” in a format-agnostic way (XML and JSON can be used interchangeably).
The last talk of this first day was given by Steven Pemberton, Treating JSON as a subset of XML: Using XForms to read and submit JSON. After a short introduction to XForms, Steven explained how the W3C XForms Working Group is considering supporting JSON in XForms 2.0.
While Steven was speaking, Michael Kay tweeted what many of us were thinking: “Oh dear, yet another JSON-to-XML mapping coming…”. Unfortunately, until JSON finds its way into the XML data model, every application that wants to expose JSON to XML tools has to define its own mapping!
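The reason the mappings keep multiplying is that JSON has no canonical XML representation, so each tool has to invent one. A minimal sketch of one possible, entirely ad hoc mapping shows the kind of arbitrary decisions every such mapping must make, for instance how to name array items:

```python
import json
import xml.etree.ElementTree as ET

def json_to_xml(value, name="root"):
    """One ad hoc JSON-to-XML mapping among the many possible ones."""
    elem = ET.Element(name)
    if isinstance(value, dict):
        for key, child in value.items():
            elem.append(json_to_xml(child, key))
    elif isinstance(value, list):
        # Arbitrary decision: wrap each array member in an <item> element.
        for child in value:
            elem.append(json_to_xml(child, "item"))
    else:
        # Another arbitrary decision: atomic values become text content.
        elem.text = json.dumps(value) if value is None else str(value)
    return elem

doc = json.loads('{"title": "XML Prague", "speakers": ["Jeni", "Anne"]}')
print(ET.tostring(json_to_xml(doc), encoding="unicode"))
# → <root><title>XML Prague</title><speakers><item>Jeni</item><item>Anne</item></speakers></root>
```

Another tool might pick `<array>`/`<member>` wrappers, type attributes, or attribute-based keys, and none of these choices interoperates with the others, which is exactly Michael's complaint.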
The first sessions of the second day were given by Jonathan Robie and Michael Kay, presenting What's New in XPath/XSLT/XQuery 3.0 and XML Schema 1.1.
A lot of good things indeed! XML Schema 1.1 in particular will correct the biggest limitations of XML Schema 1.0 and borrow some features from Schematron, making XML Schema an almost decent schema language!
But the biggest news concerns XPath/XSLT/XQuery 3.0, which bring impressive new features that will turn these languages into full-fledged functional programming languages. And, of course, new types in the data model to support the JSON data model.
One of these new features is annotations, and Adam Retter gave a good illustration of how they can be used in his talk RESTful XQuery – Standardised XQuery 3.0 Annotations for REST. With XQuery being used to power web applications, these annotations can define how stored queries are associated with HTTP requests, and Adam proposes to standardize them to ensure interoperability between implementations.
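The idea will feel familiar to web developers: much like routing decorators in Python web frameworks, the annotations declare, next to the query itself, which HTTP requests it should answer. A hand-rolled sketch of the same pattern (my own toy router, not Adam's actual specification):

```python
# A toy router illustrating the pattern: annotations (decorators here)
# declare which HTTP request each stored function should answer.
routes = {}

def rest_path(method, path):
    """Register a handler for an HTTP method and path, RESTXQ-style."""
    def register(func):
        routes[(method, path)] = func
        return func
    return register

@rest_path("GET", "/hello")
def hello():
    return "<greeting>Hello, XML Prague!</greeting>"

def dispatch(method, path):
    """Look up and invoke the handler registered for the request."""
    handler = routes.get((method, path))
    return handler() if handler else "404 Not Found"

print(dispatch("GET", "/hello"))  # → <greeting>Hello, XML Prague!</greeting>
```

Standardizing the annotation names would mean the same annotated query could be deployed unchanged on any conforming XQuery server, which is the interoperability Adam is after.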
After lunch, Evan Lenz presented Carrot, “an appetizing hybrid of XQuery and XSLT”, first shown at Balisage 2011. This hybrid is not a chimera but a nice compromise for those of us who can't really decide whether they prefer XSLT or XQuery: Carrot extends the non-XML syntax of XQuery to expose the templating system of XSLT.
It can be seen both as yet another non-XML syntax for XSLT and as a templating extension for XQuery, borrowing the best features from both languages!
Speaking of defining templates in XQuery, John Snelson came next to present Transform.XQ: A Transformation Library for XQuery 3.0. Taking advantage of the functional programming features of XQuery 3.0, Transform.XQ is an XQuery library that implements templates in XQuery. These templates are not exactly the same as XSLT templates (the priority system is different), but as in XSLT you'll find template definitions, apply-templates methods, modes, priorities and other goodies.
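The underlying trick is that once a language has higher-order functions, an XSLT-like rule system can be built as a plain library. A sketch in Python of the general idea (my own simplification, not Transform.XQ's actual API): templates are (predicate, priority, function) triples, and apply-templates picks the highest-priority matching rule.

```python
import xml.etree.ElementTree as ET

# Templates as plain data: (match predicate, priority, handler function).
templates = []

def template(match, priority=0):
    """Register a template rule, XSLT-style, as a higher-order function."""
    def register(func):
        templates.append((match, priority, func))
        return func
    return register

def apply_templates(node):
    """Apply the highest-priority template whose predicate matches the node."""
    matching = [(p, f) for m, p, f in templates if m(node)]
    if not matching:
        # Default rule: recurse into children, as XSLT's built-in rules do.
        return "".join(apply_templates(child) for child in node)
    _, func = max(matching, key=lambda pair: pair[0])
    return func(node)

@template(lambda n: n.tag == "em", priority=1)
def emphasis(node):
    return "*%s*" % node.text

@template(lambda n: n.tag == "para")
def para(node):
    return (node.text or "") + "".join(
        apply_templates(child) + (child.tail or "") for child in node)

doc = ET.fromstring("<para>Hello <em>XML</em> Prague</para>")
print(apply_templates(doc))  # → Hello *XML* Prague
```

Modes would simply be separate rule lists; the point is that none of this needs new syntax once functions are first-class values, which is exactly what XQuery 3.0 provides.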
Java had not been mentioned yet, and Charles Foster came to propose Building Bridges from Java to XQuery. Based on the XQuery API for Java (XQJ), these bridges rely on Java annotations to map Java classes to XQuery stored queries, and of course POJOs are also mapped to XML to provide a very sleek integration.
The last talk was a use case presented by Lorenzo Bossi, A Wiki-based System for Schema and Data Evolution, providing a good summary of the kinds of problems you face when you need to update schemas and corpora of documents.
Everyone then held their breath waiting for Michael Sperberg-McQueen's closing keynote, which was brilliant as usual, almost impossible to summarize, and should be watched on video!
Michael chose John Amos Comenius as the introduction to his keynote. Comenius was the last bishop of the Unity of the Brethren and became a religious refugee, which gave Michael an opportunity to call for tolerance and diversity in document formats as in real life. Comenius was also one of the earliest champions of universal education, and in his final conclusion Michael pointed out that structured markup languages are the new champions of this noble goal.
Of course, there was much more than that in his keynote, with Michael taking care to mention each presentation, but this focus on Comenius confirmed my sense of a religious feeling toward XML.
I agree with most of what Michael said in his keynote, except maybe when he seems to deny that XML adoption can be considered disappointing. When he says that the original goal of XML, to be able to use SGML on the web, has been achieved because he, Michael Sperberg-McQueen, can use XML on his web sites, that's true of course, but was the goal really to allow SGML experts to use SGML on the web?
It’s difficult for me to dissent, because he was involved in XML at a time when I had never heard of SGML, but I would still argue that SGML was already usable on the web by SGML experts, and I don’t understand the motivation for the simplification that gave birth to XML if it wasn’t to lower the barrier to entry so that web developers could use XML.
The consequences of this simplification have been very heavy: the whole stack of XML technologies had to be reinvented, and SGML experts lost a lot of time before these technologies could be considered on the same level as what they had before. And even now, some features stripped from SGML could be very useful to experts on the web, for instance DTDs powerful enough to describe wiki syntaxes.
Similarly, when I discussed my talk with Liam Quin during lunch, he said that he had always thought that XHTML would never replace HTML. I have no reason to contradict Liam, but the vision of the W3C Markup Activity was clearly to “Deliver the Web of the Future Today: Recasting HTML in XML”, as can be seen in this archive.
It’s not pleasant to admit that we’ve failed, but replacing HTML with XHTML so that XML would become dominant in the browser was clearly the official vision of the W3C, shared by many of us, and this vision has failed!
We need to acknowledge that we’ve lost this battle and make peace with the web developers who have won…
Curiously, there seems to be much less aggressiveness toward JSON than toward HTML5 in the XML community, as shown by the number of efforts to bridge XML and JSON. Can we explain this by the fact that many XML purists considered data-oriented XML less interesting and noble than document-oriented XML?
Anyway, the key point is that a very strong ecosystem has been created, with an innovative, motivated and almost religious community and a technology stack that is both modern and mature.