Putting websemantique.org to sleep

For several years now, most of the “contributions” to websemantique.org have been the work of spammers! The information it hosts is no longer fresh, and the site reflects neither the vitality nor the state of the art of the semantic web.

I have therefore decided (with the agreement of the founding team) to redirect all the site’s pages to the “planète web sémantique” (semantic web planet) aggregator, which I still appreciate as a monitoring tool.

I will likewise redirect the pages of http://smob.websemantique.org/ (which has not been used since 2009) to the planet.

Thanks to everyone for your contributions!

RSS in the countryside

Organic (AB-certified) farmers are rare in Haute-Normandie, where the bocage hedgerow landscape has been converted to large-scale field crops wherever possible. To break their isolation, they have banded together in groups and try to keep in touch by every possible means.

As organic fruit growers, we are no exception to the rule and gladly visit Paola and Benoît Lelièvre of the Pincheloup farm, our closest organic neighbours; but since August and the opening of the shop, we have had to space these visits out somewhat.

Yesterday Catherine received an email from Paola giving some news and telling her that our adventures were being followed attentively at the Pincheloup farm thanks to… the RSS feed of our site!

When I took part in writing the RSS 1.0 specification in 2000, I was far from suspecting that this vocabulary, whose usefulness I had such a hard time explaining to those around me, would one day be used out in the countryside and would help me carry out a project of such a different nature!

Web 2.0 at XML Prague

This coming weekend, I’ll have the pleasure of being at XML Prague, a small and friendly XML conference in a wonderful city.

This year, I’ll set aside my usual XML schema languages expert hat to speak on two topics:

  • An experiment in defining an RDF/XML Query By Example language. This presentation covers a very cool project that I am developing for one of my customers (INSEE) and that I also presented at Extreme Markup Languages last year. It is very much on topic with this year’s XML Prague focus, “XML Native Databases and Querying XML”.
  • Web 2.0: myth and reality, a presentation derived from the blog entry of the same title. Even though one could argue that Web 2.0 is about making a web that can be queried, this talk will probably feel more off-topic. I hope it will still be well received and look forward to delivering it in Prague.

XML Prague 2005 was also an opportunity to see Prague, which I hadn’t seen since… 1981… (so many things had changed that I could hardly recognize the city), and to meet many members of an active and creative Eastern European XML community with whom I had often exchanged emails but had had few opportunities to meet face to face.

I have no doubt XML Prague 2006 will be as much fun as its predecessor.

Edd Dumbill on XTech 2006

Last year Edd Dumbill, XTech Conference Chair, was kind enough to answer my questions about the 2005 edition of the conference previously known as “XML Europe”. We’re renewing the experience, taking the opportunity to look back at last year’s edition and to figure out what XTech 2006 should look like.

vdV: You mention in your blog the success of XTech 2005, and that’s an appreciation shared by many attendees (including myself). Can you elaborate, for those who missed XTech 2005, on what makes you say it was a success?

Edd: What I was particularly pleased with was the way we adapted the conference topic areas to reflect the changing technology landscape.

With Firefox and Opera, web browser technology matters a lot more now, but there was no forum to discuss it. We provided one, and some good dialog was opened up between developers, users and standards bodies.

But, to sum up how I know the conference was successful: because everybody who went told me that they had a good and profitable time!

vdV: You said during our previous interview that two new tracks which “aren’t strictly about XML topics at all” were introduced last year (Browser Technology and Open Data) to reflect the fact that “XML broadens out beyond traditional core topics”. Have these tracks met their goal of attracting a new audience?

Edd: Yes, I’m most excited about them. As I said before, the browser track really worked at getting people talking. The Open Data track was also very exciting: we heard a lot from people out there in the real world providing public data services.

The thing is that people in these “new” audiences work closely with the existing XML technologists anyway. It didn’t make sense to talk about XML and leave SVG, XHTML and XUL out in the cold: these are just as much document technologies as DocBook!

One thing that highlighted this for me was that I heard from a long-time SGML and then XML conference attendee that XTech’s subject matter was the most interesting they’d seen in years.

vdV: Did the two “older” tracks (Core Technologies and Applications) hold their own against these two new tracks, and would you qualify them as successful too?

Edd: Yes, I would! XTech is still a very important home for leaders in the core of XML technology. Yet I also think there’s always a need to adapt to the priorities of the conference attendees. One thing I want to do this year is to freshen the Applications track to reflect the rapidly changing landscape in which web applications are now being constructed. As well as covering the use of XML vocabularies and their technologies, I think frameworks such as Rails, Cocoon, Orbeon and Django are important topics.

vdV: What would you like to do better in 2006?

Edd: As I’ve mentioned above, I think the Applications track can and will be better. I’d like also for there to be increased access to the conference for people such as designers and information architects. The technology discussed at XTech often directly affects these people, but there’s not always much dialogue between the technologists and the users. I’d love to foster more understanding and collaboration in that way.

vdV: You mention in your blog and in the CFP that there will be panel discussions for each track. How do you see these panel discussions?

Edd: Based on feedback from 2005’s conference, I would like the chance for people to discuss the important issues of the day in their field. For instance, how should XML implementors choose between XQuery and XSLT2, or how can organisations safely manage exposing their data as a web service? There’s no simple answer to these questions, and discussions will foster greater understanding, and maybe bring some previously unknown insights to those responsible for steering the technology.

vdV: The description of the tracks for XTech 2006 looks very similar to its predecessor’s. Does that mean that this will be a replay of XTech 2005?

Edd: Yes, but even more so! In fact, XTech 2005 was really a “web 2.0” conference even before people put a name to what was happening. In 2006 I want to build on last year’s success and provide continuity.

vdV: In last year’s description, the semantic web had its own bullet point in the “Open Data” track and this year, it’s sharing a bullet point with tagging and annotation. Does that mean that tagging and annotation can be seen as alternatives to the semantic web? Doesn’t the semantic web deserve its own track?

Edd: The Semantic Web as a more formal sphere already has many conferences of its own. While XTech definitely wants to cover the semantic web, it doesn’t want to get carried away with the complicated academic corners of the topic, but rather to see where semantic web technologies can be directly used today.

Also, I see the potential for semantic web technologies to pervade all areas that XTech covers. RDF for instance, is a “core technology”. RSS and FOAF are “applications” of RDF. RDF is used in browsers such as Mozilla. And RDF is used to describe metadata in the Creative Commons, relevant to “open data”. So why shut it off on its own? I’d far rather see ideas from semantic web spread throughout the conference.

vdV: In your blog, you’ve defended the choice of the tagline “Building Web 2.0” quoting Paul Graham and saying that the Web 2.0 is a handy label for “The Web as it was meant to be used”. Why have you not chosen “Building the web as it was meant to be” as a tagline, then?

Edd: Because we decided on the tagline earlier! I’ll save “the web as it was meant to be” for next year :)

vdV: What struck me with this definition is that XML, Web Services and the Semantic Web are also attempts to build the Web as it was meant to be. What’s different with the Web 2.0?

Isn’t “building the web as it was meant to be” an impossible quest and why should the Web 2.0 be more successful than the previous attempts?

Edd: I’ll answer both of these questions together. I think the “Web 2.0” name includes and builds on XML, Web Services and the Semantic Web. But it also brings in the attitude of data sharing, community and the read/write web. Together, those things connote the web as it was intended by Berners-Lee: a two-way medium for both computers and humans.

Rather than an “attempt”, I think “Web 2.0” is a description of the latest evolution of web technologies. But I think it’s an important one, as we’re seeing a change in the notions of what makes a useful web service, and a validation of the core ideas of the web (such as REST) which the rush to make profit in “Web 1.0” ignored.

vdV: In your blog, you said that you’re “particularly interested in getting more in about databases, frameworks like Ruby on Rails, tagging and search”. By databases, do you mean XML databases? Can you explain why you find these points particularly interesting?

Edd: I mean all databases. Databases are now core to most web applications and many web sites. They’re gaining features to directly support web and XML applications, whether they’re true “XML databases” or not. A little bit of extra knowledge about the database side of things can make a great difference when creating your application.

XTech is a forum for web and XML developers, the vast majority of whom will use a database as part of their systems. Therefore, we should have the database developers and vendors there to talk as well.

vdV: One of the good things last year was the wireless coverage. Will there be wireless coverage this year too?

Edd: Absolutely.

vdV: What is your worst memory of XTech 2005?

Edd: I don’t remember bad things :)

vdV: What is your best memory of XTech 2005?

Edd: For me, getting so many of the Mozilla developers out there (I think there were around 25+ Mozilla folk in all). Their participation really got the browser track off to a great start.


TreeBind, Data binding and Design Patterns

I have released a new version of my Java data binding framework, TreeBind, and I feel I need to explain why I am so excited by this API and by other lightweight binding APIs…

In short, to me these APIs are the latest episode of a complete paradigm shift in the relationship between code and data.

This relationship has always been ambiguous because we are searching for a balance between conflicting considerations:

  • We’d like to keep data separate because history has taught us that legacy data is more important than programs and that data needs to survive across several generations of programs.
  • On the other hand, object orientation is about mixing programs and data.

The Strategy Pattern is about favouring composition over inheritance: basically, you create classes for behaviours and these behaviours become object properties.
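As a minimal Java sketch (all names hypothetical, for illustration): the behaviour is a class, and the object that uses it holds it as an ordinary property.

```java
// Strategy pattern sketch: behaviours are classes, held as properties.
// All class and method names here are hypothetical, for illustration only.
interface SubjectRewriter {                        // a behaviour
    String rewrite(String subject);
}

class PrefixRewriter implements SubjectRewriter {  // one concrete behaviour
    private final String prefix;
    PrefixRewriter(String prefix) { this.prefix = prefix; }
    public String rewrite(String subject) { return prefix + " " + subject; }
}

class MailingList {
    // composition over inheritance: the behaviour is a plain property,
    // which a binding API could populate directly from XML or RDF
    private SubjectRewriter subjectRewriter = s -> s;  // default: identity
    public void setSubjectRewriter(SubjectRewriter r) { subjectRewriter = r; }
    public String process(String subject) { return subjectRewriter.rewrite(subject); }
}

class StrategyDemo {
    static String demo() {
        MailingList list = new MailingList();
        list.setSubjectRewriter(new PrefixRewriter("[the XML Guild]"));
        return list.process("Hello");
    }
    public static void main(String[] args) {
        System.out.println(demo());  // "[the XML Guild] Hello"
    }
}
```

Because setSubjectRewriter is a plain setter, a reflection-based binder can instantiate a PrefixRewriter from an XML element and plug it in, with no configuration schema needed.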

This design pattern becomes even more powerful when you use a data binding API such as TreeBind, since you gain the ability to directly express the behaviours as XML or RDF.

I have used this ability recently on at least two occasions.

The first one is in RDF, to implement the RDF/XML Query By Example language that I have presented at Extreme Markup Languages this summer.

RDF resources in a query such as:

<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"

are bound to Java classes (in this case, a “Select” class, a generic class for other resources such as “InseePerson”, and a “Conditions” class), and these classes can be considered behaviours.

The second project in which I have been using this ability is for a list manager which I am writing to run my mailing lists.

This list manager is designed as a set of behaviours to apply on incoming messages.

Instead of providing a set of rigid parameters to define the list configuration, I have decided to expose the behaviours themselves through TreeBind.

The result is incredibly flexible:

<?xml version="1.0" encoding="UTF-8"?>
                <subjectPrefix>[the XML Guild]</subjectPrefix>
Yet another mailing list manager!
                <header name="Precedence">List</header>
                <header name="List-Id">&lt;list.example.com></header>
                <header name="List-Post">&lt;mailto:list@example.com></header>

The whole behaviour of the list manager is exposed in this XML document and the Java classes corresponding to each element are no more than the code that implements this behaviour.

Unless you prefer to see it the other way round and consider that the XML document is the extraction of the data from their classes…

TreeBind: one infoset to bind them all

This is the first entry of a series dedicated to the TreeBind generic binding API.
I have recently made good progress in the extensive refactoring of TreeBind required by my proposal to support RDF and it’s time to start explaining these changes.

The first of them is the infoset on which TreeBind now relies.

TreeBind’s goal is to propose and implement a binding mechanism that can support XML and Java objects, but also RDF and LDAP (my new implementation includes support for these two models as data sources) and, potentially, other sources such as relational databases or even PSVIs…

In order to cover all these data sources, TreeBind requires an infoset (or data model) that is a superset of the data models of these sources.

The new TreeBind infoset is simple enough to cope with all these data models. It consists of:

  • Names. These different sources have different ways of defining names. Names can include both a domain name and a local name (that’s the case for XML with namespaces, but also for Java class names with packages), they can include only a local name (that’s the case for LDAP, but also for Java method names), or they can be more complex, like XML attribute names, in which the namespace of the parent element has a role to play.
  • Complex properties. These are non-leaf properties. A complex property has a nature, which is a name, and a set of embedded properties that are either complex or leaf properties. When a sub-property is attached to a property, the attachment carries a role, which is a name.
  • Leaf properties. A leaf property has a nature (which is a name) and a value.

That’s all…
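These three concepts can be sketched in Java as follows (class names are assumptions for illustration, not TreeBind’s actual API):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// The three-concept infoset: names, complex properties, leaf properties.
// Class names are assumptions for illustration, not TreeBind's real API.
class Name {
    final String domain;  // may be null (LDAP names, Java method names)
    final String local;
    Name(String domain, String local) { this.domain = domain; this.local = local; }
    public String toString() { return domain == null ? local : "{" + domain + "}" + local; }
}

abstract class Property {
    final Name nature;    // every property has a nature, which is a name
    Property(Name nature) { this.nature = nature; }
}

class LeafProperty extends Property {
    final String value;   // leaf properties carry a value
    LeafProperty(Name nature, String value) { super(nature); this.value = value; }
}

class ComplexProperty extends Property {
    // each attachment carries a role, which is itself a name
    final Map<Name, Property> children = new LinkedHashMap<>();
    ComplexProperty(Name nature) { super(nature); }
    void attach(Name role, Property child) { children.put(role, child); }
}

class InfosetDemo {
    static int demo() {
        String ns = "http://ns.treebind.org/example/";
        ComplexProperty book = new ComplexProperty(new Name(ns, "book"));
        book.attach(new Name(ns, "title"),
                    new LeafProperty(new Name(ns, "title"), "RELAX NG"));
        ComplexProperty author = new ComplexProperty(new Name(ns, "author"));
        author.attach(new Name(ns, "lname"),
                      new LeafProperty(new Name(ns, "lname"), "van der Vlist"));
        book.attach(new Name(ns, "written-by"), author);
        return book.children.size();  // the book has two attached properties
    }
}
```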

This is enough to differentiate, for instance, an XML element from an XML attribute because their names belong to different name classes.

This would, potentially, make it possible to cope with mixed content by adding a new class of names to support text nodes. This is not implemented for the moment, simply because I don’t have any business case to justify the additional workload.

If needed, the same could be done to support other XML constructions such as PIs and comments.

A concept which is clearly missing and should probably be added in a future version is the concept of identity.

Names are used to identify the nature of the objects and the roles they play in the different associations.

When we use TreeBind to bind not only trees but also graphs (which is the case of RDF, LDAP and even XML if we want to support some type of id/idref), we need to be able to identify objects in order to avoid creating binding loops.

This could be done by attaching an ID which could also be a name to each property.

So what?

The new version of TreeBind implements a SAX-like paradigm built on top of this simple infoset, much as SAX is (more or less) built on top of the XML infoset.

Binding a source to a sink is done by implementing or just using:

  • A source that reads the data source and streams properties.
  • A sink that receives the streamed properties and creates the target data.
  • One or more filters to deal with the impedance mismatches between the source and the sink.

The strength of this architecture is that if the built-in pipe that does the binding is not flexible enough for your application, you can just add a filter that will cope with your specific requirements.
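A sketch of that pipeline, with hypothetical interface names: a filter implements the same sink interface and forwards events downstream, here smoothing an impedance mismatch by renaming a role on the fly.

```java
// SAX-like pipeline sketch: source -> filter -> sink.
// Interface and class names are assumptions for illustration.
interface PropertySink {
    void startComplex(String role, String nature);
    void leaf(String role, String value);
    void endComplex();
}

// A filter receives events like a sink and forwards them like a source;
// this one smooths an impedance mismatch by renaming a role.
class RenamingFilter implements PropertySink {
    private final PropertySink next;
    private final String from, to;
    RenamingFilter(PropertySink next, String from, String to) {
        this.next = next; this.from = from; this.to = to;
    }
    public void startComplex(String role, String nature) {
        next.startComplex(from.equals(role) ? to : role, nature);
    }
    public void leaf(String role, String value) {
        next.leaf(from.equals(role) ? to : role, value);
    }
    public void endComplex() { next.endComplex(); }
}

// A trivial sink that records the events it receives.
class CollectingSink implements PropertySink {
    final StringBuilder log = new StringBuilder();
    public void startComplex(String role, String nature) { log.append('<').append(role).append('>'); }
    public void leaf(String role, String value) { log.append(role).append('=').append(value).append(';'); }
    public void endComplex() { log.append("</>"); }
}

class PipelineDemo {
    static String demo() {
        CollectingSink sink = new CollectingSink();
        PropertySink pipe = new RenamingFilter(sink, "written-by", "author");
        // a hand-written source streaming the book example
        pipe.startComplex("book", "book");
        pipe.leaf("title", "RELAX NG");
        pipe.startComplex("written-by", "author");
        pipe.leaf("lname", "van der Vlist");
        pipe.endComplex();
        pipe.endComplex();
        return sink.log.toString();
    }
}
```

If the built-in pipe doesn’t fit your application, you insert a filter such as RenamingFilter between the source and the sink, without touching either of them.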

We’ll explore all that in more detail in the next entries…

TreeBind goes RDF

TreeBind can be seen as yet another open source XML <-> Java object data binding framework.

The two major design decisions that differentiate TreeBind from other similar frameworks are that:

  1. TreeBind has been designed to work with existing classes through Java introspection and doesn’t rely on XML schemas.
  2. Its architecture is not specific to XML and TreeBind can be used to bind any source to any sink, assuming they can be browsed and built following the paradigm of trees (this is the reason why we have chosen this name).

Another difference from other frameworks is that TreeBind has been sponsored by one of my customers (INSEE) and that I am its author…

The reason why we started this project is that we did not find any framework that met the two requirements I have mentioned, and I am now bringing TreeBind a step forward by designing an RDF binding.

I have sent an email with the design decisions I am considering for this RDF binding to the TreeBind mailing list, and I include a copy below for your convenience:


I am currently using TreeBind on an RDF/XML vocabulary.

Of course, an RDF/XML document is a well-formed XML document and I could use the current XML bindings to read and write RDF/XML documents.

However, these bindings focus on the actual XML syntax used to serialize the document. They don’t see the RDF graph behind that syntax and are sensitive to the “style” used in the XML document.

For instance, these two documents produce very similar triples:

<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#";
        <title>RELAX NG</title>
                <lname>van der Vlist</lname>


<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#";
        <title>RELAX NG</title>
        <written-by rdf:resource="#vdv"/>
    <author rdf:ID="vdv">
        <lname>van der Vlist</lname>

but the XML bindings will generate a quite different set of objects.

The solution to this problem is to create RDF bindings that will sit on top of an RDF parser to pour the content of the RDF model into a set of objects.

The overall architecture of TreeBind has been designed with this kind of extension in mind and that should be easy enough.

That being said, design decisions need to be made to define these RDF bindings and I’d like to discuss them in this forum.

RDF/XML isn’t so much an XML vocabulary in the common meaning of the term as a set of binding rules that bind an XML tree into a graph.

These binding rules introduce some conventions that sometimes differ from what we are used to doing in “raw” XML documents.

In raw XML, we would probably have written the previous example as:

<?xml version="1.0" encoding="UTF-8"?>
<book xmlns="http://ns.treebind.org/example/";>
    <title>RELAX NG</title>
        <lname>van der Vlist</lname>

The XML bindings would pour that content into a set of objects using the following algorithm:

  • find a class that matches the XML expanded name {http://ns.treebind.org/example/}book and create an object from that class.
  • try to find a method such as addTitle or setTitle with a string parameter on this book object and call that method with the string “RELAX NG”.
  • find a class that matches the XML expanded name {http://ns.treebind.org/example/}author and create an object from that class.
  • try to find a method such as addFname or setFname with a string parameter on this author object and call that method with the string “Eric”.
  • try to find a method such as addLname or setLname with a string parameter on this author object and call that method with the string “van der Vlist”.
  • try to find a method such as addAuthor or setAuthor taking an author parameter on the book object and call that method with the author object.
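As a sketch, this introspection algorithm fits in a few lines of deliberately naive Java (not TreeBind’s actual code; Book and Author stand in for the classes matched from the expanded names):

```java
import java.lang.reflect.Method;

// Naive reflective binder illustrating the algorithm above.
// Not TreeBind's actual code; Book/Author stand in for the classes
// matched from the XML expanded names.
class Author {
    private String fname, lname;
    public void setFname(String f) { fname = f; }
    public void setLname(String l) { lname = l; }
    public String getLname() { return lname; }
}

class Book {
    private String title;
    private Author author;
    public void setTitle(String t) { title = t; }
    public void setAuthor(Author a) { author = a; }
    public String getTitle() { return title; }
    public Author getAuthor() { return author; }
}

class MiniBinder {
    // look for add<Name> or set<Name> taking one compatible parameter
    static void bind(Object target, String property, Object value) throws Exception {
        String suffix = Character.toUpperCase(property.charAt(0)) + property.substring(1);
        for (Method m : target.getClass().getMethods()) {
            if ((m.getName().equals("add" + suffix) || m.getName().equals("set" + suffix))
                    && m.getParameterCount() == 1
                    && m.getParameterTypes()[0].isInstance(value)) {
                m.invoke(target, value);
                return;
            }
        }
        throw new NoSuchMethodException("add/set" + suffix);
    }

    static String demo() {
        try {
            Book book = new Book();            // class matched for the book element
            bind(book, "title", "RELAX NG");   // leaf: string parameter
            Author author = new Author();      // class matched for the author element
            bind(author, "fname", "Eric");
            bind(author, "lname", "van der Vlist");
            bind(book, "author", author);      // complex: author parameter
            return book.getTitle() + " by " + book.getAuthor().getLname();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```

The same bind() call handles both cases: the parameter type check is what distinguishes a string leaf from a complex author object.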

We see that there is a difference between the way simple type and complex type elements are treated.

For a simple type element (such as “title”, “fname” and “lname”), the name of the element is used to determine the method to call and the parameter type is always string.

For a complex type element (such as author), the name of the element is used both to determine the method to call and the class of the object that needs to be created. The parameter type is this class.

This is because when we write in XML there is an implicit expectation that “author” is used both as a complex object and as a verb.

Unless instructed otherwise, RDF doesn’t allow these implicit shortcuts: an XML element is either a predicate or an object. That’s why we have added a “written-by” element in our RDF example:

<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#";
        <title>RELAX NG</title>
                <lname>van der Vlist</lname>

The first design decision we have to make is to decide how we will treat that “written-by” element.

To have everything in hand to make a decision, let’s also see the triples for that example:

rapper: Parsing file book1.rdf
_:genid1 <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://ns.treebind.org/example/book> .
_:genid1 <http://ns.treebind.org/example/title> "RELAX NG" .
_:genid2 <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://ns.treebind.org/example/author> .
_:genid2 <http://ns.treebind.org/example/fname> "Eric" .
_:genid2 <http://ns.treebind.org/example/lname> "van der Vlist" .
_:genid1 <http://ns.treebind.org/example/written-by> _:genid2 .
rapper: Parsing returned 6 statements

In these triples, two define element types:

_:genid1 <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://ns.treebind.org/example/book> .


_:genid2 <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://ns.treebind.org/example/author> .

I propose to use these statements to determine which classes must be used to create the objects. So far, that’s pretty similar to what we’re doing in XML.

Then, we have triples that assign literals to our objects:

_:genid1 <http://ns.treebind.org/example/title> "RELAX NG" .
_:genid2 <http://ns.treebind.org/example/fname> "Eric" .
_:genid2 <http://ns.treebind.org/example/lname> "van der Vlist" .

We can use the predicates of these triples (<http://ns.treebind.org/example/title>, <http://ns.treebind.org/example/fname>, <http://ns.treebind.org/example/lname>) to determine the names of the setter methods to use to add the corresponding information to the object. Again, that’s exactly what we do in XML.

Finally, we have a statement that links two objects together:

_:genid1 <http://ns.treebind.org/example/written-by> _:genid2 .

I think that it is quite natural to use the predicate (<http://ns.treebind.org/example/written-by>) to determine the setter method that needs to be called on the book object to set the author object.

This is different from what we would have been doing in XML: in XML, since there is a written-by element, we would have created a “written-by” object, added the author object to the written-by object and added the written-by object to the book object.

Does that difference make sense?

I think it does, but the downside is that a simple document like this one:

<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#";
        <title>RELAX NG</title>
                <lname>van der Vlist</lname>

will give quite different sets of objects depending on which binding (XML or RDF) is used.

That seems to be the price to pay for getting as close as possible to the RDF model.

What do you think?

Earlier, I have mentioned that RDF can be told to accept documents with “shortcuts”. What I had in mind is:

<?xml version="1.0" encoding="UTF-8"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#";
        <title>RELAX NG</title>
        <author rdf:parseType="Resource">
            <lname>van der Vlist</lname>

Here, we have used an attribute rdf:parseType="Resource" to specify that the author element is a resource.

The triples generated from this document are:

rapper: Parsing file book3.rdf
_:genid1 <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://ns.treebind.org/example/book> .
_:genid1 <http://ns.treebind.org/example/title> "RELAX NG" .
_:genid2 <http://ns.treebind.org/example/fname> "Eric" .
_:genid2 <http://ns.treebind.org/example/lname> "van der Vlist" .
_:genid1 <http://ns.treebind.org/example/author> _:genid2 .
rapper: Parsing returned 5 statements

The model is pretty similar except that one triple is missing (we now have 5 triples instead of 6).

The triple that is missing is the one that gave the type of the author element:

_:genid2 <http://www.w3.org/1999/02/22-rdf-syntax-ns#type> <http://ns.treebind.org/example/author> .

The other difference is that <http://ns.treebind.org/example/author> is now a predicate.

In this situation, where we don’t have a type for a predicate linking to a resource, I propose that we follow the rule we use in XML and use the predicate to determine both the setter method and the class of the object into which to pour the resource.

What do you think? Does that make sense?



Thanks for your comments, either on this blog or (preferred) on the TreeBind mailing list.

SPARQL Versus Versa

A new working draft of SPARQL has been released.

While there is no doubt that the language is getting better and more polished with each new release of this specification, I am surprised to see that the limitations I found in rdfDB back in early 2001, when I tried to use it for XMLfr, are still there.

This is an old story that I presented in Austin at KT 2001 and published as an XML.com article: it can be very interesting to compute the distance between resources, and to do so you need the equivalent of a SQL “group by” clause and the related aggregate functions.

In the case of XMLfr, I rely on this feature to compute the distance between two topics by counting the number of articles in which they appear together. To do so, I use the SQL group by clause with the “count” aggregate function.
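The “group by + count” aggregation described here can be sketched in plain Java over hypothetical (article, topic) data:

```java
import java.util.*;

// "group by + count" in plain Java: for each pair of topics, count the
// articles in which they appear together. The data is hypothetical.
class TopicDistance {
    static Map<String, Integer> cooccurrences(Map<String, List<String>> topicsByArticle) {
        Map<String, Integer> counts = new TreeMap<>();
        for (List<String> topics : topicsByArticle.values()) {
            // dedupe and order the topics so each pair has one canonical key
            List<String> sorted = new ArrayList<>(new TreeSet<>(topics));
            for (int i = 0; i < sorted.size(); i++)
                for (int j = i + 1; j < sorted.size(); j++)
                    counts.merge(sorted.get(i) + "|" + sorted.get(j), 1, Integer::sum);
        }
        return counts;
    }

    static int demo() {
        Map<String, List<String>> articles = new HashMap<>();
        articles.put("a1", Arrays.asList("RDF", "XML"));
        articles.put("a2", Arrays.asList("RDF", "XML", "SPARQL"));
        articles.put("a3", Arrays.asList("XSLT"));
        return cooccurrences(articles).get("RDF|XML");  // appear together in a1 and a2
    }
}
```

This is exactly the computation a GROUP BY with a count aggregate gives you for free in SQL, and what the query languages discussed here did not offer.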

The fact that these features were missing in rdfDB is the reason why I had to drop rdfDB and RDF altogether and store my triples in a relational database that I query with SQL.

As far as I know, there is only one RDF query language that supports these features: 4Suite’s Versa query language.

Versa is so different from SPARQL that these two languages are as difficult to compare as, say, W3C XML Schema’s XML syntax and RELAX NG’s compact syntax.

Instead of trying to bend the well-known SQL syntax to make it work on triples, Versa defines a totally new language for traversing triple stores.

The result is surprising. You won’t find anything that reminds you of SQL and, to take an example from “Versa by example”, to get a list of people’s first names sorted by their age, you’d write: sortq(all(), ".-o:age->*", vsort:number) - o:fname -> *

If you persevere and don’t let the first surprise stop you, the second surprise is that this language works incredibly well. During the (unfortunately too few) opportunities I have had to work with Versa, I have never been blocked by a limitation of the language as I had been with rdfDB or would be with SPARQL.

The bad news is that there is only one implementation of Versa (4Suite’s). This means that you won’t be able to use Versa over Redland or Jena, and I wish people implementing RDF databases would consider more closely implementing Versa over their databases!

I also wish the W3C had taken Versa as the main input for their RDF query language, but this wish doesn’t seem too likely to come true :-( …

Edd Dumbill on XTech 2005

XTech 2005 presents itself as “the premier European conference for developers and managers working with XML and Web technologies, bringing together the worlds of web development, open source, semantic web and open standards.” Edd Dumbill, XTech 2005 Conference Chair, answered our questions about this conference previously known as XML Europe. This interview has been published in French on XMLfr.

vdV: XTech was formerly known as XML Europe; what were the motivations for changing its name?

Edd: As the use of XML broadens out beyond traditional core topics, we want to reflect that in the conference. As well as XML, XTech 2005 will cover web development, the semantic web and more. XML’s always been about more than just the core, but we felt that having “XML” in the name made some people feel the conference wasn’t relevant to them. The two new tracks, Browser Technology and Open Data, aren’t strictly about XML topics at all.

vdV: In the new name (XTech), there is no mention of Europe, does that mean that the conference is no longer or less European?

Edd: Not at all! Why should “Europe” be a special case anyway? Even as XML Europe, we’ve always had a fair number of North American speakers and participants. I don’t see anything changing in this regard.

vdV: After a period where every event, product or company tried to embed “XML” in their name, the same events are now removing any reference to XML. How do you analyse this trend?

Edd: It’s a testament to the success of XML. As XML was getting better known, everybody knew it was a good thing and so used it as a sign in their names. Now XML is a basic requirement for many applications, it’s no longer remarkable in that sense.

vdV: How would you compare the 12 different tracks of XML Europe 2004 (ranging from Content Management to Legal through Government and Electronic Business) with the 4 tracks of XTech 2005 (Core Technologies, Applications, Browser Technology and Open Data)?

Edd: The switch to four clearly defined tracks is intended to help both attendees and speakers. The twelve tracks from before weren’t always easy to schedule in an easy-to-understand way, leading to a “patchwork” programme. Some of the previous tracks only had a handful of sessions in them anyway.

In addition to making the conference easier to understand, we get an opportunity to set the agenda as well as reflect the current practice. Take the new “Open Data” track as an example. There are various areas in which data is being opened up on the internet: political and government (theyrule.net, electoral-vote.com, theyworkforyou.com), cultural ( BBC Creative Archive), scientific and academic (Open Access). Many of the issues in these areas are the same, but there’s never been a forum bringing the various communities together.

vdV: Isn’t there a danger that the new focus on Web technologies becomes a specialisation and reduces the scope?

Edd: I don’t think that’s a danger. In fact, web technology is as much a part of the basic requirement for companies today as XML is, and it’s always been a running theme through the XML Europe conferences.

What we’re doing with the Browser Technology track is reflecting the growing importance of decent web and XML-based user interfaces. Practically everybody needs to build web UIs these days, and practically everybody agrees the current situation isn’t much good. We’re bringing together, for the first time, everybody with a major technology offering here: W3C standards implementors, Mozilla, Microsoft. I hope again that new ideas will form, and attendees will get a good sense of the future.


vdV: Does the new orientation mean that some of the people who enjoyed XML Europe 2004 might not enjoy XTech 2005?

Edd: No, I don’t think so. In fact, I think they’ll enjoy it more because it will be more relevant to their work. Part of the reasoning in expanding the conference’s remit is the realisation that core XML people are always working with web people, and that any effort to archive or provide public data will heavily involve traditional XML topics. So we’re simply bringing together communities that always work closely anyway, to try and get a more “joined up” conference.

vdV: In these big international conferences, social activities are often as important as the sessions. What are your plans to encourage these activities?

Edd: The first and most important thing is the city, of course! Amsterdam is a great place to go out with other people.

We’ll be having birds-of-a-feather lunch tables, for ad-hoc meetings at lunch time. Additionally, there’ll be dinner sign-up sheets and restaurant suggestions. I’m personally not very keen on having formal evening conference sessions when we’re in such a great city, but I do want a way for people to meet others with common interests.

I’m also thinking about having a conference Wiki, where attendees can self-organise before arriving in Amsterdam.

vdV: Wireless access can play a role in these social activities (people can share their impressions in real time using IRC channels, blogs and wikis). Will the conference be covered by wireless?

Edd: I really hope so. The RAI center is in the process of rolling out wireless throughout its facility, but unfortunately hasn’t been able to say for sure.

Wireless internet is unfortunately very expensive, and we would need a sponsor to get free wireless throughout the conference. If anybody’s reading this and interested, please get in touch.

vdV: What topics would you absolutely like to see covered?

Edd: I think what I wrote in the track descriptions page at http://www.xtech-conference.org/2005/tracks.asp is a good starting point for this.

vdV: What topics would you prefer to leave out?

Edd: I don’t want to turn any topics away before proposals have been made. All proposed abstracts are blind reviewed by the reviewing team, so there’s a fair chance for everybody.

vdV: What is your best memory from the past editions of XML Europe?

Edd: I always love the opening sessions. It’s very gratifying to see all the attendees and to get a great sense of expectation about what will be achieved over the next three days.

vdV: What is your worst memory from the past editions of XML Europe?

Edd: The bad snail I ate in Barcelona — the ride over the bumpy road to the airport after the conference was agony!