Friday, February 13, 2009

XMetaL Reviewer Webinar

I attended a webinar yesterday hosted by Just Systems for their XMReviewer product.  The problem space: conventional review processes are cumbersome and inefficient, particularly when multiple reviewers need to review a document concurrently.  Most review processes rely on sending out multiple draft copies, one to each reviewer, and it's then up to the author to "merge" the comment feedback back into the source.

With XMReviewer, the entire review process is centralized on the XMReviewer server: reviewers simply access the document online, provide their comments, and submit. What's really cool is that reviewers are notified in near real time when another reviewer has submitted comments, and they can integrate their fellow reviewers' comments into their own.

The real advantage is that authors have all reviewer comments integrated and merged into a single XML instance, and in context. Very Nice. 

There’s also a web service API that allows you to integrate XMReviewer with other systems including a CMS that can automatically deploy your content to the XMReviewer server.

There are some nice general reporting/auditing features built in as well.  However, I didn’t see anything that would allow me to customize the reports or to manipulate the data, but I wouldn’t consider that a show stopper.

For folks used to "offline" reviews (e.g., providing comments at home or on a plane), this won't work as-is, since XMReviewer is a server application.  Nonetheless, having full control and context for review comments far outweighs the minor inconvenience of being online with access to the server (most companies these days have VPN, so it's not a showstopper).  Though I can envision the server offering a small-footprint application that would let users review the document offline and submit their comments back to the server once they're online again.

The only other limitation right now is that XMReviewer doesn’t support DITA map-level reviews in which you can provide comments on multiple topics within a map.  This is currently in development for a future release – stay tuned.

Overall, XMReviewer looks great and can simplify your content review process.  Check it out.

Wednesday, February 11, 2009

Microsoft Live Writer Convert

After reading a few blogs here and there, I’ve seen a few posts about Microsoft’s Live Writer for creating blog posts.  Always on the lookout for new toys and tools, I decided to download it and try it out. I gotta admit, I’m sold.  This is a pretty nice application that allows me to work offline to write and edit my posts and when I am ready and able to connect, I simply push the “Publish” button and away it goes.  Sweet.

It’s simple to install, and simple to configure to point to virtually any blog host out there.  In short: It just works. 

This is what software should be like.  It should solve a particular set of problems, and solve them well, without requiring massively complex installation and configuration steps.  The interface should be intuitive (Live Writer is wickedly intuitive) and should help rather than hinder my productivity.  This tool does that.  Well done, Microsoft!

Monday, February 9, 2009

Implementing XML in a Recession

With the economic hard times, a lot of proposed projects that would let companies leverage the real advantages of XML are being shelved until conditions improve.  Obviously, in my position, I would love to see more companies pushing to use XML throughout the enterprise. We've all heard the advantages of XML: reuse, repurposing, distributed authoring, personalized content, and so on. These are the underlying returns on investment for implementing an XML solution.  The old business axiom goes, "you have to spend money to make money."  A corollary might suggest that getting the advantages of XML must mean spending lots of money.

However, here’s the reality: implementing an Enterprise-wide XML strategy doesn’t have to break the bank. In fact, with numerous XML standards that are ready to use out of the box, like DITA and DocBook for publishing and XBRL for business, the cost of entry is reduced dramatically compared to a customized grammar. 

And while no standard is a 100 percent match for any organization's business needs, at least one is likely to cover at least 80 percent.  We often advise our clients to use a standard directly out of the box (or with very little customization) until they have a good feel for how well it works in their environment before digging into the real customization work.  Given that funding for XML projects is likely to be reduced, this is the perfect opportunity to begin integrating one of these standards into your environment: try it on for size while the economy is slow, and when conditions improve, consider how to customize your XML content to fit your environment.

Any XML architecture, even one on a budget, must encompass the ability to create content and to deliver it.  Here again, most XML authoring tools on the market have built-in support for many of these standards; with little to no effort, you can use these authoring environments out of the box and get up to speed.

On the delivery side, these same standards (and in many cases the authoring tools) come with prebuilt rendering implementations that can be tweaked to deliver high-quality content, with all of the benefits that XML offers.  Here you might want to spend a little more to hire an expert in XSLT, but it doesn't have to break the bank to make it look good.
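To illustrate how small that tweaking can be (the file name and match pattern here are invented for the sketch), a typical customization layer just imports the prebuilt stylesheets and overrides the one or two templates you care about:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                version="2.0">

  <!-- Reuse the prebuilt rendering wholesale -->
  <xsl:import href="standard/docbook-to-html.xsl"/>

  <!-- Override only what you need, e.g., branding the title -->
  <xsl:template match="title">
    <h1 class="acme-title">
      <xsl:apply-templates/>
    </h1>
  </xsl:template>

</xsl:stylesheet>
```

Because xsl:import gives the importing stylesheet higher precedence, the override wins without touching a line of the standard code.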

The bottom line: a recessionary economy is a golden opportunity to introduce XML into the enterprise. In the short term, keep it simple, leverage other people's work and industry best practices, and leave your options open for when you can afford to do more.  When funding returns, you can consider adding the "bells and whistles" that will more closely align your XML strategy with your business processes.

Friday, February 6, 2009

DOXSL: Reflexive Code Documentation and Testing, and other random XSLT thoughts

One of the cool things about Doxsl is that I can test it on itself.  Since Doxsl is an XSLT application (v2.0), I can use it to generate its own documentation.  I'll be posting the results on the SourceForge project website soon - when I finish documenting my own code.  Hmmm... walking the talk and eating your own dogfood at the same time - who woulda thunk it?

There's something about reflexive tools that is just pretty cool.  I built another application to document the DocBook RelaxNG schemas into DocBook.  

The Doxsl DocBook stylesheets are coming along.  If I can manage some free time at night, I might be able to finish them in about a week.  The one thing I really need to do is check out XSpec to see if I can write test cases against the code.  I tried XMLUnit about a year ago, but the critical difference is that it tests the artifact of the transform rather than the code itself.  Implicit testing is better than no testing at all, but that doesn't mean it's optimal.  I love JUnit and NUnit for testing my Java and .NET code, and they're great for the large enterprise-wide projects I work on.  While Doxsl is just a teeny, tiny application (tool is more like it), there is enough code right now that even simple changes can cause big problems.  I'll let you know what I think about XSpec when I've had a chance to tinker with it.
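For what it's worth, an XSpec test is itself just an XML document describing scenarios against a stylesheet (the stylesheet path and the expectation below are invented for illustration, not taken from Doxsl):

```xml
<x:description xmlns:x="http://www.jenitennison.com/xslt/xspec"
               stylesheet="doxsl.xsl">
  <x:scenario label="a named template is picked up by the doc pass">
    <!-- The context is the XSLT source being documented -->
    <x:context>
      <xsl:template xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                    name="format-name"/>
    </x:context>
    <!-- Assert against the transform's output tree -->
    <x:expect label="the template name appears in the output"
              test="//*[@name = 'format-name']"/>
  </x:scenario>
</x:description>
```

That's the appeal over XMLUnit: the assertions run against the templates directly, inside the transform, rather than diffing a finished artifact after the fact.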

Another XSLT application I've been working on over the last year or so is an alternative to the DITA Open Toolkit.  The OT is OK as a reference implementation, but it can be a bear to work with, even for minor customizations.  Part of the problem, in my opinion, is that the OT's stylesheets are dependent on the Ant scripts that drive them.  In fact, it takes some fancy footwork to get the stylesheets to run outside of the Ant environment.  To be fair, Ant is the tool for creating a consistent and reliable sequence of build steps in a development environment.  Where it falls short is dealing with sophisticated XSLT applications that have lots of parameters (optional or otherwise): the parameters have to be "hardcoded" into the XSLT task.  Not my idea of extensible.
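To illustrate what I mean (the target, paths, and parameter names here are invented, not the OT's actual build file), every stylesheet parameter has to be spelled out as a child of Ant's xslt task, so adding or renaming a parameter means editing the build script:

```xml
<target name="dita2html">
  <xslt in="${dita.temp.dir}/topic.xml"
        out="${output.dir}/topic.html"
        style="${xsl.dir}/dita2html.xsl">
    <!-- Every stylesheet parameter must be enumerated here;
         the XSLT cannot receive anything the build file
         doesn't explicitly pass through. -->
    <param name="CSSPATH" expression="${args.csspath}"/>
    <param name="DRAFT" expression="no"/>
  </xslt>
</target>
```

The stylesheets end up coupled to the build scripts: you can't exercise a new parameter without touching both layers.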

Add to that: the stylesheets are still using XSLT 1.0 - ehhh.  I'll use 1.0 if I have to (thanks, Microsoft).  There's just so much more in 2.0 that makes stylesheet development much, much easier.  At any rate, I've been working on my own implementation of DITA using XSLT 2.0 and without relying on Ant.  HTML and CHM output are working; FO is the hard part.  What I find interesting is that I can process a map containing over 160 topics into HTML in about 20 seconds with my stylesheets.  It takes over 2 minutes with the OT!  The results are anecdotal, and I haven't really tested the stylesheets on anything big, but I like what I see so far (in fact, the DOXSL website uses DITA and my stylesheets to render it).
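A small taste of what 2.0 buys you (a sketch with made-up element names, not an excerpt from my stylesheets): grouping and regular expressions are one-liners that would take Muenchian keys and chains of substring-before/after calls in 1.0:

```xml
<!-- XSLT 2.0: group topics by audience in one construct -->
<xsl:for-each-group select="//topic" group-by="@audience">
  <section title="{current-grouping-key()}">
    <xsl:apply-templates select="current-group()"/>
  </section>
</xsl:for-each-group>

<!-- Regex support: rewrite source extensions to output ones -->
<xsl:value-of select="replace(@href, '\.dita$', '.html')"/>
```

Multiply that kind of savings across an entire publishing pipeline and the development-time difference is substantial.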


Wednesday, February 4, 2009

DOXSL Shout out

I recently found a post by a former client of mine, James Sulak, who had some very positive feedback for my open source project, DOXSL. Check out his post. Thanks for the shout out, James!

On that front, I have been working on a new release. There are a few bugs I discovered when I started processing some DITA stylesheets, particularly when looking for overrides using DITA's matching patterns (e.g., *[contains(@class,'- topic-type/element-name ')]) - things got borked with specializations when the class tokens contain the "parent" token.
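A sketch of the problem (element names borrowed from standard DITA, but the templates are invented for illustration): a specialized element's @class attribute carries its whole ancestry, so a contains() test against the ancestor's token matches the specialization too, and a naive override scan sees both templates as candidates for the same element:

```xml
<!-- A specialized element carries its ancestor's token in @class:
       <steps class="- topic/ol task/steps ">...</steps>          -->

<!-- Both of these patterns match the element above: -->
<xsl:template match="*[contains(@class, ' topic/ol ')]">
  <!-- generic ordered-list processing -->
</xsl:template>

<xsl:template match="*[contains(@class, ' task/steps ')]">
  <!-- specialized processing; a string test alone can't tell
       which template is the intended override -->
</xsl:template>
```

Distinguishing "matches because it overrides" from "matches because the token happens to appear in the ancestry string" is exactly where things got borked.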

There's still the other things on my roadmap to work on:
  • DocBook output (I have some designs in mind, specifically around modularity)
  • The Comment Stub Generator: this is a high priority for me
  • Comment Collector: This is similar to what .NET does when it compiles code documentation into a single XML file. The intent is to make the XSLT less "noisy"
  • There are additional validation ideas that Ken Holman suggested that I'd like to build in.

Progress is slow, given that my day job still takes precedence. Stay tuned.

It's Been Awhile

It's been a long time since I've posted to this blog. Life and work have been a bit insane. Work-wise, I've been working on a very complex project that marries DITA's modular architecture with a more conventional monolithic document approach. It's been an interesting exercise - one that I will discuss in more detail later.

Personally, my father passed away on December 17th. He had been in the ICU since the beginning of December, and had been ill for quite some time before that. Needless to say, blogging was the last thing on my mind.

I did have a presentation prepared for the XML 2008 conference, called "Optimizing Schemas for Enterprise Content," which I had intended to present, but I couldn't make it due to my father's illness. Thankfully, Eric Severson graciously and adeptly stepped in for me. From all accounts, it was well received. The premise behind the paper is that schemas shouldn't be the end-all/be-all for constraining content models (e.g., validation); instead, I offer other strategies for controlling content.

In the meantime, I'm getting back in the swing with the DITA TC, hopefully having more time to devote to moving the v1.2 spec along. I'm also working on a proposal for creating "modular DocBook." There are some interesting concepts I'm playing around with in the proposal. Hopefully, I'll have enough to present to the DocBook TC later this month.

So much to do, so little time.