Saturday, December 31, 2011

Parallels between Punk and Anonymous

Prologue:  Before starting my career in the tech world 15+ years ago,  I was a graduate student in Sociology studying political movements and economies. 

At any rate, what's intriguing about technology is not just the 0s and 1s, data structures, angle brackets, optimized queries, or distributed architectures (don't get me wrong, I love elegant code and design as much as any other geek) – it's also the intended and unintended consequences technology has on society at large.  Just as the automobile and large-scale manufacturing re-shaped our society a hundred years ago, the internet and the emerging technologies around it are transforming our social interactions today.


2011 was a landmark year.  We saw the "Arab Spring" unfold before us in large part because of mobile devices and social media (granted, the other necessary ingredients – anger, resentment, disenfranchisement, chronic poverty and unemployment – had been brewing for many years).  The "Occupy" movement harnessed the same political, social, economic, and technological ingredients, along with a sprinkling of the NYPD's hyper-aggressive tactics, and transformed a seemingly innocuous protest into a worldwide meme.  WikiLeaks, rightly or not, also changed the way we view government, particularly when sensitive or embarrassing information is exposed.  To that end, this year demonstrated that the combination of mobile and social technology lets information spread virally, beyond the full control of any one entity.  This has spurred new tensions between individuals who interact with data and entities who provide and/or control the data.

Against this backdrop, I see many interesting parallels between the Punk subculture of the 1970s and early 1980s and the nascent subculture of Anonymous that is growing today.  Both emerged during periods of economic turmoil, and both carry a strong anti-authoritarian sentiment and a willingness to challenge the establishment.

I love the Sex Pistols (and the Smiths, the Cure, The Damned, Siouxsie and the Banshees, and so on, and on, etc.).  I can listen to "Anarchy in the UK", "God Save the Queen", or "Pretty Vacant" any time. It's loud and raucous.  It's fun.  It's… well, rebellious.  Johnny Rotten's menacing, sarcastic vocals epitomized the political, social and philosophical undertones of the Punk subculture of the mid-to-late 1970s.

From many accounts, the Punk subculture, particularly in the UK, emerged during the mid-1970s in part because of the poor economy.  Disenfranchised youths with few economic prospects gravitated to a style of music and dress that was non-conformist by nature and expressed their anger and frustration against society and government.

The ethos, or ideology, of Punk is well described here (source:  http://www.bunnysneezes.net/page192.html):
It is passionate, preferring to encounter hostility rather than complacent indifference; working class in style and attitude if not in actual socio-economic background; defiant, unconventional, bizarre, shocking; starkly realistic, anti- euphemism, anti-hypocrisy, anti-bullshit, anti-escapist, happy to rub people's noses in realities they don't wish to acknowledge; angry, aggressive, confrontational, tough, willing to fight — yet this stance is derived from an underlying vulnerability, for the archetypal Punk is young, small, poor, and powerless, and he knows it very well; sceptical, especially of authority, romance, business, school, the mass media, promises, and the future; socially critical, politically aware, pro-outlaw, anarchistic, anti-military; expressive of feelings which polite society would censor out; anti-heroic, anti-"rock star" ("Every musician a fan and every fan in a band!"); disdainful of respectability and careerism; night-oriented; with a strong, ironic, satirical (often self-satirical), put-on-loving sense of humor, which is its saving grace; stressing intelligent thinking and deriding stupidity; frankly sexual, frequently obscene; apparently devoted to machismo, yet welcoming "tough" females as equals (and female Punks are often as defiant of the males as of anyone else) and welcoming bisexuals, gays, and sexual experimentation generally; hostile to established religions but sometimes deeply spiritual; disorganized and spontaneous, but highly energetic; above all, it is honest.
Compare this to the first two parts of Quinn Norton’s (Wired Magazine) well-done analysis of Anonymous in “Anonymous: Beyond the Mask” (Part 1 here:  http://www.wired.com/threatlevel/2011/11/anonymous-101/all/1; Part 2 here: http://www.wired.com/threatlevel/2011/12/anonymous-101-part-deux/).  One of the first things this series does incredibly well is to identify Anonymous for what it is – a culture, or more accurately, a counter-culture. 

Quinn goes on to describe the Anonymous culture which, like Punk, has its own distinct origins:
The birthplace of Anonymous is a website called 4chan founded in 2003, that developed an “anything goes” random section known as the /b/ board.

Like Alan Moore’s character V who inspired Anonymous to adopt the Guy Fawkes mask as an icon and fashion item, you’re never quite sure if Anonymous is the hero or antihero. The trickster is attracted to change and the need for change, and that’s where Anonymous goes. But they are not your personal army – that’s Rule 44 – yes, there are rules. And when they do something, it never goes quite as planned. The internet has no neat endings.
What's more, both are media savvy in their own ways, leveraging whatever media are available for their own purposes.  Obviously, in the '70s and '80s the internet wasn't even a twinkle in our eyes yet, so Punks relied on print and radio (typically small, low-power college stations or pirate radio stations, since mainstream stations wouldn't give them airplay) to get their message out.  Anonymous, however, has the luxury of the internet and search engines, where information is easily accessible and available:
But to be historical, let’s start with 4chan.org, a wildly popular board for sharing images and talking about them, and in particular, 4chan’s /b/ board (Really, really, NSFW). /b/ is a web forum where posts have no author names and there are no archives and it’s explicitly about anything at all. This technological format meeting with the internet in the early 21st Century gave birth to Anonymous, and it remains the mother’s teat from which Anonymous sucks. (Rule 22)
Both follow their own rules, many of which run counter to conventionally accepted protocols and are frequently meant to shock, ridicule and otherwise laugh at mainstream society.
/b/ is the id of the internet, the collective unconscious’s version of the place from which the base drives arise. There is no sophistication in the slurs, sexuality, and destruction in the savage landscape of /b/ — it is the natural state of networked man. 
In this, it has a kind of innocence and purity. Terms like ‘nigger’ and ‘faggot’ are common, but not there because of racism and bigotry – though racism and bigotry are easily found there. Their use is there to keep you out. These words are heads on pikes warning you that further in it gets much worse, and it does. 
Nearly any human appetite is acceptable, nearly any flaw exploited, and probably photographed with a time stamp. But /b/ reminds us that the id is the seat of creative energy. Much of it, hell even most of it, is harmless or even sweet. People reach out for help on /b/, and they find encouragement and advice. The id and /b/ are the foxholes of those who feel powerless and disenfranchised.
And like Punk, Anonymous never set out to be overtly political.  Rather, the circumstances and events of the time instigated it.  "The Guns of Brixton," written by The Clash in 1979 and anticipating the very tensions that would boil over in the 1981 Brixton riots, is one of many Punk examples.  For Anonymous, the forays into political protest were spurred by the collective belief that Julian Assange and WikiLeaks were wrongfully targeted by governments and large multinational corporations, and that fellow "compatriots" at the BitTorrent site The Pirate Bay were wrongfully attacked.  In all cases, the common thread was a belief that the establishment was suppressing them.

Where they differ, however, is in their means of expression.  Punk is analog.  It could only reach those within range of a radio signal (or the occasional TV appearance), a concert venue, or a "zine".  Its effect and impact on society at large could only scale to the number of members it could congregate in any one physical location, which meant it could remain largely contained and isolated.  Anonymous, on the other hand, is digital.  Its reach is unbounded and its impact on society much more significant.  The virtual nature of Anonymous means its members are able to challenge mainstream society more directly and with near impunity.  With tools like the Low Orbit Ion Cannon for DDoS attacks, and with more talented hacker members able to break into corporate and government servers and steal sensitive information from them, governments and corporations see them as a real threat.

At its essence, the Punk subculture provided its members a means of "flipping off" mainstream culture through its music, dress, art, literature, and language.  Yet it was easy for mainstream society to ignore early punk youth, since their access to media was relatively limited.  Anonymous shares the same "f--- you" attitude and the same antipathy toward authority, yet it has the means to express its views more dramatically, and with greater reach, particularly because the internet, social media, and mobile devices enable members of Anonymous to be anywhere, or anyone.

Punk has evolved over the decades.  The music has changed; the aesthetics are different, and to some extent, what was considered shocking then is widely accepted now.  Yet, the idea of Punk is still here.  Anonymous is just the latest manifestation of it, and it could potentially have even greater impact on society-at-large.

Wednesday, December 14, 2011

SOPA Will Be Our Generation’s McCarthy Witch Hunt

In the late 1940s and early 1950s, the Red Scare drove Senator Joseph McCarthy and the House Committee on Un-American Activities to accuse numerous Americans of disloyalty and of being communists.  Many actors and writers were blacklisted, and the era produced the now infamous question put to the "Hollywood Ten" by the House committee: "Are you now or have you ever been a member of the Communist Party?"  They refused to answer, principally on the grounds that the question itself violated their First Amendment rights, and were blacklisted and cited for contempt for their trouble.

In its current form, the "Stop Online Piracy Act" (SOPA) would allow the Department of Justice and copyright holders to seek injunctions against websites accused of enabling, facilitating, or engaging in copyright infringement.  It doesn't stop there: it would force search engines to remove all indexes for the site, mandate that ISPs block access to it, and bar third-party services like PayPal from transacting with the offending website.  All because the copyright holder (or DOJ) makes an accusation.  The burden of proof falls on the ISPs, the search engines, and the third-party vendors to show that the "offending website" is not violating any copyright (so perhaps Congress should consult the 6th Amendment).  The implications are severe even for websites that merely reference the infringing sites – they could be shut down too.

Let's be clear: I'm not condoning piracy of any kind.  Intellectual property vis-à-vis copyright is the coin of the realm for many companies, even whole industries – publishing, media, software, and yes, the entertainment world – and they should protect their assets.  They should derive value and profit from their IP.  An author who pours their heart into a publication, or an artist whose performance I like, should be paid.  Likewise, content producers – studios, publishers, media companies – should be able to garner payment for their role in providing content.  But they are looking at the whole piracy issue the wrong way.

Brute-force tactics to protect copyright have been epic failures.  DRM approaches don't work.  In fact, they incite piracy, and worse, they harm the very companies they try to protect.  In 2007, Radiohead released their album "In Rainbows" DRM-free.  A year later, they had sold over 1.75 million copies, and 1.2 million fans had bought tickets to their shows.  Bottom line: locking down content doesn't protect copyright holders.  Instead, DRM tactics end up frustrating consumers who legally purchase content but can't use it or copy it to a new device, and, as a result, diminish revenue.  At that point, the opportunity cost of future purchases with the same DRM constraints grows higher and higher.  Media, publishing and entertainment executives know that DRM has failed, and feel that their only recourse is through SOPA.

There will always be a small percentage of consumers who will use pirated content.  But it needn't be a negative-sum game.  In some cases, it should be written off as a business cost in order to generate more revenue: a pirated song might lead the offending consumer to purchase a concert ticket, or to see the next movie because they can't wait.  Yet to prevent wholesale piracy, technology already exists that can help protect copyrighted content: XMP rights metadata (even ODRL can be serialized into XMP) and digital fingerprinting, for starters.  By using these, along with other tools that can scan the internet for matching assets, asset producers can identify and isolate pirated copies.  Then they can go after the offending sites directly.
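
As a rough sketch of the metadata option (the URLs and identifiers below are placeholders, and the property choices are my own rather than a canonical recipe), an XMP packet embedded in an image can flag the asset as rights-managed and point to the governing license terms, which scanning and matching tools can then key off of:

<?xpacket begin="" id="W5M0MpCehiHzreSzNTczkc9d"?>
<x:xmpmeta xmlns:x="adobe:ns:meta/">
  <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
    <rdf:Description rdf:about=""
        xmlns:dc="http://purl.org/dc/elements/1.1/"
        xmlns:xmpRights="http://ns.adobe.com/xap/1.0/rights/">
      <!-- Mark the asset as rights-managed and point to the license terms -->
      <xmpRights:Marked>True</xmpRights:Marked>
      <xmpRights:WebStatement>http://example.com/licenses/asset-1234</xmpRights:WebStatement>
      <xmpRights:UsageTerms>
        <rdf:Alt>
          <rdf:li xml:lang="x-default">Licensed for print use only; see the ODRL expression at the WebStatement URL for constraints.</rdf:li>
        </rdf:Alt>
      </xmpRights:UsageTerms>
      <dc:rights>
        <rdf:Alt>
          <rdf:li xml:lang="x-default">Copyright 2011 Example Publishing. All rights reserved.</rdf:li>
        </rdf:Alt>
      </dc:rights>
    </rdf:Description>
  </rdf:RDF>
</x:xmpmeta>
<?xpacket end="w"?>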

SOPA won't stop piracy, but it will impact everyone's access to the Internet.  And in that vein, SOPA legitimizes the piracy of First Amendment rights on the basis of a mere accusation of copyright infringement, much in the same way that McCarthyism censored free thought in the 1950s.

NOTE:  The views expressed in this post and on this blog are my own.  They do not reflect the views of my employer, its employees or its partners. 

Monday, November 21, 2011

Note to Fanboys: Don’t Hate the Player, Hate the Game…

This is a bit of a rant.  I get tired of hearing and reading fanboy comments that go along the lines of "X rules, Y [or Z] drools" or "You're just a hater."  Blah, blah, blah.  It's like listening to reverb on a PA system.

My irritation stems from an article I read recently about the potential repercussions of Adobe's move to stop developing Flash for mobile devices.  The article, in my opinion, was well balanced and made the point that while Flash is on the decline, there's plenty of room for Adobe to maneuver and claim a stake in the RIA/HTML5 world.  What struck me, though, were the comments.  Several of them were antagonistic, accusing the author of bias against Flash.

The comments also struck a chord with me because I recently ran into a buzzsaw of an argument with a client over implementing and deploying a NoSQL data solution rather than trying to do the same thing in one of the big RDBMSs.  Their position was that there wasn't anything the proposed NoSQL system could do that their current RDBMS couldn't.  Sure, their system could do those things, but it didn't do the specific things they wanted from the NoSQL system nearly as well.  In fact, some of those capabilities were bolted on with the technical equivalent of baling wire and duct tape, and in the long run cost them more in overhead and maintenance.

After the debate, I took some time to reflect on their argument.  The underlying theme that occurred to me was this: they understood the RDBMS; they didn't understand the NoSQL system we recommended they implement.  Bottom line: go with what you know.

Yet I've seen this kind of resistance to various technologies throughout my career.  I've seen the esoteric debates between the DocBook and DITA content models and architectures, the religious orthodoxy of Windows vs. Linux vs. Mac, and more recently, the pissing contests of iOS vs. Android.  The main contention between camps always seems to boil down to "mine is bigger/better/faster/cooler than yours."  My 5-year-old twins do it better than anyone, but to hear it from grown-up professionals is like listening to a murder of cackling crows.

If we're intellectually honest, all of these arguments and dogmatic disputes boil down to the same time-tested axiom: all of us tend to gravitate to tools, technologies, and practices that we're familiar with, understand, are (reasonably) good at, that scratch a particular itch (or set of itches), or that we just think are cool.  Any variance from these, or any suggestion that something is better/faster/cooler than what exists in our comfort zone, seems to warrant unabashed trolling, simply because it doesn't fit within our particular paradigm.

Tools and technology are applied to solve a specific set of problems under a specific, finite set of assumptions.  Don't like the "evil empire" Microsoft, but appreciate commodity hardware?  Here comes Linux.  Like beautiful form and closed but controlled functionality?  Mac seems a good fit.  Need structured data without a lot of noise?  JSON might be a good fit; if your data is rich and deeply structured, XML is game for it.  Want a single, seamless experience for your smartphone?  iPhone.  Want an open-source mobile platform with many choices of devices?  Android.
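
To make that last trade-off concrete (this is a contrived fragment with made-up element names, not a sample from any real schema): mixed content like the following – prose with inline markup woven through it – is natural in XML and has no direct JSON equivalent, whereas a flat record of simple name/value pairs is usually far less noisy expressed as JSON.

<para revision="2">
  The <productname>Widget 3000</productname> ships in
  <date when="2012-09">Q3 2012</date>; see the
  <xref href="specs.xml">full specifications</xref> for details.
</para>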

The point is this: when a problem veers away from those binding assumptions, or new assumptions are introduced, either the tool or technology must be modified or enhanced to fit them, or other technologies will be built to replace it.

I'm not entrenched in the idea that "all tools suck, some worse than others."  Still, every tool and technology has limitations – we need look no further than Joel Spolsky's seminal essay, "The Law of Leaky Abstractions."  For instance, we rely heavily on virtualized environments for our development work.  That works great for most Linux and Windows environments, but you're out of luck for Macs.  Does that mean Macs suck?  For working in the virtualized environment we have, it's a buzzkill, but overall, no.  We also do a lot of work with XML standards like DITA and DocBook.  DITA's great for its flexibility and reusability, but DocBook still has a place too, especially for longer content components where minimalism isn't applicable.

But now we can begin to boil tools and technology down to their real "suck factor":

In the grand scheme of things, the evolution of technology plays out very much as Thomas Kuhn described in The Structure of Scientific Revolutions.  In many cases, new technology doesn't build on older work; rather, there is creative destruction and replacement.  During that process, there is a polarization between the two technical/philosophical camps.  Eventually, as the new technology gains enough momentum through adoption, the older technology recedes (perhaps not into complete obscurity – sometimes it lives on as a small, niche player).

As mentioned above, all tools and technology are constrained by the underlying assumptions they were built on, and operate within the bounding box of a specific problem set.  Assumptions are rarely static – they evolve over time, and when they do, the underlying premise on which a particular tool or technology is built will start to falter.

For example, Flash works pretty damn well on my laptop with Firefox or Chrome, and it works reasonably well on my Android phone, even though it does eat up my battery.  Flash basically did things that HTML + Javascript could never do (well).  Along comes HTML5, the underlying assumptions change, and capabilities are being built into the standard that will make it possible to create rich internet applications natively (though not right away).

Additionally, smart mobile devices are increasingly becoming users' primary access to the internet, meaning that lightweight, small-footprint applications are incredibly important.  Combine that with a sprinkle of animosity/frustration/angst/whatever from Steve Jobs and Apple, and the foundations on which your technology is built will inevitably weaken.

Throw in some market forces, and what you think is the greatest thing since Gutenberg's press turns out to be yet another Edsel on the trash heap of "other great ideas."  Case in point: we can argue 'til the cows come home that Betamax was far superior to VHS, but that and a couple of dollars will buy you a cup of coffee.

So now that I've gone on a somewhat rambling dissertation around my original rant, I'll leave any fanboys with the key message: don't hate the player, hate the game.  Technology comes and goes.  Assumptions change constantly.  Try to keep an open mind and recognize when you're falling into the familiarity trap.  Improvise and adapt, or you'll be left behind like yesterday's news.

Full Disclosure
In full disclosure, and keeping with the theme of intellectual honesty:

I own an Android phone, because my carrier didn't support the iPhone at the time.  I like my Android, I continue to go with what I know, and I like that it's built on open-source software.  I do think the latest iPhone with Siri is pretty amazing, though.

I've used several Linux variants throughout my career, but do most of my work on Windows because that's what's on my laptop and it works well with the tools I use every day.  My last real experience with a Mac was back in 1997–1998 when I was in grad school, so I won't claim any real knowledge there.

I use Eclipse plus numerous plugins for Java development, Microsoft Visual Studio for .NET development (though SharpDevelop is pretty cool too!), and Oxygen for XML development.  I prefer Notepad++ over TextPad, and I like Chrome over Firefox and use IE only when I have to. 

I use JSON when I’m working with jQuery, Dojo or YUI, and I use XML for structured authoring and when I work with XML databases, XSLT, and XQuery and for things like Rights Management.  I like Flex for building UIs quickly for prototypes (hey, demos are in controlled environments, right? :), but recognize its limitations when it comes to device support and will consider my options carefully in a production environment.

I like REST over SOAP over other RPC protocols.  RESTEasy rocks for simple apps; Spring for bigger implementations.  Eventual Consistency is in; ACID is out.

I still think HTML5 is a work in progress that needs to mature across the "Big Three" browsers (Firefox, IE, and Chrome/Safari – OK, that's four, but I lump Chrome and Safari together because they both use WebKit), and I think Flash is still a few years from replacement.  While it's still very early, I'm eager to see whether Google Dart has legs and can displace Javascript (I'm not a big fan of debugging other people's JS code when it comes to determining data types or scope).

I’m still trying to grok my way through XProc pipelines and tend to use XSLT 2.0 in somewhat creative ways that it wasn’t intended for, and use Ant for processing pipelines even though I know that it is IO-bound.

And finally, I’m truly into Spanish Riojas right now, and only drink Merlots or Cabernets when I have to :)

Saturday, July 23, 2011

HTML5: Well, Maybe.

I just finished reading an article about Roger McNamee's bold predictions about social media.  Aside from some of the interesting business predictions (e.g., don't invest in new social media startups: that train has left the station), which I mostly agree with, he strongly emphasizes the emergence of HTML5 as the technology that will drive application development in the future.  On this point, I'm not ready to throw my FlexBuilder, Visual Studio, Eclipse and Android SDK development environments in the dust bin just yet.  And forget about scrapping my Notepad++ or Oxygen environments – those are keepers for the long, long term.

Yeah, HTML5 definitely has much promise:  Canvas alone is just cool.  I’ve seen some really interesting things done with this, and it can only get better from there.  Yet one cool enhancement for a browser isn’t enough to keep my attention long term, nor is it a game-changer that will revolutionize how users interact with application interfaces. 

So what kinds of things will keep me saying, "You had me at hello"?  The big deal for me is looking at the world through the publishing industry's collective eye: many of the big publishers are in the midst of what can be considered a paradigmatic shift – while print will still be a prominent part of their business model, it won't be the dominant model.  This is a significant change.  Publishers will transform themselves from content designers to media conduits.

OK, so what will HTML5 need to have to be compelling for publishers to adopt?  I see three things, all of which are requirements for the browser vendors to reconcile:

  1. Media Codec Standardization
  2. Support other key technical standards (EPUB, MathML, etc.)
  3. Form-factor scaling

Media Codec Standardization

Right now, there is a myriad of audio and video codecs: H.264, Ogg (Theora for video; Vorbis for audio), MP3, Speex, AAC, WAV, and so on and on and on.  The problem is that none of the current browsers support a common set of these, and even when they do support them, their support varies.  Until they figure that out, HTML5 will not be able to leverage its full capability, and publishers will be reluctant to adopt it.

Reference:  http://en.wikipedia.org/wiki/Comparison_of_layout_engines_(HTML5_Media)
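
To make the codec pain concrete, here's a rough sketch of what publishers have to do today (the file names are placeholders): serve the same clip in several encodings and let each browser pick the one it happens to support, with a fallback for browsers that support none of them.

<video controls="controls" width="640" height="360">
  <!-- H.264/AAC in MP4: Safari and IE9 (and Chrome, for now) -->
  <source src="lecture.mp4" type="video/mp4"/>
  <!-- Theora/Vorbis in Ogg: Firefox, Opera, Chrome -->
  <source src="lecture.ogv" type="video/ogg"/>
  <!-- VP8/Vorbis in WebM: Firefox 4+, Chrome, Opera -->
  <source src="lecture.webm" type="video/webm"/>
  <!-- Fallback for browsers with no HTML5 video support at all -->
  <p>Your browser does not support HTML5 video. <a href="lecture.mp4">Download the clip</a> instead.</p>
</video>

Multiply that by every clip in a media-rich title, and the encoding, storage, and QA overhead adds up quickly.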

Native Support for other Standards

OK, this one is a big, huge stretch and probably not going to happen anytime really soon.  Well, OK.  Ever.  That said, these are the types of challenges that publishers face today and going forward.  EPUB is arguably the smallest stretch, if only because it leverages HTML (and ZIP compression) anyway, and the capability to embed EPUB in an HTML container would be a big win.  Yet for technical publishers – engineering, science and math publishers – there hasn't been a good solution for displaying all manner of math equations in browsers: they've had to either transform each equation into a raster image (and only recently into vector formats like SVG) or rely on plugins to render it.  More recently, we've seen developments like MathJax (http://mathjax.org) that use Javascript libraries to consume LaTeX markup and display equations.  A bit better, but not quite as elegant as leveraging structural markup.
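
For what it's worth, here's what that structural markup looks like – the quadratic formula expressed in MathML, which today only Firefox renders natively to any real degree (support elsewhere ranges from partial to nonexistent):

<math xmlns="http://www.w3.org/1998/Math/MathML" display="block">
  <mi>x</mi>
  <mo>=</mo>
  <mfrac>
    <mrow>
      <mo>-</mo><mi>b</mi>
      <mo>&#xB1;</mo>
      <msqrt>
        <msup><mi>b</mi><mn>2</mn></msup>
        <mo>-</mo>
        <mn>4</mn><mi>a</mi><mi>c</mi>
      </msqrt>
    </mrow>
    <mrow>
      <mn>2</mn><mi>a</mi>
    </mrow>
  </mfrac>
</math>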

The bottom line is that this requirement is probably more of a "nice to have" for most publishers, but for STM publishers, it's key to their business.

Form Factor

This is probably the most significant limitation today.  It would be one thing if all applications and browsers were bound to desktops and laptops.  The reality is that mobile devices – with dimensions ranging from relatively small smartphones to tablets – mean that application interfaces face added challenges in supporting all of these different form factors.  Today, I would be hard-pressed to recommend HTML5 UI libraries over native mobile OS UI controls.
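
The building blocks for adapting a single page to different screens do exist – here's a minimal sketch (the class names and the 480px breakpoint are arbitrary placeholders) combining the viewport meta tag with a CSS3 media query – but they address layout, not the harder problem of making web UI controls feel native on every device:

<head>
  <meta name="viewport" content="width=device-width, initial-scale=1"/>
  <style type="text/css">
    /* Default (desktop/tablet) layout: two columns */
    .sidebar { float: left; width: 30%; }
    .content { margin-left: 32%; }

    /* Phone-sized screens: collapse to a single column */
    @media screen and (max-width: 480px) {
      .sidebar { float: none; width: auto; }
      .content { margin-left: 0; }
    }
  </style>
</head>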

The Future

Will HTML5 become the preeminent technology platform?  The magic 8-ball app on my smartphone says "Ask again later…", and that about sums up my view.  I'm hopeful that HTML5 can live up to the promise and become the common technology platform for all applications.  But right now, there are just too many holes in the various browser engines to make it practical.  Don't expect browser vendors to patch these holes quickly.  In the meantime, several factors will impede HTML5 adoption:  Flash, warts and all, is still largely ubiquitous.  Its influence is slowly diminishing, but it won't go away anytime soon.  In addition, Javascript libraries like jQuery, YUI and Dojo are maturing, but I think we'll need to see how they shake out over time.  I'll defer to Javascript experts to tell me which of these will become integral for HTML5 applications.

Lastly, HTML5 won’t be promoted to true standard status for another 10 – 11 years.  This is a lifetime, almost an epoch, for technology.  Lots can happen in that time.  It’s hard to predict right now what emerging technologies will come along that will impact content and media, but chances are something will.

Update (7/23/2011 05:08 PM MST):  Even more articles are coming out suggesting HTML5 will be a boom industry (see http://gigaom.com/2011/07/22/the-html5-boom-is-coming-fast/).  Could be real, but could be a bubble.  I'm not convinced that browsers are up to the task – yet.

Thursday, June 23, 2011

IPRM != DRM

Over the last year, I've been developing strategies that allow publishers to define and identify IP rights. The big difference between digital rights management (DRM) and IP rights management (IPRM) is that DRM is about locking down assets to mitigate piracy, while IPRM is about identifying and calculating the clearance to use assets in any given context, enabling publishers to make informed decisions about using specific assets.

ODRL, or Open Digital Rights Language, is a well-established, robust, extensible XML vocabulary designed specifically for this purpose. At its core is the ability to define relationships between parties, assets, and permissions (e.g., print, display, execute). But its real power is the ability to express complex permissions that include conditions and constraints. For example, "a licensee can use an asset in a printed book, but the print run is limited to 2,000 copies, and the asset creator must be given proper attribution and will receive two copies of the book prior to its release", or "the asset can be used in print, except that it can't be distributed in North Korea".
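
As a rough sketch of that first example (I'm paraphrasing the ODRL 1.1 vocabulary from memory, so treat the element names and namespaces as approximate rather than canonical, and I've omitted the complimentary-copies condition for brevity), the agreement might look something like this:

<o-ex:rights xmlns:o-ex="http://odrl.net/1.1/ODRL-EX"
             xmlns:o-dd="http://odrl.net/1.1/ODRL-DD">
  <o-ex:agreement>
    <o-ex:asset>
      <o-ex:context>
        <o-dd:uid>urn:publisher:asset:12345</o-dd:uid>
      </o-ex:context>
    </o-ex:asset>
    <o-ex:permission>
      <!-- The licensee may print the asset... -->
      <o-dd:print>
        <!-- ...but the print run is capped at 2,000 copies -->
        <o-ex:constraint>
          <o-dd:count>2000</o-dd:count>
        </o-ex:constraint>
      </o-dd:print>
      <!-- ...and must credit the asset's creator -->
      <o-ex:requirement>
        <o-dd:attribution/>
      </o-ex:requirement>
    </o-ex:permission>
  </o-ex:agreement>
</o-ex:rights>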

This is powerful, and it gives publishers the capability to monitor and evaluate rights clearance while the product is in development. Using an XML database and XQuery, it's relatively trivial to calculate clearances for all of the assets in a product and to display the information in a dashboard. Editors can monitor the progress of rights clearances across all assets and determine whether to acquire additional rights for assets that haven't been cleared, or to use other assets instead. Publishers can also track asset usage to ensure that the proper royalties are paid. It also helps publishers in "what if" scenarios: they can easily determine the cost and feasibility of adapting a product for a different market, because the rights data tells them how many of the existing assets are cleared for use in that market and how many either need additional clearance or should be replaced with other assets.

Another scenario we're working on is using ODRL for wholly-owned assets. Publishers frequently commission third parties to produce photos, images, and other rich media for which the publisher retains the rights. They want to reuse these assets for the obvious cost savings; however, they don't want to over-expose them. Editorial teams are frequently focused on a single project or program and have little insight into what others are doing, so it's quite possible for an image to be used by more than one product at the same time. That isn't always a bad thing, but it can lead to over-exposure. By using ODRL to manage access to assets – with embargo dates and other usage information – editorial groups can quickly make an informed decision about whether to use an asset or look for another.
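
Again as an approximation (the datetime constraint below is my recollection of the ODRL 1.1 data dictionary, not a verified snippet), an embargo can be expressed as a permission constrained to a window of dates:

<o-ex:permission xmlns:o-ex="http://odrl.net/1.1/ODRL-EX"
                 xmlns:o-dd="http://odrl.net/1.1/ODRL-DD">
  <o-dd:display>
    <!-- Embargoed until 2012-07-01, and expires at the end of 2014 -->
    <o-ex:constraint>
      <o-dd:datetime>
        <o-dd:start>2012-07-01</o-dd:start>
        <o-dd:end>2014-12-31</o-dd:end>
      </o-dd:datetime>
    </o-ex:constraint>
  </o-dd:display>
</o-ex:permission>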

Pretty cool stuff.

Using DITA for Genealogical Data

I've been working on putting my family history together for the last few years.  Most genealogical applications have some pretty nice features, but none seemed to have all of the features I wanted.  I wanted the ability to manage all of the summary information and relationships (all applications do this), but also to cross-reference the factual data with individual biographies.  And I wanted to be able to display the information in different ways and formats – not just the ones supported by any particular application.

I started looking at the format most genealogy programs store their data in.  With few exceptions, they all use GEDCOM, or GEnealogical Data COMmunication.  The standard was developed by the Church of Jesus Christ of Latter-day Saints as a portable data format for expressing information about individuals, families and sources (bibliography).

GEDCOM is a line-delimited field format that identifies the start of a new record with the level number 0.  Fields within a record are identified by incremented level numbers: a first-level field starts with the number 1, and a subfield (e.g., the given name within a person's full name) starts with the next higher number.  The following is an example record for an individual:

    0 @1@ INDI
    1 NAME Robert Eugene/Williams/
    1 SEX M
    1 BIRT
    2 DATE 02 OCT 1822
    2 PLAC Weston, Madison, Connecticut
    2 SOUR @6@
    3 PAGE Sec. 2, p. 45
    3 EVEN BIRT
    4 ROLE CHIL
    1 DEAT
    2 DATE 14 APR 1905
    2 PLAC Stamford, Fairfield, CT
    1 BURI
    2 PLAC Spring Hill Cem., Stamford, CT
    1 RESI
    2 ADDR 73 North Ashley
    3 CONT Spencer, Utah UT84991
    2 DATE from 1900 to 1905
    1 FAMS @4@
    1 FAMS @9@

Other than the level numbers, the data structures are pretty free-form and parser-dependent.  Even the field names, outside the common set defined by GEDCOM, are parser-dependent.  So if you use one genealogy tool, it understands its own fields, but if you try to load the file into another tool, it blows up.  Gah!  Add to that, GEDCOM just isn't that great at handling rich content like pictures in a biography.

This sounds like a job for XML.

So the first question I had to address was how to model this.  I've looked at some of the GEDCOM XML sites, and they suffer from the same problem as the text data structure does: just not enough rich data.

The answer I came up with was to use DITA, which has several things going for it:

  1. I can easily mimic GEDCOM’s data structure with a specialized map
  2. I can extend the model to support other potentially valuable metadata
  3. I can easily model rich biographical content as a topic specialization
  4. DITA’s numerous linking mechanisms work well for the various types of links I would need:  internal references within a map, rel-tables, cross-references, external hyperlinks to third-party websites and content.

The first thing I did was to model and create a map specialization that mimics the GEDCOM data.  For the sake of brevity, I’ll show a sample of a specialized map.  If you want more information, ping me:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE familytree SYSTEM "file:/opt/dita/1.2/dtd1.2/genealogy/dtd/familytree.dtd">
<familytree>
    <title>Schmoe Family Tree</title>

    <individual id="I1" keys="I000001" gender="male">
        <vitals>
            <personname>
                <firstname>Joseph</firstname>
                <firstname type="nickname">Joe</firstname>
                <middlename>Aloysius</middlename>
                <lastname>Schmoe</lastname>
                <generationidentifier>III</generationidentifier>
            </personname>
            <birth>
                <date>
                    <day>1</day>
                    <month>1</month>
                    <year>1968</year>
                </date>
                <location>
                    <placename>The Stork Factory</placename>
                    <addressdetails>
                        <locality>Anytown</locality>
                        <administrativearea>Anystate</administrativearea>
                        <country>USA</country>
                    </addressdetails>
                </location>
            </birth>
        </vitals>
    </individual>

    <individual id="I2" keys="I000002" gender="male">
        <vitals>
            <personname>
                <firstname>John</firstname>
                <firstname type="nickname">Jack</firstname>
                <middlename>Michael</middlename>
                <lastname>Schmoe</lastname>
            </personname>
            <birth>
                <date><day>1</day><month>1</month><year>1948</year></date>
            </birth>
        </vitals>
    </individual>

    <individual id="I3" keys="I000003" gender="female">
        <vitals>
            <personname>
                <firstname>Jane</firstname>
                <middlename/>
                <lastname type="maidenname">Doe</lastname>
                <lastname type="marriedname">Schmoe</lastname>
            </personname>
            <birth>
                <date>
                    <day>1</day>
                    <month>1</month>
                    <year>1947</year>
                </date>
            </birth>
        </vitals>
    </individual>

    <family id="f1" keys="F1">
        <familymeta>
            <marriage>
                <date>
                    <day>1</day>
                    <month>1</month>
                    <year>1967</year>
                </date>
            </marriage>
        </familymeta>
        <child keyref="I000001"/>
    </family>

    <familyreltable>
        <record>
            <indi>
                <personref keyref="I000001"/>
            </indi>
            <famc>
                <familyref keyref="F1"/>
            </famc>
            <fams/>
        </record>
        <record>
            <indi>
                <personref keyref="I000002"/>
            </indi>
            <famc/>
            <fams>
                <familyref keyref="F1"/>
            </fams>
        </record>
        <record>
            <indi><personref keyref="I000003"/></indi>
            <famc/>
            <fams>
                <familyref keyref="F1"/>
            </fams>
        </record>
    </familyreltable>
</familytree>
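
The map covers the GEDCOM-style facts and relationships; the rich biographical narrative lives in topic specializations that the map then references.  As a rough sketch only – this is a hypothetical instance, and I'm not showing the specialized DTD here, so treat the element names as illustrative – a biography topic might look like this:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE biography SYSTEM "file:/opt/dita/1.2/dtd1.2/genealogy/dtd/biography.dtd">
<biography id="bio_I000001">
    <title>Joseph Aloysius Schmoe III</title>
    <biobody>
        <section>
            <title>Early Years</title>
            <p>Joseph was born at <ph>The Stork Factory</ph> in Anytown on
               January 1, 1968, the only child of John and Jane Schmoe.</p>
            <image href="images/joe_1970.jpg">
                <alt>Joseph as a toddler, 1970</alt>
            </image>
        </section>
        <section>
            <title>Sources</title>
            <p>County birth register, and the family entry in the
               <xref href="schmoe_familytree.ditamap" format="ditamap">Schmoe Family Tree</xref>.</p>
        </section>
    </biobody>
</biography>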