Topics – like Walking Bourbon Street

I was in New Orleans for Lavacon (which was great) and I spent some time on Bourbon Street.  Like the Las Vegas strip, Bourbon Street is something to experience. There are so many little places side by side with great music. The effect is aural chaos. Wonderful, joyous cacophony. In each little place, fabulous musicians deliver great noise.

But to hear one song, the listener must step into one place to reduce the noise. Doing this reminded me of user-driven navigation in a large document set.


The information world is like readable cacophony – there are so many words and images and files and more. Distractions are endless.

One of the great things about writing in topics is that you can keep the information focused. To the point. This may not seem like a lot, but when you step back and think about all the information that bombards everyone every day, one poignant and pithy nugget of knowledge can affect someone immensely. That may be all it takes.

One great topic can draw people in. By connecting that topic to other great topics (ones that are just what the reader/user wants), you create an insightful, rewarding user experience.  Just like Bourbon Street’s string of great pubs featuring super music – a loud and wonderful place where people want to return.

So keep writing excellent topics. Focus on making one shine. Then do the next one. And repeat.

Before you know it, your content will be drawing them in like Bourbon Street (and the entire French Quarter) does.

Posted in Inspiration, Writing | Leave a comment

Using DITA

Second in a series on DITA from a writer’s perspective – answering the How question. DITA is all about structured writing. If you have a strategy or an approach for dealing with your content, DITA can easily help you implement it – that is, write and deliver your information. I’ll start by getting a grip on topics in DITA, then string the topics into a map (outline) and the map into deliverables.

Topic Types

Topics are chunks of content. Generally, each should be a stand-alone chunk. This means that each topic should cover a single subject.

As an aside: discussions are ongoing about how much of that subject should be in one topic. I tend to agree with Mark Baker of Analecta that the topic should include all that the reader needs. The definition of ‘reader need’ has to be a result of your audience analysis and your deliverable intention (web/PDF/other, and support/sales/troubleshooting focus, etc.).

“Out-of-the-box” DITA comes with 3 topic types. Each type has a standard structure. However, you can alter the standard so it fits your information – DITA allows you great flexibility. For example, concept topics have the loosest structure with only a short description and paragraph tags. Task topics are designed for step-by-step instructions with step outcomes and examples. Reference topics have tables so you can display referential information in tabular form.
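As a sketch of how those structures differ, here is what a minimal task topic might look like in out-of-the-box DITA markup (the id, titles and step text are invented examples):

```xml
<!-- A minimal DITA task topic; the id and content are invented examples. -->
<task id="install_productx">
  <title>Installing ProductX</title>
  <shortdesc>Install ProductX on a single workstation.</shortdesc>
  <taskbody>
    <prereq>You need administrator rights on the workstation.</prereq>
    <steps>
      <step>
        <cmd>Run the installer.</cmd>
        <stepresult>The setup wizard opens.</stepresult>
      </step>
      <step><cmd>Follow the wizard prompts.</cmd></step>
    </steps>
    <result>ProductX is installed and ready to use.</result>
  </taskbody>
</task>
```

A concept topic, by contrast, uses the looser <conbody> with plain paragraphs, and a reference topic typically organizes content in <properties> tables or a <simpletable>.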

As another aside: the newest version of the language (DITA 1.3) has built-in specializations for:

  • troubleshooting to integrate problem-solving information
  • learning to integrate technical information with training delivery.

Lots of areas to expand your reach. But here, I’m trying to keep it simple.

Why Topics?

By writing in structured topics, you gain consistency. This benefits the reader by standardizing the way information is organized, and it benefits the writer by reminding them (me!) what other information to include.

Emerging From Stone

Mapping the Document

To create your deliverable structure, pull all the needed topics into a map.

The map is the ‘table of contents’ for your deliverable. Use this map to pull information in the order you want it delivered. You can have a map of maps – so you create a table of contents for all things about installing ProductX. Use this map to pull information wherever Installation content is needed – like the getting started information, the full manual, the troubleshooting info, etc.
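As an illustration of that ‘map of maps’ idea (the file names here are invented), a deliverable map can pull in topics directly and reference a shared sub-map:

```xml
<!-- getting-started.ditamap: the table of contents for one deliverable -->
<map>
  <title>ProductX Getting Started</title>
  <topicref href="welcome.dita"/>
  <!-- Reuse the whole installation outline wherever it is needed -->
  <mapref href="installing_productx.ditamap"/>
  <topicref href="where_to_get_help.dita"/>
</map>
```

(<mapref> is the DITA 1.2+ convenience element; in older toolkits, <topicref href="installing_productx.ditamap" format="ditamap"/> does the same job.)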

Keep in mind that in today’s deliverables, information is no longer linear. People don’t open a manual and read start-to-finish. They read-to-do, they skim, they search. Tailor your content to current user needs.

Document Structure

When you’re ready to create a deliverable (Help file, policy document, procedure PDF and HTML page, Word document, etc.), you generate the content using the structure defined by your map for that document.

For me, that step removes all the discussion about templates – I don’t need to know which one to use, and I’m not caught out by changes another group made.

Ideally, your DITA techie person gets the information about the required look and feel (content layout, size of headings, corporate colours, etc.) for the document type and makes the transformation code available to you. You just point to the map and to the output type and hit the ‘GO’ button. Ta Da!

For a writer, this removes the countless hours of tweaking Word documents. I can spend my time researching and writing perfect content. :)

Information Development

As you develop and write your content, you capture the relevant information in the necessary topics. But wait, what about that wonderful blurb that you created yesterday? It should be included! Or the marketing department’s awesome product description or the support group’s perfect solution to an issue?

With DITA, you can reuse blocks of content. That means you don’t have to rewrite something you’ve already written or hunt for the content and then copy and paste it into your new document.

Reuse also means you don’t have to find and replace changeable content – like a product name, if your product is still in development. Instead of typing ProductX, you can create a phrase for “ProductX”, give it an identifier and save it in a warehouse topic. Then, as you write about it, reference the phrase instead of retyping it. In my experience, the advantage of this functionality is that when the product is close to launch (and when you’re super busy capturing all the last-minute changes and sales messages in your content) and the Marketers decide on the really cool name, you only have to change one phrase. The references pick up the change when you generate your deliverables. No more frantic late nights trying to find all the mentions of “ProductX”.
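A minimal sketch of that warehouse approach (the file name and ids are invented): the phrase lives once in a warehouse topic, and every mention references it.

```xml
<!-- warehouse.dita: one topic that holds reusable phrases -->
<topic id="warehouse">
  <title>Reusable phrases</title>
  <body>
    <p>Product name: <ph id="prodname">ProductX</ph></p>
  </body>
</topic>
```

Then any topic can write <ph conref="warehouse.dita#warehouse/prodname"/> instead of typing the name; change the phrase once and every deliverable picks up the new name the next time you generate.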

Yet another aside, if the marketing department wants to add some fancy ‘look’ to the product name, you can do that in the one reference point (the phrase). No more fussing around with each instance!

But How?

My advice is to get a good XML editor that is DITA-aware. My favourite is oXygen. Its integration with DITA is wonderful, its support is fabulous and the interface can be set to simple or not, depending on your need.

You can test it out with a free 30-day trial. Start creating a few topics (or copying the info in from an existing document), organize them in a map and use one of the transformations to create a PDF, webhelp (tri-pane web page) or XHTML output. Or all three from one action. You’ll have DITA output in minutes.

Why do this?

If your content is locked in one document, it has a limited life span. Move it to XML so you can repurpose it to match your business need(s). Unlock the power. Open up access to the information that you’ve spent time creating/crafting/capturing. Help your readers by giving them what they need, when they need it, using the device that they use.

You could argue that by saving time with the way you create and deliver information, you may be ‘strategically’ working yourself out of a job. I used this argument with my favourite developer once, years ago – if you create perfect code, who needs a developer any more? He laughed and laughed. There’s always a need to augment the code or to write something else. The need evolves with the business.

If you’re spending all your business time doing low value activities, you become a low value worker. If you work smarter and can deliver more, you add value to the business and are a value-add worker. Which would you rather be?

Posted in Content Strategy, Implementation, Reuse, Writing | Leave a comment

DITA Basics – What is DITA?

Lately, I’ve tried to encapsulate “DITA” to technical writers who are not clear on what it is or why it helps you create and deliver information. With my reply, I generally encourage Google searches. However, the information I found when I tried that dove straight into the details and became complex very quickly. So I am attempting a series of posts to address the very basic questions and to give you just enough information to see if DITA is something you should investigate. My intent is to provide basic information to help writers take in the small chunks of this rather large knowledge area. These posts are meant to be appetizers – tantalizing tidbits to whet your appetite – they are not meant to be a full course meal.

What is DITA?

…according to Karen.

Officially, it’s “an XML-based, end-to-end architecture for authoring, producing, and delivering readable information” (OASIS). Or, as Lu has said, it’s an XML language. To me as a writer, DITA is a “tool” for creating and delivering technical information in XML. By having information in XML, I can use it more efficiently. I can spend more time on the content (getting the information, honing it and ensuring its accuracy and quality) and deliver results more quickly, more consistently and in more ways. The ultimate result is that the information I develop better supports the business needs. In their basic form, these needs are:

  1. Save money
  2. Make money

To save money, DITA helps you be more efficient by saving time in the long term. For example, it can help you reduce your maintenance efforts, which saves work time, and improve your content’s consistency, which saves time for you and your users.

To make money, DITA enables you to increase your productivity, that is, to deliver more content and more outputs with less effort. It also sets you up to create new channels to deliver information thereby reaching clients on their multiple devices.

Why Change?

One of my favourite quotes is from W. Edwards Deming: “It is not necessary to change. Survival is not mandatory.” Businesses constantly adapt to changing economies and trends. They evolve. Technical communications must evolve with business and respond to the business environment.

To be efficient, businesses must manage their assets. Their content is a business asset and therefore requires effective management. As a technical writer, your responsibility is to accomplish, or help accomplish, that goal.

A lot of companies equate managing documents with managing information. They are different. Pieces of information make up documents (aka deliverables). Documents are a compilation of these pieces. To effectively manage business content, both the documents and the pieces of information should be managed. In larger companies where these areas are handled by more resources, a technical writer should be responsible for (or at the very least pay attention to) the effective management of the pieces of information. With your content in an XML (DITA) format, you can develop management practices to use these assets more efficiently and with better results. By managing the pieces, you contribute to the management of the documents.

How Is DITA Different from Other Tools, say, Word or Unstructured FrameMaker?

For me, the biggest difference is the way you can reuse content. By reuse, I don’t mean copy and paste. I mean referencing the original content. This makes one source of the content the ‘source of truth’, so wherever you have to use that bit of information, you always use the true source. In a nutshell, that’s single-sourcing information. Information in DITA lets you unlock the content from one deliverable so you can use it in multiple deliverables. NOTE: Other tools like Flare and RoboHelp let you manage pieces of information, but they are stored in a proprietary way. XML is an open standard, meaning that it can be used by a multitude of tools and utilities. With XML, you are not locked in to a tool.

Technical Information Is Reused

Most technical information is not used just once. Most technical information is not accessed in just one way. Rather, it can be ‘sliced and diced’ and served up in a multitude of contexts, categories, devices (print, web, mobile) and more.

An Example

For example, say your company has a product for which you (or someone from marketing) have created a wonderful description. Odds are high that the description is used in your product sales material, on your web site, in your online help (if you have it) and probably a few other places. For consistency, that content should be identical.

However, as businesses respond to the changing economy, they change: the product changes, or the company identifies a feature/benefit that they now want to promote. That means rewriting existing content with the new information.

If each deliverable is disconnected from all the others, someone has to rewrite each deliverable. If that content is single sourced and the deliverables are generated from that source, you rewrite the content once and regenerate all the deliverables that contain the description. The idea is the same for process, policy and many other business documents.
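A minimal sketch of that single-sourced description (file names and ids invented): the description lives once, and every deliverable pulls it in by content reference.

```xml
<!-- shared_content.dita (topic id "shared") holds the one true description -->
<p id="product_desc">ProductX monitors your pipeline in real time and
alerts you before small problems become big ones.</p>

<!-- Every deliverable that needs the description references it instead
     of holding a pasted copy: -->
<p conref="shared_content.dita#shared/product_desc"/>
```

Rewrite the paragraph with id="product_desc" once, regenerate, and every deliverable carries the new wording.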

Be strategic

Information single-sourced in DITA can be used to create these deliverables with less effort and, because DITA (and the tools that go along with it) outputs content in multiple formats, you can output the required information in all the ways that your company wants to deliver it. That means you’re working more efficiently and making the company more cost-effective. Delivering information (something important to your business) in a strategic way (a way that supports your business’s goals of saving money and making money) contributes to the success of your business.

Links and More to Come

I’ve since found some other posts about this same subject – check out Jacqui Samuels’ post on the TechWhirl site and Tom Johnson’s The appeal of DITA on I’d Rather Be Writing.

I’m intending my next post to be an intro to the mechanics of DITA – topics and structure. Please leave a comment if there’s something specific you want covered.

Posted in Content Strategy, Management, Reuse, Writing | Leave a comment

Creating Draft or Approved PDFs – a Science Experiment

The Problem

How can I simplify the output process for each document as it gets approved?

Some Background

Lately, I am using DITA to create a lot of small documents (generally less than 20 pages), each owned by different groups. All groups are in different stages of development and finalization.

Documents live in draft state for a long time and then (hopefully) eventually get approved. The output must be PDF. The PDF requirements are fairly stringent: an unapproved document must have a “Draft” watermark on each page, and an approved document must display certain specific information on both the cover page and the header. Most of the required information is consistent and aligns with DITA metadata.


  1. I am ‘doing DITA’ on my own, with only books, the Yahoo user group, web searches and enthusiasm to guide me. I am not an XSLT expert. My knowledge of XSLT is limited and driven by need, not by a desire to become an XSLT programmer. For the watermark XSLT code, I want to thank Derek Read for his response to a support question on adding a watermark to a page in PDF output.
  2. I am not a programmer at all. I do not think logically most of the time and I suck at being methodical. I do have a persistent streak in me that drives me. I constantly repeat “there must be a better way to do what I want to do”. Please feel free to illuminate me (or humour me)!
  3. I am using RenderX as my PDF rendering engine, not FOP or Antenna House, so my results may be tool-specific.

The Hypothesis

Since each document has its own map for the PDF output, if I add specific metadata to the map I can have the XSLT control or trigger PDF output appearance.

The Experiment

Set up some metadata in the map and tweak my PDF customization code (XSLT mostly) to obtain draft behaviour if the metadata is not there or to obtain final behaviour if the metadata is there. Specifically, without the metadata, the PDF will:

  1. Show a “Draft” watermark on all pages if the document is not approved
  2. Not show an effective date on the front cover if the document is not approved
  3. Not show a release date in the document header if the document is not approved

If the metadata is in the bookmap, the PDF will:

  1. Not show the watermark
  2. Show the effective date
  3. Show the release date in the header


I won’t bore you with all the attempts and errors that I made, but I can assure you that there were quite a few. Instead, I’ll tell you what worked. With the following code in place when the document is approved, I open the bookmap, uncomment the metadata, add the information (the month and year of the approval) and then generate the PDF.

In my bookmap metadata, I had this code:

                 <person>Approver Name </person>
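For context, the XPath expressions used later in the customization (//bookchangehistory/approved/completed/month and …/year) imply that the uncommented metadata sits inside a structure something like this. The dates are invented, and the full shape is my assumption based on the standard bookmap elements:

```xml
<!-- While the document is in draft, this block stays commented out. -->
<bookchangehistory>
  <approved>
    <completed>
      <month>October</month>
      <year>2014</year>
    </completed>
    <person>Approver Name</person>
  </approved>
</bookchangehistory>
```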

In custom_layouts.xslt, I added an <xsl:choose> block within the <fo:region-body> for all the document parts – the first/last/even/odd regions for each of the front-matter, toc, body and appendix masters. The <xsl:when> line says: when this condition exists, do nothing. The <xsl:otherwise> block says: if there is no <completed> element in the bookmap, add the watermark, just once, and center it horizontally and vertically in the region.

The code looked like this:

    <xsl:choose>
      <xsl:when test="//*[contains(@class, ' bookmap/completed ')]"/>
      <xsl:otherwise>
        <xsl:attribute name="background-image">url('Customization/OpenTopic/common/artwork/watermark.gif')</xsl:attribute>
        <xsl:attribute name="background-repeat">no-repeat</xsl:attribute>
        <xsl:attribute name="background-position-horizontal">center</xsl:attribute>
        <xsl:attribute name="background-position-vertical">center</xsl:attribute>
      </xsl:otherwise>
    </xsl:choose>

Of course, I had to create a watermark image and store that in the folder indicated by the URL call.

The other triggers are in my custom.XSLT file. For the front cover, in the front-matter-container, I added this code:

     <xsl:text>Effective Date: </xsl:text>
     <xsl:value-of select="//bookchangehistory/approved/completed/month"/>
     <xsl:text> - </xsl:text>
     <xsl:value-of select="//bookchangehistory/approved/completed/year"/>

If there was no entry for the <completed> metadata (i.e., the document was still in Draft stage), nothing showed up.

For the header trigger, I added this code to the fo:static-content for each type of header:

<fo:block font-size="10pt" space-after="6pt" font-family="sans-serif">
  <fo:inline font-weight="bold">
    <xsl:text>Release Date: </xsl:text>
  </fo:inline>
  <fo:inline font-weight="normal">
    <xsl:value-of select="//bookchangehistory/approved/completed"/>
  </fo:inline>
</fo:block>

Again, if there was no <completed> entry, nothing showed up after the text “Release Date:” in the header.

The Conclusion and Comments

It worked! (To celebrate it, I did a really embarrassing happy dance.)

By commenting out the <completed> element, I could get my draft behaviour to appear. If that code was not commented out and there was data in those elements (month and year), I could get the final behaviour to appear.

I chose the <approved> metadata element for obvious reasons. However, because of this choice, I couldn’t use the <reviewed> elements, because they also contain a <completed> element. If I had the <reviewed> <completed> elements in my bookmap, the behaviour was triggered. Since company requirements didn’t include any information about the review, I could exclude that metadata.

Another potentially problematic issue is that the image path for the watermark is hardcoded. I am still wrapping my head around variables, so I skipped that part. Also, I am the only one using it and the file structure is set on my hard drive, so I know that the coded path will work. I know, I’m putting my head in the sand. For now. :)

I have also used this trigger approach (CHOOSE) to control what the front cover looks like. I have been working on 2 distinctive document types; one required a table showing owner information and the other didn’t. That frontmatter code works off the existence of the <isbn> metadata.

Although I used an existing metadata element, creating my own metadata type might work better. However, I’m not confident enough in my knowledge of DITA processing, XSLT or metadata to create my own metadata types.

For you experts out there, if you could answer these questions, I’d really appreciate it:

  1. Is there a better way to add the XSLT code that doesn’t involve so much repetition?
  2. Will this work for all PDF processing engines (FOP and Antenna House)?
  3. Is there a magic bullet that will help me to understand XSLT better?

Posted in Implementation, Technology, XSLT | Tagged | 2 Comments

Compliance, Content Strategy, and DITA

or Everything I Needed to Know About Compliance, I Learned From My Dog

I’ve had some big changes in my life the last couple of years. I got a puppy and I changed jobs. Both experiences have taught me some things about compliance and the principles for eliciting specific behaviors. I believe a content strategy and DITA have a lot to offer the compliance process in an enterprise environment.

First, Some Background

Dave on Prairie Mountain

My dog Dave recently celebrated his second birthday. He’s a great little guy who shares my no-fence neighborhood, with forest, streams and a rocky mountain riverbed literally in our back yard. In the spirit of this wild and natural setting, Dave spends a lot of time off-leash. Even though he has a fair bit of freedom, he listens well and is reasonably well behaved.

But it wasn’t always this way. Dave graduated, just barely, at the bottom of his puppy class. Of course, these 6-week classes aren’t really designed to train your puppy. They’re intended to teach you how to train your puppy.

In my work world, I left the comfortable world of software documentation to venture into a large enterprise environment where I’ve been documenting standards (procedures and minimum requirements). The organization’s goals are to standardize behaviors and bring efficiencies while respecting the need for some freedom to accommodate the diverse nature of business units.

As I look back on these experiences, I can see similarities between training my dog and my work in compliance. I feel very constrained by the traditional enterprise tools for content development and delivery, such as MS Word, network file systems and even Enterprise Content Management Systems. I can see how a content strategy and single-source tools can help, especially around these three principles:

  1. Consistency
  2. Context
  3. Audience Focus


Consistency

My husband and I were initially using different commands to instruct our dog: different from time to time, and different from each other. We needed to get on the same page and be consistent in our messages to our pooch. Once we did this, Dave eventually figured out what we were talking about and was able to respond more or less consistently.

Consistency is a simple concept, but it can be difficult to achieve. Part of a content strategy includes a controlled vocabulary: an overall strategy and agreement on the terminology you will use and what those terms mean. Then you need to use that vocabulary consistently: use the same words and say the same things, the same way, every time.

The problem with traditional enterprise content creation tools is that they do not scale to meet this challenge. Sure, you can cut and paste content from one document to another. But once you get beyond a certain number of documents and/or writers, this becomes unmanageable. There is no easy way to determine where all instances of a statement are used. Searches are slow and painful, and they only find exact matches. Any variations that were made are very likely to be missed. Content quickly gets out of sync and the message gets diluted. Uncertainty creeps in and people are no longer clear on what is expected of them.

Single source tools such as DITA allow you to re-use content by reference rather than by creating multiple copies. A true Component Content Management System (CCMS) will be able to tell you all the documents a statement is used in. You only need to update the statement in one location, and you can assess the impact on all affected documents before you make the change.

DITA and CCMSs both allow you to create relationships between pieces of information. Provided you develop a proper content strategy for this, these relationships can help you to determine what related content may be impacted by your change.


Context

Dogs are situational learners. We were happy when Dave learned to “stay” in the kitchen, but he had no idea what we were talking about when he got outside or went to Gramma’s house. He needed to learn the command in multiple situations before he truly got it.

A content strategy needs to address the variations in content that are required to address different contexts. Context can be a change in location or a change in equipment. For example, the rules may change from province to province (or state to state). Different models of the same pump might have slightly different specifications or operating instructions. 80% of the information may be exactly the same in all situations, but we need to accommodate the 20% variation. We need to provide information that is relevant to the employee’s unique situation.

Again, traditional tools don’t meet these challenges. You have two options:

  1. Clone (cut and paste) and modify to create context sensitive information. As I’ve said, it becomes totally unmanageable very quickly.
  2. Lump all the information into one document and let the employee wade through it all to get to the nuggets they need. While this option isn’t a challenge for traditional tools, it is a challenge for the reader. This approach goes against the desire to promote understanding and create efficiencies.

Single-source tools provide mechanisms for creating content variations while re-using the core content. DITA provides multiple ways to do this, so chances are good that there is a method to meet your needs.
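One of those mechanisms is attribute-based filtering with a DITAVAL file: mark the 20% that varies with a profiling attribute, and choose what to keep at publish time. (The attribute values and form numbers here are invented.)

```xml
<!-- In the topic: flag the content that varies by context -->
<p audience="alberta">File form AB-12 with the provincial regulator.</p>
<p audience="ontario">File form ON-07 with the provincial regulator.</p>

<!-- alberta.ditaval: applied at publish time to build the Alberta output -->
<val>
  <prop att="audience" val="alberta" action="include"/>
  <prop att="audience" val="ontario" action="exclude"/>
</val>
```

Pass a different .ditaval file to the publish step and the same source produces each regional variant, with the 80% of shared content written only once.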

Context can also affect the medium for delivery. In the quiet of our home, I can get my message across to Dave with a soft voice or even just a hand gesture. In the great outdoors, he can be distracted by a myriad of things: smells, other dogs, squirrels, rabbits, birds, butterflies, or leaves blowing in the wind. My message needs to be louder, sharper, and sometimes accompanied with a whistle or hand-clapping to get his attention.

A content strategy also needs to address multi-channel delivery. Mobile is all the rage these days, and with good reason. Many people don’t sit behind a desk all day and they need access to information on the go. Some work in remote locations where connectivity is an issue and they need offline storage on a mobile device. Still, there are times when workers do have access to larger screens and would prefer to use them over the small screen of their mobile device.

Traditional tools offer conversions to other formats such as HTML for web-based output. The problem is they are not optimized for these other formats. The original design of the information is geared towards standard book or print delivery. A simple conversion from one format to another does not equate to a practical implementation. The navigation structures for web-based content are completely different. The organization and styling of the content requires a lot of laborious, time-consuming, and expensive tweaking.

A true single-source system such as DITA separates content from style. It also separates content from organization by breaking content into discrete chunks. This allows the content to be re-organized quickly. Style and navigation features are not applied until the content is published, so they are always appropriate to the output and don’t need to be tweaked after the fact.

I use the term “true single-source system” because there are a lot of tools out there that claim to be “single-source”. However, if the system does not separate content from style and organization, you will forever have to do some sort of tweaking to get from one format to another. IMHO, this is not a true single-source system.

Audience Focus

I can (and do) talk incessantly to Dave all day long. I’m sure most of what he hears sounds like Charlie Brown’s teacher. Unless I’m speaking his language (the limited vocabulary he’s learned) and telling him exactly what I want from him, he doesn’t really know or care what I’m saying.

The truth is, people aren’t much different. We are all bombarded with information and we are constantly filtering what we read, see and hear to determine how the information affects us. Is it good or bad news? Can we or do we need to do anything with the information? If it doesn’t affect us in some way, it’s just noise.

A content strategy needs to accommodate our audience’s need to filter content. Employees need to easily filter and find content that matches their role, their situation and their experience levels. As writers/information developers, we need to eliminate the noise and deliver content that is truly relevant to them.

Traditional authoring tools don’t generally offer anything in this area. Enterprise Content Management Tools have features for adding metadata. The problems are:

  1. You need that controlled vocabulary we talked about earlier.
  2. Adding metadata to a long document is much more challenging than adding it to a single topic. The keyword list for a 50-page document would be daunting.
  3. The task is often left up to the general corporate population. While it’s not rocket science, it adds enough cognitive overload that the average employee does not have the time or willingness to give the task the attention required to fulfill the needs of the content strategy.

Single-source tools offer conditional processing features for filtering and metadata tagging schemes to allow for faceted searches. Content can be filtered at a granular and personal level. It can be delivered in a method that suits the reader.

Content can even be delivered in multiple languages. If you have translation requirements, this is a whole other area the content strategy needs to address. I’m no expert in this area, but I believe the ROI metrics for translation alone can often justify the switch to a single-source system. Here’s the expert on DITA Metrics.


In any organization, implementing a set of corporate-wide standards involves creating a common understanding of what you want your employees to do. A solid content strategy is essential to support the process. A single-source tool isn’t a magic bullet – it won’t do the work for you. But it can provide a solid and scalable infrastructure to effectively and efficiently implement the strategy.

Still not convinced? Check out Using DITA XML for standards: a manifesto by John Tait. It’s a short easy read, sans dog analogies.

Posted in Content Strategy, Implementation, Management | Leave a comment

DITA Rock Stars – Part Deux

Like Lu said, the CIDM Conference was super. My goal was to find ways to articulate single-sourcing content and strategic content creation. I picked up some nuggets of technical info too, all while meeting and talking to people in our industry.

My Other Favourite Rock Stars

When you get so many like-minded people in a common area, the ideas flow and the excitement level rises. I would have loved more time between sessions to chat, more times to mill around the vendor area and strike up conversations and more opportunities to share thoughts with others. I guess I wanted the conference days to be more than 24 hours… Continue reading

Posted in Uncategorized | Tagged | 6 Comments

DITA Rock Star Round-Up – Part I

Rock Star (photo: Adam Penney, Flickr)

The DITAChicks recently attended the CIDM/DITA North America conference. It was a thrill to meet and chat with the big DITA Rock Stars. You know the ones – the names that are synonymous with DITA: Michael Priestley, Don Day, Robert Anderson and Eliot Kimber. We were feeling a little star struck, being in the presence of such brilliant minds. Our feelings were echoed by another attendee who admitted that his wife accused him of having a man-crush on Eliot. (Not mentioning any names. You know who you are.) But all these guys were very down to earth, approachable, and as always, willing to share their knowledge. They all gave great presentations on what's ahead for DITA.

The conference provided a great opportunity to meet up with other DITA-minded folks we've met in the blogosphere and the DITA Users Group, and it introduced us to some new ones. We thought we should do a round-up of our favorite DITA Rock Stars. Part I will outline Lu's picks and Part II will summarize Karen's.

Part I – Lu’s Picks

  • Joe Gollner, Gnostyx Research Inc.:  The Joy of Reuse: Content, Structure and Solutions

The Content Philosopher is an Accidental Content Strategist. He was a Content Strategist long before it became a popular job title. He wrote the book (well, the whitepaper) on Intelligent Content.

While Joe isn’t strictly a DITA guy, he’s a big fan of the technology and has been a proponent of XML from its start. Joe’s talk highlighted the many levels of reuse, from the granular content and model reuse of DITA to the reuse of standards, solutions, technology, infrastructure, process, knowledge, and resources. My take-away: We need to think about reuse at all these levels and integrate them to provide cost-effective solutions and a phased approach to our DITA implementations.

I always feel odd when someone comments on my title of Information Architect (IA). It’s a term that spans as much ground as Information Management or Information Technology. Do a Google search and you’ll find a lot of stuff about wireframes, traditional marketing, and e-commerce web design. But what does it mean in the content-heavy DITA world? Well, Severin’s day-to-day responsibilities include:

  • Analyzing requirements (customers and writers)
  • Building business justification, project plans, metrics
  • Developing information models
  • Helping with the tools selection process
  • Communicating vision and progress
  • Training writers, editors, managers, and other architects
  • Writing stylesheets to apply appropriate formatting and
  • Managing culture changes from the above activities

Severin did say that “What it means to be an Information Architect will vary”.  If you’ve ever felt like an IA wannabe, compare your responsibilities with Severin’s. His talk affirmed for us:  The DITAChicks are Information Architects. Thanks Severin!

  • Eileen Thournir & Hal Hamond, Landmark/Halliburton: Using DITA Principles for Video

When Hal and Eileen presented their ideas about creating and combining short segments of talking heads, screen captures, and static slides with audio overlay, I had a déjà-vu moment. My former colleague, Jackie Gough, employed these same techniques to reduce bandwidth and apply minimalism to our e-learning content.  Landmark has taken the approach even further, applying DITA concepts to create libraries of reusable storyboards, video, audio, and static content, and pulling the storyboards together in a “map” to produce a single deliverable.  

The map process is entirely manual at the moment. And there are other issues such as tagging audio files. If the idea of DITA for Video seems useful and intriguing to you:

  • Join the LinkedIn DITA for Video Group
  • Talk to your vendor about the features you need to support DITA for Video.

Kaplan Publishing provides test preparation study guides for over 90 regulated tests (for example, GED and SAT). They have a large library of test questions that are potentially reusable across many different exams. Edwina’s team is working on a faceted metadata scheme to classify test questions based on the exam, the structure of the question (multiple choice, free form), and the skill the question is designed to test. The skills break down into a two-level classification scheme covering aspects of skills such as vocabulary and math. To make it interesting, the different exams use different terminology to classify the skills being tested. Suffice it to say, it’s complicated.
DITA’s Subject Scheme allows you to create relationships between metadata classifications, using a structure similar to a relationship table. I hadn’t seen Subject Schemes before, and I thought they were cool and worth sharing. You can extrapolate from Edwina’s scenario and apply Subject Schemes to any complex metadata relationship problem.
Due to other project priorities, Edwina’s team has had to put the effort on hold. If you’ve done something like this, or you think you need to, reach out to Edwina and share your ideas.
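For the curious, a minimal subject scheme map might look like the sketch below. The subject values and the attribute name are invented for illustration – they’re not from Kaplan’s actual scheme, and in practice you would typically bind the enumeration to a props specialization.

```xml
<!-- skills.ditamap: a subject scheme defining a small two-level skill hierarchy -->
<subjectScheme>
  <!-- The controlled values and their hierarchy -->
  <subjectdef keys="skill">
    <subjectdef keys="verbal">
      <subjectdef keys="vocabulary"/>
      <subjectdef keys="reading-comprehension"/>
    </subjectdef>
    <subjectdef keys="math">
      <subjectdef keys="algebra"/>
      <subjectdef keys="geometry"/>
    </subjectdef>
  </subjectdef>
  <!-- Bind the values to an attribute so processors can validate and filter on them -->
  <enumerationdef>
    <attributedef name="skill"/>
    <subjectdef keyref="skill"/>
  </enumerationdef>
</subjectScheme>
```

With this in place, a processor can treat a question tagged with a leaf value (say, algebra) as also matching its parent (math) – which is what makes faceted search and filtering across differently-labelled exams tractable.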

  • Frank Miller, Comtech Services Inc.: Specializing DITA: How RelaxNG Enhances DITA’s Specialization Capabilities

RelaxNG (pronounced relaxing) is a schema language that lets you define XML markup. Its syntax is decidedly easier than DTD syntax.
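As a rough sketch of the difference (my own example, not Frank’s), here is a simple element declared in RelaxNG compact syntax, with the roughly equivalent DTD declarations shown as comments:

```
# RelaxNG compact syntax: a note element containing text with optional bold phrases
note = element note { (text | b)* }
b = element b { text }

# Roughly equivalent DTD declarations:
#   <!ELEMENT note (#PCDATA | b)*>
#   <!ELEMENT b (#PCDATA)>
```

The compact syntax reads like a grammar, and named patterns can be combined and overridden, which is what makes specialization work less painful than editing DTD parameter entities.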

Schematron is another XML format that allows you to validate the XML markup.  It can be embedded within the RelaxNG file, or it can be created as a standalone file and used in conjunction with DITA’s DTD.

Frank’s presentation chronicled his test drive “just around the block” of RelaxNG and Schematron to see just how easy it really is. He did a simple test: adding an element to a concept (using RelaxNG) and validating that a task has at least two steps (via Schematron). The Oxygen editor supports both these technologies, so that’s what Frank used to create the concept and task test topics and generate the output through the DITA OT.
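A minimal Schematron rule for the at-least-two-steps check might look like this sketch (mine, not Frank’s actual file):

```xml
<schema xmlns="http://purl.oclc.org/dsdl/schematron">
  <pattern>
    <!-- Fire on every steps element in a task -->
    <rule context="steps">
      <!-- Report an error when there are fewer than two step children -->
      <assert test="count(step) >= 2">A task should have at least two steps.</assert>
    </rule>
  </pattern>
</schema>
```

Because the test is just an XPath expression, business rules like this stay readable and live alongside (or inside) the grammar, rather than being buried in custom validation code.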

The result of Frank’s test drive: he admitted to hitting the ditch a couple of times, and it still only took him about 15 minutes to implement. This is not to underestimate the effort required for specialization. You still need to do the analysis and develop your strategy, but the tools greatly simplify the syntax and reduce the effort to create the specialization.

  • Fabrice Lacroix, Antidot:  From Static to Dynamic Semantic Publishing for Your Documentation

Antidot’s Fluid Topics was the coolest product at the conference, probably because I’ve been wishing and hoping for dynamic publishing ever since Michael Priestley and Amber Swope presented the DITA Maturity Model way back in 2008. (For those who don’t know the Maturity Model off by heart, this is level 5 of 6 in the quest for DITA utopia.) And now it’s here! I’m so excited.

Ok, I know there are other dynamic publishing solutions in the works or available from some of the CCMS vendors. So, what’s so compelling about this one?

  • In Michael Priestley’s words: “It’s shiny.” I concur. It’s very slick looking and it has incorporated nice features such as auto-complete, faceted search and filter, feedback, bookmarking, and commenting.
  • It’s CMS/CCMS independent. You can even use it on a file based system. For the perpetually CCMS budget challenged, this is awesome news.
  • It’s affordable. They have a SaaS model. And depending on the amount of content, you can get started for as low as $1000/mo.  Gotta love that.

I felt really stupid asking the print question, but I knew I had to. Yes, you can pump the ditamap out to your publishing chain and get a printed copy. But come on, people. Printing is so 2006! A better idea: pump out to an e-format for off-line viewing if you need it.

A big thanks to CIDM and their staff. It takes a lot of work to put on such an event, and have things run like clockwork. The sponsored meals and birds of a feather lunches were amazing (and fun) opportunities for networking.

So, how do you get to be a DITA Rock Star? It’s easy – Do cool things with DITA and share it with us!

NOTE: Links to Linked In profiles were used with permission. Where no links are provided, no permission was obtained.

Posted in Implementation, Management, Technology, Uncategorized | 4 Comments