
DITA-FMx 2.0 upgrade pricing ends April 14!

The discounted upgrade pricing from DITA-FMx 1.1 to 2.0 will end on April 14 (the 1-year anniversary of DITA-FMx 2.0). If you’re still using DITA-FMx 1.1, you can upgrade to 2.0 for $165. Just log in to www.leximation.com and go to your Tool Admin page, then click the upgrade link.

On April 14 the special upgrade price for DITA-FMx 2.0 ends, and the cost becomes $235 (the same as a new individual license).

Note that upgrading to DITA-FMx 2.0 does not force you to change your current DITA 1.1-based structure applications. DITA-FMx 2.0 supports both DITA 1.1 and DITA 1.2, and includes many enhancements and new features that benefit both models.

In addition to support for the DITA 1.2 features (keyref, conkeyref, coderef, key-based glossary term referencing, conref ranges, etc.), DITA-FMx 2.0 provides the following features:

  • Robust support for FrameMaker variables.
  • New Generate Book from Map dialog makes it easier to publish DITA maps to FM books.
  • Auto-prolog options now apply to maps as well as topics.
  • Automatically add author and date to <draft-comment> elements.
  • Additional support for specialized table formatting (row and cell shading and table indenting).
  • Support for imagemap and hotspot development.
  • And much more…

To compare the features in DITA-FMx with those in different versions of FrameMaker, review the FrameMaker DITA Feature Comparison.

To download a trial, or purchase DITA-FMx .. www.leximation.com/dita-fmx/.

Unique use of AutoFM for PDF generation

The localization team at a medical technology company came to us with an interesting automation request. As part of their localization process, all files for a given language and deliverable are provided in multiple ZIP files in a single folder. The ZIPs contain the following:

  • FrameMaker book file (optional)
  • MIF file(s)
  • “variables” file (optional)
  • image files

They wanted to automate the assembly and PDF generation of the book rather than needing to put it all together manually. This task involves the following logic:

  1. Start FrameMaker.
  2. Open all MIF files and save them as FM binary files.
  3. If a book file exists, open it (otherwise, just one file is used).
  4. If a file with “_VRB” in the name exists, import the variable definitions from that file into all FM files in the book.
  5. Update the book.
  6. Save the book or single FM file to a PDF (using a pre-defined job options file).
  7. Exit FrameMaker.

The user workflow should be as follows:

  1. Copy ZIP files into a folder on the user’s desktop.
  2. Right-click the folder, and select “GeneratePDF.”

We were able to quickly develop a prototype of this process using the AutoFM plugin. This handled the bulk of the work in opening the files in FrameMaker, converting from MIF to FM, importing the variables, updating the book, and generating the PDF.

All that was needed was some type of script or utility to drive the whole process. This entails unzipping the files into a temporary folder, analyzing the contents, and generating the AutoFM script (a simple XML file that is passed to FrameMaker on startup to perform the specified actions). This could be developed as a compiled EXE utility or as some type of script. We opted for a Perl script because it required less development effort and will be easier to modify in the future. The “right-click” functionality is handled by adding a batch file to the Windows “SendTo” folder.
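To give a sense of the shape of such a script (purely for illustration .. the element and attribute names below are hypothetical placeholders, not the actual AutoFM script syntax; see the AutoFM documentation for the real format), the generated XML simply lists the actions to perform:

<!-- Hypothetical sketch only; not the actual AutoFM script format. -->
<autofm-job>
  <openFile path="C:/Temp/pdfjob/guide.book"/>
  <importVariables from="C:/Temp/pdfjob/guide_VRB.fm"/>
  <updateBook/>
  <saveAsPdf path="C:/Temp/pdfjob/guide.pdf" jobOptions="C:/publishing/print.joboptions"/>
  <exitFrameMaker/>
</autofm-job>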

The Perl script took a couple hours to develop and test, and now the localization team can quickly and reliably generate a PDF from the provided content.

This is just one example of how you could use AutoFM to automate tasks with FrameMaker. The AutoFM process is driven by an XML “script,” and it can perform various tasks on all types of FrameMaker files. It can also run additional scripting within FrameMaker (ExtendScript, FrameScript, or an FDK client) to perform whatever custom processing is required.

Working with DITA Codeblocks in FrameMaker

When using a DITA <codeblock> element, the intent is for all of the code to be enclosed in a single <codeblock> element (to the extent that it makes sense). From time to time I see documents where each line is in a separate <codeblock>. While this may “work,” it’s not an ideal way to operate. If the intent was to put single code lines in a <codeblock>, it would have been called “codeline”!

Here’s the one-line-per-codeblock approach (not ideal):
[Image: codeblocks-wrong]

And here’s the recommended method:
[Image: codeblocks-right]
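In XML terms, the difference looks something like this (the code lines are just placeholders):

Not ideal .. one <codeblock> per line:

<codeblock>var count = 0;</codeblock>
<codeblock>count = count + 1;</codeblock>

Recommended .. a single <codeblock> for the whole sample:

<codeblock>var count = 0;
count = count + 1;</codeblock>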

If you’re using FrameMaker, this can happen without you even realizing it. You may insert a <codeblock>, type the first line and press ENTER, then type the next line and press ENTER again. Each time you press ENTER, you’re creating a new <codeblock> element.

Another problem can happen when you copy and paste from a text file. You insert a <codeblock> then paste the code that you copied from the source file. It looks reasonable (except for the “end of paragraph” symbols), and you save the file and move on.
[Image: after-paste]

The next time you open the file, the code is all messed up; it’s essentially all on one line that may wrap down the page. Not at all what you had intended!
[Image: reopen-after-paste]

In order to properly format a <codeblock>, you need to end each line with a “forced return” (also known as a “soft return” or “line break”); press SHIFT+ENTER to get a new line. If you do this, your lines will be properly formatted and will remain as a single block of code.

The “trick” to pasting in content from another file is as follows:

  1. Insert the <codeblock> element, then place the insertion point inside the element and paste the code.
  2. Open the Find dialog and select the <codeblock> element.
  3. Enter “\p” as the Find text, and enter “\r” as the Change text.
  4. Select “Look in: Selection” and choose Change All.

[Image: before-replace]

This is what you’ll see after replacing the end of paragraph (Pilcrow) symbols with forced returns.
[Image: after-replace]

Does this code have tabs? It’s typically a good idea to replace tabs with spaces (not required, but a good practice). Use the same process to replace all tabs with 4 spaces (or whatever makes sense for your code).

  1. Open the Find dialog and select the <codeblock> element.
  2. Enter “\t” as the Find text, and enter “␣␣␣␣” (4 spaces) as the Change text.
  3. Select “Look in: Selection” and choose Change All.

Of course .. if you’re using FM11 or FM12, you can always just switch to the XML Code View and paste the code directly in there! As long as the code doesn’t include angle brackets (which you’d need to manually replace with the &lt; and &gt; entities) or tabs (which you’d need to replace with spaces), this option works great!

(Be aware that in FM11, the Code View feature has a problem with indented code lines. It’s likely that any indents will be lost if you flip between WYSIWYG and Code View. FM12 has fixed this problem.)

The techniques above work equally well for the other “preformatted” elements, <pre>, <msgblock>, and <screen>.

That all works fine with DITA 1.1 files, but if you’re using DITA 1.2, you can take advantage of the <coderef> element! A <coderef> is just what it sounds like: a reference to a code file. Essentially it’s a “conref” to a non-DITA file used for code samples. A <coderef> must be inserted into a <codeblock>, and if needed you can include multiple <coderef> elements in a <codeblock> or mix hard-coded text with a <coderef>; it’s up to you.
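For example, a <codeblock> that mixes a line of hard-coded text with a reference to an external file (the file name here is just a placeholder) might look like this:

<codeblock>// From the sample project:
<coderef href="samples/build-settings.ini"/></codeblock>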

The default FrameMaker DITA support (in FM11 and FM12) doesn’t properly handle <coderef> elements; hopefully that will be fixed in an update to FM12. But if you are using DITA-FMx 2.0, you can use this feature. See the blog post Using Coderefs in DITA-FMx for details.

If you’re interested in the possibility of using syntax-based formatting in your code samples in FrameMaker, watch for the CodeFormatter plugin from Leximation. It’s currently in beta testing, but will be available soon. This is intended as a “publishing plugin” for the DITA-FMx book-build process (the formatting is applied when the book is built), but can be used as a stand-alone command or called from other scripting, so it can be used even if you’re not using DITA-FMx.

Using Coderefs in DITA-FMx

One of the possibly more under-appreciated features of DITA 1.2 is the <coderef> element. A <coderef> lets you include the content of a non-DITA code file within a <codeblock> element. Using a <coderef> can help to ensure the accuracy of your documentation by including real working code samples.

DITA-FMx 2.0 provides complete support for the <coderef> element. (Default FM-DITA does not support this feature.) This video provides a quick (2.5 minute) tour of authoring and publishing. For publishing, it demonstrates the use of the CodeFormatter plugin for providing syntax-based formatting of code samples in DITA-FMx.

» Link to video on YouTube.

Running a FrameScript Script from DITA-FMx

As a follow-up to Custom scripting in the DITA-FMx book-build process (which focuses on using ExtendScript), I asked Rick Quatro (FrameScripter extraordinaire) if he could provide some detailed information on setting up a FrameScript for use in the DITA-FMx book-build process.

The following is a repost (with permission) of that article (also available on Rick’s website).


DITA-FMx is a great plugin for working with DITA in FrameMaker. One of its best features is the ability to create a complete FrameMaker book from a ditamap. In some situations you may want to run a script on the book before creating the PDF. Scott Prentice, the developer of DITA-FMx, has a blog post explaining how you can call an ExtendScript script from DITA-FMx. This article will show you how to call a FrameScript script from DITA-FMx.

To set this up in DITA-FMx, you will need to edit the BookBuildOverrides section of the book-build INI file that you are using with DITA-FMx. Here are the three keys that need to be edited:

[BookBuildOverrides]
...
RunScript=1
ScriptName=fsl
ScriptArgs=myEventScript

RunScript is a 0 or 1 value. Setting it to 1 tells DITA-FMx that you want to run one or more scripts or FDK clients. For FrameScript, the ScriptName value is fsl. The ScriptArgs value is the name of the installed FrameScript “event script” that you want to run.

Before we go further, let me give a little background on FrameScript scripts. FrameScript has two kinds of scripts: Standard scripts and Event scripts. A Standard script can consist of functions, subroutines, and events, but it always has an entry point that is not inside of a function, subroutine, etc. Typically, you “run” a Standard script: it loads into memory, runs to completion, then is flushed from memory.

Event scripts are not run directly; they are “installed” first and then “wait” for some kind of event to happen; for example, a menu command, a FrameMaker event, an outside call, etc. All of the code in an event script must be inside of a function, subroutine, or event. The entry point for an event script is some kind of an event inside of the script. One point that is pertinent to this post is that an installed Event script has a name, and this name is the value you use for the ScriptArgs key.

Instead of installing your event script manually, it is best to install it with an Initial Script, which runs automatically whenever you start FrameMaker. That way, your event script is always installed and waiting when FrameMaker starts. Here is an example Initial Script:

// InitialScript.fsl Version 0.1b August 26, 2013

Local sPath('');

// Get the path to this script.
Set sPath = eSys.ExtractPathName{ThisScript};

// Install the event script that will receive the DITA-FMx call.
Install Script File(sPath+'Script1.fsl') Name('myEventScript');

This command will install the script “Script1.fsl” that is in the same folder as the Initial Script (InitialScript.fsl). The important parameter on the Install Script command is Name; the name supplied must match the name you give to the ScriptArgs key in DITA-FMx’s book-build INI file. Here we are using myEventScript.

To run this script automatically whenever FrameMaker starts, choose FrameScript > Options and click the Browse button next to the Initial Script Name field. Navigate to the InitialScript.fsl file and select it, and then click Save to close the Options dialog box.

Before you quit and restart FrameMaker, you will need to have Script1.fsl in the same folder as the InitialScript.fsl file. Here is a shell you can use for this script:

// Script1.fsl Version 0.1b August 26, 2013

Event NoteClientCall
//	
If ActiveBook
  Set iResult = ProcessBook{ActiveBook};
EndIf
//
EndEvent

Function ProcessBook oBook
//
//... Do something with the book here ...
//
EndFunc

The NoteClientCall event is a built-in event that “waits” for the outside call; in this case, from DITA-FMx. We test to make sure that there is an active book, which should be the book that DITA-FMx just created from the ditamap. If there is an active book, we call the ProcessBook function, which is where we process our book with FrameScript code. We could have our book code right in the ProcessBook function, or we could use this function to call other scripts or functions.

Please let me [Rick] know if you have any questions or comments about calling FrameScript scripts with DITA-FMx.


Thanks Rick! I’m sure that this article will be a big help to those who want to refine the DITA-FMx ditamap to PDF publishing process with FrameScript. Do contact Rick Quatro at FrameAutomation.com with any FrameScript questions or scripting needs.


Custom scripting in the DITA-FMx book-build process

One of the popular features in DITA-FMx is its ability to generate a PDF-ready FrameMaker book and files from a DITA map. The idea is that these files are actually PDF-ready .. one click to build the book, quick review (optional), then save to PDF. Because you can define all of the setup and publishing properties in a book-build INI (ditafmx-bookbuild.ini to be precise) that is saved with each book file, you’re able to reliably and accurately regenerate the FM files for each DITA map without worrying about getting the settings right each time. This saves time and ensures accuracy and consistency.

One feature of the book-build process is the option to include custom processing in the publishing process. DITA-FMx provides many useful publishing features, but there’s always the need to perform some additional processing to achieve formatting that is specific to your needs.

The Run Custom Script option provides the ability to run one or more FDK client (plugin) commands, ExtendScripts, or FrameScripts at the end of the default book-build process so you can perform whatever additional processing you need. This eliminates the need to make any manual tweaks to the generated files and ensures that your output always looks the same.

If you want the same script to run for all book-builds, you can set this option through the DITA-FMx > DITA Options dialog. In the Options dialog choose the Book Builds button and on the Book Build Settings dialog you’ll see the Run Custom Script option.

[Image: fmx-custom-script]

If you’re running an ExtendScript, enter “ScriptingSupport” in the Client field, and enter the full path and file name of the script (using forward slashes as directory delimiters) in the Args field (for FrameScript, use “fsl” in the Client field).

Yes .. that’s the easy part. I suppose you want to know how to create a script?

Here’s an example of an ExtendScript that applies a background color to all elements in your files that have the @status attribute set to “changed” or “new”. You might not want to use this for the final PDF, but it’s a nice way to flag new and changed content for your reviewers.

Just copy this code to a new file named tag-changes.jsx (or whatever you’d like), and place that file in a known location on your file system. It doesn’t really matter where you put it; you just need the full path and file name for the “Args” field in the Run Custom Script settings. (You can download the file from here .. tag-changes.jsx.)

/* 
 * ExtendScript to set a background color on elements 
 * with @status='changed' or @status='new' in all 
 * files in book 
 *
 */
Console("Running ExtendScript on book."); 
// get the active book
var book = app.ActiveBook;
// get the first component in that book
var comp = book.FirstComponentInBook;
// iterate over all components
while (comp.id) 
{
  // get the component name (file name)
  var compName = comp.Name;
  // open that file (ok if already open)
  var doc = SimpleOpen (compName, false);
  // print a message to the console
  Console("Processing: "+compName); 
  // call our function to perform the processing
  fnProcessDoc(doc);
  // get the next component and loop back
  comp = comp.NextComponentInBook;
}

// function that starts the processing of a document
function fnProcessDoc(doc)
{
  // get the root element in the file
  var rootElem = doc.MainFlowInDoc.HighestLevelElement;
  // run function to process all elements in the file 
  // .. (if it's structured)
  if (rootElem.id) {
    fnProcessElements(doc, rootElem);
  }
}

// iterative function to walk through all elements 
function fnProcessElements(doc, elem)
{
  // get the value of the status attribute
  var statusVal = fnGetAttributeValue(elem, "status");
  // check if value is "new" or "changed"
  if ((statusVal == "new") || 
      (statusVal == "changed")) 
  {
    // set the background color to "Green"
    fnSetBackgroundColor(doc, elem, "Green") ;
  }
  // scan for more elements
  var child = elem.FirstChildElement;
  while (child.id) {
    fnProcessElements(doc, child);
    child = child.NextSiblingElement;
  }
}

// function to get the specified attribute value
function fnGetAttributeValue(elem, attrName)
{
  var attrVal = "";
  var attrs = elem.GetAttributes();
  for (n=0; n<attrs.len; n++) {
    if (attrs[n].name == attrName) {
      attrVal = attrs[n].values;
      break;
    }
  }
  return attrVal;
}

// function to set the background color
function fnSetBackgroundColor(doc, elem, colorName) 
{
  var color = doc.GetNamedColor(colorName);
  // check that the color is valid
  if (color.ObjectValid()) {
    var setVal = new TypedVal();
    setVal.valType = Constants.FT_Integer;
    setVal.ival = true;
    doc.SetTextVal(elem.TextRange, 
        Constants.FP_UseBkColor, setVal);
    setVal.valType = Constants.FT_Id;
    setVal.obj = color;
    doc.SetTextVal(elem.TextRange, 
        Constants.FP_BkColor, setVal);
  }
  else {
    Err("Invalid color: " + colorName + "\n");
  }
}

If you’re using a book-build INI file, you can add these settings in the BookBuildOverrides section:

[BookBuildOverrides]
...
RunScript=1
ScriptName=ScriptingSupport
ScriptArgs=C:/publishing/scripts/tag-changes.jsx

When you run the DITA-FMx > Generate Book from Map command, this script will run and your files will be processed accordingly.

If you want to run multiple scripts (or different types of scripts, ExtendScript + FDK plugin), you just use the vertical bar as a delimiter in the ScriptName and ScriptArgs values (be sure to start with a vertical bar). The following example shows how to run the tag-changes script followed by the StructureSnippets plugin.

[BookBuildOverrides]
...
RunScript=1
ScriptName=| ScriptingSupport | Pubs-Tools:StructureSnippets
ScriptArgs=| C:/scripts/tag-changes.jsx | RUNSNIPPETSCRIPT

For FDK plugins, the ScriptName is the “client name” as defined in the maker.ini file, and the ScriptArgs value will vary depending on how that plugin was set up.

The Run Custom Script feature lets you tailor the way FM files are generated from a DITA map to exactly suit your needs. Don’t let anyone tell you that PDFs from DITA have to be boring or ugly!

Have UI designers forgotten about usability?

I’m seeing a disturbing trend. It’s been happening for some time now but seems to be getting more and more prevalent. User interface designers appear to be focusing solely on design, with usability taking a back seat. This isn’t a new battle; the question of form follows function vs. function follows form was debated long before computers existed. Just to be clear, I’m squarely in the form follows function camp.

It does feel to me that the world of user interface design has reverted to the mentality of the early days of web design. You remember, the <blink> tag and animated GIFs? People did things “just because you can,” not because it served any useful purpose. More and more applications have decided that it’s far more important to have a “cool” interface than one that might be familiar or easy to use.

It used to be that once you understood the basics of how an application’s UI was set up on your particular operating system (Windows, Mac, Linux, etc.), you had a substantial leg up when learning how to use other applications. You could rely on basic windowing features acting in a known and expected manner. You could rely on the application having a menu bar in a consistent location on the screen that functioned in a consistent way and offered basic functionality that operated in a consistent manner. In many cases, that consistent functionality might account for 50% or more of the primary features you used in any given application.

This not only benefited the end user in being able to accomplish a given task quickly and accurately, but also allowed documentation developers and trainers to make certain assumptions about the knowledge of their audience. You didn’t have to explain how to Print or how to perform basic file operations like Save, Open, Close, and so on.

Application developers actually tried to follow the user interface guidelines put forth by the operating system developers. There was even the concept of certifying that your application actually did follow these guidelines (as well as other lower-level requirements).

This all seems to have gone out the window. Much of this happens now with the rise of browser-based applications, where you’re really starting from scratch with an application’s interface. But many native OS (desktop) applications have taken to the practice of “skinning” their applications to create a different interface than the one you’d get by default from the operating system.

Everyone seems to think they know a “better” way to interact with an application.

Application developers: remember that you’re not creating a game (well, unless you are, in which case you’re off the hook). It’s likely that someone uses your application as a productivity tool (this would presumably be something that you would strive for). It could be that people use this application for 8 hours a day (or more), and their livelihood depends on it. Users don’t care that the application is “pretty” or “cool,” they just want it to work properly so they can get their job done.

A cool UI doesn’t actually benefit anyone (except possibly the designer’s portfolio). Users typically don’t like it, and developers lose money in two ways: one, you have to pay to develop the cool design, and two, fewer users will upgrade. I believe that the main reason people don’t upgrade to the latest version of an application is not the cost of the upgrade, but the expense related to the time it takes to learn a new UI; above all else, people just want to get their job done.

When the developer of a desktop application applies a “skin” or otherwise modifies the UI so that it doesn’t follow the default operating system UI, that tells me that the developer places their coolness factor over my ability to get my job done. This strikes me as complete arrogance and disregard for my needs as a customer and user. In addition to providing familiarity and consistency, adhering to the operating system UI means that any custom colors and font properties defined by the user will (should) be applied to the application UI as well. That means less cost to the developer and better functionality for the user; where’s the problem with that?

The purpose of a UI is to provide a language to allow an interaction between the human user and the virtual computer program. When the application uses the same language as that defined by the operating system, the human user is more likely to be able to communicate successfully because they are more likely to know that language. When one application tries to redefine that language, it only complicates that communication.

Different types of devices (phone, tablet, refrigerator) will define new ways to communicate with the underlying applications. That’s fine, new device, new language. But the applications that run on that device should follow the guidelines established by the operating system on the device, not the whims of a “creative” UI designer.

Browser-based applications are in a bit of a tough spot. Now the browser (an application running on some host operating system) becomes the “operating system” for applications that are running on it. To some degree, most browsers do pass on the default styling of widgets based on the underlying OS, but because these applications are coded in HTML, each developer really has to reinvent the UI for their app. More often than not, these UI designers seem to think that this gives them the right to go wild and come up with some completely new UI paradigm. However, most of the browser apps that are successful take their lead from well-established UI design practices that employ widgets that emulate physical (mechanical) objects like buttons, folders, tabs, and the like. When I am introduced to a new application that has none of these familiar features, I’m forced to feel my way around the interface, waiting for things to pop up or glow as the cursor moves across the screen. This is not an efficient or effective way to learn a new tool, and I’m typically inclined to find another application with a more stable UI.

Often when a developer decides to upgrade their application’s UI, it means they have run out of useful new features to add. I strongly believe that the best “new feature” any application can add is to fix all of the bugs. That’s all. If the only new feature were that all of the known bugs had been fixed, you’d see more upgrades than ever before. I know it’s not sexy or exciting, but that’s what is important to the people who actually make their living using your application.

When a developer “reimagines” the way humans should interact with a computer program, this not only complicates the lives of the intended users of that application, but also the lives of those whose job it is to document and teach it. We have reasonably well understood names for traditional UI widgets (button, dialog, window, folder), but these names are typically not available for new widgets, and if they are, they most certainly won’t be understood by the new user. First, the technical writer can no longer make assumptions about how much the user knows; it’s unlikely that they will know anything, so more has to be documented. Second, the act of describing and referring to these somewhat amorphous objects (glowing text in the upper left, but down a bit from the top) is a challenge, if not impossible.

Don’t get me wrong, I can appreciate a well-designed and efficient user interface, even if it doesn’t adhere to the operating system standards. Just be sure that when you do create a new way of interacting with an application, you’re doing it for a reason, and not just because you want your application to look “special.” Also, try to maintain an understanding of who your customers are and what they do with your application. Do they really need the UI you’re developing, or would something simpler serve their needs better?

Sorry, this post has gone on quite far enough, and has turned into a bit of a rant. It’s just something that has been bothering me lately, and based on what I’ve seen on mailing lists and other forms of communication, it’s bothering others as well. I’d love to hear your thoughts.

EPUB3 XHTML5 is invalid in oXygen?

oXygen (14) is a great tool for working with and cleaning up EPUB files. You can point it at an EPUB file, and the Archive Viewer will show you the contents of that EPUB, letting you edit the files within. You can also validate the EPUB, with each error in the report linked directly to the problem location. Very handy for making those necessary tweaks or updates after creating an EPUB with some other tool.

But I recently ran into a little annoyance while working with EPUB3 files. The content and navigation files in an EPUB3 use XHTML5 as the markup format. When editing these files, oXygen shows them as invalid .. it doesn’t seem to recognize them as XHTML5.

In XHTML5, the DOCTYPE declaration is optional (yes .. that does bother me, but that’s the way it is). If the DOCTYPE declaration is included, the files validate as you’d expect. I was hoping that oXygen would recognize the fact that I’m working on an EPUB3 and know that the files inside would be XHTML5.
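For reference, here’s what a minimal EPUB3 content document looks like with the optional DOCTYPE included (a simplified sketch; the title and content are placeholders):

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html>
<html xmlns="http://www.w3.org/1999/xhtml" xmlns:epub="http://www.idpf.org/2007/ops">
  <head>
    <title>Chapter 1</title>
  </head>
  <body>
    <h1>Chapter 1</h1>
    <p>Chapter content goes here.</p>
  </body>
</html>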

I reported this to oXygen product support and received a quick reply that they’d look into it. A few days later I was told that a fix would be in a future version, but they also provided a workaround .. which I thought I’d share.

Here’s what you’ll see if you open a DOCTYPE-less XHTML5 file in oXygen.

Here’s what to do to make these files validate properly.

1) In oXygen, select Options > Preferences > Document Type Association.

2) Select the XHTML entry in the Document Type association list, then choose Edit.

3) In the Document Type dialog, select the Schema “tab,” change Schema Type to “NVDL,” and set the Schema URI to “${framework}/xhtml5 (epub3)/epub-xhtml-30.nvdl”.

4) Choose OK in the Document Type dialog, and OK in Preferences. Close and reopen your XHTML5 file (not the whole EPUB, just the content file), and it’ll now validate as expected!

Thanks oXygen!

FrameMaker 11 Review: XML and Structured Authoring

Despite the fact that FrameMaker was one of the first authoring tools to support structured authoring (FrameBuilder and SGML in 1991, then FrameMaker and XML in 2002), it is still seen by many as purely a book authoring and publishing tool. While it excels at long document production and provides excellent page layout and formatting capabilities, it also offers a robust XML authoring interface and related features. However, in the XML community, it’s generally not taken seriously as an XML editor. The release of FrameMaker 11 may start to change this perspective.

One of the first things that people may say when asked about FrameMaker and XML editing is, “you can’t see the angle brackets, so it’s not an XML editor!” Well, with FrameMaker 11, you now have a real Code view authoring option. This isn’t just a simple text editor; it’s a real XML code editor with many of the productivity features you’d expect in a professional XML editor.

FrameMaker 11 is packed with lots of features, for both structured and unstructured authoring. In this review, I’ll just be focusing on the structured authoring and XML features. I was involved in the FrameMaker 11 prerelease testing, and have had the opportunity to offer suggestions and report bugs during the development cycle. I’m quite pleased with the way this release has turned out, although I am one of those people who always wishes that the “bake time” was just a bit longer.

Performance is always on the list of new features. Adobe claims that DITA maps and topics open faster in FM11. To be honest, with my working files I’m not seeing a significant change from FM10, although we’re talking about an open time of a few seconds, even for large maps, so I wouldn’t expect much improvement there. In tests of extreme cases of element use, I do start to see some significant improvement with FM11. I’m seeing a decrease in file open times of up to 92%; that’s a pretty significant change. One document (100 pages with very dense use of inline elements) took over 7 minutes to open in FM10, and only 36 seconds in FM11. Wow. While this test may not represent real-life content, it does show that some serious work was done to speed things up.

Smart Paste is a new feature that lets you copy and paste from HTML or other rich text sources like Word, and automatically have the styles converted into your structured model. FM11 comes pre-configured to support Smart Paste into DITA, but you can customize this feature to work with any data model. This will be a huge time savings when migrating existing content. The resulting element structure may not be exactly what you want, but it’s much better than the old way of copy and paste. I think that this will be a very popular feature.

Banner Text is a new feature that provides clickable label text in newly inserted elements to help guide the author as to the type of content to place in an element. Values such as “title text” or “list item” are defined in the EDD, and you can change them to suit your needs. When combined with the ability to set up boilerplate (or “straw man”) content through the element auto-insertion feature in the EDD, this makes it much easier to start writing a new topic and should help people get more comfortable with structured authoring. The default DITA templates are set up this way.

Authoring of structured documents (and XML files) has been made even easier through some new keyboard shortcuts. Pressing CTRL+1 displays a popup list of all valid elements at the current insertion point. Not only can you easily select an element to insert, but this list also lets you select multiple levels (as deep as the model supports) of child elements so that with one click you can insert many levels of nested elements. CTRL+2 does the same thing but lets you wrap the current selection with the selected structure. CTRL+3 provides a change option.  CTRL+7 gives you quick access to setting an attribute value. I love the ability to insert multiple elements with one click!

Another great new keyboard shortcut displays an inline attribute editor. Use Esc,i,a,e to pop up a small window that allows selection and entry of attribute values at the current insertion point. Yes, that’s a bit of a long “shortcut,” but you’ll get used to it!

Now you can easily author XML files without creating a structure application. One common complaint about using FrameMaker as an XML editor was the belief that you had to create a structure application in order to edit an XML file. Although this wasn’t strictly true, it wasn’t entirely obvious how to work with XML files without a structure application. Now one of the New File options is “Empty XML,” which is just that, an XML file that isn’t linked to a DTD or structure application. You can quickly create an XML file of any arbitrary model, or one that’s linked to a DTD for validation, but no structure application is required. You can also open up and edit any XML file, as you’d expect from an XML editor.

Whitespace handling has been greatly improved. Because the Code view mode includes a pretty printing option, the system is designed to handle pretty printed content without trouble. In the past it was often problematic when you were working in a mixed editor environment, and some people pretty printed their content. This should no longer be a problem!

FrameMaker adds two new authoring views in addition to the standard WYSIWYG view. You can now work in “Author mode,” a stripped down WYSIWYG view, which lets you focus on the content rather than thinking about the formatting or page layout. You can also work in “Code view,” which provides access to the underlying XML coding if you prefer to see the “real” XML under the hood.

Code View!

Providing a Code view option in FrameMaker is huge. It’s definitely not something that all FrameMaker users will use, but there are times that you really do need to modify the underlying XML coding, and this provides you with that option. There are people who are more comfortable editing code directly, and this lets them do that right inside of FrameMaker. No need to use another XML editor just because you want access to the XML coding. What’s nice is that FrameMaker remains in the authoring mode you last used. So if you’re in Code view, and you quit for the day, when you come back the next day and open an XML document, it’ll start off in Code view.

Code view has all of the productivity enhancements that you’d expect in an XML editor. Standard features like line numbering, syntax coloring, and code folding make it very easy to work with large documents. Also, as you type, you are presented with auto-suggest popups for element and attribute names, and closing element tags are added automatically. Because of the runtime XML validation, any invalid coding is indicated with a squiggly underline as well as being reported in the Errors panel, if that is enabled (View > Pods > Errors).

Code view also offers an XML Tree view pod. This is a simplified version of the standard Structure view pod available in the WYSIWYG view, but it’s really just used for easy viewing and selection of element groups. Unfortunately, you can’t use this view to move elements; maybe in an update?

Code view also provides an XPath parser to allow flexible searching in the current file, all files in a map, or all files in a folder. This will greatly simplify the process of locating content that you know is somewhere on the file system. Once authors get comfortable with basic XPath syntax, this will be very popular. It would be great to see this method of searching for files available in the other authoring views, not just in Code view. One caveat (which will hopefully be fixed in an update): if you perform an XPath search on a folder, you’ll get errors for files in that folder that are not parsable, such as FM binary files. Not a big deal, just a little annoying, since you’d expect it to just skip those files silently.
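For example, expressions as simple as these (illustrative only) can be run across an entire map or folder:

//*[@status = 'changed']              (find all elements flagged as changed)
//codeblock[contains(., 'TODO')]      (find code samples containing "TODO")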

In addition to XPath support, Code view also provides the ability to run an XSLT transformation on the current file, all open files, or all files in a folder. You can create XSL scripts that perform various operations, and just pop over to Code view to run the script as needed. Granted, not all FrameMaker users will be writing their own XSLT scripts, but once they start to see the power behind it, more people will give it a try.

Yes, there are a couple of problems in Code view:

Preformatted content can run into trouble when passed through the Code view. If you’re working with a data model that includes elements like a <codeblock> or <pre>, the content in those elements should not be indented via the pretty printing process. Unfortunately, in FM11 it currently is, which will likely cause formatting problems in your output. I’m hoping that this is something that will be addressed in an early update, but until it is you may want to avoid opening files in Code view that contain preformatted elements.

Another problem with Code view affects older systems that have a single processor core (or virtual machines with only one core assigned). If you’re in one of these groups, you can pretty much forget about using Code view in any useful manner. Apparently the XML validation requires two processor cores, so systems with one will notice a serious lag between keypresses. This can be mitigated by selecting “No Application” for the structured application name, but that’s not really a good solution in the long run. While this should only affect a small number of people, it’s pretty serious if you’re one of those people. I’m personally hoping that this is at the top of the “fix” list.

DITA Support

As with the previous release, FrameMaker 11 continues to support the DITA 1.2 specification (as well as DITA 1.1). A number of fixes have been made that make it easier to work with FrameMaker in a mixed editor environment and issues have been resolved that created potentially non-compliant DITA topics.

URI notation (for href, conref, etc. attributes) now properly uses the forward slash as the directory delimiter instead of the backslash. A setting in the Options dialog controls this feature. Make sure that “URI Notation for Paths” is selected, and your files will be compliant in this regard.
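In other words, compliant references use forward slashes in the path (the file names here are hypothetical):

<!-- compliant URI notation -->
<xref href="../concepts/widget-overview.dita"/>
<!-- non-compliant Windows path notation -->
<xref href="..\concepts\widget-overview.dita"/>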

Cross-reference format names are now stored in the xref/@outputclass attribute rather than the @type attribute, making this feature compliant with the DITA specification. When opening topic files created in earlier releases of FM, your format names will be migrated to the proper attribute. However, note that migrated topics will contain both attributes going forward unless you do something to strip out the @type values. It seems like you’d want the invalid values removed from the @type attribute (so it could be set to the value of the target topic type), but perhaps this will make for an easier transition from earlier versions of FM since it will continue to work properly in both environments. If you are migrating to FM11 and not planning to go back to editing topics in earlier versions of FM, you should set up a script of some kind to strip out the old @type values. (This might be a good use of the new XSLT transformation feature!)
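As a starting point, here’s a minimal XSLT sketch for that cleanup (it assumes you simply want to drop @type from every <xref>; add conditions if you need to preserve legitimate topic-type values):

<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- copy everything through unchanged -->
  <xsl:template match="@*|node()">
    <xsl:copy>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>
  <!-- drop the @type attribute from xref elements -->
  <xsl:template match="xref/@type"/>
</xsl:stylesheet>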

By default, maps open in the Resource Manager, which makes for a very nice map editor. While this is a very useful and efficient view for most map editing operations, such as inserting and arranging topics, it does not yet allow for editing of relationship tables. You’ll need to switch to Document view for that (same as in FM10). Also, the cool new keyboard shortcuts for working with elements and attributes (described earlier) don’t seem to work in the Resource Manager. When you do switch your map to Document view, you’ll see an odd-looking structure with your topicref titles shown twice, once for the clickable label and once for the navtitle element. It would be nice to get this cleaned up at some point.

Unfortunately, frontmatter and backmatter elements are not supported in FM11. When you try to insert a <toc> or <indexlist> element in a bookmap, you’ll be required to specify a target file. Because these elements represent generated lists, there’s typically no reason to specify a target file. Add this to the list of things I hope are fixed, sooner rather than later.
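For reference, this is all the markup a bookmap should need in order to request those generated lists (per the DITA 1.2 bookmap model, no target files required):

<frontmatter>
  <booklists>
    <toc/>
  </booklists>
</frontmatter>
<!-- chapter elements go here -->
<backmatter>
  <booklists>
    <indexlist/>
  </booklists>
</backmatter>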

DITA map publishing has been greatly enhanced. Now when you save a DITA map to a book with components, the resulting files will have the pagination and numbering applied and any generated files will be included (TOC, Index, etc.). These properties are defined in the ditafm-output.ini file (found in %appdata%\Adobe\FrameMaker\11). This INI file also lets you specify that the “chapters” are aggregated FM files rather than multiple FM files as nested topics (GenerateFlatBook=1), as well as other options for controlling the output. The enabling of generated lists is done through this INI file as well, which is a bit odd, since the DITA bookmap structure provides a perfectly nice model for this purpose. In fact, if your bookmap does specify frontmatter and backmatter, this will be ignored, and only the lists defined in the INI file are used.

I think that the enhanced map publishing is a big step in the right direction, but it falls short of being as useful as it could be. For one, there’s only one INI file. This assumes that all of your books use the exact same properties and generated lists, which is likely not the case. Also, it’s a shame that the generated lists aren’t enabled by the existence of the associated frontmatter or backmatter elements in the map being processed. Finally, it would be really nice if this INI file were to some degree exposed through a user interface; frequently editing an INI file is a bit cumbersome.

Summary

In my opinion, FrameMaker can now be considered a full-featured XML editor. That, combined with its ability to perform impeccable page layout, makes it an ideal authoring and publishing tool for all types of documents and workflows. If I could have only one editor, FrameMaker would certainly be that tool. If you’re an existing FrameMaker user, you should definitely download the trial and give this new release a test drive. If you’re in the market for an XML editor, don’t rule out FrameMaker before you try it; compare the features and benefits of FrameMaker with other XML editors, and you may see that it’s quite a worthy competitor.

 

PDFs from DITA with FMx-Auto, oXygen, and Ant

If you’re not satisfied with your current DITA to PDF workflow, you might consider FMx-Auto. FMx-Auto enables automated (scripted) publishing of a DITA map through FrameMaker and DITA-FMx. This can be integrated into most publishing workflows on the desktop or server. If you’re interested in getting high-quality PDFs from your DITA content, like you used to get from unstructured FrameMaker, this may be the solution you’re looking for.

As an example of using FMx-Auto for publishing to PDF from a popular XML editor, I set up an Ant script that creates the necessary AutoFM script from the current DITA map file name, then passes that to FrameMaker. The AutoFM script instructs FrameMaker to use the DITA-FMx Map-to-Book process to create a book and chapter files from the DITA map. The book is then saved to PDF.

This Ant script can be modified to suit your needs and can be used by other tools and publishing workflows as an integration option for publishing to PDF from DITA through FrameMaker. FMx-Auto supports FrameMaker versions 7.2 through 10, so even if you’re not using Frame for authoring, you can still use it for publishing.


[Embedded video demo: best viewed at high resolution]
