TEA(n) 0.2 "TEA"

Name

TEA - TEA documentation

Table Of Contents

Description

The Tcl Extension Architecture (TEA) is meant to help developers set up a standardised build environment, so that any user can compile an extension without detailed knowledge of how it is put together. This saves a lot of work. This document describes the various aspects of TEA in detail.

Chapter 1. OVERVIEW

TEA relies heavily on the GNU tool Autoconf. An intimate knowledge of this tool is, fortunately, not required, but for complicated extensions that rely on many things specific to a particular platform, it may be necessary to add your own checks and procedures to the existing TEA macro library. The structure of this document is as follows: Chapter 2. DESIGN AND CODING describes the typical organisation of an extension into files and directories.

Chapter 3. RECOMMENDED CODING STYLE holds information about what you should and should not do when coding an extension.

Chapter 4. TCL PACKAGES highlights the package mechanism that is used by Tcl, whereas Chapter 5. TCL STUBS explains the stubs mechanism, important for creating compiled extensions that are independent of the particular Tcl version.

Chapter 6. CONFIGURE AND MAKE FILES is perhaps the most important chapter, as this describes how to create the input for the autoconf tool.

The subjects of Chapter 7. WRITING AND RUNNING TESTS and Chapter 8. DOCUMENTATION may not be among most programmers' favourites, but they are very important to users. And everybody at some point is a user!

Appendix A. Explanation of make files and the make utility is meant especially for those programmers not familiar with make files, because their development environment shields the complexities from them.

Chapter 2. DESIGN AND CODING

Tcl extensions

In this document a Tcl extension is simply a collection of Tcl commands that can be distributed separately: an extension can be used in any application that wants to use the functionality it offers.

Well-known examples of extensions are:

Each of the four extensions we described exemplifies a category of extensions:

Note:

We have talked about C code, not because it is the only possibility, but because it is the most commonly used compiled language to extend Tcl. It is quite possible to use C++ or Fortran to build libraries that can be used from Tcl scripts. However, we will not go into these matters; they are beyond the scope of this document.

Note:

It is somewhat confusing that we use the words packages, extensions and modules to mean the same thing: Tcl commands or procedures that can be added to an application from some generally useable collection. The words each indicate a slightly different viewpoint:

Actually, all these terms can be used interchangeably and rather than invent a new word to replace them all, we will simply use all three terms, depending on the context.

Let us examine each of the above types of extensions in more detail in the next few sections.

Tcl-only extensions

If you have a collection of Tcl scripts that you would like to distribute as a whole to others, then the situation is easy:

Well, things are a bit more complex than that. Let us have a look again at the msgcat extension as it is present in every Tcl installation:

(*) Note the uppercase "I" in this name: under Windows file names may be case-insensitive, but this is not so under UNIX/Linux and many other operating systems. So, please take note of the precise spelling of file names.

Here is an excerpt of the directory that holds the extension itself (we used Tcl8.4 for this):

   lib/
      tcl8.4/
         msgcat1.3/
            msgcat.tcl
            pkgIndex.tcl

In the source distribution, we can also find the file "msgcat.test" (containing the test cases) in the directory "tests" and the documentation in the file "msgcat.man".

The number "1.3" at the end of the directory name is no coincidence: it is the version number of the msgcat package. This way you can keep various versions of a package around.
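
For example, a script can ask for the package with or without a version number. A minimal sketch (the version numbers are only illustrative):

   # Accept whatever version of msgcat is installed
   package require msgcat

   # Ask for version 1.3 or a later, compatible 1.x release
   package require msgcat 1.3

   # Insist on exactly version 1.3
   package require -exact msgcat 1.3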

In Tcllib, such a collection of files is more commonly kept together, as for the CRC module (an excerpt from the CVS repository):

   tcllib/
      modules/
         crc/
            ChangeLog           file containing short descriptions of
                                changes over time
            cksum.man           the documentation file
            cksum.tcl           the actual Tcl script that implements
                                the cksum command
            cksum.test          the test cases
            ...                 (other related sources)

There is no "pkgIndex.tcl" script, as this is created by the installation procedure for Tcllib as a whole.
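
The generated index entry is an ordinary package ifneeded command; for the cksum package it would look roughly like this (a sketch only - the actual version number and file layout are determined by the installed Tcllib release):

   package ifneeded cksum 1.0 [list source [file join $dir cksum.tcl]]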

Note:

If you are not familiar with CVS: it is a widely used source code control system, short for "Concurrent Versions System". Such systems help automate the tedious task of keeping track of changes in the source code, documentation and other things over time. This is important because, for instance, it lets you see whether and when a bug was actually solved.

Extensions that are platform-independent

A well-known extension like Tktable has to be built for all platforms that Tcl supports. Tktable itself does not have (much) code that differs from platform to platform. Yet, as the directory structure shows, it does have directories specific for the major platform categories, in the same way as do Tcl and Tk themselves:

   tktable/
      demos/                 directory with demonstration scripts
      doc/                   documentation
      generic/               the platform-independent sources
      library/               supporting scripts (in this case: key
                             bindings and such)
      mac/                   mac-specific sources (here: the
                             project-file for building on Mac)
      tclconfig/             scripts that are used while building
                             the extension
      tests/                 test scripts
      unix/                  unix-specific sources (here: nothing???)
      win/                   Windows-specific sources (here: the
                             project-file for building on Windows)
      ChangeLog              description of changes
      Makefile.in            TEA-template for the makefile
      aclocal.m4             "include" file required by Autoconf
      configure              configure script generated by Autoconf
      configure.ac           TEA-template for the configure script
      ...                    (other files, not related to the building
                             process itself)

In this overview you will see some files related to the TEA, most notably Makefile.in and configure.ac. These will be described in great detail in Chapter 6 (fortunately the details have become less and less painful with the further development of TEA).

The most important thing is that this extension uses the same directory structure as Tcl/Tk itself. This may not always seem necessary (for instance, a Windows-specific extension might do without the unix and mac directories), but by keeping this same structure, you make it easier for others to manoeuvre through the source directory: they know what to expect.

Platform-dependent extensions

Extensions that depend on the particular platform may need to deal with various OS aspects:

The directory structure described in the previous section is quite useable in this case too. The platform dependencies are expressed not only in the mac, unix and win directories, which now contain one or more source files (not just project or make files), but also in the configure.ac and Makefile.in files, which contain extra code to find the relevant OS libraries or to check for deficiencies.

Extensions exporting their own functions

Complications arise when an extension needs to export its own (compiled) functions to provide its functionality. Again, this is not so much a matter of the directory structure as it is a problem for building the extension.

In tkimg this is necessary because the photo command must have access to the functions that tkimg supplies. To do this in a way that does not tie tkimg to a particular version of Tcl/Tk, the stubs mechanism (see Chapter 5) is needed.

A simpler extension that provides its own stubs library is the backport to Tcl 8.4 of the dictionary data structure that is new in Tcl 8.5 (done by Pascal Scheffers, <http://www.scheffers.net/....>). The stubs library allows other extensions to use the functionality this dict extension provides, in much the same way as you can use the general Tcl API. We will discuss the implications in the next section and in the chapter on Tcl stubs.

Coding a C extension

Now that we have covered the directory structure an extension should use, let us have a quick look at the C source code itself. (This is not a tutorial about how to use the Tcl/Tk API to build new compiled extensions. We simply give some very basic information.)

A C extension, like the example that comes with TEA, contains an (exported) function with a specific name: Package_Init(), where "Package" is to be replaced by the name of the package (the first letter is capitalised, the rest is in lower-case). Here is the skeleton code for such a function:

   int
   Package_Init(Tcl_Interp *interp)
   {
       /* Initialise the stubs library - see chapter 5
       */
       if (Tcl_InitStubs(interp, "8.0", 0) == NULL) {
           return TCL_ERROR;
       }
       /* Require some version of Tcl, at least 8.0
       */
       if (Tcl_PkgRequire(interp, "Tcl", TCL_VERSION, 0) == NULL) {
           if (TCL_VERSION[0] == '7') {
               if (Tcl_PkgRequire(interp, "Tcl", "8.0", 0) == NULL) {
                   return TCL_ERROR;
               }
           }
       }
       /* Make the package known
       */
       if (Tcl_PkgProvide(interp, "Package", VERSION) != TCL_OK) {
           return TCL_ERROR;
       }
       /* Create all the commands:
          Tcl command "cmd1" is implemented by the function Cmd1,
          etc.
       */
       Tcl_CreateObjCommand(interp, "cmd1", Cmd1,
               (ClientData)NULL, (Tcl_CmdDeleteProc *)NULL);
       ... other commands ...
       return TCL_OK;
   }

The functions that actually implement the Tcl commands are usually static functions, so that there is no name clash with other libraries.

The structure of the functions can be anything you like, but it is usual to:

   - check the number (and type) of the arguments first
   - convert the Tcl_Obj arguments into ordinary C values
   - do the actual work
   - set the result of the interpreter and return TCL_OK or TCL_ERROR

(Code example)
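
As an illustration, here is a minimal sketch of what such a command function might look like, matching the Cmd1 function registered above (this particular function is not part of the sample extension; it merely doubles an integer argument as a stand-in for real work). It checks the number of arguments, converts the Tcl_Obj argument into a C value, does the work and sets the interpreter result. It belongs in the same source file that includes tcl.h:

   static int
   Cmd1(ClientData clientData, Tcl_Interp *interp,
        int objc, Tcl_Obj *const objv[])
   {
       int value;

       /* Check the number of arguments */
       if (objc != 2) {
           Tcl_WrongNumArgs(interp, 1, objv, "value");
           return TCL_ERROR;
       }
       /* Convert the Tcl argument into a C value */
       if (Tcl_GetIntFromObj(interp, objv[1], &value) != TCL_OK) {
           return TCL_ERROR;
       }
       /* Do the actual work (here: just double the value) and set the result */
       Tcl_SetObjResult(interp, Tcl_NewIntObj(2 * value));
       return TCL_OK;
   }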

The functions that your extension implements for public use should be properly prototyped via a header file - this is a matter of C coding style, but it also gives other people the opportunity to use your extension in their C extension.

The header file for an extension also contains the magic for building it on various platforms. To explain this magic and what you should be aware of, have a look at the header file of the sample extension:

   /*
    * exampleA.h --
    *
    *      This header file contains the function declarations needed for
    *      all of the source files in this package.
    *
    * Copyright (c) 1998-1999 Scriptics Corporation.
    *
    * See the file "license.terms" for information on usage and redistribution
    * of this file, and for a DISCLAIMER OF ALL WARRANTIES.
    *
    */
   #ifndef _EXAMPLEA
   #define _EXAMPLEA
   #include <tcl.h>
   /*
    * Windows needs to know which symbols to export.  Unix does not.
    * BUILD_exampleA should be undefined for Unix.
    */
   #ifdef BUILD_exampleA
   #undef TCL_STORAGE_CLASS
   #define TCL_STORAGE_CLASS DLLEXPORT
   #endif /* BUILD_exampleA */
   typedef struct {
       unsigned long state[5];
       unsigned long count[2];
       unsigned char buffer[64];
   } SHA1_CTX;
   void SHA1Init   _ANSI_ARGS_((SHA1_CTX* context));
   void SHA1Update _ANSI_ARGS_((SHA1_CTX* context, unsigned char* data,
                    unsigned int len));
   void SHA1Final  _ANSI_ARGS_((SHA1_CTX* context, unsigned char digest[20]));
   /*
    * Only the _Init function is exported.
    */
   EXTERN int      Examplea_Init _ANSI_ARGS_((Tcl_Interp * interp));
   #endif /* _EXAMPLEA */

Explanation:

If the extension provides its own stubs library, then:

Some recommendations

We conclude this chapter with the following general recommendations:

TODO:
   - Refer to TIP #55 and to Tcllib
   - Describe a preferred directory structure (based on TIP #55)
   - What about namespaces?

Chapter 3. RECOMMENDED CODING STYLE

We do not want to say too much about coding style, and certainly we do not want to prescribe any particular style. Just make sure for yourself that:

A very good example of coding style is Tcl itself: the code is well-documented, the layout is clean, and with a bit of study you can really understand what is going on. More textual descriptions of a recommended coding style for C and for Tcl can be found in: ....

We can add a few conventions here - almost trivial, perhaps, but since they are very often used, following them will help people understand your code better:

If you implement your extension in C, remember to use the Tcl_Obj interface: it is much faster than the old pre-8.0 interface that used strings. This means that you may need to pay quite some attention to issues like reference counts, but it is certainly worth the effort.

Chapter 4. TCL PACKAGES

NOTE: needs an update!!

The package mechanism, by which an extension is made available to a Tcl application, is both simple and daunting: simple if you can stick to the user side, daunting if you need to understand how it actually works. In this chapter we give an overview of what the average programmer needs to know about it and nothing more. (The man pages on the package command are quite extensive and they document some of the inner workings too.)

The user side

For the programmer who merely uses a particular package, there are two things he/she needs to know:

The programmer who needs, say, the Tktable extension, uses the following command:

   package require Tktable

(usually without a version number, sometimes with one)

This instructs Tcl to go look for files "pkgIndex.tcl" in the directories listed in the auto_path variable and any of their subdirectories. Usually an installed package will be found in the standard place for extensions (packages): the lib directory under the Tcl installation directory. If not, the programmer can extend the auto_path variable with the directory that does contain the required package.
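
If the package lives in a non-standard location, the script can simply add that location first. A minimal sketch (the directory name is just an example):

   # Make a privately installed package findable
   lappend auto_path /home/user/tcl/lib

   package require Tktable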

The package side

If you are creating a package, then you need to know a bit more:

   - the package provide command for Tcl-based extensions/packages
   - the Tcl_PkgProvide() function for C-based extensions/packages

Here are small examples of both:

From the msgcat extension:

   # msgcat.tcl --
   #
   #       This file defines various procedures which implement a
   #       message catalog facility for Tcl programs.  It should be
   #       loaded with the command "package require msgcat".
   #
   # Copyright (c) 1998-2000 by Ajuba Solutions.
   # Copyright (c) 1998 by Mark Harrison.
   #
   # See the file "license.terms" for information on usage and redistribution
   # of this file, and for a DISCLAIMER OF ALL WARRANTIES.
   package require Tcl 8.2
   # When the version number changes, be sure to update the pkgIndex.tcl file,
   # and the installation directory in the Makefiles.
   package provide msgcat 1.3
   namespace eval msgcat {
      ...
   }

The corresponding pkgIndex.tcl file looks like this:

   if {![package vsatisfies [package provide Tcl] 8.2]} {return}
   package ifneeded msgcat 1.3 [list source [file join $dir msgcat.tcl]]

Note the variable "$dir" in the second statement: this variable is set by the package mechanism when it is looking for the required package.

From the example extension (Sample):

/*
 *----------------------------------------------------------------------
 *
 * Sample_Init --
 *
 *      Initialize the new package.  The string "Sample" in the
 *      function name must match the PACKAGE declaration at the top of
 *      configure.ac.
 *
 * Results:
 *      A standard Tcl result
 *
 * Side effects:
 *      The Sample package is created.
 *      One new command "sha1" is added to the Tcl interpreter.
 *
 *----------------------------------------------------------------------
 */
int
Sample_Init(Tcl_Interp *interp)
{
    if (Tcl_InitStubs(interp, "8.0", 0) == NULL) {
        return TCL_ERROR;
    }
    if (Tcl_PkgRequire(interp, "Tcl", TCL_VERSION, 0) == NULL) {
        if (TCL_VERSION[0] == '7') {
            if (Tcl_PkgRequire(interp, "Tcl", "8.0", 0) == NULL) {
                return TCL_ERROR;
            }
        }
    }
    if (Tcl_PkgProvide(interp, "sample", VERSION) != TCL_OK) {
        return TCL_ERROR;
    }
    Tcl_CreateCommand(interp, "sha1", Sha1,
            (ClientData)NULL, (Tcl_CmdDeleteProc *)NULL);
    ...
}

As this extension is built via the TEA facilities, a pkgIndex.tcl file is provided automatically. Instead of simply sourcing a Tcl script, it will load the shared object or DLL that was built from the C sources, roughly like:

   package ifneeded sample 1.0 [list load [file join $dir libsample.so]]

or:

   package ifneeded sample 1.0 [list load [file join $dir sample.dll]]

The TEA will create the pkgIndex.tcl file automatically via the pkg_mkIndex command.
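
If you ever need to (re)create such an index file by hand, pkg_mkIndex can be run from any Tcl shell. A minimal sketch (the directory and the file patterns are just examples):

   # Scan the current directory for scripts and shared libraries and
   # write a pkgIndex.tcl file for the packages they provide
   pkg_mkIndex . *.tcl *.so *.dll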

Details on version numbers

The package command defines a number of administrative subcommands, like package vsatisfies, that deal with version numbers and so on. In most cases, all you need to do as an extension writer is use the well-known two-part version numbering.
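
As an illustration (the version numbers are arbitrary), versions are compared element by element, not as strings or floating-point numbers:

   package vsatisfies 1.3.2 1.3   ;# -> 1: 1.3.2 satisfies a request for 1.3
   package vsatisfies 1.2 1.3     ;# -> 0: 1.2 is too old
   package vcompare 1.3.2 1.10    ;# -> -1: 1.10 is newer than 1.3.2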

It is, however, important to be consistent with the version numbers:

For this reason, the TEA defines a C macro, VERSION, that holds the version string (you can see how it is used in the code for the sample extension). This string is built up of the major version and a combination of the minor version and the patchlevel. From the file "configure.ac" (see Chapter 6) we have:

   #--------------------------------------------------------------------
   # __CHANGE__
   # Set your package name and version numbers here.  The NODOT_VERSION is
   # required for constructing the library name on systems that don't like
   # dots in library names (Windows).  The VERSION variable is used on the
   # other systems.
   #--------------------------------------------------------------------
   PACKAGE=sample
   MAJOR_VERSION=0
   MINOR_VERSION=5
   PATCHLEVEL=
   VERSION=${MAJOR_VERSION}.${MINOR_VERSION}${PATCHLEVEL}
   NODOT_VERSION=${MAJOR_VERSION}${MINOR_VERSION}

So, the above would result in a macro VERSION with the value 0.5.

The version number is also used to construct a distinguishing name for the library, like: libsample0.5.so or sample05.dll. This way there is much less chance of a conflict between versions.

Subtleties with package names

There are a few things you must be aware of when choosing a name for the extension:

Subtleties at the C side

Things are a bit more intertwined on the C side. The command used to load a binary extension is:

   load filename

so Tcl has to deduce the name of the initialisation routine from the file name alone. The precise rules are explained in the manual page for the load command, but here are a few examples:

   Filename        Package name    Initial procedure
   foo.dll         foo             Foo_Init
   FOO.DLL         fOO             Foo_Init
   libFOO.so       fOO             Foo_Init
   libfoo1.2.so    fOo             Foo_Init

So, in the index file you might see:

   package ifneeded fOo 1.2 [list load [file join $dir libfoo1.2.so]]

A second issue is that, for the extension to be useful in a safe interpreter, you need to define an initialisation routine with a name like "Foo_SafeInit" (and possibly define the commands in another, safer, way).

The third issue that we would like to bring to your attention is that under Windows you must explicitly indicate that a function in a DLL is to be made visible (exported) to the outside world. Without this instruction the initialisation routine cannot be found, and loading the extension therefore fails.

The sample extension contains this fragment of code (sample.h) to take care of this (in general, the initialisation routine is the only one that needs to be exported - all others should be defined as static to reduce the chance of naming conflicts):

   /*
    * Windows needs to know which symbols to export.
    */
   #ifdef BUILD_sample
   #undef TCL_STORAGE_CLASS
   #define TCL_STORAGE_CLASS DLLEXPORT
   #endif /* BUILD_sample */
   ...
   /*
    * Only the _Init function is exported.
    */
   EXTERN int      Sample_Init _ANSI_ARGS_((Tcl_Interp * interp));

Chapter 5. TCL STUBS

The highly recommended way of using the Tcl and Tk libraries in a compiled extension is via stubs. Stubs allow you to compile and link an extension once and use it with any (later) version of Tcl/Tk. If you do not use stubs, the extension can only be used with the specific version of Tcl/Tk it was built against. This chapter provides some information about the advantages and disadvantages of using stubs.

It may seem intimidating at first, but the stubs mechanism in Tcl (available since version 8.1) is actually very simple - from the point of view of the programmer:

(Needless to say, most of this is automatically taken care of by the TEA.)

Here is what stubs are all about: rather than using the functions in the Tcl/Tk libraries directly, you access them via a pointer. The actual code that is involved is hidden from you via C macros, so you have nothing to worry about, except for the USE_TCL_STUBS macro and the proper initialisation. More information can be found in ...

The limitation of using stubs is that you can only use the Tcl functions that are publicly available in the stub table (see the header files tcl.h and tk.h for details). You cannot use the private functions (found in the other header files), but using those is a bad idea in the first place, because their interface may change from one release to the next - they are simply not meant for use outside the Tcl library itself.

The advantages of using stubs are many:

To summarise:

When you use the TEA, the only thing you need to take care of in your code is that the initialisation routine calls Tcl_InitStubs().

Using stubs gives benefits, both to you and to the users of your extension, that cannot be had in any other way.

Providing your own stubs

A more complicated situation arises when your extension itself defines a stubs library. This was discussed at some length in Chapter 2. The advantage is that your functions can be used at the C level too, and so form a veritable extension to the Tcl/Tk API.

In the build step this means that, besides the ordinary shared object or DLL, a stubs library must also be created. The process is almost automatic, except that you have to specify which functions are to be made available in the stubs library (via the .decls file) and you have to make some provisions in the TEA configuration and make files.

If the functions of your extension are to be registered in the Tcl or Tk library, as is the case with tkimg, which provides new image formats for the photo command, then it is necessary, or at least highly recommended, that you provide them via the stubs mechanism.

Chapter 6. CONFIGURE AND MAKE FILES

A large part of TEA is devoted to making the actual build process go as smoothly as possible; that is, people who install your extension should not need to know anything about the build process, except for a few simple commands, found and explained in the INSTALL file.

For you as an extension writer, things are a bit more involved, but again it is TEA's task to make the preparation of the build process for your specific extension as smooth and painless as possible. Nevertheless, you will have to do something:

This chapter will guide you through this process. First it gives a brief overview of the Autoconf program and its purpose. Then it describes in detail what the average extension writer has to do.

Note:

If you are not familiar with make files, read the exposition in Appendix A first. It will help you appreciate the steps you will have to take.

Note:

TEA version 3 assumes that you have Autoconf version 2.50 or later. Autoconf is only necessary when you build the configure script; it is not required by the users of your extension (see the section "Required software").

(*) For Windows programmers: a shell script can be regarded as a sophisticated DOS batch file. It has the same purpose, but because UNIX shell languages (there are several) are more powerful than DOS batch commands, you can do a lot more with them.

The purpose of Autoconf

The GNU program Autoconf, which is one of a suite of programs, is meant to create a configuration script, which then takes care of everything. To do this, it reads a file called configure.ac (or configure.in in older versions) and processes the macros it finds in there. These macros have been developed to take care of all the nasty little details that can bother a programmer. (If you familiarise yourself with the Autoconf tool, you will be able to create new macros. This is actually one of the great advantages of Autoconf.)

While Autoconf takes a template file "configure.ac", TEA provides you with a template for this template. All you need to do is:

The resulting script, called configure, will check for a lot of things:

All of these things are substituted into the template for the make file (Makefile.in) so that the user can run the make utility to create the library or program. For instance, in the template you may see:

  CFLAGS_DEBUG = @CFLAGS_DEBUG@

The left-hand side is a variable in the make file that is used to set the flags for a debug version of your extension. The configure script replaces the string "@CFLAGS_DEBUG@" by whatever is appropriate for that platform, often "-g". This is accomplished by a command AC_SUBST([CFLAGS_DEBUG]) somewhere in the configure.ac file (or in the macros it uses).

The macros used by TEA

The template files that come with TEA are very well documented, but some explanation is still required if you are not familiar with Autoconf (see the example below).

Note:

The order of the macros is important, as the wrong order can make the resulting configure script useless (in Autoconf all variables are global and sometimes the value of a variable causes erroneous macro processing).

   Example

You can use a number of other macros in your configure.ac file:

The Makefile.in template

If you study the (well-documented) make file template that comes with the sample extension, Makefile.in, you will notice that there is very little that you need to adapt. In fact, there are only two sections that require your attention:

Making the distribution

Once you have created the configure script, you, as the author of the extension, can use it to create a bundled file with all the source and other files for distribution:

If you want other files to be generated as part of the distribution (for instance, the INSTALL file may need to be adjusted to include the name and purpose of your extension), then you can do so by defining a variable specific to that file and substituting it in the configure script:

Required software

To create a working configure script, you need Autoconf 2.50 or later. Autoconf is only needed if you are developing an extension; the users of your extension only need the configure script and the make file template.

Once the configure script is created, you can run it without any additional software on a UNIX/Linux/*BSD/MacOSX system, as these systems almost always have the UNIX shells and the make utility.

For Windows systems, you will need tools such as Cygwin or MinGW/MSYS - both are ports of the UNIX tools to Windows.

See the References section for the websites where you can get this software.

Making the extension

The steps taken by any user who uses the distributed source files are:

The various steps are handled via special targets in the makefile - targets that are not connected to a specific file and are therefore always considered out of date.

So, the above procedure is carried out by running the following commands on UNIX/Linux (> is the command prompt):

  > ./configure
  > make
  > make test
  > make install

(assuming all steps are successful).

If you want to restart (some compilation error had to be solved), you would run:

  > make clean

to throw away most intermediate files, or:

  > make distclean

to start from a really clean slate.

TODO: how to use mingw/msys to accomplish this on Windows

TODO: what about Windows project files?

Chapter 7. WRITING AND RUNNING TESTS

As the developer of an extension you are probably in the best position to identify the kind of things that need to be tested:

It is quite possible to give theoretically sound guidelines for a complete test suite (complete, that is, in the sense of some test criterion, for instance that all code is executed at least once when all the test cases are run). However, the same theory also teaches us that each criterion has its weaknesses and will let through certain types of bugs. Furthermore, the number of test cases and the management of the test code become unwieldy when your extension reaches a considerable size. (For more information, the classical book by Boris Beizer, Software Testing Techniques, can be consulted. There are many more similar publications.)

Let us deal instead with some more practical guidelines. Here are some minimal requirements:

If practical, then:

How do we set up the tests? Simple: use the tcltest package. This package provides a complete and versatile infrastructure for running tests and reporting the results (see the quite extensive manual page).

Here is a summary:

To illustrate this discussion with an example, consider an extension with a command "isValidXYZFile". This command checks the contents of the given file (the one argument to the command) and returns 1 if it is indeed a valid XYZ file (whatever XYZ files are) and 0 if it is not.

Test cases for this command could include:

   - a file that is indeed a valid XYZ file
   - an existing file that is not a valid XYZ file
   - no argument at all
   - the name of a file that does not exist
   - too many arguments

The first two fall in the category "valid input", the others represent particular invalid input that the command is supposed to gracefully deal with (recognise that the input is not correct and report an error).

Traditionally, test scripts are contained in files with the extension ".test". So let us put the following code in a file "xyzfiles.test" to test our "xyzfiles" extension:

   #
   # Initialise the test package (we need only the "test" command)
   #
   package require tcltest
   namespace import ::tcltest::test
   #
   # Get our xyzfiles extension
   #
   package require xyzfiles
   namespace import xyzfiles::isValidXYZFile
   #
   # Our tests for valid input (sample.xyz is a valid XYZ file
   # that ships with the extension, sample2.xyz is an existing
   # file that is not a valid XYZ file)
   #
   test "xyzfiles-1.1" "Valid XYZ file" -body {
      isValidXYZFile "sample.xyz"
   } -result 1 ;# It is a valid file, so the command returns 1
   test "xyzfiles-1.2" "Not a valid XYZ file" -body {
      isValidXYZFile "sample2.xyz"
   } -result 0
   #
   # Invalid input (the major test number is changed for convenience
   # only):
   #
   test "xyzfiles-2.1" "No argument" -body {
      isValidXYZFile
   } -returnCodes error -result "wrong # args: *" -match glob
   # tcltest uses exact matching by default, so we change that for this case
   test "xyzfiles-2.2" "Non-existent file" -body {
      isValidXYZFile "non-existent-file.xyz"
   } -returnCodes error -result "Non existent file *" -match glob
   test "xyzfiles-2.3" "Too many arguments" -body {
      isValidXYZFile 1 2 3 4
   } -returnCodes error -result "wrong # args: *" -match glob

Note that in the last three cases we use the -returnCodes option to indicate that an error is expected, and the glob matching option to match the result against a rough pattern rather than an exact string.

Testing the arguments is of course much more important in the case of a compiled extension than in the case of a Tcl-only procedure, but if there are subcommands that require different numbers of arguments, it can be a good idea to add tests like the above.

These tests can be run by the command:

   > make test

(See the code for the "test" target in the make file.)

It is good style to always run every test in a test suite. If some tests fail on some platforms, use test constraints to prevent those particular tests from running. You can also use test constraints to protect non-test pieces of code in the test file.
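
A minimal sketch of how a constraint could be defined and used in the xyzfiles test file (the constraint name and the condition are only examples):

   # Define a constraint: these tests only make sense if the sample file
   # is actually present
   ::tcltest::testConstraint haveSampleFile [file exists "sample.xyz"]

   # This test is skipped (and reported as skipped) when the constraint
   # is not met
   test "xyzfiles-3.1" "Valid XYZ file, constrained" -constraints haveSampleFile -body {
      isValidXYZFile "sample.xyz"
   } -result 1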

Chapter 8. DOCUMENTATION

It may seem a heavy burden for many a programmer, but documentation is necessary, even though one sometimes gets the impression (quite wrongly of course, but still) that no user ever bothers to read it.

Besides proper comments in the code, we need a guide for users to fall back on. Traditionally for Tcl/Tk this has been in the form of UNIX-style man pages:

This is a format that works well for more or less experienced users - they use the man page as a reference manual. On the other hand, new users may find them a bit awkward, as a lot of background is usually assumed.

For most extensions, it will suffice to use the classical man page format with perhaps some more explanation of what the extension is all about and some more examples and elaborated code fragments than usual.

To help with writing the documentation, we strongly suggest you use the so-called doctools that are now part of the standard Tcl applications. The basic idea of doctools is simple:

Here is a small example of such an input file:

......
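
To give a flavour of the format, a page for the hypothetical xyzfiles extension from the previous chapter might start like this (a sketch only, not a complete manual page):

   [manpage_begin xyzfiles n 1.0]
   [moddesc {XYZ file utilities}]
   [titledesc {Check the validity of XYZ files}]
   [require Tcl 8.2]
   [require xyzfiles 1.0]
   [description]
   This package provides a command to check whether a file is a valid
   XYZ file.
   [list_begin definitions]
   [call [cmd ::xyzfiles::isValidXYZFile] [arg filename]]
   Returns 1 if [arg filename] is a valid XYZ file and 0 if it is not.
   [list_end]
   [manpage_end]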

This file can be processed by the doctools application, to give an HTML-file that is rendered like this:

/picture/ ??

Short overview of the macros supported by doctools

[arg $name] - argument in a [call] statement

[arg_def $type $name $intent] - description of the argument, what type it is (widget, integer, list, ...), its name and whether it is simply passed to the command (in) or gets set as well (out or in/out)

[bullet] - start a new item in a bullet list

[call $cmd $args] - define a command (its argument list; the command gets added to the synopsis)

[cmd $name] - name of the command (first argument for [call])

[comment $text] - add comments to the source; they do not show up in the output

[copyright $name] - insert a copyright string at the end of the rendered document

[description] - start the man page section DESCRIPTION (note: this section is required at the beginning of the document)

[emph $text] - show the given text as emphasized (typically italic)

[example $example] - insert preformatted text that serves as an example for the commands being discussed

[keywords $args] - add the keywords given in the variable argument list to the end of the document

[list_begin arg] - start a list of arguments (after a corresponding [call] command)

[list_begin bullet] - start a bullet list

[list_begin definitions] - start a definitions list

[list_begin opt] - start a list of options

[list_end] - end the current list, brackets [list_begin]

[manpage_begin $name $section $version] - indicate the start of the manual page with the name of the module/package/extension, the section of the manual pages it should go into (always "n") and the version number

[manpage_end] - mandatory end of the man page, brackets the [manpage_begin] command

[moddesc $name] - identify the module

[nl] - put a break in the flowing text. Useable only within lists

[opt_def $keyword $type] - the keyword for an option (without the leading minus sign) and the expected value type (if any)

[para] - start a new paragraph

[require $package $version] - insert a "package require" command in the synopsis of the man page to indicate the dependencies

[section $title] - start a new section, the title is traditionally given in capitals

[titledesc $title] - give the man page a proper title

REFERENCES

Appendix A. Explanation of make files and the make utility

If you are not familiar with the make program, here is a brief explanation of it. In some of its features it is very similar to Microsoft's Visual Studio or other integrated development environments: it processes descriptions of how to create a program from its sources and does so in an efficient way. In fact, many IDEs use the make utility in some form or other underneath.

The main difference between the make utility and most "modern" IDEs is that make does not itself manage the description, the so-called make file. This is left to the programmer. Another difference is that make can be used for almost any task where files need to be built from other files using some program; in other words, it is very flexible.

A small example

So much for the introduction to make. Let us now describe how make does the job it is supposed to do, using the following sample program:

The program "sample" is built from two C source files, sample.c and utils.c. The first source file includes a header file utils.h, which contains the interface to the functions in utils.c.

Now changes to any of these files mean that the whole program has to be rebuilt. For this small program, we could easily type the command:

   cc -o sample sample.c utils.c

(or something similar, depending on the compiler you want to use). This would recompile the entire source and relink the program against all its parts.

Now:

This is what the make utility would do, when properly instructed:

The makefile might look like this:

    sample: sample.o utils.o
    ->cc -o sample sample.o utils.o

    sample.o: sample.c utils.h
    ->cc -c sample.c

    utils.o: utils.c utils.h
    ->cc -c utils.c

(the symbol "->" indicates a tab character - this is essential in makefiles, as the leading tab marks the command lines that belong to a rule)

This is a very simple makefile; in practice, programs that are maintained with makefiles are much larger, and you need to take care of building the documentation and libraries, of installing the files in the proper location, and so on. To make things worse, many compilers use different options to specify similar tasks, and operating systems require different libraries (or put the libraries in different places).

To help manage the makefiles, the Autoconf utility was created: it generates a configuration script which in turn prepares the correct makefile from a given template. The configuration script itself is complicated, but relatively easy to use.

Appendix B. Configuration options

The configure script as generated by Autoconf will support a number of standard configuration options as well as any that you define for your extension via the AC_ARG_WITH and AC_ARG_ENABLE macros.

The table below describes the standard configuration options (most of the text is copied from B. Welch's book "Practical Programming in Tcl and Tk", 3rd edition).

Note that all of these options come with reasonable defaults, so that you only have to worry about them when the configure script terminates with some kind of error.