The MeasTex framework defines a complete environment for benchmarking texture classification algorithms. It specifies the format of the test files, the interface requirements on an algorithm, a comparison metric, and the procedure for testing the algorithm.
The framework is made available and installation is straightforward when the supplied instructions are followed. To begin using the framework you require the following:
The framework directory structure keeps test suite files and result files in distinct directory trees: imgs and algs, respectively. A separate subdirectory under the imgs directory exists for each test suite and contains the associated test problem files (*.Test, *.Valid) and image files (*.pgm).
Results for each algorithm (including variants) are stored in a separate directory under the algs directory. Within each algorithm directory is a directory for each of the test suites. These directories contain the result files (*.out) and score files (*.score, *.score_wta) for the algorithm on each of the test problems in the test suite.
The framework files are kept in the remaining directories. All the programs, including algorithm executables and the scripts for running the framework, are found in the bin directory. Source code and HTML documentation reside in the src and www directories, respectively.
A map of the directory structure is shown below. The dotted region indicates directories which may be located independently of the rest of the structure. See Using Alternative Algorithm and Test Suite Directories below.
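A sketch of the layout, based on the description above (test suite and algorithm names are illustrative):

    MeasTex/
        bin/                 control scripts and algorithm executables
        src/                 source code
        www/                 HTML documentation
        imgs/                test suites
            suite1.ts/       *.Test, *.Valid and *.pgm files
            suite2.ts/
            ...
        algs/                algorithm results
            algName.alg/
                suite1.ts/   *.out, *.score and *.score_wta files
                suite2.ts/
                ...

The imgs and algs trees are the ones that may be located elsewhere.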
Using the programs provided, results can be obtained automatically. All control script commands should be issued from the MeasTex top-level directory unless you have set the MEASTEX_ROOT environment variable.
sh:  MEASTEX_ROOT=<dir>; export MEASTEX_ROOT
csh: setenv MEASTEX_ROOT <dir>

where <dir> is the top-level directory of the MeasTex installation.
To benchmark algorithm algName with command algCommand on all available test suites, use the runalg command.
> runalg algName algCommand

This script creates the directory algs/algName.alg and then further directories algs/algName.alg/testSuite.ts for each test suite, testSuite. Within each testSuite directory are written the classification result files (.out) for each individual test in testSuite. For example, using the GMRF algorithm with the standard fourth-order mask (see Gauss Markov Random Fields, markovClass) we type
> runalg markov markovClass -mask std4s
This command creates the directory algs/markov.alg with a subdirectory for
each test suite directory found in the imgs directory, along with the
appropriate classification output files.
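For example, if the imgs directory contained test suites named suite1.ts and suite2.ts (illustrative names), the command above would leave result files such as:

    algs/markov.alg/suite1.ts/*.out
    algs/markov.alg/suite2.ts/*.out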
Scoring the Algorithm

When runalg has finished, we can measure the performance by scoring the algorithm. The runscore script has been provided to accomplish this task and is called with the single argument algName:

> runscore algName
This script processes each classification result file (.out) in
each algs/algName.alg/testSuite.ts directory,
scoring the results and writing a score file (.score) for each. The
.score files contain individual performance results for each class and
also an overall score for all classes. If the wta
option is given to runscore, winner-take-all score files (.score_wta)
are created.
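For the running example we would type:

> runscore markov

Giving the wta option in addition (the precise way it is passed on the command line may vary) produces the .score_wta files described above.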
Three other options are available to the runscore program: they specify the type of normalization to use in the metric. The default behaviour (nn) is no normalization. Using this option makes the winner-take-all scores identical to the Percentage Correct score. The other two options normalize the measure by subtracting the L1-norm (bn) or L2-norm (pn) normalized priors and dividing by 1.0 - (L1-norm or L2-norm normalized priors), respectively. When the algorithm output probabilities are used (no wta option), the normalized measure gives a maximum value of 1 to an algorithm which is 100% confident and correct. A score of 0 is awarded to an algorithm which scores the same as the expectation under the prior probabilities of the winner-take-all classification (bn) or probability vector classification (pn). Hence, negative scores are possible and indicate an algorithm which does worse than guessing according to the prior probabilities.
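As a worked illustration of the bn normalization with winner-take-all scoring: with four equally likely classes the L1-normalized prior is 0.25, so a raw score s becomes (s - 0.25) / (1.0 - 0.25). A perfect score of 1.0 stays at 1.0, a score of 0.25 (exactly what guessing from the priors is expected to achieve) maps to 0, and any score below 0.25 becomes negative.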
The collateResults script collects the scores for a test suite and produces a file algs/algName.alg/testSuite.res containing the overall scores and a summary score (the sum of the individual test scores).
The above two steps (runalg and collateResults) have been automated by two other scripts, processScores and processAllScores. processScores performs the above for a single algName and processAllScores calls processScores for each algName.alg in the algs directory.
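Assuming these scripts take the algorithm name as their argument where one is needed, as runscore does (the exact usage may differ), the running example could be collated with

> collateResults markov

or handled in one step with

> processScores markov

processAllScores would be run without an argument, since it operates on every algName.alg in the algs directory.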
Finally, HTML pages can be created from the .res files by using the
scripts genAlgHTML and genResHTML. Again, genAlgHTML
processes a single algName while genResHTML processes all
algName.alg's in the algs directory.
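Again assuming usage analogous to the scripts above (illustrative only):

> genAlgHTML markov
> genResHTML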
Using Alternative Algorithm and Test Suite Directories

Two other environment variables give users the flexibility to specify alternative algorithm result directories and alternative test suite directories.
The MEASTEX_ALGS environment variable allows algorithm results to be written to a directory other than the default $MEASTEX_ROOT/algs. This is particularly useful when multiple users access the same framework installation.
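For example, to keep your results under your home directory (the path is illustrative):

sh:  MEASTEX_ALGS=$HOME/meastex_algs; export MEASTEX_ALGS
csh: setenv MEASTEX_ALGS ~/meastex_algs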
The MEASTEX_IMGS environment variable allows the default $MEASTEX_ROOT/imgs directory to be re-specified. This is also useful in multi-user installations but, more importantly, it provides a means of using only a subset of the test suites. That is, by creating a new imgs directory and populating it with symbolic links to a subset of the test suite directories in $MEASTEX_ROOT/imgs, only those test suites will be run.
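For example, to run only two of the installed test suites (directory and suite names are illustrative):

> mkdir ~/my_imgs
> ln -s $MEASTEX_ROOT/imgs/suite1.ts ~/my_imgs
> ln -s $MEASTEX_ROOT/imgs/suite2.ts ~/my_imgs
> setenv MEASTEX_IMGS ~/my_imgs

(Under sh, use MEASTEX_IMGS=$HOME/my_imgs; export MEASTEX_IMGS.)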
See also the other scripts and programs provided with the framework.