Label Graph Evaluation Library (LgEval) (Update (April 27): version 0.2.10 now released): (R. Zanibbi and Harold Mouchere). This Python library was used to produce the results for our paper with Christian Viard-Gaudin at DRR XX (2013), Evaluating structural pattern recognition for handwritten math via primitive label graphs. The system outputs, data, metric files, and results summaries used to produce the label graph results in the paper are available here: CROHME 2012 Part 3 Label Graph Metric Results.
We ask that you please cite our paper if you use LgEval for your own work.
LgEval supports structural pattern recognition evaluation at the level of primitives, for example handwritten strokes in online data or connected components in images. The label graph representation allows simple, specific identification of recognition errors, with intuitive Hamming distance-based metrics that quantify classification, segmentation, and structural errors (a small sketch of such a comparison appears after the feature list below). Some features of the library include:
- Primitive and Object-Level Evaluation: compute classification, segmentation, and parsing errors for primitives using Hamming distances over labels, along with recall and precision for segments (i.e., objects) and segment relationships. The representation also handles the case where one or both files have missing primitives.
- Error Analysis: '.diff' files (in CSV format) identify specific errors.
- Visualization: tools are provided to create .dot files (for GraphViz) that visualize graph structure and structural differences between label graphs (see the .dot sketch following this list).
- Conversion Tools: tools are provided for converting label graphs to LaTeX strings or other math encodings, along with a Perl program that converts CROHME InkML files to label graph files.
- Flexible: the system is designed to make it easy to evaluate individual pairs of files or large groups of files, and metric ('.m', also CSV format) files can easily be combined to summarize results from different comparisons (a generic example appears below).
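
To make the Hamming distance-based metrics concrete, here is a minimal sketch of comparing two label graphs over a shared set of primitives. It is not LgEval's API: the representation (dictionaries of node and edge labels) and the placeholder label for missing entries are assumptions made purely for illustration.

```python
# Minimal sketch (not LgEval's API): Hamming-distance comparison of two
# label graphs. Node labels carry symbol classes; edge labels carry
# segmentation (strokes merged into one symbol) and spatial relations.

NO_LABEL = "_"  # assumed placeholder for "no label / no relation"

def hamming_distances(nodes_a, edges_a, nodes_b, edges_b):
    """Return (node label disagreements, edge label disagreements).

    nodes_*: dict mapping primitive id -> label
    edges_*: dict mapping (primitive id, primitive id) -> label
    """
    primitives = set(nodes_a) | set(nodes_b)  # tolerate missing primitives
    node_errors = sum(
        1 for p in primitives
        if nodes_a.get(p, NO_LABEL) != nodes_b.get(p, NO_LABEL)
    )
    pairs = {(p, q) for p in primitives for q in primitives if p != q}
    edge_errors = sum(
        1 for pair in pairs
        if edges_a.get(pair, NO_LABEL) != edges_b.get(pair, NO_LABEL)
    )
    return node_errors, edge_errors

# Example: two strokes that form one "x" in the ground truth, versus an
# output that mislabels the second stroke and fails to merge the strokes.
ground_truth = ({"s1": "x", "s2": "x"}, {("s1", "s2"): "*", ("s2", "s1"): "*"})
system_output = ({"s1": "x", "s2": ")"}, {})
print(hamming_distances(*ground_truth, *system_output))  # -> (1, 2)
```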
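
The visualization tools write GraphViz .dot files; the sketch below shows one hypothetical way to serialize a labeled graph to .dot by hand, and does not reproduce LgEval's own output format.

```python
# Hypothetical sketch (not LgEval's formatter): write a labeled graph as a
# GraphViz .dot file, viewable with e.g. `dot -Tpdf graph.dot -o graph.pdf`.

def write_dot(path, node_labels, edge_labels):
    """node_labels: dict id -> label; edge_labels: dict (id, id) -> label."""
    with open(path, "w") as out:
        out.write("digraph lg {\n")
        for node, label in sorted(node_labels.items()):
            out.write(f'  "{node}" [label="{node}: {label}"];\n')
        for (src, dst), label in sorted(edge_labels.items()):
            out.write(f'  "{src}" -> "{dst}" [label="{label}"];\n')
        out.write("}\n")

# Example: two strokes merged into an "x", with a rightward relation to "2".
write_dot(
    "graph.dot",
    {"s1": "x", "s2": "x", "s3": "2"},
    {("s1", "s2"): "*", ("s2", "s1"): "*", ("s1", "s3"): "Right"},
)
```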
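
Because the metric ('.m') files are plain CSV, results from separate comparisons can also be pooled with ordinary CSV tooling. The following generic sketch merges such files with Python's csv module; the directory layout and the "errors" column name are invented for illustration and are not LgEval's actual fields.

```python
# Generic sketch: concatenate CSV-format metric files and total one column.
# The "results/*.m" path and "errors" column are illustrative assumptions.
import csv
import glob

rows = []
for metric_file in glob.glob("results/*.m"):
    with open(metric_file, newline="") as handle:
        rows.extend(csv.DictReader(handle))

total_errors = sum(int(row["errors"]) for row in rows if row.get("errors"))
print(f"{len(rows)} rows combined; total errors: {total_errors}")
```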

The LgEval library and CROHME 2012 label graph results are licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License.