This benchmark operates either in full mode for registered users (U) or in a restricted anonymous mode (no custom mosaic creation, no database entries for test results or algorithm data). To register, go to the registration page. Individual hot spots are explained by the contextual help in the status bar.
![Benchmark scheme](scheme1c.png)
![Benchmark results](results1c.png)
Welcome to the Prague texture segmentation benchmark, whose purpose is
- to mutually compare and rank different texture segmenters (supervised or unsupervised),
- to support the development of new segmentation and classification methods.
This server allows you
- to obtain customized experimental texture mosaics and their corresponding ground truth (U),
- to obtain the benchmark texture mosaic set with its corresponding ground truth,
- to evaluate your working segmentation results and compare them with state-of-the-art algorithms (U),
- to include your algorithm (reference, abstract, benchmark results) in the benchmark database (U),
- to check the evaluation details of single mosaics (criteria values and resulting thematic maps),
- to rank segmentation algorithms according to the most common benchmark criteria,
- to obtain the resulting criteria tables in LaTeX code (U).
Dataset
- Computer-generated texture mosaics and benchmarks are composed of the following texture types:
  - monospectral textures,
  - multispectral textures,
  - BTF (bidirectional texture function) textures,
  - rotation-invariant texture set,
  - scale-invariant texture set,
  - illumination-invariant texture set.
- All generated texture mosaics can be corrupted with additive Gaussian, Poisson, or salt-and-pepper noise (a corruption sketch follows this list).
- The corresponding training sets are supplied in the classification (supervised) mode.
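The noise corruption mentioned above can be reproduced offline, e.g. for robustness experiments on your own data. Below is a minimal NumPy sketch; the function name, parameters, and default noise levels are illustrative assumptions, not the settings used by the benchmark server.

```python
import numpy as np

def corrupt_mosaic(image, kind="gaussian", sigma=10.0, flip_fraction=0.05, rng=None):
    """Corrupt an 8-bit texture mosaic with one of the noise types listed
    above. All names and defaults here are illustrative only."""
    rng = np.random.default_rng() if rng is None else rng
    img = image.astype(np.float64)
    if kind == "gaussian":
        # additive zero-mean Gaussian noise with standard deviation sigma
        noisy = img + rng.normal(0.0, sigma, img.shape)
    elif kind == "poisson":
        # Poisson (shot) noise: replace each pixel by a Poisson sample
        # whose mean is the original intensity
        noisy = rng.poisson(np.clip(img, 0.0, None)).astype(np.float64)
    elif kind == "salt_pepper":
        # flip a random fraction of pixel positions to black or white
        noisy = img.copy()
        flip = rng.random(img.shape[:2]) < flip_fraction
        salt = rng.random(img.shape[:2]) < 0.5
        noisy[flip & salt] = 255.0
        noisy[flip & ~salt] = 0.0
    else:
        raise ValueError(f"unknown noise type: {kind}")
    return np.clip(noisy, 0.0, 255.0).astype(np.uint8)
```

For a multispectral mosaic of shape (H, W, C), the same (H, W) flip mask is applied across all channels, which is the usual convention for salt-and-pepper noise on colour images.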
Benchmark evaluation
- Submitted results are stored in the server database and used to rank algorithms according to a selected criterion from the following set (a sketch computing two of these criteria follows this list):
  - region-based criteria (including sensitivity graphs)
    - CS - correct segmentation,
    - OS - over-segmentation,
    - US - under-segmentation,
    - ME - missed error,
    - NE - noise error,
  - pixel-wise criteria
    - O - omission error,
    - C - commission error,
    - CA - class accuracy,
    - CO - correct assignment (recall),
    - CC - object accuracy (precision),
    - I - type I error,
    - II - type II error,
    - EA - mean class accuracy estimate,
    - MS - mapping score,
    - RM - root mean square proportion estimation error,
    - CI - comparison index,
  - F-measure (weighted harmonic mean of precision and recall) graph,
  - consistency measures
    - GCE - global consistency error,
    - LCE - local consistency error,
  - clustering criteria
    - dM - Mirkin metric,
    - dD - Van Dongen metric,
    - dVI - variation of information.
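For concreteness, the sketch below computes two of the listed criteria in NumPy: the pixel-wise CO (recall) / CC (precision) pair with the derived F-measure, and the dVI clustering distance (variation of information, in nats). Function names and the returned layout are illustrative assumptions; the benchmark's authoritative definitions, including the region-matching rules behind CS/OS/US, are those published with the server and are not reproduced here.

```python
import numpy as np

def per_class_recall_precision_f(gt, seg):
    """CO (recall), CC (precision), and F-measure per ground-truth class,
    from the standard confusion-matrix formulas."""
    out = {}
    for c in np.unique(gt):
        tp = np.count_nonzero((gt == c) & (seg == c))
        fn = np.count_nonzero((gt == c) & (seg != c))
        fp = np.count_nonzero((gt != c) & (seg == c))
        recall = tp / (tp + fn) if tp + fn else 0.0
        precision = tp / (tp + fp) if tp + fp else 0.0
        f = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        out[int(c)] = {"CO": recall, "CC": precision, "F": f}
    return out

def variation_of_information(gt, seg):
    """dVI between two label maps: VI = H(gt | seg) + H(seg | gt),
    computed from the joint label histogram."""
    _, gi = np.unique(gt.ravel(), return_inverse=True)
    _, si = np.unique(seg.ravel(), return_inverse=True)
    p = np.zeros((gi.max() + 1, si.max() + 1))
    np.add.at(p, (gi, si), 1.0)         # joint label counts
    p /= gt.size                        # joint label distribution
    pg = np.broadcast_to(p.sum(axis=1, keepdims=True), p.shape)  # gt marginal
    ps = np.broadcast_to(p.sum(axis=0, keepdims=True), p.shape)  # seg marginal
    nz = p > 0                          # zero cells contribute nothing
    h_g_given_s = -np.sum(p[nz] * np.log(p[nz] / ps[nz]))
    h_s_given_g = -np.sum(p[nz] * np.log(p[nz] / pg[nz]))
    return h_g_given_s + h_s_given_g
```

Such local checks can help debug a segmenter before submission, but they are not a substitute for the server-side evaluation, which applies the benchmark's full criteria set to the official mosaic set.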