FAsT-Match: Fast Affine Template Matching

Simon Korman1, Daniel Reichman2, Gilad Tsur2, Shai Avidan1

[Abstract] [Paper] [Presentation] [Source-code] [BibTex] [Proof of Thm. 3.1] [Net construction] [Experimental results] [The 'overlap' criterion]

[1] School of Electrical Engineering, Tel-Aviv University
[2] Department of Computer Science and Applied Math, Weizmann Institute of Science


Fast-Match is a fast algorithm for approximate template matching under 2D affine transformations that minimizes the Sum-of-Absolute-Differences (SAD) error measure. There is a huge number of transformations to consider, but we prove that they can be sampled using a density that depends on the smoothness of the image. For each potential transformation, we approximate the SAD error using a sublinear algorithm that randomly examines only a small number of pixels. We further accelerate the algorithm using a branch-and-bound scheme. As images are known to be piecewise smooth, the result is a practical affine template matching algorithm with approximation guarantees, which takes a few seconds to run on a standard machine. We perform several experiments on three different datasets, and report very good results. To the best of our knowledge, this is the first template matching algorithm which is guaranteed to handle arbitrary 2D affine transformations.
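The sublinear SAD test described above can be sketched as follows. This is a toy NumPy illustration, not the released code: the function name `approx_sad`, its signature, and the nearest-neighbor sampling are our own simplifications of the idea of estimating the SAD over a small random subset of template pixels.

```python
import numpy as np

def approx_sad(template, image, affine, n_samples=200, rng=None):
    """Estimate the mean absolute difference of mapping `template` into
    `image` under a 2x3 `affine` matrix, examining only `n_samples`
    randomly chosen template pixels (a sketch, not the paper's code)."""
    rng = np.random.default_rng(rng)
    th, tw = template.shape
    ih, iw = image.shape
    # Sample random template coordinates (x = column, y = row).
    xs = rng.integers(0, tw, n_samples)
    ys = rng.integers(0, th, n_samples)
    # Map the samples into the image with the affine transform
    # (nearest-neighbor rounding, clipped to the image bounds).
    pts = affine @ np.stack([xs, ys, np.ones(n_samples)])
    ix = np.clip(np.round(pts[0]).astype(int), 0, iw - 1)
    iy = np.clip(np.round(pts[1]).astype(int), 0, ih - 1)
    # The sample mean estimates the full normalized SAD error.
    return np.abs(template[ys, xs].astype(float)
                  - image[iy, ix].astype(float)).mean()
```

Because the estimate concentrates around the true SAD, a fixed sample size suffices for a given approximation guarantee, independently of the template size.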


"FAsT-Match: Fast Affine Template Matching" [pdf]
Simon Korman, Daniel Reichman, Gilad Tsur, Shai Avidan
CVPR 2013, Portland



Source code - NOW AVAILABLE

Version 1.0 (4.3 MB zip file)
Version 2.0 (4.4 MB zip file)
Version 3.0 (4.3 MB zip file)


  1. Unzip the zip file
  2. See the file 'README.txt' for explanations
Please email simon.korman AT regarding any issues you encounter when using the code.

If using the code, please cite the paper: "Fast-Match: Fast Affine Template Matching", CVPR 2013, Portland, using the following bibtex:


@inproceedings{korman2013fast,
  title={Fast-Match: Fast Affine Template Matching},
  author={Korman, Simon and Reichman, Daniel and Tsur, Gilad and Avidan, Shai},
  booktitle={Computer Vision and Pattern Recognition (CVPR), 2013 IEEE Conference on},
  year={2013}
}

Proof of Theorem 3.1


Construction of the Net


Experimental Results

Here we complement Section 5 (Experiments) of the main paper with additional visual information.
[Experiment I] [Experiment II] [Experiment III]

Experiment I (template sizes and image degradations on the Pascal VOC 2010 dataset) - Some Examples

Here we give several visual examples for the 3 degradation types: Blur, Noise and JPEG. For each, we show 3 different template dimensions: 50%, 20% (presented in the paper, e.g. Figure 4) and 10% (on which ASIFT could not be evaluated, due to its minimal template-size requirement) of the image dimension, at 6 levels of degradation (level 0 = no degradation, level 5 = strongest degradation). Green parallelograms mark the ground-truth locations, while magenta parallelograms mark the Fast-Match results. Follow the links below:

[Blur] [Noise] [JPEG]

BLUR | Template Dimension: 50% | Template Dimension: 20% | Template Dimension: 10%
Blur Level 0
Blur Level 1
Blur Level 2
Blur Level 3
Blur Level 4
Blur Level 5

NOISE | Template Dimension: 50% | Template Dimension: 20% | Template Dimension: 10%
Noise Level 0
Noise Level 1
Noise Level 2
Noise Level 3
Noise Level 4
Noise Level 5

JPEG | Template Dimension: 50% | Template Dimension: 20% | Template Dimension: 10%
JPEG Level 0
JPEG Level 1
JPEG Level 2
JPEG Level 3
JPEG Level 4
JPEG Level 5

Experiment II (Mikolajczyk Sequences) - Additional Examples

Here we present one additional example from each of the 8 Mikolajczyk sequences [4]. Note that in this experiment we try to match a quadrilateral (the images differ by a homography) using a parallelogram (generated by an affinity). This discrepancy is noticeable, for instance, under large viewpoint changes.
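The gap between the two transformation families can be checked with a small NumPy demo (our own toy illustration, not from the released code; the matrices `affine` and `homog` below are arbitrary examples): an affinity always maps a square to a parallelogram, whereas a general homography does not, so a parallelogram can only approximate the ground-truth quadrilateral.

```python
import numpy as np

def map_pts(H, pts):
    """Apply a 3x3 projective transform H to (N,2) points, with
    the usual homogeneous-coordinate division."""
    ph = np.column_stack([pts, np.ones(len(pts))]) @ H.T
    return ph[:, :2] / ph[:, 2:3]

def is_parallelogram(q, tol=1e-6):
    # In a parallelogram, opposite edge vectors cancel: (q1-q0) = -(q3-q2).
    return np.allclose((q[1] - q[0]) + (q[3] - q[2]), 0.0, atol=tol)

square = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
# An affinity: last row is [0, 0, 1].
affine = np.array([[2., .3, 1.], [.1, 1.5, 2.], [0., 0., 1.]])
# A homography: the projective part (last row) is nonzero.
homog  = np.array([[2., .3, 1.], [.1, 1.5, 2.], [.2, .1, 1.]])
```

Running `is_parallelogram(map_pts(affine, square))` returns True, while the same check on the homography-warped square returns False.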

Sequence | Query Template (in image 1) | Findings in images 2-6 (green = GT quadrilateral, blue = Fast-Match parallelogram)
wall (view-point)
graffiti (view-point)
bikes (blur)
trees (blur)
bark (zoom+rotation)
boat (zoom+rotation)
light (illumination-change)
UBC (JPEG-compression)

Experiment III (Zurich Buildings) - The Complete Experiment

Here is the complete set of 200 instances of our experiment on the Zurich Buildings Dataset. The automatically generated instances were distributed as follows:

[Good (128 instances)] : Cases where Fast-Match captures the correct location: it finds a parallelogram, under an affinity, that matches the correct region well, even though that region generally cannot be described by an affinity or even a homography.

[Occluded (12 instances)] : Cases where either the template itself, or the area to which it should be matched, contains an occluding object that does not appear in the other image.

[Out of Image/Plane (40 instances)] : Cases of 2 types: 1) the template does not map entirely into the target image, or 2) the template spans at least 2 very different dominant planes (in which case no affine transformation can explain the mapping).

[Bad (19 instances)] : Cases where Fast-Match did not find the correct area, and the reason is not one of the above (occlusion, out of plane/image).

Good (128 instances)

Occluded (12 instances)

Out of Image/Plane (40 instances)

Bad (19 instances)

The 20% Overlap Criterion - Discussion and Examples

Here we show examples in which we reported Fast-Match to succeed or fail (in Experiments I and II of the paper) according to the '20% overlap error' criterion. As explained in the paper (and discussed extensively in Mikolajczyk et al. [4]), this error measure becomes increasingly demanding as the matched areas shrink. This is clearly evident in the following collection of borderline 'success' and 'failure' examples of Fast-Match. These examples also give a sense of the match quality underlying the success rates we report in Experiments I and II.
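For concreteness, the overlap error between a ground-truth region and a detected parallelogram can be approximated by rasterizing both convex regions on a dense grid. This is our own NumPy sketch of a Mikolajczyk-style overlap criterion (the function names are hypothetical, and the paper's exact evaluation code may differ):

```python
import numpy as np

def inside_convex(points, poly):
    """Boolean mask: which (N,2) `points` lie inside the convex
    polygon `poly` ((M,2) vertices in order, either orientation)."""
    crosses = []
    n = len(poly)
    for i in range(n):
        a, b = poly[i], poly[(i + 1) % n]
        crosses.append((b[0] - a[0]) * (points[:, 1] - a[1])
                       - (b[1] - a[1]) * (points[:, 0] - a[0]))
    c = np.stack(crosses)
    # Inside <=> all edge cross-products share a sign.
    return np.all(c >= 0, axis=0) | np.all(c <= 0, axis=0)

def overlap_error(quad_a, quad_b, grid=400):
    """1 - area(A∩B)/area(A∪B), estimated on a grid over the joint
    bounding box; an error above 0.20 would count as a failure."""
    pts = np.vstack([quad_a, quad_b])
    (x0, y0), (x1, y1) = pts.min(0), pts.max(0)
    xs, ys = np.meshgrid(np.linspace(x0, x1, grid), np.linspace(y0, y1, grid))
    p = np.column_stack([xs.ravel(), ys.ravel()])
    in_a, in_b = inside_convex(p, quad_a), inside_convex(p, quad_b)
    return 1.0 - np.sum(in_a & in_b) / np.sum(in_a | in_b)
```

For example, two identical squares give an error of 0, while a unit square and a copy shifted by half its width overlap on a third of their union, giving an error of about 2/3 (well over the 20% threshold).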

Borderline successes (overlap error just under the 20% threshold)
overlaps: 17.6% 17.9% 19.1% 18.2% 18.4%
18.3% 17.8% 16.9% 16.6% 15.4% 15.1%

Borderline failures (overlap error just over the 20% threshold)
overlaps: 20.0% 20.5% 23.4% 23.2%
20.9% 21.9% 24.2% 24.2% 24.1%