Compute Matches

This step will find keypoints in each image and match them with the keypoints in all other images. To start, select the picture set item in the project tree and click on "Compute matches...". The following dialog appears:

  • The keypoint sensitivity defines how many keypoints will be found. A good starting point is "Normal" (numerical value 0.0007). Images with very high resolution require fewer keypoints (setting "Minimal", numerical value 0.001), while low-resolution images require more (settings "High" or "Ultra").
  • The keypoint matching ratio defines how good a match must be in order to be used. The higher the number, the more matches will be considered.
  • Keypoint detector: Classic A-KAZE is more precise but slower, while Fast A-KAZE is faster but produces slightly lower-quality keypoints.
  • When Add TBMR is selected, a second keypoint detector (Tree-Based Morse Regions) is run, which results in more keypoints and matches. Use it if you need more keypoints, or keypoints in areas where A-KAZE does not detect any.
  • Matching algorithm: This setting is for testing only. FLANN is the default; please keep it unless you know what you are doing. The other options are:
    • KGraph (fast, medium, precise): This algorithm is based on "small-world" graphs. It gives very good results but is slower than FLANN. Fast, medium and precise are different parameter sets for the algorithm.
    • Brute force: Avoid this algorithm except for very small pictures or picture sets. It is much slower than the other options, especially with many keypoints.
    • MRPT: This is an interesting candidate, based on random projections. It gives good results and is sometimes faster than FLANN. For those of you who like to experiment, it might be an alternative.
  • Camera model: This setting defines the camera model used for triangulation. Generally, the default is fine. For fisheye lenses, choosing "Pinhole Fisheye" may improve triangulation results.
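
The effect of the keypoint matching ratio can be illustrated with a small sketch. This is plain Python for illustration only, not Regard3D's actual (openMVG-based) matching code; the descriptors and the `match_descriptors` helper are made up for the example:

```python
# Nearest-neighbour descriptor matching with a distance-ratio test,
# the idea behind the "keypoint matching ratio" slider.
# Illustrative sketch only, not Regard3D's implementation.

def match_descriptors(desc_a, desc_b, ratio=0.6):
    """Match each descriptor in desc_a to its nearest neighbour in desc_b,
    keeping a match only if the best distance is clearly smaller than the
    second-best distance (the ratio test)."""
    def dist(u, v):
        return sum((x - y) ** 2 for x, y in zip(u, v)) ** 0.5

    matches = []
    for i, d in enumerate(desc_a):
        # Distances to every descriptor in the other image, sorted ascending.
        cand = sorted((dist(d, e), j) for j, e in enumerate(desc_b))
        best, second = cand[0], cand[1]
        # A higher ratio accepts more (and more ambiguous) matches,
        # just like moving the slider to the right.
        if best[0] < ratio * second[0]:
            matches.append((i, best[1]))
    return matches

# Toy descriptors: A[0] clearly matches B[0]; A[1] is ambiguous
# between B[1] and B[2], so the ratio test rejects it.
A = [[0.0, 0.0], [5.0, 5.0]]
B = [[0.1, 0.0], [5.0, 5.2], [5.2, 5.0], [9.0, 9.0]]
print(match_descriptors(A, B, ratio=0.6))  # [(0, 0)]
```

Raising `ratio` toward 1.0 lets the ambiguous match through as well, which is why a higher matching ratio yields more (but less reliable) matches.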

Tips:

  • For big datasets, start with the lowest settings (sliders to the left). If the results are not satisfactory, increase the keypoint sensitivity first. If you still need more matches, increase the matching ratio.
  • Computing time is strongly influenced by these parameters. Test the different settings on small picture sets with a low keypoint detection rate first, and only then move on to larger picture sets and higher keypoint detection settings.

The computed matches can be viewed in the matching results dialog. To open it, select the Matches item in the project tree and select "Show matching results...". The following dialog opens:

In the top half of the dialog, you see the list of pictures from the picture set defined earlier. To see the detected keypoints, click on an image and then on "Show Keypoints". To see them in more detail, click on "Open Preview Window". In the preview window, it is possible to select between "No keypoints", "Simple keypoints" (every keypoint is shown as a circle) and "Rich keypoints" (every keypoint is shown as a circle plus a direction, and the diameter of the circle represents the size of the keypoint). The image in the preview window can be zoomed in and out with the mouse wheel or the buttons "Zoom in" and "Zoom out".
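
What a "rich keypoint" encodes can be sketched in a few lines. This is a geometry illustration only (the helper function is made up for the example); Regard3D draws these itself in the preview window:

```python
import math

# A "rich keypoint" carries a position, a size (drawn as the circle's
# diameter) and an orientation (drawn as a direction line).
# Illustrative sketch of the drawing geometry only.

def rich_keypoint_geometry(x, y, size, angle_deg):
    """Return the circle (centre, radius) and the endpoint of the
    orientation line for one keypoint."""
    radius = size / 2.0
    a = math.radians(angle_deg)
    tip = (x + radius * math.cos(a), y + radius * math.sin(a))
    return (x, y), radius, tip

centre, r, tip = rich_keypoint_geometry(100.0, 50.0, size=16.0, angle_deg=0.0)
print(centre, r, tip)  # (100.0, 50.0) 8.0 (108.0, 50.0)
```

"Simple keypoints" show only the circle; "rich keypoints" add the direction line from the centre to `tip`.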

In the lower half of the dialog, you see the list of image pairs where matches have been found. Filter the matches with the drop-down box. You have the following choices:

  • No filter (putative matches): Here you see the unfiltered matches. Most of them are correct, but some are wrong.
  • Homography matrix: These are the matches geometrically filtered by finding a homography matrix and removing the matches that do not fit this matrix.
  • Fundamental matrix: These are the matches geometrically filtered by finding a fundamental matrix and removing the matches that do not fit this matrix. Those matches are used for the Incremental SfM engine.
  • Essential matrix: These are the matches geometrically filtered by finding an essential matrix and removing the matches that do not fit this matrix. Those matches are used for the Global SfM engine.
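
The principle behind these geometric filters can be shown with a small sketch: a match is kept only if it satisfies the epipolar constraint of the estimated matrix. This is plain Python for illustration (the toy matrix and helper functions are made up for the example); the real pipeline also has to estimate the matrix robustly from the matches themselves, which is not shown here:

```python
# Geometric filtering with a fundamental matrix F: a match (p, q) is kept
# only if it satisfies the epipolar constraint q^T F p ~ 0.
# Illustrative sketch only, not Regard3D's implementation.

def epipolar_residual(F, p, q):
    """|q_h^T F p_h| for homogeneous points p_h=(x, y, 1), q_h=(x', y', 1)."""
    ph = (p[0], p[1], 1.0)
    qh = (q[0], q[1], 1.0)
    Fp = [sum(F[i][k] * ph[k] for k in range(3)) for i in range(3)]
    return abs(sum(qh[i] * Fp[i] for i in range(3)))

def filter_matches(F, matches, tol=1e-6):
    """Keep only the matches consistent with the epipolar geometry."""
    return [(p, q) for p, q in matches if epipolar_residual(F, p, q) < tol]

# Toy fundamental matrix for a pure sideways camera translation:
# epipolar lines are horizontal, so matching points must share their y.
F = [[0.0, 0.0,  0.0],
     [0.0, 0.0, -1.0],
     [0.0, 1.0,  0.0]]
matches = [((10.0, 20.0), (14.0, 20.0)),   # consistent (same y)
           ((30.0, 40.0), (33.0, 47.0))]   # inconsistent outlier
print(filter_matches(F, matches))  # keeps only the first pair
```

The homography and essential matrix filters work the same way, just with a different geometric model and residual.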

To see the matches in more detail, open the preview window. Here you have the same filtering options, plus the "Track filter". This filter uses all image pairs to find "tracks", meaning keypoints that occur in more than one image pair.
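
The idea behind the track filter can be sketched with a simple union-find merge. This is an illustration only, not the actual implementation from the Moulon and Monasse paper referenced below; the input format is made up for the example:

```python
# Merging pairwise matches into "tracks": sets of keypoint observations
# (image, keypoint index) that all correspond to the same scene point.
# Illustrative union-find sketch only.

def build_tracks(pair_matches):
    """pair_matches maps (img_i, img_j) -> list of (kp_i, kp_j) matches."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    # Every match links two observations of the same point.
    for (i, j), matches in pair_matches.items():
        for ki, kj in matches:
            union((i, ki), (j, kj))

    # Group observations by their root; a track spans several images.
    tracks = {}
    for obs in parent:
        tracks.setdefault(find(obs), set()).add(obs)
    return [t for t in tracks.values() if len(t) > 1]

# Keypoint 0 of image 0 matches keypoint 3 of image 1, which in turn
# matches keypoint 7 of image 2: one track seen in three images.
pairs = {(0, 1): [(0, 3)], (1, 2): [(3, 7)]}
print(build_tracks(pairs))  # one track containing (0, 0), (1, 3), (2, 7)
```

Because the two pair matches share the keypoint (1, 3), the filter chains them into a single track, which is exactly how keypoints "occurring in more than one image pair" are detected.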

Technical background information:

  • The algorithm used to detect keypoints is called A-KAZE (by Pablo F. Alcantarilla).
  • The feature descriptor algorithm used in Regard3D is called LIOP (Local Intensity Order Patterns, by Zhenhua Wang and others). Regard3D uses the implementation from vlfeat.org.
  • Neither algorithm is patented (to my knowledge) and both are free to use, in contrast to state-of-the-art methods like SIFT (Scale-Invariant Feature Transform, by David Lowe) or SURF (Speeded Up Robust Features). This means Regard3D can be used freely, for commercial or non-commercial applications. The combination of A-KAZE and LIOP delivers results comparable to those state-of-the-art algorithms, but without any restrictions.
  • More information about the implemented "track filter" can be found in the paper "Unordered feature tracking made fast and easy" by Pierre Moulon and Pascal Monasse (CVMP 2012).
  • If you are interested in the matching algorithms, see the homepages of the respective libraries.

Go to next article: Triangulate