Robust matching of images containing perspective projection and repetitive elements

  • Christopher Le Brese

Western Sydney University thesis: Doctoral thesis

Abstract

Image matching is an important stage of many computer vision tasks, including image registration, object tracking, 3-dimensional reconstruction, augmented reality, and photogrammetry. It can be defined as the process of establishing reliable correspondences between interest points that have been found in two or more images. Current state-of-the-art methods use processes such as normalisation and simulation to match scenes that may have undergone geometric or photometric transformation; however, none has been shown to handle all types of transformation effectively. Although simulation methods such as Affine-SIFT (ASIFT) perform well under large viewpoint changes, the large number of simulations required can make them inefficient. When matching images that contain repetitive elements, many methods fail to remove erroneous matches because of ambiguity between detected features. Some methods attempt to avoid this ambiguity by using local context, global context, or combinations of neighbouring features, while filtering methods attempt to remove erroneous matches using geometric models or spatial relationships; even so, some errors may go undetected.

The aim of this thesis is to match images containing affine or perspective viewpoint change, and scenes containing repetitive elements, more accurately and efficiently than current state-of-the-art methods. The thesis proposes a new affine-invariant method that uses tentatively matched features to approximate the transform between views through normalisation and whitening; the calculated orientation is then used to simulate potential view transforms. An advantage of this approach is that fewer simulations of the scene are required than by state-of-the-art methods. Two novel match-filtering methods are proposed to resolve the ambiguity caused by repetitive patterns. The first is an enhancement of ASIFT that allows it to match scenes containing repetitive elements. The second is the proposed Hierarchical Match Filtering (HMF) algorithm. HMF segments feature points into local cliques, which are checked for spatial consistency by validating the area encapsulated by each clique; neighbouring cliques are then compared and collapsed into stronger ones if they are spatially consistent. An advantage of this approach is that the accuracy of the filtered matches is better than that of current methods.

Results for the affine-invariant method show that scenes containing perspective projection of up to 80 degrees can be matched with an average accuracy of 97% and an average residual error of 3.46 pixels, which is comparable to current methods, while being more efficient than previous methods. Results also show that HMF can match planar and 3D scenes containing repetitive elements with an accuracy of 98.66% and a residual error of 1.92 pixels, whereas current methods achieve only 92% accuracy and a minimum error of 10.76 pixels.

The algorithms have also been applied to the field of photogrammetry. Results show that the matches found by the proposed methods can be used to automate the detection of retro-reflective markers (non-coded targets), and that the average proportional accuracy of the photogrammetric network is similar to that produced with coded targets: 1:52000, compared with 1:57100 for current methods. The methods proposed in this thesis therefore achieve the aim of accurately and robustly matching both perspectively warped scenes and images containing repetitive elements. Thus, they may be used in fields such as 3D urban reconstruction, image registration, and augmented reality, where both perspective projection and repetitive elements are commonly found.
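The following is a minimal sketch of the whitening-based transform approximation described in the abstract, written with NumPy. The function name, the Procrustes/SVD step used to recover the residual rotation, and the assumption of a reasonably clean, non-degenerate set of tentative matches are illustrative choices, not the thesis' exact formulation.

```python
import numpy as np

def estimate_affine_by_whitening(pts1, pts2):
    """Approximate the affine map between two tentatively matched point sets.

    pts1, pts2: (N, 2) arrays of corresponding keypoint coordinates.
    Returns a 2x3 affine matrix mapping pts1 -> pts2.

    Illustrative sketch only: each set is centred and whitened (covariance
    normalised to the identity), the residual rotation between the whitened
    sets is recovered with a Procrustes/SVD step, and the pieces are
    composed back into a single affine transform.
    """
    c1, c2 = pts1.mean(axis=0), pts2.mean(axis=0)
    x1, x2 = pts1 - c1, pts2 - c2

    def inv_sqrt(cov):
        # Inverse matrix square root of a 2x2 covariance (whitening transform).
        vals, vecs = np.linalg.eigh(cov)
        return vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T

    W1 = inv_sqrt(np.cov(x1.T))
    W2 = inv_sqrt(np.cov(x2.T))

    y1, y2 = x1 @ W1.T, x2 @ W2.T            # whitened coordinates

    # After whitening, the remaining difference is (approximately) a pure
    # rotation; recover it by solving the orthogonal Procrustes problem.
    U, _, Vt = np.linalg.svd(y2.T @ y1)
    R = U @ Vt
    if np.linalg.det(R) < 0:                  # guard against reflections
        U[:, -1] *= -1
        R = U @ Vt

    # Compose: un-whiten image 2, rotate, whiten image 1, plus translation.
    A = np.linalg.inv(W2) @ R @ W1
    t = c2 - A @ c1
    return np.hstack([A, t[:, None]])         # 2x3 affine matrix
```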
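Similarly, a simplified illustration of clique-based match filtering in the spirit of HMF, using SciPy. The neighbourhood size, the convex-hull area-ratio consistency test, and the tolerance parameter are hypothetical stand-ins for the hierarchical validation and clique-collapsing steps summarised above. In practice both sketches would be driven by tentative matches from a detector/descriptor such as SIFT, with the surviving matches passed on to later stages such as pose estimation or bundle adjustment.

```python
import numpy as np
from scipy.spatial import cKDTree, ConvexHull

def hull_area(pts):
    """Area of the convex hull of a 2D point set (0 for degenerate sets)."""
    try:
        return ConvexHull(pts).volume   # for 2D input, .volume is the area
    except Exception:
        return 0.0

def filter_matches_by_cliques(pts1, pts2, clique_size=5, area_tol=0.5):
    """Simplified clique-based match filter, loosely following the HMF idea:
    group matches into small local cliques, keep a clique only if the region
    it spans is consistent between the two images, then pool the survivors.

    pts1, pts2: (N, 2) arrays of matched coordinates in image 1 and image 2.
    Returns a boolean mask over the N matches.
    (Parameter names and the area-ratio test are illustrative choices,
    not the thesis' exact formulation.)
    """
    tree = cKDTree(pts1)
    keep = np.zeros(len(pts1), dtype=bool)
    k = min(clique_size, len(pts1))

    # Global scale estimate from the spread of all matches, used to judge
    # whether a clique's area ratio is plausible.
    global_ratio = hull_area(pts2) / max(hull_area(pts1), 1e-9)

    for i in range(len(pts1)):
        _, idx = tree.query(pts1[i], k=k)      # local clique in image 1
        a1, a2 = hull_area(pts1[idx]), hull_area(pts2[idx])
        if a1 < 1e-9:
            continue
        ratio = a2 / a1
        # Spatially consistent cliques should scale roughly like the scene.
        if abs(ratio - global_ratio) <= area_tol * global_ratio:
            keep[idx] = True                   # fold the clique into the kept set
    return keep
```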
Date of Award: 2014
Original language: English

Keywords

  • image matching
  • image processing
  • image registration
  • digital techniques
  • mathematical models
  • computer vision
