Invariant feature descriptors such as SIFT and GLOH have been demonstrated to be very robust for image matching and visual recognition. However, such descriptors are generally parameterised in very high-dimensional spaces, e.g., 128 dimensions in the case of SIFT. This limits the performance of feature matching techniques in terms of speed and scalability. Furthermore, these descriptors have traditionally been carefully hand-crafted by manually tuning many parameters. In this paper, we tackle both of these problems by formulating descriptor design as a non-parametric dimensionality reduction problem. In contrast to previous approaches that use only the global statistics of the inputs, we adopt a discriminative approach. Starting from a large training set of labelled match/non-match pairs, we pursue lower-dimensional embeddings that are optimised for their discriminative power. Extensive comparative experiments demonstrate that we can exceed the performance of current state-of-the-art techniques such as SIFT with far fewer dimensions, and with virtually no parameters to be tuned by hand.
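The core idea of learning a discriminative embedding from labelled match/non-match pairs can be illustrated with a simplified LDA-style formulation: find projection directions along which non-match descriptor differences are large relative to match differences. This is only a minimal sketch on synthetic data, not the paper's actual algorithm; the descriptor dimensions, noise model, and target dimensionality below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "descriptors": match pairs differ by small noise,
# non-match pairs are independent samples. (Illustrative data only.)
d, n = 16, 500
base = rng.normal(size=(n, d))
matches = [(x, x + 0.1 * rng.normal(size=d)) for x in base]
non_matches = [(rng.normal(size=d), rng.normal(size=d)) for _ in range(n)]

def scatter(pairs):
    """Scatter matrix of the pairwise difference vectors."""
    D = np.array([a - b for a, b in pairs])
    return D.T @ D / len(D)

S_match = scatter(matches)       # small along discriminative directions
S_non = scatter(non_matches)     # large along discriminative directions

# Generalized eigenproblem: maximise non-match spread relative to
# match spread (a Fisher-style discriminative criterion).
evals, evecs = np.linalg.eig(np.linalg.solve(S_match, S_non))
order = np.argsort(evals.real)[::-1]
W = evecs.real[:, order[:8]]     # keep the top 8 of 16 dimensions

# Descriptors are then matched by nearest neighbour in the
# lower-dimensional embedded space.
embedded = base @ W
print(embedded.shape)
```

In this toy setting the top generalized eigenvalues are much larger than 1, confirming that the learned directions separate match from non-match differences; the paper's contribution is to learn such embeddings non-parametrically at scale on real descriptor data.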
|Publication status|Published - Oct 2007|
|Event|ICCV 2007: IEEE 11th International Conference on Computer Vision, 2007, Rio de Janeiro|
|Duration|14 Oct 2007 → 21 Oct 2007|
Hua, G., Brown, M., & Winder, S. (2007). Discriminant Embedding for Local Image Descriptors. Paper presented at ICCV 2007: IEEE 11th International Conference on Computer Vision, 2007, Rio de Janeiro. https://doi.org/10.1109/ICCV.2007.4408857