Comparison of feature-based algorithms for large-scale satellite image matching

Document Type: Research Paper

Authors

Faculty of Electrical and Computer Engineering, Malek Ashtar University of Technology, Tehran, Iran.

Abstract

Using different algorithms to extract, describe, and match features requires knowing their capabilities and weaknesses in various applications. Evaluating these algorithms and understanding their performance characteristics is therefore a fundamental need. This article examines classical local feature extraction and description algorithms for large-scale satellite image matching. Eight algorithms, SIFT, SURF, MinEigen, MSER, Harris, FAST, BRISK, and KAZE, were implemented, and the results of their evaluation and comparison are presented on two types of satellite images. Previous studies have compared local feature algorithms for satellite image matching; this article differs from those comparisons in the type of images used: both the reference and query images are large-scale, and the query image covers only a small part of the reference image. The experiments were conducted under three criteria: time, repeatability, and accuracy. The results showed that SURF was the fastest algorithm, that SURF ranked first in repeatability, and that KAZE ranked first in accuracy.
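The repeatability criterion used in the comparison can be illustrated with a minimal sketch: the fraction of keypoints detected in the reference image that are re-detected at the corresponding location in the query image, given the known ground-truth mapping between the two images. The function name, the tolerance value, and the normalization by the smaller keypoint count are illustrative assumptions, not the authors' implementation.

```python
import math

def repeatability(kps_ref, kps_query, transform, tol=1.5):
    """Fraction of reference keypoints re-detected in the query image.

    kps_ref, kps_query: lists of (x, y) keypoint coordinates.
    transform: function mapping a reference point into query-image
               coordinates (the known ground-truth mapping).
    tol: pixel distance within which two detections count as the
         same physical point (an assumed threshold).
    """
    matched = 0
    for (x, y) in kps_ref:
        tx, ty = transform((x, y))
        # Count the keypoint as repeated if any query detection
        # lies within `tol` pixels of its mapped position.
        if any(math.hypot(tx - qx, ty - qy) <= tol for (qx, qy) in kps_query):
            matched += 1
    return matched / min(len(kps_ref), len(kps_query))

# Toy example: the query image is the reference shifted by (10, 5).
shift = lambda p: (p[0] + 10, p[1] + 5)
ref = [(0, 0), (20, 30), (50, 50)]
qry = [(10, 5), (30, 35), (99, 99)]
print(repeatability(ref, qry, shift))  # 2 of 3 keypoints repeated
```

In this toy example two of the three reference keypoints reappear within tolerance after the shift, giving a repeatability of 2/3.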

References


  • [1] A. Alahi, R. Ortiz, and P. Vandergheynst, FREAK: Fast Retina Keypoint, IEEE Conference on Computer Vision and Pattern Recognition, (2012), 510–517.
  • [2] M. Agrawal, K. Konolige, and M. R. Blas, CenSurE: Center Surround Extremas for Realtime Feature Detection and Matching, Computer Vision – ECCV, 5305 (2008), 102–115.
  • [3] P. F. Alcantarilla, A. Bartoli, and A. J. Davison, KAZE Features, Computer Vision – ECCV, 7577 (2012), 214–227.
  • [4] H. Bay, T. Tuytelaars, and L. Van Gool, SURF: Speeded Up Robust Features, Computer Vision – ECCV, 3951 (2006), 404–417.
  • [5] S. Belongie, J. Malik, and J. Puzicha, Shape matching and object recognition using shape contexts, IEEE Trans. Pattern Anal. Machine Intell., 24(4) (2002), 509–522.
  • [6] M. Calonder, V. Lepetit, M. Ozuysal, T. Trzcinski, C. Strecha, and P. Fua, BRIEF: Computing a Local Binary Descriptor Very Fast, IEEE Trans. Pattern Anal. Mach. Intell., 34(7) (2012), 1281–1298.
  • [7] M. Dastanpour, Rapid Image Matching in Aerial and Satellite Images, 2014, Master’s Thesis, University of Isfahan, Isfahan.
  • [8] M. Ebrahimi and W. W. Mayol-Cuevas, SUSurE: Speeded Up Surround Extrema feature detector and descriptor for realtime applications, IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, (2009), 9–14.
  • [9] C. Harris and M. Stephens, A Combined Corner and Edge Detector, Proceedings of the Alvey Vision Conference, (1988), 23.1–23.6.
  • [10] A. E. Johnson and M. Hebert, Using spin images for efficient object recognition in cluttered 3D scenes, IEEE Trans. Pattern Anal. Machine Intell., 21(5) (1999), 433–449.
  • [11] Y. Ke and R. Sukthankar, PCA-SIFT: a more distinctive representation for local image descriptors, Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, (2004), 506–513.
  • [12] S. Leutenegger, M. Chli, and R. Y. Siegwart, BRISK: Binary Robust invariant scalable keypoints, International Conference on Computer Vision, (2011), 2548–2555.
  • [13] D. G. Lowe, Distinctive Image Features from Scale-Invariant Keypoints, International Journal of Computer Vision, 60(2) (2004), 91–110.
  • [14] J. Matas, O. Chum, M. Urban, and T. Pajdla, Robust wide-baseline stereo from maximally stable extremal regions, Image and Vision Computing, 22(10) (2004), 761–767.
  • [15] S. Na, W. G. Oh, and D. S. Jeong, A Frame-Based Video Signature Method for Very Quick Video Identification and Location, ETRI Journal, 35(2) (2013), 281–291.
  • [16] T. Ojala, M. Pietikainen, and T. Maenpaa, Multiresolution gray-scale and rotation invariant texture classification with local binary patterns, IEEE Trans. Pattern Anal. Machine Intell., 24(7) (2002), 971–987.
  • [17] E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, ORB: An efficient alternative to SIFT or SURF, International Conference on Computer Vision, (2011), 2564–2571.
  • [18] A. Sadaqat and H. Ebadi, Performance evaluation of local descriptors in satellite images, Remote Sensing of Iran, 7(4) (2016), 61–84.
  • [19] E. Shechtman and M. Irani, Matching Local Self-Similarities across Images and Videos, IEEE Conference on Computer Vision and Pattern Recognition, (2007), 1–8.
  • [20] J. Shi and C. Tomasi, Good features to track, Proceedings of IEEE Conference on Computer Vision and Pattern Recognition CVPR-94, (1994), 593–600.
  • [21] E. Tola, V. Lepetit, and P. Fua, DAISY: An Efficient Dense Descriptor Applied to Wide-Baseline Stereo, IEEE Trans. Pattern Anal. Mach. Intell., 32(5) (2010), 815–830.
  • [22] M. Trajković and M. Hedley, Fast corner detection, Image and Vision Computing, 16(2) (1998), 75–87.
  • [23] T. Tuytelaars and K. Mikolajczyk, Local Invariant Feature Detectors: A Survey, FNT in Computer Graphics and Vision, 3(3) (2007), 177–280.
  • [24] G. Wang, Z. Wang, Y. Chen, and W. Zhao, Robust point matching method for multimodal retinal image registration, Biomedical Signal Processing and Control, 19 (2015), 68–76.
  • [25] Z. Wang, B. Fan, and F. Wu, Local Intensity Order Pattern for feature description, International Conference on Computer Vision, (2011), 603–610.
  • [26] H. Yang and Q. Wang, A novel local feature descriptor for image matching, IEEE International Conference on Multimedia and Expo, (2008), 1405–1408.