Abstract
In this paper, we propose a novel hierarchical framework that combines motion and feature information to register infrared-visible videos of nearly planar scenes. In contrast to previous approaches, which directly use feature matching to estimate the global homography, our framework adds a coarse registration stage that estimates scale and rotation from the motion vectors of targets before matching. In the precise registration stage, which is based on keypoint matching, the estimated scale and rotation are used to re-locate the targets and keypoints, eliminating their impact on matching. To match the keypoints strictly, we first improve matching quality by combining normalized location descriptors with descriptors generated from the histogram of edge orientations; we then remove most mismatches by counting the matching directions of the correspondences. We evaluated our framework on a public dataset, on which it outperformed two recently proposed state-of-the-art global registration methods in almost all of the tested videos.
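The two-stage idea described above (coarse re-location using the motion-based scale and rotation estimate, keypoint matching with edge-orientation histograms, direction-based mismatch rejection, and a final global homography) can be outlined as a minimal OpenCV/NumPy sketch. This is not the authors' implementation: the corner detector, patch size, descriptor details and the simple direction-voting filter below are illustrative stand-ins, and the coarse scale/rotation estimate is assumed to be supplied by the motion-based stage.

```python
import cv2
import numpy as np

def edge_orientation_descriptor(gray, pt, patch=16, bins=8):
    """Histogram of gradient (edge) orientations in a patch around pt (illustrative)."""
    x, y = int(pt[0]), int(pt[1])
    h = patch // 2
    roi = gray[max(y - h, 0):y + h, max(x - h, 0):x + h].astype(np.float32)
    gx = cv2.Sobel(roi, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(roi, cv2.CV_32F, 0, 1)
    mag = np.sqrt(gx * gx + gy * gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)            # orientation in [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    return hist / (np.linalg.norm(hist) + 1e-8)

def register(ir_frame, vis_frame, scale, rotation_deg):
    """Coarse re-location with (scale, rotation), then keypoint-based homography.
    Both frames are assumed to be single-channel 8-bit images."""
    # 1. Re-locate the infrared frame with the coarse scale/rotation estimate,
    #    so precise matching only has to recover the remaining alignment.
    h, w = ir_frame.shape[:2]
    A = cv2.getRotationMatrix2D((w / 2, h / 2), rotation_deg, scale)
    ir_aligned = cv2.warpAffine(ir_frame, A, (w, h))

    # 2. Detect corners in both modalities and describe them by edge-orientation
    #    histograms, which are less sensitive to the IR/visible intensity gap.
    def corners(img):
        pts = cv2.goodFeaturesToTrack(img, maxCorners=300,
                                      qualityLevel=0.01, minDistance=8)
        return np.empty((0, 2)) if pts is None else pts.reshape(-1, 2)

    ir_pts, vis_pts = corners(ir_aligned), corners(vis_frame)
    if len(ir_pts) == 0 or len(vis_pts) == 0:
        return None
    ir_desc = np.array([edge_orientation_descriptor(ir_aligned, p) for p in ir_pts])
    vis_desc = np.array([edge_orientation_descriptor(vis_frame, p) for p in vis_pts])

    # 3. Nearest-neighbour matching on the descriptors.
    matches = []
    for i, d in enumerate(ir_desc):
        j = int(np.argmin(np.linalg.norm(vis_desc - d, axis=1)))
        matches.append((ir_pts[i], vis_pts[j]))

    # 4. Reject mismatches whose displacement direction disagrees with the
    #    dominant matching direction (a crude direction-voting scheme).
    dirs = np.array([np.arctan2(q[1] - p[1], q[0] - p[0]) for p, q in matches])
    hist, edges = np.histogram(dirs, bins=18, range=(-np.pi, np.pi))
    k = int(np.argmax(hist))
    keep = (dirs >= edges[k]) & (dirs < edges[k + 1])
    src = np.float32([p for (p, _), ok in zip(matches, keep) if ok])
    dst = np.float32([q for (_, q), ok in zip(matches, keep) if ok])

    # 5. Fit the global homography on the surviving correspondences.
    if len(src) >= 4:
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        return H
    return None
```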
| Original language | English |
| --- | --- |
| Article number | 384 |
| Journal | Sensors |
| Volume | 17 |
| Issue number | 2 |
| DOIs | |
| Publication status | Published - 16 Feb 2017 |
Keywords
- Edge orientation
- Infrared-visible registration
- Mismatch elimination
- Normalized location
- Objective motion vector