TY - JOUR
T1 - Generalized Maximum Likelihood Estimation for Perspective-n-Point Problem
AU - Zhan, Tian
AU - Xu, Chunfeng
AU - Zhang, Cheng
AU - Zhu, Ke
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - The Perspective-n-Point (PnP) problem has been widely studied in the literature and applied in various vision-based pose estimation scenarios. However, most existing methods ignore the anisotropic uncertainty of observations, as demonstrated on several real-world datasets in this letter. This oversight may lead to suboptimal and inaccurate estimates, particularly in the presence of noisy observations. To this end, we propose a generalized maximum likelihood PnP solver, named GMLPnP, that minimizes the determinant criterion by iterating the generalized least squares procedure to estimate the pose and uncertainty simultaneously. Furthermore, the proposed method is decoupled from the camera model. Results of synthetic and real experiments show that our method achieves better accuracy in common pose estimation scenarios: GMLPnP improves rotation/translation accuracy by 4.7%/2.0% on the TUM-RGBD dataset and by 18.6%/18.4% on the KITTI-360 dataset compared to the best baseline. It is also more accurate under very noisy observations in a vision-based UAV localization task, outperforming the best baseline by 34.4% in translation estimation accuracy.
AB - The Perspective-n-Point (PnP) problem has been widely studied in the literature and applied in various vision-based pose estimation scenarios. However, most existing methods ignore the anisotropic uncertainty of observations, as demonstrated on several real-world datasets in this letter. This oversight may lead to suboptimal and inaccurate estimates, particularly in the presence of noisy observations. To this end, we propose a generalized maximum likelihood PnP solver, named GMLPnP, that minimizes the determinant criterion by iterating the generalized least squares procedure to estimate the pose and uncertainty simultaneously. Furthermore, the proposed method is decoupled from the camera model. Results of synthetic and real experiments show that our method achieves better accuracy in common pose estimation scenarios: GMLPnP improves rotation/translation accuracy by 4.7%/2.0% on the TUM-RGBD dataset and by 18.6%/18.4% on the KITTI-360 dataset compared to the best baseline. It is also more accurate under very noisy observations in a vision-based UAV localization task, outperforming the best baseline by 34.4% in translation estimation accuracy.
KW - Localization
KW - probability and statistical methods
KW - vision-based navigation
UR - http://www.scopus.com/inward/record.url?scp=85214085081&partnerID=8YFLogxK
U2 - 10.1109/LRA.2024.3524907
DO - 10.1109/LRA.2024.3524907
M3 - Article
AN - SCOPUS:85214085081
SN - 2377-3766
VL - 10
SP - 1752
EP - 1759
JO - IEEE Robotics and Automation Letters
JF - IEEE Robotics and Automation Letters
IS - 2
ER -