TY - JOUR
T1 - CrossRay3D
T2 - Geometry and Distribution Guidance for Efficient Multimodal 3D Detection
AU - Yang, Huiming
AU - Liu, Wenzhuo
AU - Qiao, Yicheng
AU - Yang, Lei
AU - Zeng, Xianzhu
AU - Wang, Li
AU - Li, Zhiwei
AU - Zeng, Zijian
AU - Jiang, Zhiying
AU - Liu, Huaping
AU - Wang, Kunfeng
N1 - Publisher Copyright:
© 2000-2011 IEEE.
PY - 2026
Y1 - 2026
N2 - The sparse cross-modality detector offers more advantages than its counterpart, the Bird’s-Eye-View (BEV) detector, particularly in terms of adaptability for downstream tasks and computational cost savings. However, existing sparse detectors overlook the quality of token representation, leaving them with sub-optimal foreground quality and limited performance. In this paper, we identify that preserved geometric structure and class distribution are the keys to improving the performance of the sparse detector, and propose a Sparse Selector (SS). The core modules of SS are Ray-Aware Supervision (RAS), which preserves rich geometric information during the training stage, and Class-Balanced Supervision, which adaptively reweights the salience of class semantics, ensuring that tokens associated with small objects are retained during token sampling. SS thereby outperforms other sparse multi-modal detectors in token representation. Additionally, we design Ray Positional Encoding (Ray PE) to address the distribution differences between the LiDAR and image modalities. Finally, we integrate the aforementioned modules into an end-to-end sparse multi-modality detector, dubbed CrossRay3D. Experiments show that, on the challenging nuScenes benchmark, CrossRay3D achieves state-of-the-art performance with 72.4% mAP and 74.7% NDS, while running 1.84x faster than other leading methods. Moreover, CrossRay3D demonstrates strong robustness even in scenarios where LiDAR or camera data are partially or entirely missing.
AB - The sparse cross-modality detector offers more advantages than its counterpart, the Bird’s-Eye-View (BEV) detector, particularly in terms of adaptability for downstream tasks and computational cost savings. However, existing sparse detectors overlook the quality of token representation, leaving them with sub-optimal foreground quality and limited performance. In this paper, we identify that preserved geometric structure and class distribution are the keys to improving the performance of the sparse detector, and propose a Sparse Selector (SS). The core modules of SS are Ray-Aware Supervision (RAS), which preserves rich geometric information during the training stage, and Class-Balanced Supervision, which adaptively reweights the salience of class semantics, ensuring that tokens associated with small objects are retained during token sampling. SS thereby outperforms other sparse multi-modal detectors in token representation. Additionally, we design Ray Positional Encoding (Ray PE) to address the distribution differences between the LiDAR and image modalities. Finally, we integrate the aforementioned modules into an end-to-end sparse multi-modality detector, dubbed CrossRay3D. Experiments show that, on the challenging nuScenes benchmark, CrossRay3D achieves state-of-the-art performance with 72.4% mAP and 74.7% NDS, while running 1.84x faster than other leading methods. Moreover, CrossRay3D demonstrates strong robustness even in scenarios where LiDAR or camera data are partially or entirely missing.
KW - 3D object detection
KW - Computer vision
KW - sparse detector
UR - https://www.scopus.com/pages/publications/105027966386
U2 - 10.1109/TITS.2026.3651273
DO - 10.1109/TITS.2026.3651273
M3 - Article
AN - SCOPUS:105027966386
SN - 1524-9050
JO - IEEE Transactions on Intelligent Transportation Systems
JF - IEEE Transactions on Intelligent Transportation Systems
ER -