Cross-attention transformer enables image free target recognition in ghost imaging at ultra-low sampling rates

Ayesha Abbas, Jie Cao, Rehmat Iqbal, Qun Hao, Jianhua Liu, Shaotong Liu
Research output: Contribution to journal › Article › peer-review

Abstract

We propose a cross-attention mechanism combined with ghost imaging (GI) for target recognition directly from bucket measurements, eliminating the need for object imaging at extremely low sampling ratios (SRs). The sparsity of measurements at ultra-low SRs poses a challenge for traditional recognition methods. Inspired by the cross-attention mechanism's ability to highlight relevant features within data, our approach focuses on the most informative features of the bucket measurements while disregarding noisy and irrelevant information. Our cross-attention deep-learning model, trained on GI measurements at SRs of 0.2 and 0.1, achieves high recognition accuracies of 99% and 96%, respectively. The model remains robust under data augmentation and classifies multi-class targets without any imaging step, thereby eliminating reconstruction time at ultra-low SRs of 0.1 and 0.2. Experimental and simulated results validate the efficacy of our model, demonstrating the value of image-free target recognition in autonomous vehicles, remote sensing, medical diagnostics, defense, and industrial inspection, where efficient recognition from sparse data is critical.
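The core idea, recognition directly from a 1D vector of bucket measurements via cross-attention, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the embedding, the single learnable class query, and all dimensions here are illustrative assumptions, showing only how scaled dot-product cross-attention lets a query weight the most informative measurements before a classifier head.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    # Scaled dot-product cross-attention: queries attend over key/value pairs.
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)   # (n_q, n_kv)
    weights = softmax(scores, axis=-1)       # attention over measurements
    return weights @ values, weights         # context: (n_q, d)

rng = np.random.default_rng(0)
n_pixels = 64 * 64
sr = 0.1                                # ultra-low sampling ratio (hypothetical scene size)
n_meas = int(sr * n_pixels)             # number of bucket measurements

bucket = rng.normal(size=(n_meas, 1))   # simulated bucket values (stand-in for real GI data)
d_model = 32
W_embed = rng.normal(size=(1, d_model)) * 0.1
tokens = bucket @ W_embed               # embed each scalar measurement as a token

# One learnable class query (random here; trained jointly with the network in practice).
query = rng.normal(size=(1, d_model)) * 0.1
context, attn = cross_attention(query, tokens, tokens)
# attn highlights the measurements most relevant to the query;
# context would feed a small classification head to predict the target class.
```

In a trained model the attention weights suppress noisy, uninformative measurements, which is what allows classification to work at SRs as low as 0.1 without ever reconstructing an image.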

Original language: English
Pages (from-to): 4779-4795
Number of pages: 17
Journal: Optics Express
Volume: 34
Issue number: 3
DOIs
Publication status: Published - 9 Feb 2026
