Abstract
Deep learning-based automatic modulation recognition (AMR) techniques are particularly well-suited to the development of non-cooperative communication systems, providing a robust foundation for the automatic processing of complex communication signals. However, existing AMR models often fail to capture fine-grained temporal features and exhibit limited robustness against noisy or adversarial perturbations. To address these challenges, we introduce Tr-AMR, a robust transformer-based framework designed for high-accuracy AMR. The core of Tr-AMR is an enhanced architecture that replaces the transformer's original self-attention mechanism and feed-forward network (FFN) with gated attention units and an FFN built on gated linear units activated by Gaussian error linear units. Together with patch segmentation, position embeddings, and class embeddings, these components significantly enhance the model's ability to capture intricate temporal patterns embedded in signals and to extract global information, enabling accurate recognition of the modulation types of in-phase and quadrature (I/Q) signals. Validation experiments on multiple datasets demonstrate that Tr-AMR outperforms all baseline models across all metrics.
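To make the components named above (patch segmentation, class and position embeddings, gated attention units, and a GEGLU-style FFN) more concrete, the sketch below shows one way they could fit together in PyTorch. It is an illustrative assumption based only on the abstract: every module name, hyperparameter, the exact gating formulation, and the class count are placeholders, not the authors' published implementation.

```python
# Minimal, hypothetical PyTorch sketch of the architecture the abstract describes.
# Layer names, hyperparameters, and the gating formulation are assumptions for
# illustration, not the authors' published implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GatedAttention(nn.Module):
    """Self-attention whose output is modulated by a learned sigmoid gate
    (one plausible reading of 'gated attention units')."""
    def __init__(self, dim, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.gate = nn.Linear(dim, dim)

    def forward(self, x):
        out, _ = self.attn(x, x, x)
        return torch.sigmoid(self.gate(x)) * out  # element-wise gating of the attention output


class GegluFFN(nn.Module):
    """FFN with gated linear units activated by GELU (GEGLU), as named in the abstract."""
    def __init__(self, dim, hidden=256):
        super().__init__()
        self.proj_in = nn.Linear(dim, 2 * hidden)
        self.proj_out = nn.Linear(hidden, dim)

    def forward(self, x):
        value, gate = self.proj_in(x).chunk(2, dim=-1)
        return self.proj_out(value * F.gelu(gate))


class TrAMRBlock(nn.Module):
    """Pre-norm transformer block using the two replaced components."""
    def __init__(self, dim):
        super().__init__()
        self.norm1, self.norm2 = nn.LayerNorm(dim), nn.LayerNorm(dim)
        self.attn, self.ffn = GatedAttention(dim), GegluFFN(dim)

    def forward(self, x):
        x = x + self.attn(self.norm1(x))
        return x + self.ffn(self.norm2(x))


class TrAMR(nn.Module):
    """Patchify an I/Q sequence, prepend a class token, add position embeddings,
    run the blocks, and classify from the class token."""
    def __init__(self, seq_len=128, patch=8, dim=64, depth=4, num_classes=11):
        super().__init__()
        self.patch = patch
        num_patches = seq_len // patch
        self.patch_embed = nn.Linear(2 * patch, dim)               # 2 channels: I and Q
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))
        self.blocks = nn.Sequential(*[TrAMRBlock(dim) for _ in range(depth)])
        self.head = nn.Linear(dim, num_classes)

    def forward(self, iq):                                         # iq: (B, 2, seq_len)
        B, C, _ = iq.shape
        x = iq.unfold(2, self.patch, self.patch)                   # (B, 2, num_patches, patch)
        x = x.permute(0, 2, 1, 3).reshape(B, -1, C * self.patch)   # patch segmentation
        x = self.patch_embed(x)                                    # (B, num_patches, dim)
        cls = self.cls_token.expand(B, -1, -1)
        x = torch.cat([cls, x], dim=1) + self.pos_embed            # class + position embeddings
        x = self.blocks(x)
        return self.head(x[:, 0])                                  # logits over modulation types


logits = TrAMR()(torch.randn(4, 2, 128))   # a batch of 4 I/Q frames of length 128
```

The sigmoid gate here is only one simple way to gate attention; the paper's gated attention units may differ in detail, as may its patching and embedding choices.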
| Original language | English |
|---|---|
| Article number | e70447 |
| Journal | International Journal of Communication Systems |
| Volume | 39 |
| Issue number | 4 |
| DOIs | |
| Publication status | Published - 10 Mar 2026 |
| Externally published | Yes |
Keywords
- automatic modulation recognition
- deep learning
- signal classification
- temporal information extraction
- transformer
Fingerprint
Dive into the research topics of 'Tr-AMR: A Lightweight Transformer With Enhanced Temporal Modeling for Automatic Modulation Recognition'. Together they form a unique fingerprint.