TY - JOUR
T1 - Semisupervised Representation Contrastive Learning for Massive MIMO Fingerprint Positioning
AU - Gong, Xinrui
AU - Lu, An-An
AU - Fu, Xiao
AU - Liu, Xiaofeng
AU - Gao, Xiqi
AU - Xia, Xiang-Gen
N1 - Publisher Copyright:
© 2014 IEEE.
PY - 2024/4/15
Y1 - 2024/4/15
N2 - Wireless positioning is crucial to the Internet of Things (IoT) landscape, enhancing the precision and reliability of location-based services. This article addresses the challenges of existing massive multiple-input-multiple-output (MIMO) fingerprint positioning methods, which typically require accurate channel estimation and data sets labeled sample by sample. We propose a semisupervised representation contrastive learning technique that leverages a partially labeled data set of received pilot signals readily available at the base station. Our approach employs data augmentation to generate a large number of positive and negative sample pairs, which are then used to pretrain an encoder with a contrastive loss function in a self-supervised manner. During pretraining, the encoder learns to encode positive samples close to an anchor while keeping negative samples far away in the representation space. A fully connected layer is added on top of the encoder for position regression, and the encoder and regression networks are fine-tuned on a small labeled subset for the downstream positioning task. Simulation results demonstrate that our pretraining and fine-tuning approach outperforms previous methods, significantly improving positioning accuracy, removing the need for accurate channel estimation, and achieving labeling efficiency.
AB - Wireless positioning is crucial to the Internet of Things (IoT) landscape, enhancing the precision and reliability of location-based services. This article addresses the challenges of existing massive multiple-input-multiple-output (MIMO) fingerprint positioning methods, which typically require accurate channel estimation and data sets labeled sample by sample. We propose a semisupervised representation contrastive learning technique that leverages a partially labeled data set of received pilot signals readily available at the base station. Our approach employs data augmentation to generate a large number of positive and negative sample pairs, which are then used to pretrain an encoder with a contrastive loss function in a self-supervised manner. During pretraining, the encoder learns to encode positive samples close to an anchor while keeping negative samples far away in the representation space. A fully connected layer is added on top of the encoder for position regression, and the encoder and regression networks are fine-tuned on a small labeled subset for the downstream positioning task. Simulation results demonstrate that our pretraining and fine-tuning approach outperforms previous methods, significantly improving positioning accuracy, removing the need for accurate channel estimation, and achieving labeling efficiency.
KW - Contrastive learning (CL)
KW - massive multiple-input-multiple-output (MIMO)
KW - positioning
KW - pretrain and fine-tune
KW - semisupervised
UR - http://www.scopus.com/inward/record.url?scp=85181555949&partnerID=8YFLogxK
U2 - 10.1109/JIOT.2023.3344800
DO - 10.1109/JIOT.2023.3344800
M3 - Article
AN - SCOPUS:85181555949
SN - 2327-4662
VL - 11
SP - 14870
EP - 14885
JO - IEEE Internet of Things Journal
JF - IEEE Internet of Things Journal
IS - 8
ER -