TY - JOUR
T1 - MyoGPT: Augmenting and segmenting spatial muscle activation patterns in forearm using generative Pre-Trained Transformers
T2 - Biomedical Signal Processing and Control
AU - Chen, Wei
AU - Feng, Lihui
AU - Lu, Jihua
N1 - Publisher Copyright: © 2025 Elsevier Ltd
PY - 2025/7
Y1 - 2025/7
AB - Finger-muscle modeling for sEMG-based gesture interaction must balance resolution, deployability, and cost on the hardware side while also addressing cross-population compatibility and calibration complexity on the software side. We propose MyoGPT, which comprises a GPAT and an AT built on generative pre-trained transformers. The GPAT spatially augments the signal from a sparse, low-cost, easy-to-wear sEMG device, i.e., it enhances a 1-dimensional channel vector into a 2-dimensional sEMG pattern that provides richer information for the functional partitioning of muscles. Using the spatially augmented data, the AT then segments the muscle regions that drive finger movements, requiring only two gestures to complete calibration. Results show that the SSIM between the augmented sEMG pattern generated by the GPAT and the ground truth on the public dataset reaches 76.28 %, and the SSIM of the AT segmentation is 74.87 %. In addition, the model trained on the public dataset achieves an SSIM of 68.16 % on our self-developed 16-channel sEMG armband (with subjects not in the dataset), and the actual running time of 8.369 ms meets the real-time requirement.
KW - Finger tracking
KW - Human-machine interface
KW - Muscle segmentation
KW - Signal enhancement
KW - Surface electromyography
KW - Transformer
UR - https://www.scopus.com/pages/publications/85217656980
U2 - 10.1016/j.bspc.2025.107591
DO - 10.1016/j.bspc.2025.107591
M3 - Article
AN - SCOPUS:85217656980
SN - 1746-8094
VL - 105
JO - Biomedical Signal Processing and Control
JF - Biomedical Signal Processing and Control
M1 - 107591
ER -
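
A minimal sketch, assuming a scikit-image-based SSIM evaluation: the abstract above reports SSIM between GPAT-augmented sEMG patterns and high-density ground truth, and the code below illustrates how such a comparison between a generated 2-D activation map and its reference is conventionally computed. The 8 x 16 grid shape, normalised value range, and all variable names are hypothetical assumptions, not taken from the paper.

import numpy as np
from skimage.metrics import structural_similarity as ssim

rng = np.random.default_rng(0)

# Hypothetical high-density ground-truth activation map (8 x 16 electrode grid,
# intensities normalised to [0, 1]); shape and values are illustrative only.
ground_truth = rng.random((8, 16)).astype(np.float32)

# Stand-in for an augmented pattern: ground truth plus a small perturbation.
augmented = np.clip(
    ground_truth + 0.05 * rng.standard_normal((8, 16)).astype(np.float32),
    0.0, 1.0,
)

# SSIM over the full map; data_range must be given explicitly for float inputs.
score = ssim(ground_truth, augmented, data_range=1.0)
print(f"SSIM between augmented pattern and ground truth: {score:.4f}")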