MyoGPT: Augmenting and segmenting spatial muscle activation patterns in forearm using generative Pre-Trained Transformers

Wei Chen*, Lihui Feng, Jihua Lu

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Finger-muscle modeling for sEMG-based gesture interaction must balance resolution, deployment, and cost constraints on the hardware side while also addressing cross-population compatibility and calibration complexity on the software side. We propose MyoGPT, which comprises a GPAT and an AT built on generative pre-trained transformers. The GPAT spatially augments the signal from a sparse, low-cost, easy-to-wear sEMG device, enhancing a 1-dimensional channel vector into a 2-dimensional sEMG pattern that provides richer information for the functional partitioning of muscles. From the spatially augmented data, the AT then segments the muscle regions that drive finger movements, requiring only two gestures to complete calibration. Results show that the SSIM between the augmented sEMG patterns generated by GPAT and the ground truth in the public dataset reaches 76.28 %, and the SSIM of the AT segmentation is 74.87 %. In addition, the model trained on the public dataset achieves an SSIM of 68.16 % on our self-developed 16-channel sEMG armband (with subjects not included in the dataset), and the actual running time is 8.369 ms, which meets the real-time requirement.
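The reported SSIM figures compare generated 2-dimensional activation patterns against high-density ground truth. Below is a minimal sketch of how such an evaluation could look, assuming a 16-channel sparse input (matching the armband in the abstract) and an illustrative 8×16 high-density target grid; the toy upsampling model, grid size, and function names are assumptions for illustration, not the paper's implementation.

```python
# Hypothetical sketch: upsample a sparse 16-channel sEMG frame to a 2-D
# activation map and score it with SSIM, the metric reported in the abstract.
# Shapes, model, and names are illustrative assumptions only.
import numpy as np
import torch
import torch.nn as nn
from skimage.metrics import structural_similarity as ssim

SPARSE_CHANNELS = 16      # armband channel count (from the abstract)
GRID_H, GRID_W = 8, 16    # assumed high-density target grid (illustrative)

class ToyAugmenter(nn.Module):
    """Stand-in for a GPAT-like module: maps a 1-D channel vector to a 2-D pattern."""
    def __init__(self) -> None:
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(SPARSE_CHANNELS, 256),
            nn.GELU(),
            nn.Linear(256, GRID_H * GRID_W),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # (batch, 16) -> (batch, GRID_H, GRID_W)
        return self.net(x).view(-1, GRID_H, GRID_W)

def ssim_percent(pred: np.ndarray, truth: np.ndarray) -> float:
    """Structural similarity between two activation maps, in percent."""
    rng = float(truth.max() - truth.min()) or 1.0
    return 100.0 * ssim(pred, truth, data_range=rng)

if __name__ == "__main__":
    model = ToyAugmenter().eval()
    sparse_frame = torch.rand(1, SPARSE_CHANNELS)     # one sparse sEMG sample
    with torch.no_grad():
        generated = model(sparse_frame)[0].numpy()
    ground_truth = np.random.rand(GRID_H, GRID_W)     # placeholder HD-sEMG map
    print(f"SSIM: {ssim_percent(generated, ground_truth):.2f} %")
```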

Original language: English
Article number: 107591
Journal: Biomedical Signal Processing and Control
Volume: 105
DOIs
Publication status: Published - Jul 2025

Keywords

  • Finger tracking
  • Human-machine interface
  • Muscle segmentation
  • Signal enhancement
  • Surface electromyography
  • Transformer
