Spatio-temporal Pattern Analysis for EEG Classification in Rapid Serial Visual Presentation Task

Bowen Li, Zhiwen Liu, Xiaorong Gao, Yanfei Lin

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

This study explores an algorithm of spatio-temporal pattern analysis for electroencephalography (EEG) classification in the rapid serial visual presentation (RSVP) task. The algorithm exploits spatial low-rank and temporal-frequency sparse priors to train supervised spatial and temporal filters. Discriminant features are extracted by the supervised spatio-temporal filters and classified by a support vector machine. EEG signals recorded from 12 subjects performing the RSVP task were used as training and testing data. The average true positive rate of classification is 79%, and the average false positive rate is only 3.4%. These results show that the proposed algorithm outperforms HDCA and SWFP in target detection.
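The pipeline the abstract describes — project each EEG epoch through spatial and temporal filters, then classify the resulting features with a support vector machine — can be sketched as follows. This is a minimal illustration only: the paper's filter-learning step (the low-rank and sparse priors) is not reproduced, random matrices stand in for the trained filters, and all dimensions, variable names, and the synthetic data are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Assumed dimensions: epochs x channels x time samples (not from the paper)
n_epochs, n_channels, n_samples = 60, 64, 128
n_spatial, n_temporal = 4, 6  # assumed numbers of spatial/temporal filters

X = rng.standard_normal((n_epochs, n_channels, n_samples))  # synthetic EEG epochs
y = rng.integers(0, 2, n_epochs)                            # target / non-target labels

# Stand-ins for the trained supervised filters
W = rng.standard_normal((n_spatial, n_channels))   # spatial filters (rows)
V = rng.standard_normal((n_temporal, n_samples))   # temporal filters (rows)

# Spatio-temporal projection: (W @ epoch) filters across channels,
# then @ V.T filters across time, giving n_spatial x n_temporal features per epoch
features = np.stack([(W @ epoch) @ V.T for epoch in X]).reshape(n_epochs, -1)

# Classify the extracted features with an SVM, as in the abstract
clf = SVC(kernel="linear").fit(features, y)
print(features.shape, clf.score(features, y))
```

With real data, `W` and `V` would come from the supervised training stage with the low-rank and sparse priors, and the SVM would be evaluated on held-out epochs rather than the training set.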

Original language: English
Title of host publication: ICBRA 2019 - Proceedings of 2019 6th International Conference on Bioinformatics Research and Applications
Publisher: Association for Computing Machinery
Pages: 91-95
Number of pages: 5
ISBN (Electronic): 9781450372183
DOIs
Publication status: Published - 19 Dec 2019
Event: 6th International Conference on Bioinformatics Research and Applications, ICBRA 2019 - Seoul, Korea, Republic of
Duration: 19 Dec 2019 - 21 Dec 2019

Publication series

Name: ACM International Conference Proceeding Series

Conference

Conference: 6th International Conference on Bioinformatics Research and Applications, ICBRA 2019
Country/Territory: Korea, Republic of
City: Seoul
Period: 19/12/19 - 21/12/19

Keywords

  • Discriminant features
  • EEG
  • Low-rank
  • Sparse prior
  • Spatio-temporal pattern
