Paper Title
Inter-subject Contrastive Learning for Subject Adaptive EEG-based Visual Recognition

Authors

Pilhyeon Lee, Sunhee Hwang, Jewook Lee, Minjung Shin, Seogkyu Jeon, Hyeran Byun

Abstract

This paper tackles the problem of subject adaptive EEG-based visual recognition. Its goal is to accurately predict the categories of visual stimuli based on EEG signals with only a handful of samples for the target subject during training. The key challenge is how to appropriately transfer the knowledge obtained from abundant data of source subjects to the subject of interest. To this end, we introduce a novel method that allows for learning subject-independent representation by increasing the similarity of features sharing the same class but coming from different subjects. With the dedicated sampling principle, our model effectively captures the common knowledge shared across different subjects, thereby achieving promising performance for the target subject even under harsh problem settings with limited data. Specifically, on the EEG-ImageNet40 benchmark, our model records the top-1 / top-3 test accuracy of 72.6% / 91.6% when using only five EEG samples per class for the target subject. Our code is available at https://github.com/DeepBCI/Deep-BCI/tree/master/1_Intelligent_BCI/Inter_Subject_Contrastive_Learning_for_EEG.
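The core idea in the abstract, pulling together features that share a class label but come from different subjects, can be sketched as a contrastive objective. The following NumPy implementation is an illustrative reconstruction, not the authors' code: the function name, the temperature value, and the exact normalization are assumptions; only the positive-pair rule (same class, different subject) comes from the abstract.

```python
import numpy as np

def inter_subject_contrastive_loss(features, labels, subjects, temperature=0.1):
    """Sketch of an inter-subject contrastive loss.

    Positives for each anchor are samples with the SAME class label
    but a DIFFERENT subject id, which encourages subject-independent
    representations. `features` is (N, D); `labels` and `subjects`
    are length-N integer arrays.
    """
    # L2-normalize embeddings so the dot product is cosine similarity.
    z = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = z @ z.T / temperature                      # (N, N) similarity logits
    n = len(z)
    eye = np.eye(n, dtype=bool)

    # Positive mask: same class AND different subject, excluding self-pairs.
    same_class = labels[:, None] == labels[None, :]
    diff_subject = subjects[:, None] != subjects[None, :]
    pos_mask = same_class & diff_subject & ~eye

    # Numerically stable log-softmax over all other samples per anchor.
    logits = np.where(eye, -1e9, sim)                # mask out self-similarity
    m = logits.max(axis=1, keepdims=True)
    log_prob = logits - (m + np.log(np.exp(logits - m).sum(axis=1, keepdims=True)))

    # Average negative log-probability over each anchor's positive pairs.
    pos_count = np.maximum(pos_mask.sum(axis=1), 1)
    per_anchor = -np.where(pos_mask, log_prob, 0.0).sum(axis=1) / pos_count

    # Only anchors that actually have cross-subject positives contribute.
    valid = pos_mask.any(axis=1)
    return per_anchor[valid].mean()
```

In practice such a loss would be combined with a standard classification loss and the dedicated sampling principle mentioned in the abstract, i.e. building mini-batches that contain same-class samples from multiple source subjects so cross-subject positive pairs exist.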