Brain-computer interfaces (BCIs) are used in many applications, including medical, environmental, educational, economic, and social fields. To achieve high BCI classification performance, the training set must contain discriminative, high-quality subject variations. Such variations also drive the transferability of training data for generalization purposes. However, if the test subject differs substantially from the variations in the training set, BCI performance may suffer. Previously, this problem was addressed by introducing transfer learning, in the context of spatial filtering on small training sets, to create high-quality variations within training subjects. In this study, however, we show that transfer learning can also compress the training data to an optimally compact size while improving training performance. The proposed transfer learning framework targets motor imagery BCI-EEG and uses the CUR matrix decomposition algorithm, which decomposes the data into two components: C, each subject's EEG signal, and UR, a common matrix derived from historical EEG data. The method is considered a transfer learning process because it utilizes historical data as a common matrix for classification. The framework is implemented in a BCI system with Common Spatial Patterns (CSP) as the feature extractor and an Extreme Learning Machine (ELM) as the classifier; this combination increases accuracy by up to 26% while compressing the training database by 83%.
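To make the decomposition concrete, the following is a minimal NumPy sketch of a basic CUR factorization, where C is built from actual columns of the data matrix and R from actual rows, with the small linking matrix U solved in closed form. The matrix A, the sampling scheme, and the dimensions are hypothetical stand-ins for real EEG features, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical low-rank matrix standing in for real BCI-EEG feature data.
A = rng.standard_normal((40, 5)) @ rng.standard_normal((5, 30))

# Basic CUR decomposition: sample actual columns (C) and rows (R) of A,
# then solve for the small linking matrix U = C^+ A R^+.
k = 8  # number of sampled columns/rows (here >= rank(A), so recovery is exact)
col_idx = rng.choice(A.shape[1], size=k, replace=False)
row_idx = rng.choice(A.shape[0], size=k, replace=False)

C = A[:, col_idx]                             # columns of A (compact, interpretable)
R = A[row_idx, :]                             # rows of A
U = np.linalg.pinv(C) @ A @ np.linalg.pinv(R)

A_hat = C @ U @ R                             # CUR reconstruction of A
err = np.linalg.norm(A - A_hat)
print(f"reconstruction error: {err:.2e}")
```

Because C and R are actual columns and rows of the data, they stay interpretable (e.g. as subject-specific signals), which is the property CUR offers over SVD-style factorizations; compression comes from storing only C, U, and R instead of the full matrix.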
Recently, the extreme learning machine (ELM) has attracted increasing attention due to its successful applications in classification, regression, and ranking. Normally, the desired output of a learning system using these machine learning techniques is a simple scalar. However, many machine learning applications require a more complex output than a simple scalar one. Structured output learning is therefore used for such applications, where the system is trained to predict structured outputs instead of simple ones. Previously, the support vector machine (SVM) was introduced for structured output learning in various applications. However, from a machine learning point of view, ELM is known to offer better generalization performance than other learning techniques. In this study, we extend ELM to a more general framework that handles complex outputs, with simple outputs treated as special cases. Besides the good generalization property of ELM, the resulting model possesses a rich internal structure that reflects task-specific relations and constraints. The experimental results show that structured ELM achieves similar (for binary problems) or better (for multi-class problems) generalization performance compared to standard ELM. Moreover, as verified by the simulation results, structured ELM has comparable or better precision than structured SVM when tested on more complex outputs, such as the object localization problem on PASCAL VOC2006. An investigation of parameter selection is also presented and discussed for all problems.