Boosting Feature Extraction Performance from the Perspective of Representation Learning Efficiency
Abstract
Machine learning is renowned for its ability to process data automatically. Yet while the performance of state-of-the-art models in the most recent well-known learning frameworks has grown only slowly, their parameter counts and training complexity have risen almost unnoticed. Motivated by this situation, we propose two efficient methods that enhance, respectively, the automation of formerly manual tasks and the efficiency of data handling.

Emotion is one of the main psychological factors that affect human behaviour. Neural network models trained with electroencephalography (EEG)-based frequency features have been widely used to recognize human emotions accurately. However, utilizing EEG-based spatial information with the popular two-dimensional kernels of convolutional neural networks (CNNs) has rarely been explored in the extant literature. We address this gap by proposing an EEG-based spatial-frequency framework for recognizing human emotion that requires fewer manually tuned parameters while generalizing better. Specifically, we propose a two-stream hierarchical network framework that learns features from two networks, one trained in the frequency domain and the other in the spatial domain. Our approach is extensively validated on the SEED, SEED-V, and DREAMER datasets. The experiments directly support our motivation: utilizing the two-stream domain features significantly improves the final recognition performance. The results also show that the proposed spatial feature extraction method obtains valuable spatial features with little human interaction.

Image
classification is a classic problem in deep learning. As state-of-the-art models became deeper and wider, fewer studies were devoted to utilizing data efficiently. Inspired by contrastive self-supervised learning frameworks, we propose a supervised multi-label contrastive learning framework that further improves the backbone model's performance. We verify our approach on the CIFAR-10 and CIFAR-100 datasets. With similar hyperparameters and numbers of parameters, our approach outperforms both the backbone and self-supervised learning models.
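To make the two-stream idea concrete, the following is a minimal NumPy sketch of fusing a frequency-domain and a spatial-domain feature vector before classification. It is an illustration only, not the thesis architecture: all layer sizes, weights, and the choice of 62 channels x 5 bands and an 8x9 electrode map (typical of SEED-style setups) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def extract(x, w):
    # One hidden layer with ReLU, standing in for a full stream network.
    return np.maximum(x @ w, 0.0)

# Hypothetical inputs: flattened 62-channel x 5-band features for the
# frequency stream, and a flattened 8x9 electrode map for the spatial
# stream; 3 emotion classes.
x_freq = rng.standard_normal(62 * 5)
x_spat = rng.standard_normal(8 * 9)

w_freq = rng.standard_normal((62 * 5, 32))
w_spat = rng.standard_normal((8 * 9, 32))

# Two-stream hierarchy: learn each domain separately, then fuse.
f = extract(x_freq, w_freq)
s = extract(x_spat, w_spat)
fused = np.concatenate([f, s])          # joint spatial-frequency feature

# Linear classifier head with a softmax over the fused feature.
w_cls = rng.standard_normal((64, 3))
logits = fused @ w_cls
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(probs.shape)  # (3,)
```

The point of the sketch is the fusion step: each stream produces its own embedding, and only the concatenated spatial-frequency feature is passed to the classifier, so neither domain's features need hand-crafted combination rules.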
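As background for the second contribution, here is a minimal NumPy sketch of a supervised contrastive loss in its standard single-label form (in the style of SupCon); the thesis's multi-label variant is not reproduced here, and the function name and temperature value are illustrative assumptions.

```python
import numpy as np

def supcon_loss(z, labels, tau=0.1):
    """Supervised contrastive loss: embeddings sharing a label are
    pulled together, all other pairs are pushed apart."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize
    sim = z @ z.T / tau                                # scaled similarities
    n = len(labels)
    eye = np.eye(n, dtype=bool)
    # log-softmax over all other samples for each anchor (self excluded)
    sim_masked = np.where(eye, -np.inf, sim)
    log_prob = sim_masked - np.log(np.exp(sim_masked).sum(axis=1, keepdims=True))
    pos = (labels[:, None] == labels[None, :]) & ~eye  # positive pairs
    # mean log-probability of positives per anchor, negated and averaged
    per_anchor = np.where(pos, log_prob, 0.0).sum(axis=1) / np.maximum(pos.sum(axis=1), 1)
    return -per_anchor.mean()
```

For example, with embeddings clustered by class, assigning the true labels yields a lower loss than assigning labels that pair dissimilar samples, which is exactly the supervision signal a contrastive framework adds on top of a backbone.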