Computer Science Department Thesis Defense - Haojin Deng

Event Date: 
Monday, October 24, 2022 - 2:00pm to 3:30pm EDT
Event Location: 
online
Event Contact Name: 
Rachael Wang

Please join the Computer Science Department for the upcoming thesis defense:

Presenter: Haojin Deng

Thesis title: Boosting Feature Extraction Performance on the aspect of Representation Learning Efficiency

Abstract: Machine learning is known for its automation and efficient handling of data. While state-of-the-art performance in recent well-known frameworks has grown only slowly, the number of parameters and the training complexity have risen sharply. Motivated by this situation, we propose two efficient methods to enhance automation and data-handling efficiency, respectively.

Emotion is one of the main psychological factors that affect human behaviour. Neural network models trained on electroencephalography (EEG)-based frequency features have been widely used to recognize human emotions accurately. However, utilizing EEG-based spatial information with the popular two-dimensional kernels of convolutional neural networks (CNNs) has rarely been explored in the literature. We address these challenges by proposing an EEG-based spatial-frequency framework for recognizing human emotion, which requires fewer manually tuned parameters and achieves better generalization performance. Specifically, we propose a two-stream hierarchical network framework that learns features from two networks, one trained on the frequency domain and the other on the spatial domain. Our approach is extensively validated on the SEED, SEED-V, and DREAMER datasets. The experiments directly support our motivation: utilizing the two-stream domain features significantly improves the final recognition performance. The experimental results also show that the proposed spatial feature extraction method obtains valuable spatial features with less human interaction.

Image classification is a classic problem in deep learning. As state-of-the-art models have become deeper and broader, fewer studies have been devoted to utilizing data efficiently. Inspired by contrastive self-supervised learning frameworks, we propose a supervised multi-label contrastive learning framework to further improve the backbone model's performance. We verify our approach on the CIFAR-10 and CIFAR-100 datasets. With similar hyperparameters and numbers of parameters, our approach outperforms both the backbone and self-supervised learning models.
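
For readers unfamiliar with the two techniques mentioned above, the brief PyTorch sketches below illustrate the general ideas. They are minimal illustrations only, not the thesis's actual architecture or loss: all module names, layer sizes, the electrode-grid shape, the fusion strategy, and the hyperparameters are assumptions inferred from the high-level description in the abstract.

First, a minimal two-stream model that fuses a frequency-domain stream with a 2-D convolutional spatial-domain stream and classifies an emotion label from the fused features:

```python
# Minimal sketch of a two-stream frequency/spatial EEG emotion classifier.
# Layer sizes, the 62-electrode x 5-band input, and the 9x9 scalp grid are
# illustrative assumptions, not the architecture defended in the thesis.
import torch
import torch.nn as nn


class FrequencyStream(nn.Module):
    """Learns from per-channel frequency-band features (e.g. differential entropy)."""

    def __init__(self, in_features: int = 62 * 5, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )

    def forward(self, x):          # x: (batch, in_features)
        return self.net(x)


class SpatialStream(nn.Module):
    """Applies 2-D convolutions over an electrode-grid representation of the scalp."""

    def __init__(self, hidden: int = 128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, hidden)

    def forward(self, x):          # x: (batch, 1, height, width) electrode grid
        return self.fc(self.conv(x).flatten(1))


class TwoStreamEmotionNet(nn.Module):
    """Concatenates the two feature streams and classifies the emotion label."""

    def __init__(self, num_classes: int = 3, hidden: int = 128):
        super().__init__()
        self.freq = FrequencyStream(hidden=hidden)
        self.spat = SpatialStream(hidden=hidden)
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, x_freq, x_grid):
        fused = torch.cat([self.freq(x_freq), self.spat(x_grid)], dim=1)
        return self.head(fused)


# Example forward pass with dummy tensors (62 electrodes x 5 bands, 9x9 grid).
model = TwoStreamEmotionNet()
logits = model(torch.randn(4, 62 * 5), torch.randn(4, 1, 9, 9))
print(logits.shape)  # torch.Size([4, 3])
```

Second, a compact supervised contrastive loss in the spirit of the SupCon loss, which uses label information to treat all same-class samples in a batch as positives; the thesis's multi-label formulation may differ from this single-label sketch:

```python
# Generic supervised contrastive loss sketch (single-label case); the exact
# formulation used in the thesis is not specified in the abstract.
import torch
import torch.nn.functional as F


def supervised_contrastive_loss(features, labels, temperature: float = 0.1):
    """features: (batch, dim) embeddings from a projection head; labels: (batch,) ints."""
    features = F.normalize(features, dim=1)
    sim = features @ features.T / temperature                 # pairwise similarities
    # Exclude self-similarity on the diagonal.
    self_mask = torch.eye(len(labels), dtype=torch.bool, device=features.device)
    sim = sim.masked_fill(self_mask, float("-inf"))
    # Positives: other samples in the batch sharing the same label.
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # Mean log-probability over positives, averaged over anchors with >= 1 positive.
    pos_counts = pos_mask.sum(1).clamp(min=1)
    loss = -(log_prob.masked_fill(~pos_mask, 0.0)).sum(1) / pos_counts
    has_pos = pos_mask.sum(1) > 0
    return loss[has_pos].mean()


# Usage with dummy embeddings and labels that guarantee same-class positives.
feats = torch.randn(8, 128)
labels = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
print(supervised_contrastive_loss(feats, labels).item())
```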


Committee Members:
Dr. Yimin Yang (supervisor, committee chair), Dr. Ruizhong Wei (co-supervisor), Dr. Amin Safaei, Dr. Thangarajah Akilan (Software Engineering)

Please contact grad.compsci@lakeheadu.ca for the Zoom link.
Everyone is welcome.