Signal processing is a broad field concerned with the analysis, synthesis, and modification of signals. Signals are functions that convey information about a particular phenomenon, such as sound, images, or biological measurements. Signal processing techniques are used, for example, to improve signal transmission, storage efficiency, and signal quality, and to emphasise or detect components of interest in a measured signal.

Application Fields

  • Audio signal processing — for electrical signals representing sound, such as speech or music
  • Digital signal processing — for processing sampled and quantized signals, for example in computers
  • Speech signal processing — for processing and interpreting spoken words
  • Image processing — in digital cameras, computers, and various imaging systems
  • Video processing — for interpreting moving pictures
  • Wireless communication — waveform generation, demodulation, filtering, equalization
  • Array processing — for processing signals from arrays of sensors
  • Financial signal processing — analyzing financial data using signal processing techniques, especially for prediction purposes
  • Feature extraction — such as image understanding and speech recognition
  • Quality improvement — such as noise reduction, image enhancement, and echo cancellation
  • Source coding — including audio compression, image compression, and video compression
  • Genomic signal processing — for analyzing genomic data

Application to EEG Signal Processing

EEG (electroencephalography) signals are electrical signals generated by the brain. Different mental activities give rise to different patterns in these signals.

In Brain-Computer Interface design, EEG signal processing aims at translating raw EEG signals into the class of these signals, i.e., into the estimated mental state of the user. This translation is usually achieved using a pattern recognition approach, whose two main steps are the following:

  • Feature Extraction: The first signal processing step, known as “feature extraction”, aims at describing the EEG signals by (ideally) a few relevant values called “features”. Such features should capture the information embedded in the EEG signals that is relevant for identifying the target mental states, while rejecting noise and other irrelevant information. The extracted features are usually arranged into a vector, known as a feature vector.
  • Classification: The second step, denoted as “classification”, assigns a class to a set of features (the feature vector) extracted from the signals. This class corresponds to the kind of mental state identified. This step can also be denoted as “feature translation”. Classification algorithms are known as “classifiers”. A minimal code sketch of these two steps follows this list.
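As a minimal sketch of these two steps, assuming Python with NumPy and a scikit-learn-style classifier (the channels x samples trial layout and the log-variance feature are illustrative assumptions, not a prescribed design):

```python
import numpy as np

def extract_features(trial):
    """Feature extraction: reduce one raw EEG trial (a channels x samples
    array) to a short feature vector. Log-variance per channel is used
    here purely as a simple illustrative feature."""
    return np.log(np.var(trial, axis=1))

def classify(feature_vector, classifier):
    """Classification: map a feature vector to an estimated mental state
    using an already-trained scikit-learn-style classifier."""
    return classifier.predict(feature_vector.reshape(1, -1))[0]
```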

As an example, let us consider a Motor Imagery (MI)-based BCI, i.e., a BCI that can recognize imagined movements, such as imagined left-hand or right-hand movements. In this case, the two mental states to identify are imagined left-hand movement and imagined right-hand movement. To identify them from EEG signals, typical features are band power features, i.e., the power of the EEG signal in a specific frequency band. For MI, band power features are usually extracted in the µ (about 8−12 Hz) and β (about 16−24 Hz) frequency bands, for electrodes located over the motor cortex areas of the brain (around locations C3 and C4 for right and left hand movements, respectively). Such features are then typically classified using a Linear Discriminant Analysis (LDA) classifier.
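The band power features described above could look as follows in Python; this is a sketch under stated assumptions (SciPy's butter/filtfilt for band-pass filtering, a hypothetical 250 Hz sampling rate, and trial rows corresponding to electrodes C3 and C4):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def band_power(trial, band, fs):
    """Estimate the power of each channel in a frequency band:
    band-pass filter the trial (channels x samples), then take the
    log-variance of the filtered signal."""
    b, a = butter(4, band, btype='bandpass', fs=fs)
    filtered = filtfilt(b, a, trial, axis=1)
    return np.log(np.var(filtered, axis=1))

def mi_feature_vector(trial, fs=250.0):  # 250 Hz is an assumed sampling rate
    """Motor imagery feature vector: mu (8-12 Hz) and beta (16-24 Hz)
    band power, assuming the trial rows are electrodes C3 and C4."""
    mu = band_power(trial, (8.0, 12.0), fs)
    beta = band_power(trial, (16.0, 24.0), fs)
    return np.concatenate([mu, beta])  # 4 features: {C3, C4} x {mu, beta}
```

The log transform brings the band power values closer to a Gaussian distribution, which matches the assumption underlying LDA.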

It should be mentioned that EEG signal processing is often built using machine learning. This means the classifier and/or the features are automatically tuned, generally for each user, based on examples of EEG signals from that user. These example EEG signals are called a training set, and they are labeled with the class they belong to (i.e., the corresponding mental state). Based on these training examples, the classifier is tuned so as to recognize the class of the training EEG signals as accurately as possible. Features can also be tuned in such a way, e.g., by automatically selecting the most relevant channels or frequency bands for distinguishing the different mental states. Designing a BCI based on machine learning (as most current BCIs are) therefore consists of two phases:

• Calibration (a.k.a., training) phase: This consists of (1) acquiring training EEG signals (i.e., training examples) and (2) optimizing the EEG signal processing pipeline by tuning the feature parameters and/or training the classifier.

• Use (a.k.a., test) phase: This consists of using the model (features and classifier) obtained during the calibration phase to recognize the mental state of the user from previously unseen EEG signals, in order to operate the BCI. A sketch of both phases follows.
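As a minimal illustration of these two phases using scikit-learn's LDA (the random arrays below are placeholders standing in for real band power feature vectors and their labels; the shapes are assumptions for illustration only):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Calibration (training) phase: labeled training examples from one user.
# Placeholders stand in for real feature vectors and their class labels
# (e.g., 0 = imagined left-hand movement, 1 = imagined right-hand movement).
X_train = rng.normal(size=(40, 4))       # 40 trials x 4 band power features
y_train = rng.integers(0, 2, size=40)    # class label of each training trial

clf = LinearDiscriminantAnalysis()
clf.fit(X_train, y_train)                # tune the classifier to this user

# Use (test) phase: classify a previously unseen trial to operate the BCI.
x_new = rng.normal(size=(1, 4))          # features of one new, unlabeled trial
estimated_state = clf.predict(x_new)[0]  # estimated mental state (0 or 1)
```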

Previously completed project: https://github.com/nvnavaneeth/EEGBot
