Understanding a mouse is like smelling the color 9!

Computational Neuroscience Study Project: How to make good use of mouse neuron response data to different stimuli?

Mohamed Maher
Analytics Vidhya
25 min read · Jan 19, 2020


I. Introduction

A. Background

The visual cortex is mainly responsible for processing visual information. It receives neuronal signals from the thalamus and passes them through a pipeline that refines and integrates them to produce high-level concepts. The first stage of this pipeline is called V1, also known as Visual Area 1. This region receives the very low-level, raw neuronal signals from the thalamus. Since V1 is an early stage in the pipeline, the receptive fields of its neurons are sensitive to low-level features of the image that falls on the retina. V1 contains millions of neurons that fire together in distinct patterns to encode the information carried by the thalamic signals for different stimuli, a scheme known in neuroscience as a population code.

B. MouseLand Dataset
— 1. Technology behind Dataset Collection

In 2019, Stringer et al. [4] managed to collect simultaneous recordings of ~10,000 neurons from the mouse V1 area under different visual stimuli, producing one of the state-of-the-art datasets in the field. The authors used resonance-scanning two-photon calcium microscopy with 11 imaging planes spaced 35 𝜇m apart. They captured these images at a 2.5 Hz scan rate; scan rates of up to 30 Hz were shown to provide no significant improvement in explaining the stimulus-related variance. The captured images were processed with the Suite2p toolbox [1] to produce the neurons' spike-rate responses.

The pipeline of Suite2p consists of four separate stages: 1) image registration; 2) region-of-interest (ROI) detection; 3) ROI labelling and quality control; 4) activity extraction with neuropil correction and spike deconvolution.

In [2], the authors used the same technology to monitor mouse neuronal activity and showed that, despite being rather noisy, the visual cortex reliably encodes the representation of visual stimuli in orthogonal dimensions. After publishing their dataset, the authors also came up with a potentially more reliable way to detect neural activity spikes, discussed in [3]. It may therefore be possible to obtain similar but more reliable data in the future.

Initial experiments on the published data using principal component analysis showed that the data obeys a power law: the variance of the n-th principal component scales as 1/n. Beyond this, little other research appears to have analyzed this data with significant results.
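This power-law claim is easy to probe directly. Below is a minimal sketch, assuming the responses are stored as a (stimuli × neurons) matrix; the file name `responses.npy` is a placeholder:

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical file holding the (n_stimuli, n_neurons) response matrix
responses = np.load("responses.npy")

var = PCA(n_components=100).fit(responses).explained_variance_

# Fit log(var_n) = alpha*log(n) + c; a 1/n power law corresponds to alpha ~ -1
n = np.arange(1, len(var) + 1)
alpha, _ = np.polyfit(np.log(n), np.log(var), deg=1)
print(f"estimated power-law exponent: {alpha:.2f} (1/n law -> -1)")
```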

— 2. Collected Data

Seven mice were shown carefully selected images from the ImageNet dataset, drawn from classes the authors believed would be most meaningful to mice, such as cats, dogs, and holes. The images were viewed on 3 screens placed at a 90-degree angle from each other, where the front screen showed the image and the two side screens showed the same image with random rotations and/or mirroring. An example is shown below:

The authors presented approximately 2,800 stimulus images to each mouse, each shown twice in succession, while recording the mouse's neuronal response. One out of every 20 images was a gray or black image, shown to record the spontaneous activity in the mouse brain, i.e., the rest-state neuronal response.

This process was repeated on two different days, allowing the maximum number of images to be analyzed while enabling cross-validated analysis. For the cross-validation, the authors used a simple nearest-neighbor decoder trained on the first repeat of the experiment to predict which images were responsible for the responses in the second repeat; it achieved 75.5% accuracy. Notably, the authors found that the decoding accuracy did not saturate at a population of ~10K neurons, and adding more neurons kept increasing the accuracy. This dataset is considered a breakthrough in terms of the number of simultaneously recorded neurons.

C. Challenges in the Dataset

There are many challenges in handling such datasets with machine learning techniques. For example, the MouseLand dataset has a small ratio of data points (instances) to features: ~10K features (neuron responses) versus only ~7K data points (images) for each mouse.

Additionally, such data is characterized by a low signal-to-noise ratio, which can lead to what is known as the "de-mixing problem". When we have a series of recordings and want to distinguish the signals of two different neurons, the process is usually not straightforward. In other words, scientists often disagree when selecting the ground truth after looking at multiple neuronal recordings for the same stimulus. This means that some neuron responses in the data could actually be mixed up or mislabeled [6].

On the other hand, there are also some limitations specific to mouse-vision data [5]. One simple issue is that mice depend heavily on their noses (smell) and whiskers, which makes them much less dependent on vision. Mice have a simple visual system compared to monkeys, for example, and it has been shown that the mouse visual cortex has many functions other than vision. However, many researchers still consider mice good and simple models to study, as the resemblances between mice and humans balance or exceed the differences.

Due to such challenges and the complexity of the data, a realistic goal is to extract as much useful information as possible; training a full predictive model on such data would be unrealistic. Collecting useful information and stating the limitations can help initiate further research that makes the best use of this data.

D. Motivation:

In this blog we will define multiple interesting tasks to tackle with this dataset and try different data analysis and machine learning techniques to solve them, arriving along the way at some interesting observations about mice: how, and what, do they see?

II. Experimental Work

During this project, we tried out different tools, algorithms, and approaches for the purpose of either neuronal encoding or neuronal decoding. A chart of the different methods we experimented with is shown below.

II. A. Neurons' Receptive Fields:

In this part, we try to find the parts of the images that activate specific neurons. There are many techniques for estimating the receptive field of a neuron. One popular method is reduced-rank receptive field estimation, a variant of multivariate linear regression. Another, simpler approach is weighted averaging of the stimuli. We first discuss reduced-rank regression and then follow up with weighted averaging.

— Methods:

II. A. 1. Reduced-Rank Regression:

In reduced-rank receptive field estimation, we introduce a bottleneck, the rank, as a regularization parameter. This rank captures the most important feature combinations in the input space that lead to the regressor's output. The idea behind RRR is fairly simple: we define the following loss function:
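A standard formulation of this objective is the squared reconstruction error of the responses under a rank constraint on the weights (the exact loss used may also carry a ridge penalty):

$$\min_{W \,:\, \operatorname{rank}(W) \le r} \; \lVert Y - XW \rVert_F^2$$

where X holds the stimulus pixels, Y the neuronal responses, and r the rank of the bottleneck.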

This loss function looks complicated and hard to minimize, but in practice it can be solved simply by applying Singular Value Decomposition (SVD); some implementations use PCA, which yields the same result. We then use the reduced-rank weights as the importance of each pixel in generating the corresponding neuronal response. We evaluate the resulting receptive field by investigating the explained variance of the SVD stage, which reflects how much of the neuronal response variance is maintained after this transformation. In the next figure, the rank of the bottleneck was set to 25, and the receptive fields of randomly selected neurons are shown below:
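A minimal sketch of this SVD route, assuming `X` holds the flattened stimulus images and `Y` the neuronal responses (names ours; the authors' implementation may differ in details):

```python
import numpy as np

def reduced_rank_regression(X, Y, rank):
    """Least-squares weights for Y ~ X @ B, truncated to the given rank."""
    B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)       # full-rank solution
    U, s, Vt = np.linalg.svd(X @ B_ols, full_matrices=False)
    V_r = Vt[:rank].T                                   # top output directions
    B_rrr = B_ols @ V_r @ V_r.T                         # rank-constrained weights
    explained = (s[:rank] ** 2).sum() / (s ** 2).sum()  # variance kept by the bottleneck
    return B_rrr, explained

# X: (n_images, n_pixels), Y: (n_images, n_neurons) -- toy shapes for illustration
X, Y = np.random.randn(200, 300), np.random.randn(200, 50)
B, ev = reduced_rank_regression(X, Y, rank=25)
print(f"explained variance of the rank-25 fit: {ev:.1%}")
```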

— Results:
As seen in the responses, some neurons have very clear image regions that contribute to their response. However, the explained variance is 11.4±0.7%, which is considerably low and renders these results of limited use.

II. A. 2. Weighted Averaging approach

We tried another, simpler approach: computing the average of all images, weighted by the activation value of a neuron. The results, shown in the next figure, look somewhat similar, with image regions to which neurons are blind or highly responsive. We tried to relate these receptive fields to the locations of the neurons in the brain tissue but failed to find any relation, suggesting there is no relation between the location of a neuron in the tissue and the parts of images that activate it.
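The weighted average itself is a one-liner; a sketch, with `images` of shape (n_images, h, w) and `responses` of shape (n_images, n_neurons) as hypothetical names:

```python
import numpy as np

def weighted_average_rf(images, responses, neuron):
    """Receptive-field estimate: all stimuli averaged with one neuron's activations as weights."""
    w = responses[:, neuron]
    return np.tensordot(w, images, axes=1) / w.sum()   # (h, w) map
```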

— Conclusion and future work:
The results show that some neurons are indeed more sensitive to particular parts of the presented image, while others are equally active across regions. We also did not find any obvious global relation between the neurons' locations and their receptive fields. However, locally around every neuron, such a relation might be present, and further investigation might reveal a relation between the relative locations of neurons and their receptive fields. This locality can also be defined in terms of activation correlations: some neurons may be spatially far from each other but functionally correlated, so running this analysis on a functional map of the tissue could be worth investigating.

II. B. Intrinsic Dimensionality Estimation:

The Intrinsic Dimensionality (ID) of a representation in a d-dimensional space is the minimum number of parameters, or degrees of freedom, required to capture the entire information in the d-dimensional representation. This is also known as the dimensionality m of the representation manifold M embedded in the d-dimensional space, where the representation in the higher dimension is called the support and m ≤ d.

ID can be seen as a nonlinear generalization of finding the linear dimensionality of a representation in a high-dimensional space, such as a plane in 3D space where we need only 2 of the space's 3 basis vectors. As an example of this nonlinear generalization, consider a swiss roll representation in 3D space in the next figure:

It can be seen that the swiss roll support in 3D is just a 2D plane embedded nonlinearly in 3D space. Intrinsic dimensionality estimation tries to find the minimum number of parameters (in this case 2) needed to represent the data embedded in the high-dimensional (3D) space.

This ID estimation is very important, since it reveals the actual degrees of freedom in our data, reflecting how many basic neuronal combinations are required to represent all the images in our dataset. It can be used to improve the performance of machine learning models by decreasing the number of input features, mitigating the curse of dimensionality. This reduction, if done properly, loses very little of the transformed data's variance and thus preserves most of the information in our data. However, finding a good ID estimate is hard and complex, because the only information available is the density of the data on its support.

— Method:
As mentioned before, most approaches are based on the density of the data points in the d-dimensional space. They assume that the number of neighbors of a given point within distance r on an m-dimensional manifold embedded in d-dimensional space scales as r^m, independently of d. This density can be visualized as the number of points neighboring a given point within a sphere of radius r.

A popular measure is called the correlation dimension, which is defined as follows.
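In its standard (Grassberger–Procaccia) form, with p(r) defined below, the correlation integral and the quantity being extrapolated are:

$$C(r) = \int_0^r p(s)\,ds \;\propto\; r^m \ \ (r \to 0), \qquad m(r) = \frac{d \log C(r)}{d \log r}$$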

Here p(r) is the probability distribution of all pairwise distances between points in the dataset. Many methods estimate m by fitting a line to m(r) and extrapolating the result to r → 0. The problem with all these methods is that they need an accurate estimate of p(r) for very small r, which is exactly where the estimates are least reliable when data is limited.

In [7], the authors realized that this problem can be mitigated by using graph shortest-path distances between points instead of a raw distance metric, where the graph is built by connecting each point in the dataset to its k nearest neighbors. They also realized that the distributions p(r) of different topological geometries are similar as long as they have the same intrinsic dimensionality, regardless of their geometric embedding in the high-dimensional space; in other words, a 2D plane and a swiss roll in 3D will have the same p(r). These observations led them to estimate m by comparing the data-derived p(r) to that of a hypersphere, whose theoretical distribution is known, with the two distributions compared simply using RMSE. Upon simplification, this leads to an optimization problem that can be solved with any least-squares algorithm:

The latter observation also provides a convenient way to assess the quality of the estimate: plot the p(r) function for our dataset alongside those of a synthetic hypersphere and a Gaussian distribution of the same estimated dimensionality embedded in the original d-dimensional space; the plots should look similar.
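A rough sketch of this estimator is below. It follows the recipe of [7] only loosely: we build the kNN graph, take geodesic (shortest-path) pairwise distances, rescale them so the mode of p(r) sits at π/2 as it does for a hypersphere, and grid-search the m whose hypersphere distribution p(r) ∝ sin^(m−1)(r) best matches in RMSE. The rescaling convention is our assumption:

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from sklearn.neighbors import kneighbors_graph

def estimate_id(X, k=9, metric="cosine", bins=60, max_m=200):
    """Intrinsic dimension via geodesic distance distribution vs. hypersphere."""
    graph = kneighbors_graph(X, n_neighbors=k, mode="distance", metric=metric)
    geo = shortest_path(graph, method="D", directed=False)
    d = geo[np.triu_indices_from(geo, k=1)]
    d = d[np.isfinite(d)]

    hist, edges = np.histogram(d, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    r = centers * (np.pi / 2) / centers[hist.argmax()]  # put the mode at pi/2
    keep = r < np.pi
    r, hist = r[keep], hist[keep]
    emp = hist / np.trapz(hist, r)                      # renormalize on the new axis

    def rmse(m):
        model = np.sin(r) ** (m - 1)
        model /= np.trapz(model, r)
        return np.sqrt(np.mean((model - emp) ** 2))

    ms = np.arange(2, max_m)
    return ms[np.argmin([rmse(m) for m in ms])]
```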

— Results:
We compared the intrinsic dimensionality of the spontaneous state of the neuronal response (when showing the mice grey images) to that of the neuronal response when showing them images from the dataset. We tried K = 4, 9, 13 nearest-neighbor values when building the graph, as well as both Cosine and Euclidean distances. We report the results that scored well on the validation metric, i.e., the similarity between the p(r) of our dataset and the synthetic datasets of the same dimensionality (a full list of results can be found in the GitHub repo). The results are as follows:

For the grey image response:
The results were valid with the Euclidean distance and were approximately the same for all K values, at m = 41. We show the plots for K = 4.

The first plot shows the similarity with the theoretical toy datasets while the second shows the distribution of p(r).

As for the all-images response:
The best results were obtained using Cosine distance and K = 9, where m = 118. The plots are as follows:

— Conclusion and future work:
The results show that the neuronal responses can be compactly represented with a number of unique combinations equal to roughly 1% of the total number of neurons in the dataset, reflecting the high dependence between the neurons' activities and the redundancy of the information embedded in these responses. The results also make sense in that natural-image responses have a larger manifold dimensionality than grey-image responses, reflecting that natural images engage more brain regions and more processing, since more information must be conveyed.

These results also favor the columnar-hypothesis theory of brain coding, which argues that all neurons in a cortical column encode similar information, allowing reliable transmission and computation in the very noisy and unreliable environment of the brain. Of course, we do not claim this is the correct theory of brain coding; rather, we argue that brain coding lies somewhere on the spectrum between the efficient-coding theory and the columnar hypothesis, though leaning toward the columnar hypothesis.

These estimates can be used with many nonlinear dimensionality reduction techniques, such as Isomap or diffusion maps, to convert the data to a lower dimensionality without losing much of the original variance. A promising new framework to try is DeepMDS, which uses deep learning for this nonlinear transformation and was shown to be superior to Isomap and variational autoencoders in [8].

II. C. Clustering:

Neural response decoding can be simply defined as detecting or identifying a certain neural response to a specific stimulus. In the case of the mouse data, we have the 2,800 natural images as stimuli, which should be decoded from the recordings of 10,000 neuronal responses. We conducted a number of experiments using K-means clustering and biclustering techniques to study the decoding of neuronal responses.

Clustering is a commonly used technique for unsupervised learning. It is the task of separating unlabeled data points into a finite number of groups, or clusters: data points in the same cluster should be similar to each other and different from the data points in other clusters. The mouse data has multiple neuronal responses for the same stimulus (image), and responses to the same image are expected to fall in the same cluster. This is the main assumption behind our clustering experiments.

Distance measurement is an essential part of clustering: distance or similarity is computed between pairs of data points, or between cluster centers and data points. We conducted a number of experiments to evaluate how we measure the distance between different neuronal responses. First, we evaluated the raw data by comparing the average distance between responses to the same image against the average distance between responses to random different images, using the Euclidean distance between the response vectors. The results were as follows:

Surprisingly, the results of both cases are very similar. This suggests that such a distance measure will not differentiate well between responses to the same image and other responses.

We tried a different set of experiments using a standardized version of the data and the results were as follows:

The results are slightly better, especially in terms of the maximum distance in the case of random images and the difference in standard deviation. For that reason, we used the standardized version of the data for the rest of the experiments.

— Methods:

II. C. 1. K-Means

K-means clustering finds a center point to represent each cluster, and the data points closest to each center belong to that cluster. A distance measure is needed to decide which cluster each data point should belong to. In K-means, the n data points are grouped into k clusters, where points in the same cluster should be closer to each other than to points in other clusters. Accordingly, the neuronal responses to the same image should belong to the same cluster. The problem is then to decide the optimal number of clusters K. We conducted a number of experiments with different values of K on the standardized data, and evaluated the clusters by counting the number of times responses to the same image were grouped into the same cluster, as in the sketch below.
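A minimal sketch of this evaluation, assuming `responses_std` is the standardized (n_responses, n_neurons) matrix and `image_ids` maps each response to its stimulus (both names ours):

```python
import numpy as np
from sklearn.cluster import KMeans

def same_cluster_score(responses_std, image_ids, k):
    """Count images whose repeat responses all land in the same K-means cluster."""
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(responses_std)
    hits = 0
    for img in np.unique(image_ids):
        reps = labels[image_ids == img]
        hits += len(reps) > 1 and len(set(reps)) == 1
    return hits   # out of ~2,800 images at best
```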

II. C. 2. Biclustering [9]

In K-means, the algorithm clusters the data according to the feature (row) values. Biclustering is a different technique in which data is clustered using both rows and columns: instead of clustering the whole data, we obtain "biclusters", i.e., sub-matrices of the data. Biclustering can help identify similar neural responses using only part of the features. We applied biclustering to the standardized data to identify a subset of the features to use for clustering; in other words, we limited the number of neurons in each response, then ran clustering and evaluated it again, as sketched below.
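One way to realize this with scikit-learn is spectral co-clustering; whether this matches the exact biclustering variant of [9] is an assumption on our part, as is the shift to non-negative values that the method expects:

```python
import numpy as np
from sklearn.cluster import SpectralCoclustering

# responses_std: standardized (n_responses, n_neurons) matrix (hypothetical name)
model = SpectralCoclustering(n_clusters=8, random_state=0)
model.fit(responses_std - responses_std.min() + 1e-6)   # shift to non-negative

# Keep only the neurons (columns) of the largest bicluster, then
# re-run ordinary K-means on this reduced feature set
col_labels = model.column_labels_
biggest = np.bincount(col_labels).argmax()
responses_reduced = responses_std[:, col_labels == biggest]
```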

— Results:
We compared K-means clustering using all 10,000 neuronal responses against K-means using only the subset of neurons identified by biclustering. The evaluation was based on the number of responses to the same image falling in the same cluster; since we have 2,800 unique images, the maximum attainable value is 2,800.

The following graph describes the results:

In both cases, with k as low as 8, the clustering algorithm correctly grouped only about 25% of the responses to the same images. Using biclustering significantly improved the results. In general, the results show that it is not easy to apply clustering methods to such data; techniques such as standardization and biclustering helped, but the solution is still not very good.

— Conclusion and future work:
Using clustering with the mouse data is not a clear and simple task at all; we had to apply a number of optimizations and techniques to improve the results. Clustering depends on the features of the dataset, and such a complex dataset makes these techniques very hard to use. K-means is a fast and powerful clustering technique, but it can run into problems when the data is too complex and the clusters are not evenly distributed. Biclustering shows much more potential for detecting similarities in this dataset: it works with both rows and columns, which lets the model exploit the data's complexity much better. The results show that this area can be explored further, which could lead to better results; in particular, biclustering could reveal the regions of the data that carry the most important information.

II. D. Neuron Response Decoding (From Neuron Response to Stimuli):

In this project, we additionally tried building a machine learning model to predict the class of the image shown to the mice from the neuronal response. However, this task is challenging for several reasons.

Dataset challenges for building an ML model for neuron response decoding:
First, there are 17 different classes of images shown to the mice, as mentioned before. In addition, these classes are very unbalanced, as shown in the following image, where some classes such as mice and mushrooms have much higher ratios than others such as man-made objects and holes.

Second, some classes seem vague, like the unknown and man-made classes, which implies that the labeling process is not the most suitable for perfect neuron decoding. For example, the holes class sits at a totally different semantic level from the wild-cats class!

Moreover, some classes are similar to others, like cats and wild-cats, or mice and hamsters, while others are totally different, like holes and snakes!

— Methods:

II. D. 1. Baseline Models:

To overcome the class-imbalance challenge, we used SMOTE oversampling to balance the class ratios. We divided the dataset into 80-20% training and testing splits, where the 20% testing split has around 75 neuronal responses per image class.

Different types of models were fitted on the dataset. For instance, ensemble models such as Random Forest (500 trees, max_tree_depth = 5), XGBoost (max_tree_depth = 5), and AdaBoost (50 trees, max_tree_depth = 5) achieved accuracies of 16.47%, 15.35%, and 11.93%, respectively.
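A sketch of this baseline setup. One detail is our assumption: we apply SMOTE to the training split only, to avoid leaking synthetic samples into the test set; `X` and `y` are hypothetical names for the responses and class labels:

```python
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                          stratify=y, random_state=0)
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

rf = RandomForestClassifier(n_estimators=500, max_depth=5,
                            n_jobs=-1, random_state=0)
rf.fit(X_bal, y_bal)
print(f"test accuracy: {accuracy_score(y_te, rf.predict(X_te)):.2%}")
```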

We also tried Auto-Sklearn with an 8-hour time budget; the best model it returned was a Passive Aggressive classifier with hinge loss, at 16.78% accuracy.

In addition, a feed-forward neural network with 4 hidden layers (16384, 4096, 1024, and 128 units, respectively), ReLU activations, a dropout probability of 0.5, and batch normalization layers achieved the highest baseline accuracy of 22.12%.
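A sketch of this network in Keras; the layer sizes, ReLU, dropout of 0.5, and batch normalization come from the text, while the exact ordering of normalization, activation, and dropout within each block is our assumption:

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_mlp(n_inputs, n_classes):
    model = keras.Sequential([keras.Input(shape=(n_inputs,))])
    for units in (16384, 4096, 1024, 128):
        model.add(layers.Dense(units))
        model.add(layers.BatchNormalization())
        model.add(layers.Activation("relu"))
        model.add(layers.Dropout(0.5))
    model.add(layers.Dense(n_classes, activation="softmax"))
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```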

II. D. 2. Dimensionality Reduction:

We made many attempts to reduce the dimensionality of the dataset, which has features from around 10K neurons.

Although in most of the trials no gain was achieved through dimensionality reduction, a great reduction in the number of features was achieved without much loss in accuracy. In the following, we describe the algorithms used in these trials.

Feature Selection:
Multiple univariate feature selection methods were used: selecting the features with the highest information gain from a decision tree model, a univariate feature elimination algorithm, and a recursive feature elimination algorithm. Multivariate feature selection algorithms were not used due to their computational complexity.

Only the top 1,000 features were selected from the whole dataset, and the random forest model was fitted again on the reduced dataset. Univariate feature elimination reached the highest accuracy at 15.22%, followed by recursive feature elimination and decision-tree information gain at 15.11% and 13.8%, respectively.
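A sketch of two of these selectors with scikit-learn; the univariate scoring function is not stated above, so mutual information is a stand-in:

```python
from sklearn.feature_selection import RFE, SelectKBest, mutual_info_classif
from sklearn.tree import DecisionTreeClassifier

# Keep the 1,000 neurons with the highest univariate score
X_sel = SelectKBest(mutual_info_classif, k=1000).fit_transform(X_bal, y_bal)

# Or recursively eliminate features with a shallow tree (much slower)
rfe = RFE(DecisionTreeClassifier(max_depth=5),
          n_features_to_select=1000, step=500)
X_rfe = rfe.fit_transform(X_bal, y_bal)
```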

Feature Extraction:
Additionally, trying different linear feature extraction methods showed no improvement. We experimented with Truncated Singular Value Decomposition, Factor Analysis, Incremental Principal Component Analysis, and Principal Component Analysis. Different numbers of extracted components were tried, and the best accuracies were 12.94%, 13.72%, 16.23%, and 15.45%, respectively.
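For illustration, the incremental PCA variant (the strongest of the four here) looks like this; 100 components is the setting referenced in the Conclusion:

```python
from sklearn.decomposition import IncrementalPCA

ipca = IncrementalPCA(n_components=100, batch_size=512)
X_low = ipca.fit_transform(X_bal)     # (n_samples, 100)
X_te_low = ipca.transform(X_te)       # apply the same projection to the test split
```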

The following figure plots the first two extracted components for each method. The classes overlap heavily in these components, and it is still difficult to separate them.

II. D. 3. Preprocessing Trials:

  1. Average neuronal response per image:
    As the dataset originally has 2,800 stimulus images, we averaged the neuronal responses per image as an attempt to reduce the noise in the response to each image. However, the feed-forward neural network's performance dropped to 19.96% from the 22.12% previously obtained on the full dataset. This may be due to the large decrease in the number of instances and the information lost by keeping only the average response.
  2. Normalize neuronal responses by the grey-image average response:
    We also tried normalizing the neuronal responses by the mice's average response to a plain grey image, which represents the spontaneous activity in the mouse brain (see the sketch below). Training the feed-forward neural network on this new dataset brought a slight improvement, reaching 23.84%. Note, however, that the grey-class stimuli were removed from the dataset, leaving 16 classes instead of 17.
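A sketch of the grey-image normalization from item 2. Whether "normalize" means dividing or subtracting is not spelled out, so division by the per-neuron spontaneous level is our assumption; `grey_mask` is a hypothetical boolean marker of grey/black stimuli:

```python
import numpy as np

spont = responses[grey_mask].mean(axis=0)                 # per-neuron spontaneous level
responses_norm = responses[~grey_mask] / (spont + 1e-6)   # grey stimuli are dropped
```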

II. D. 4. Class Merging:

To get rid of vague class labels and merge semantically similar classes, we tried merging some classes together as follows:

Using the above class merge, we end up with only 6 image classes. Our random forest and feed-forward neural network models achieved 39.14% and 39.84% accuracy, respectively.

With another merge, shown below, in which mice were merged with hamsters, the same models achieved slightly lower accuracies of 37.73% and 38.11%.

Perhaps this is because mice have a distinct neuronal response when they look at other mice :D.

II. D. 5. Better Class Merge:

Since there are hundreds of possible class combinations, we tried to find a more principled way to merge classes together.

We fed the dataset of images shown to the mice into a network pretrained on ImageNet, which was then fine-tuned. High-level features were extracted from this network and clustered using K-means. Finally, class labels were merged according to their presence in the resulting clusters (i.e., if the majority of cat images and mouse images landed in the same cluster, the two classes were merged).

The top performance was achieved using ResNet50 pretrained on ImageNet and fine-tuned on our dataset, with label smoothing (factor 0.1) to avoid over-fitting. The final network reached a validation accuracy of 77.74% on our dataset. High-level semantic features were extracted just before the first fully connected layer, after flattening the convolutional output.

5 clusters were found to be the best number according to the elbow rule, using K-means clustering with the Davies-Bouldin score [10].
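A sketch of this class-merging pipeline, assuming `finetuned` is the fine-tuned ResNet50, `stimulus_images` the image array, and `class_ids` the original labels (all names ours; the "flatten" cut point is also an assumption):

```python
import numpy as np
from tensorflow import keras
from sklearn.cluster import KMeans
from sklearn.metrics import davies_bouldin_score

extractor = keras.Model(finetuned.input, finetuned.get_layer("flatten").output)
feats = extractor.predict(stimulus_images)      # high-level semantic features

# Score candidate cluster counts (lower Davies-Bouldin is better)
for k in range(2, 10):
    labels = KMeans(n_clusters=k, random_state=0).fit_predict(feats)
    print(k, davies_bouldin_score(feats, labels))

# Merge each original class into the cluster holding most of its images
labels = KMeans(n_clusters=5, random_state=0).fit_predict(feats)
merged = {c: np.bincount(labels[class_ids == c], minlength=5).argmax()
          for c in np.unique(class_ids)}
```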

The final class merge can be found below:

We also applied the earlier trick of normalizing the neuronal responses by the grey-image average response, then trained a feed-forward network to predict the new list of classes.

The final model achieved 42.21% accuracy on these 5 classes, which does not seem to be a significant improvement over the previous class-merging trials either.

— Conclusion and Future Work:

Building a machine learning model to decode a neuronal response into a stimulus class does not seem to be an easy task. We managed to train an MLP that achieves 22.12% accuracy on 13 different stimulus classes. Different preprocessing approaches were tried, such as normalizing the neuronal responses by the grey-image average response, which improved model performance to 23.84%.
Feature selection and extraction methods managed to reduce the dimensionality of the data to a large extent without much loss in accuracy. For example, incremental PCA extracted 100 components out of the ~10K neuronal responses with only a 0.2% accuracy loss for the random forest model.

We proposed different ways of merging the stimulus classes that could enhance the labeling of the collected data and make it more meaningful. The best performance was achieved when merging the 13 classes into 5, with 42.21% accuracy using a feed-forward neural network.

As future work, we suggest labeling the images based on lower-level features (lines / circles / simple shapes), since mice have a simple visual system and depend heavily on their noses (smell) and whiskers, making them much less dependent on vision. It therefore makes more sense to label the stimulus images with simple labels rather than with high-level semantics such as animals / man-made / mushrooms / etc.

Additionally, the feed-forward neural networks used here could be improved with AutoML tools for neural architecture search, which might find models achieving better accuracy on this dataset.

II. E. Spatial Data Approach for Neural Response Classification:

Since the data also contains the coordinates of each neuron in 3D space, we assumed there might be some spatial relationship between the neurons that could help decode the signal. One of the better ways to handle spatial data is with convolutional neural networks (CNNs).

— Methods:
Convolutional neural networks consist of convolutional, pooling, and fully connected dense layers, and are often used for spatial data such as images. Given the spatial arrangement of the observed neurons in the brain, CNNs should be able to extract whatever information there is in the neurons' spatial relations.

To make the available data usable by convolutional neural networks, we needed to transform the list of neurons into a 3D matrix.

— Results:

Transforming the data into a 3D matrix was not as straightforward as it seems in theory. To fit all the data, we needed a matrix of dimension 1000x1000x400 for each sample, i.e., 400 million cells, of which only around 10,000 would be non-zero. The matrix was extremely sparse (sparsity > 0.999) and huge in size. Given that we had around 7,000 samples, we could not fit the entire dataset in memory this way, let alone train a neural network, so we had to reduce the data somehow. Looking at the data showed that there were only 11 distinct z-axis coordinates, meaning we essentially had 11 slices of 2D space stacked on top of each other. That allowed us to reduce the matrix size to 1000x1000x11, but it was still too large and too sparse. So we decided to apply block reduction to the data, summing blocks and thus reducing the size. This meant losing some of the data, but hopefully enough would be retained to get some insightful results.

We tried multiple block sizes and settled on 16x16, which resulted in a final sample of size 63x63x11. The resulting matrix was still quite sparse; in fact, even when reduced to 11x11x11, the matrix still had almost 40% sparsity. But reducing the samples that much would lose too much data, and experiments at that size indeed showed slightly worse results, while matrices larger than 63x63x11 caused computational issues. The selected size therefore gave us the best results.
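A sketch of this transformation using scikit-image's block_reduce; the integer voxel encoding of the coordinates is our assumption:

```python
import numpy as np
from skimage.measure import block_reduce

def to_volume(xyz_idx, activations, shape=(1000, 1000, 11), block=(16, 16, 1)):
    """Scatter one response vector into a sparse 3D grid, then sum-pool 16x16 blocks."""
    vol = np.zeros(shape, dtype=np.float32)
    x, y, z = xyz_idx.T                  # integer voxel indices per neuron
    vol[x, y, z] = activations
    # block_reduce zero-pads to a block-size multiple: 1000 -> 63 blocks of 16
    return block_reduce(vol, block_size=block, func=np.sum)    # -> (63, 63, 11)
```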

We trained 2D convolutional networks on the transformed data with input shape 63x63 and 11 channels. The highest test accuracy obtained after trying different variations of the model was 35%, achieved by a 2D CNN with 3 convolutional layers and ReLU activations; in our experiments, adding more layers provided no improvement.
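A sketch of such a network; the three convolutional layers and ReLU come from the text, while the filter counts, pooling, and classification head are our assumptions:

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_cnn(n_classes):
    return keras.Sequential([
        keras.Input(shape=(63, 63, 11)),       # 11 z-planes as channels
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(n_classes, activation="softmax"),
    ])
```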

We also tried a 3D convolutional network on the data, with input shape 63x63x11 and 1 channel, but the results were slightly worse than the 2D CNN's, at around 31% accuracy and with longer training times. In theory a 3D CNN should perform at least as well as a 2D network, but given our limited experience with 3D convolutions, more effort went into the 2D CNN experiments.

— Conclusion and Future Work:
While the results were not terrible, and some information was clearly preserved through all the transformations, the best results achieved with this approach were still worse than those achieved with random forests and feed-forward neural networks. With more computational resources and larger matrix sizes, better results might be obtained. It is also possible to retrain some well-known image recognition architectures and see whether that improves the results. Finally, as mentioned, there is a lot of room for experimentation with 3D CNNs, which in theory should at least match the results of 2D CNNs.

III. Authors' Contributions:

Our Github Repository with the different methods and implementations can be found [HERE]

This work was done collaboratively by Abdelrhman Eldallal, Mohamed Maher, Shota Amashukeli, and Youssef Sherif as a course project for the Introduction to Computational Neuroscience course at the University of Tartu, under the supervision of Prof. Raul Vicente.

Abdelrahman El-Dallal investigated the clustering approaches. Mohamed Maher was responsible for building the models for stimulus class prediction. Shota ran the experiments on the spatial data approach for neuronal response classification, and Youssef was responsible for the intrinsic dimensionality investigation and the neurons' receptive fields. All members participated in preparing the presentation slides, blog content, and scripts used during the project.

IV. References

  1. Pachitariu, M., Stringer, C., Dipoppa, M., Schröder, S., Rossi, L. F., Dalgleish, & Harris, K. D. (2017). Suite2p: beyond 10,000 neurons with standard two-photon microscopy. Biorxiv, 061507.
    https://www.biorxiv.org/content/10.1101/061507v2.full.pdf
    Github Repository: https://github.com/cortex-lab/Suite2P
  2. Stringer, C., Pachitariu, M., Steinmetz, N., Reddy, C. B., Carandini, M., & Harris, K. D. (2018). Spontaneous behaviors drive multidimensional, brain-wide population activity. BioRxiv, 306019.
    https://www.biorxiv.org/content/10.1101/306019v2.full.pdf
  3. Pachitariu, M., Stringer, C., & Harris, K. D. (2018). Robustness of spike deconvolution for neuronal calcium imaging. Journal of Neuroscience, 38(37), 7976–7985.
    https://www.jneurosci.org/content/38/37/7976.abstract
  4. Stringer, C., Pachitariu, M., Steinmetz, N., Carandini, M., & Harris, K. D. (2019). High-dimensional geometry of population responses in visual cortex. Nature, 1.
    https://www.biorxiv.org/content/10.1101/374090v1.full.pdf
    Github Repository: https://github.com/MouseLand/stringer-pachitariu-et-al-2018b
  5. Baker, Monya. “Neuroscience: through the eyes of a mouse.” Nature News 502.7470 (2013): 156.
    https://www.nature.com/articles/502156a
  6. Paulus, Martin P., Rayus Kuplicki, and Hung-Wen Yeh. “Machine Learning and Brain Imaging: Opportunities and Challenges.” Trends in neurosciences 42.10 (2019): 659–661. https://www.cell.com/trends/neurosciences/fulltext/S0166-2236(19)30130-4
  7. Granata, D., & Carnevale, V. (2016). Accurate estimation of the intrinsic dimension using graph distances: Unraveling the geometric complexity of datasets. Scientific reports, 6, 31377.
  8. Gong, S., Boddeti, V. N., & Jain, A. K. (2019). On the intrinsic dimensionality of image representations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 3987–3996).
  9. Bi-Clustering: https://www.cs.princeton.edu/courses/archive/spr05/cos598E/Biclustering.pdf
  10. Davies, D. L., & Bouldin, D. W. (1979). A cluster separation measure. IEEE transactions on pattern analysis and machine intelligence, (2), 224–227.
