Week [2–3]: Sound of the City

çağdaş çaylı · bbm406f17 · Dec 4, 2017

Abstract

In our project, we decided to use the UrbanSound dataset (https://serv.cusp.nyu.edu/projects/urbansounddataset/), which contains 27 hours of audio with 18.5 hours of annotated sound event occurrences across 10 sound classes. We may change or add more sound data to our training set later, but we will first use this dataset to get a baseline for our project.
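
As a first step, a minimal exploration sketch like the one below can give us a feel for class balance. It assumes the dataset is unpacked locally with one folder per sound class, as in the UrbanSound release; the `DATA_DIR` path is hypothetical.

```python
import os

# Hypothetical local path to the unpacked UrbanSound dataset;
# the release groups recordings into one folder per sound class.
DATA_DIR = "data/UrbanSound/data"

# Count the audio files per class before training any baseline model.
for class_name in sorted(os.listdir(DATA_DIR)):
    class_dir = os.path.join(DATA_DIR, class_name)
    if not os.path.isdir(class_dir):
        continue
    n_audio = sum(f.endswith((".wav", ".mp3", ".aif", ".flac"))
                  for f in os.listdir(class_dir))
    print(f"{class_name}: {n_audio} recordings")
```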

We decided to try all of the methods we have learned (k-NN, regression, neural networks, SVMs, decision trees) to get some first results, and we will continue with the most accurate one.
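
A rough sketch of that comparison with scikit-learn might look like the following. It assumes we already have a feature matrix `X` and a label vector `y` (feature extraction is discussed below), and it interprets "regression" as logistic regression, since our task is classification.

```python
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

def compare_classifiers(X, y, cv=5):
    """Cross-validate each candidate model and report mean accuracy."""
    models = {
        "k-NN": KNeighborsClassifier(n_neighbors=5),
        "Logistic Regression": LogisticRegression(max_iter=1000),
        "Neural Network": MLPClassifier(hidden_layer_sizes=(100,), max_iter=500),
        "SVM": SVC(kernel="rbf"),
        "Decision Tree": DecisionTreeClassifier(),
    }
    for name, model in models.items():
        # Scale features first; k-NN, SVMs and MLPs are sensitive to scale.
        pipeline = make_pipeline(StandardScaler(), model)
        scores = cross_val_score(pipeline, X, y, cv=cv)
        print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```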

We are researching sound and its features, and gathering information so that we can decide which tools and libraries to use to extract sound features for training. In some previous related work, MFCCs (Mel-Frequency Cepstral Coefficients) were used to extract sound features; they are a feature widely used in automatic speech and speaker recognition.
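
For example, with the librosa library (one candidate tool, not a final choice), MFCC features for a single clip could be extracted roughly like this; the file path in the usage comment is hypothetical:

```python
import numpy as np
import librosa

def extract_mfcc(path, n_mfcc=13):
    """Load an audio clip and summarize it as a fixed-length MFCC vector."""
    # librosa resamples to 22,050 Hz and mixes down to mono by default.
    signal, sample_rate = librosa.load(path)
    # One n_mfcc-dimensional coefficient vector per analysis frame.
    mfcc = librosa.feature.mfcc(y=signal, sr=sample_rate, n_mfcc=n_mfcc)
    # Average over time so clips of different lengths map to the same
    # feature dimensionality for the classifiers above.
    return np.mean(mfcc, axis=1)

# Hypothetical usage on one file from the dataset:
# features = extract_mfcc("data/UrbanSound/data/car_horn/some_clip.wav")
```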

Another challenge for us is extracting specific sounds from mixed data: in a park, for example, there are many different sounds, such as children screaming, car horns, and dogs barking. We first need to pick the car horns out of this mixed data and feed them into our classification algorithm, so this is another topic for us to research.
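
Since the dataset ships with annotated start and end times for each sound event occurrence, one simple starting point (a sketch only, assuming we read those times from the annotation files) is to slice the event out of the longer mixed recording before extracting features:

```python
import librosa

def slice_event(path, start_sec, end_sec):
    """Cut one annotated event (e.g. a car horn) out of a mixed recording."""
    signal, sample_rate = librosa.load(path)
    start = int(start_sec * sample_rate)
    end = int(end_sec * sample_rate)
    return signal[start:end], sample_rate

# Hypothetical usage, with times taken from the dataset's annotations:
# horn, sr = slice_event("data/UrbanSound/data/car_horn/some_clip.wav", 2.5, 4.0)
```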

What we will do is simply shown below:

[Figure: overview of the planned steps]