Programming Etude #2

Quincy
2 min read · Feb 7, 2023


Feature Artist (Stanford CS 470 / MUS 356)

Code and content in this drive folder (Stanford login): https://drive.google.com/drive/folders/1LxKOZmZcx4gjQ1ogXChKY6SGH6krX1UK?usp=share_link

Description:

This assignment is about learning how to use feature extraction for classification and synthesis. In the first part we use KNN to make a simple genre classifier and play around with which features give the best results. In the second part we build a synthesizer that plays sound snippets that most closely match features of mic input. More details here: https://ccrma.stanford.edu/wiki/356-winter-2023/hw2

Part 1: Classifier

In this part we use KNN to build a simple genre classifier. As you would expect, giving the model more information yields higher performance: using every available feature produced the highest 4-fold cross-validation accuracy, while using the fewest features (no MFCCs) produced the lowest. Every configuration outperforms random guessing, but none classifies the test data perfectly.

Genre Classifier Configurations
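The classifier boils down to two pieces: a KNN vote over the nearest feature vectors, and k-fold cross-validation to score each feature configuration. Here is a minimal pure-Python sketch of that pipeline; the two-cluster toy data, feature dimensions, and genre labels are all made up for illustration (the real assignment extracts features like MFCCs in ChucK).

```python
import random
from collections import Counter

def knn_predict(train, query, k=3):
    # train: list of (feature_vector, label) pairs.
    # Sort by squared Euclidean distance to the query, then majority-vote
    # among the k nearest neighbors.
    nearest = sorted(
        train,
        key=lambda item: sum((a - b) ** 2 for a, b in zip(item[0], query)),
    )[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

def cross_validate(data, k_folds=4, k=3):
    # Shuffle once, split into k_folds folds, and average held-out accuracy.
    data = data[:]
    random.Random(0).shuffle(data)
    folds = [data[i::k_folds] for i in range(k_folds)]
    accuracies = []
    for i in range(k_folds):
        test = folds[i]
        train = [x for j, fold in enumerate(folds) if j != i for x in fold]
        correct = sum(knn_predict(train, fv, k) == label for fv, label in test)
        accuracies.append(correct / len(test))
    return sum(accuracies) / len(accuracies)

# Toy stand-in for extracted audio features: two well-separated
# "genre" clusters in 2-D (hypothetical data, not real MFCCs).
rng = random.Random(42)
data = (
    [([rng.gauss(0, 0.3), rng.gauss(0, 0.3)], "jazz") for _ in range(20)]
    + [([rng.gauss(3, 0.3), rng.gauss(3, 0.3)], "metal") for _ in range(20)]
)
acc = cross_validate(data, k_folds=4, k=3)
print(f"4-fold CV accuracy: {acc:.2f}")
```

With clearly separated clusters the score lands near 1.0; with fewer or noisier features the clusters overlap and the score drops, which mirrors the pattern in the table above.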

Part 2: Musical Mosaic

In this part we use KNN to modulate mic input. We extract features from the input and find the closest match in a set of prerecorded audio. My prerecorded audio set comprises vocal recordings of all 44 phonemes in the English language. The intention is that anyone can modulate their voice to sound like me. In practice, this doesn’t work at all. The phrase “nice to meet you” sounds a lot more like “th ch ch ee ee th ch ch ch ch” than anything else.
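The matching step is just nearest-neighbor lookup: each mic frame's feature vector is compared against the features of every stored snippet, and the closest snippet is played. A minimal sketch, assuming a hypothetical feature database of `(feature_vector, snippet_name)` pairs with made-up numbers (real features would come from ChucK's analysis chain):

```python
def nearest_snippet(db, query):
    # Return the name of the stored snippet whose feature vector is
    # closest (squared Euclidean distance) to the query frame's features.
    best = min(db, key=lambda entry: sum((a - b) ** 2 for a, b in zip(entry[0], query)))
    return best[1]

# Hypothetical phoneme database: 3-D feature vectors per recorded snippet.
db = [
    ([0.1, 0.9, 0.2], "ee"),
    ([0.8, 0.1, 0.5], "th"),
    ([0.4, 0.4, 0.9], "ch"),
]

# A made-up mic frame; it happens to sit closest to the "ch" entry.
print(nearest_snippet(db, [0.5, 0.3, 0.8]))  # -> ch
```

This also hints at why the mosaic collapses into "th ch ch ee ee": with only 44 snippets, many very different input frames still map to the same few nearest neighbors.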

Run database-query.ck to play with the musical mosaic

More to come soon…
