Music and Computers: Pushing Forward in Python
In my previous post, I discussed the types of music software that exist and are commonly used, and confessed to my latent desire to implement music theory logic in Python, with the hope that I can make some interesting sounding procedural music using my own program.
…And now we’re here — second blog post time! Time to discuss:
A) What have I accomplished?
B) How does it make me feel?
A) A lot! I’ve accomplished so much. But not as much as I thought I would have accomplished by this time.
B) I feel great! And stressed. But great! But… also quite stressed… but, you know, pretty great, overall. It doesn’t feel good to fall short of your own expectations, but I do feel growth and progress. Management of time and expectations are skills to be constantly cultivated, as is the will to persevere.
Moving The Goal Posts (But not too far…)
My original goal for this blog post was to discuss the following fully-implemented ideas:
- Musical Durations
- Chord Progressions
Damn. Point by point, I’m about 50% of the way to this goal. I’m still fixing bugs in durations, tunings, and scales.
But progress isn’t always so linear. In a personal quest to capture these abstractions in a particular way, I’ve gained deeper insight into the underlying concepts, and a greater appreciation for some of Python’s capabilities. I’m achieving success by redefining it!
Let’s first look at the fruits, and then we’ll talk about the labors.
Musical Durations
Musical durations are fractions. But fractions of what? Without thinking, a lot of musicians might initially respond that they are fractions of a measure, but that is only true when the meter happens to be 4/4 (‘Common Time’). Brought to their senses, a musician might correct themselves and say that a duration is a fraction representing a multiple or subdivision of the beat, as defined by the standard ‘♩=100’ nomenclature, meaning there are 100 quarter (♩) notes, or beats, per minute.
That is absolutely true, and it is the concept performers most commonly use to understand rhythm: duration relative to a metronome click. However, if you are trying to sequence notes in a computer program, you may find that beats/minute is a less helpful measurement than number_of_ticks / resolution, where resolution is the denominator of the smallest allowable subdivision of the note. That is to say, it is more helpful to know a note’s length relative to other notes than relative to a minute of absolute time. Choosing this measurement establishes the following concepts:
- You are counting parts of a whole. Logically, this whole should be reflected by a whole note, and represented as the integer 1.
- You are establishing a maximum amount of subdivision (or ‘minimum length’)
Both are problematic. We know intuitively that a musical duration can be arbitrarily long or arbitrarily short, but it is conceptually undesirable to represent durations as improper fractions, compound fractions, or fractions with a decimal point in the numerator. From this, I arrived at the following conclusions:
- A duration should be represented as an array of values in order to accommodate values greater than one whole note.
- A duration should be able to upscale its own resolution in order to allow arbitrarily small subdivisions of the beat without compromising the conceptual model of counting parts of a whole.
- Duration values should be represented as powers of 2 reflecting the denominator of subdivision where the numerator is 1 (ex: 1/4, 1/8, 1/16, 1/32, etc.) so that errors can be raised in the event of invalid inputs without constricting the possible values that can be rendered.
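To make those conclusions concrete, here is a minimal sketch of the representation. All names and defaults here (`DurationSketch`, a default resolution of 512) are my own illustration, not the actual muse API: subdivisions are validated as powers of two, and a duration is stored as a tick count so that values can exceed a whole note and so that operands of different resolutions can be compared.

```python
from fractions import Fraction

def is_power_of_two(n):
    """True for 1, 2, 4, 8, ... (1 stands for the whole note)."""
    return isinstance(n, int) and n > 0 and (n & (n - 1)) == 0

class DurationSketch:
    def __init__(self, *beats, resolution=512):
        for b in beats:
            if not is_power_of_two(b):
                raise ValueError(
                    f"Value: {b} is not a valid beat subdivision; "
                    "it must be a positive integer power of 2.")
        self.resolution = resolution
        # A whole note spans `resolution` ticks, so a quarter note
        # at resolution 512 is 512 // 4 = 128 ticks.
        self.ticks = sum(resolution // b for b in beats)

    def _as_fraction(self):
        return Fraction(self.ticks, self.resolution)

    def __eq__(self, other):
        # Compare lengths independently of each operand's resolution.
        return self._as_fraction() == other._as_fraction()

    def __add__(self, other):
        # The result takes on the higher resolution of the operands.
        res = max(self.resolution, other.resolution)
        out = DurationSketch(resolution=res)
        out.ticks = (self.ticks * (res // self.resolution)
                     + other.ticks * (res // other.resolution))
        return out

# Whole + quarter + sixteenth note: 512 + 128 + 32 = 672 ticks.
print(DurationSketch(1, 4, 16).ticks)  # 672
# Two eighth notes equal a quarter note.
print(DurationSketch(8) + DurationSketch(8) == DurationSketch(4))  # True
```

The tick count makes arithmetic trivial, while the power-of-two check keeps the input vocabulary small and musical.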
from muse.durations.durations import Duration
# --- Example of a quarter note --- #
>>> Duration(4)
>>> # A quarter note duration.
>>> # There are 128 512th notes contained
>>> # within a single quarter note.
# --- Example of an eighth note --- #
>>> Duration(8)
>>> # The same example, but with an eighth note.
# --- Greater than one whole note --- #
>>> # A compound example, greater than one whole note.
>>> # 512 is a whole note.
>>> # 672 is whole note + quarter note + sixteenth note.
>>> Duration(1, 4, 16)
# --- Errors --- #
>>> # 3 is not a valid input, because there is
>>> # no such thing as a 'third' note.
ValueError: Value: 3
Value is not a valid beat subdivision. Value must be a positive integer which is a power of 2. (Example: [2, 4, 8, 16, 32, ...])
# --- Addition and Equality --- #
>>> # Two eighth notes equal a quarter note
>>> d = Duration(8) + Duration(8)
>>> d == Duration(4)
True
# --- Subtraction With Up-scaled Resolution --- #
>>> # A quarter minus an eighth equals an eighth.
>>> # The resulting duration object takes on the
>>> # higher resolution of the two operands.
>>> d = Duration(4, resolution=512) - Duration(8, resolution=1024)
So far, so good! We are able to create a wide variety of rhythmic durations from a limited set of valid inputs. But there is still more: tuplets.
Tuplets are tough. We want to preserve our ability to raise errors against invalid numerical inputs, but without restricting the possible valid duration values. Depending on the resolution, there are likely thousands to hundreds of thousands of possible subdivisions of a given beat, so you can’t just efficiently check whether a value appears in some kind of list or database.
In this context, I think they are best viewed as compound fractions that distort our normal duration values (expressed as powers of 2).
>>> # The tuplet '3 against 4 where the 8th note gets the beat'
>>> # distorts the value of three eighth notes
>>> # (1.5 quarter notes) to one (1) quarter note.
>>> d = Duration(beats=[8,8,8], tuplet='8/3/4')
>>> # Without the tuplet, the same three eighth notes
>>> # retain their full value of 1.5 quarter notes.
>>> d = Duration(beats=[8,8,8])
>>> # But we continue to raise errors with invalid inputs.
>>> d = Duration(beats=[8,8,3], tuplet='8/3/4')
ValueError: Value: 3
Value is not a valid beat subdivision. Value must be a positive integer which is a power of 2. (Example: [2, 4, 8, 16, 32, ...])
Overall, I’m fairly happy with this. There are some minor bugs to work out when doing math with tuplets, and I’d like to allow for nested tuplets as well, but this is a good start. By assigning duration objects to attributes of notes and other musical events, I should be able to easily identify objects whose durations intersect at a specified point or range of positions within a measure, in the context of the given meter, and I have a simple, familiar way of expressing these values (as lists of powers of 2), even at very high degrees of precision.
One major challenge that arose was how to implement tuplets. I initially created a Tuplet type that inherited from Duration, which seemed logical at the time but resulted in some really hard-to-follow code. My mentor helped me explore other design possibilities, and I wound up passing an instance of my Tuplet class to a normal Duration object instead. Essentially, rather than representing a duration value, a Tuplet represents a factor by which a duration value is distorted. In the end, this resulted in more comprehensible code.
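The composition-over-inheritance shape can be sketched roughly as follows. The class names are my own stand-ins (the real muse objects differ): the tuplet contributes only a Fraction factor, and the duration applies that factor to its plain power-of-two tick count.

```python
from fractions import Fraction

class TupletSketch:
    """A distortion factor, not a duration: '3 notes in the time
    of 2' (an eighth-note triplet) yields a factor of 2/3."""
    def __init__(self, actual, in_the_time_of):
        self.factor = Fraction(in_the_time_of, actual)

class TupletDurationSketch:
    def __init__(self, ticks, tuplet=None):
        # With no tuplet, the duration is left undistorted.
        factor = tuplet.factor if tuplet is not None else Fraction(1)
        self.ticks = int(ticks * factor)

# Three eighth notes (3 * 64 = 192 ticks at a 512 resolution)
# squeezed into the time of two: 192 * 2/3 = 128 ticks, one quarter.
triplet = TupletSketch(3, 2)
print(TupletDurationSketch(192, triplet).ticks)  # 128
```

Because the tuplet never pretends to be a duration, the power-of-two validation in Duration stays untouched, which is exactly what made the inheritance version so confusing to maintain.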
Tunings
Alternative tuning systems (alternative, that is, relative to the standard 12-TET equal temperament used by Western music) pose unique challenges to a lot of traditional instruments and software. It was important to me to explore how these tuning systems and their unique scales are derived.
I wrote some early code exploring tunings and scales in the past, and despite my best efforts, it always came out a bit complicated and gnarly. When generating a list of pitch frequencies, it also proved difficult to iterate across multiple octaves, depending on the particular range being generated.
I decided to try designing the tuning system as a callable iterator this time. In Python, that basically just meant defining __call__, __iter__, and __next__; I intended it to work much like the built-in range() does. This has a number of nice aesthetic benefits, such as allowing a user to generate lists of pitches of arbitrary length in either direction of the reference pitch just by using the iterator in a list comprehension.
from muse.scales.tunings import EqualTuning, JustTuning
# Equal temperament, starting from 440 Hz and descending the octave.
>>> et = EqualTuning(12)
>>> equal_tune = [pitch for pitch in et(0, 13, -1)]
[440 hz, 415.3 hz, 392.0 hz, 369.99 hz, 349.23 hz, 329.63 hz, 311.13 hz, 293.66 hz, 277.18 hz, 261.63 hz, 246.94 hz, 233.08 hz, 220.0 hz]
# Just tuning based on intervals from the harmonic series.
>>> t = JustTuning('harmonic')
>>> just_tune = [pitch for pitch in t(0, 13, 1)]
[440 hz, 495.0 hz, 528.0 hz, 550.0 hz, 586.67 hz, 618.75 hz, 660.0 hz, 704.0 hz, 733.33 hz, 792.0 hz, 825.0 hz, 880.0 hz]
Full disclosure: I’m still working out the non-equal tunings. They aren’t strictly necessary for my fugue-generation goal, so I’ve de-prioritized them. There is an implementation of these Just tuning types, but they currently generate some erroneous output.
It is worth mentioning as well that JustTuning started out in life as simply Tuning, with their logic mixed together. Though I haven’t quite harvested the fruits of the separation, I wound up implementing a Tuning abstract base class (via abc.ABCMeta) that defines the basic behavior of these objects as callable iterators, providing a blueprint for future tuning implementations and separating the buggy JustTuning from the perfectly functional EqualTuning. Even just separating the logic has made me feel less afraid of modifying and fixing the JustTuning code, and more comfortable with de-prioritizing it now that the buggy code has been totally isolated. This was a suggestion from my mentor that really helped, and it exposed me to the useful abc.ABCMeta for the first time.
Overall, I like the callable iterator syntax for this purpose. It freed me from worrying that the inflexibility of a static list-type data structure would affect my ability to cleanly generate these pitches once I moved on to generating chromatic scales. It also has the interesting property of being able to iterate (almost) infinitely, constrained only by the limitations of Pitch.

Pitch is not the same as frequency. While frequencies form an infinite continuum, a frequency is a pitch only if it can be heard. Therefore, Pitch is limited to the range of human hearing, roughly 20 Hz to 20,000 Hz. A Tuning is designed to be an infinite iterator, but will raise StopIteration if a PitchRangeException occurs. Though it is a relatively minor and simple detail, I felt that there is beauty in that design, which is very naturally facilitated by Python’s error handling system.
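The shape of that design can be sketched as follows. Everything here (the class name, the audible-range constants, the PitchRangeError stand-in) is my own illustration rather than the muse implementation, but it shows the mechanics: __call__ configures a range()-like window, __next__ computes equal-tempered pitches, and a pitch leaving the audible range converts a pitch error into StopIteration.

```python
class PitchRangeError(Exception):
    """Stand-in for muse's PitchRangeException."""

# Rough bounds of human hearing, in Hz.
AUDIBLE_LOW, AUDIBLE_HIGH = 20.0, 20_000.0

class EqualTuningSketch:
    def __init__(self, divisions=12, reference=440.0):
        self.divisions = divisions    # equal steps per octave
        self.reference = reference    # reference pitch in Hz

    def __call__(self, start, stop, step=1):
        # Configure the iteration window, range()-style; the sign
        # of `step` picks the direction from the reference pitch.
        self._count, self._index = start, start
        self._stop, self._step = stop, step
        return self

    def __iter__(self):
        return self

    def _frequency(self, index):
        freq = self.reference * 2 ** (index / self.divisions)
        if not (AUDIBLE_LOW <= freq <= AUDIBLE_HIGH):
            raise PitchRangeError(freq)
        return round(freq, 2)

    def __next__(self):
        if self._count >= self._stop:
            raise StopIteration
        try:
            freq = self._frequency(self._index)
        except PitchRangeError:
            # A frequency outside human hearing is not a pitch,
            # so the 'infinite' iterator simply stops here.
            raise StopIteration
        self._count += 1
        self._index += self._step
        return freq

# 440 Hz down one octave: 13 equal-tempered pitches, ending at 220.0.
et = EqualTuningSketch(12)
print([p for p in et(0, 13, -1)])
```

The appeal of the try/except shape is that the pitch logic only knows about hearing, while the iterator only knows about stopping; Python's exception system glues them together.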
Notes and Scales
Can you believe that with all of this talk about music, we have yet to define what a Note even is?! The reason is, of course, that notes are fairly complex. The way I reason it, they are basically logical constructs that have a pitch attribute, plus dynamic name and degree attributes that change depending on the context of scales, chords, etc.
It turns out, there is a lot of contextual information in a note. This code is quite new and has many changes upcoming in the immediate future, so I’ll leave it at a brief example:
from muse.scales.scales import ChromaticScale, DiatonicScale
>>> cs = ChromaticScale('C#')
>>> cs.ascending_octave
[C#4, D4, D#4, E4, F4, F#4, G4, G#4, A4, A#4, B4, C4]
>>> ds = DiatonicScale('melodic_minor', cs)
>>> # Ascending and descending forms of C# melodic minor:
['C#', 'D#', 'E', 'F#', 'G#', 'A#', 'B#']
['C#', 'B', 'A', 'G#', 'F#', 'E', 'D#']
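As a toy illustration of how a diatonic scale can be carved out of a chromatic one, here is a sketch that is not the muse implementation: walk the chromatic scale with a semitone-step pattern. Note spelling is simplified to sharps, so the raised seventh of C# melodic minor appears as 'C' rather than the proper 'B#'.

```python
# Twelve-tone chromatic scale with sharp-only spellings.
CHROMATIC = ['C', 'C#', 'D', 'D#', 'E', 'F',
             'F#', 'G', 'G#', 'A', 'A#', 'B']

# Ascending melodic minor: whole, half, whole, whole, whole, whole.
MELODIC_MINOR_ASC = [2, 1, 2, 2, 2, 2]

def diatonic(tonic, steps):
    """Collect scale degrees by stepping through the chromatic scale."""
    i = CHROMATIC.index(tonic)
    scale = [CHROMATIC[i]]
    for step in steps:
        i = (i + step) % len(CHROMATIC)
        scale.append(CHROMATIC[i])
    return scale

print(diatonic('C#', MELODIC_MINOR_ASC))
# ['C#', 'D#', 'E', 'F#', 'G#', 'A#', 'C']
```

Getting the enharmonic spelling right (B# versus C) is precisely the kind of contextual information a real Note type has to carry, which is why the names above are dynamic attributes in my design.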
I have a lot of goals in mind:
- Meet programmers
- Have other people look at my code
- Expose myself to some small amount of stress and mid-difficulty situations in programming, to condition myself for bigger challenges
- Learn about and conform to common practices of professional programmers
- Learn to communicate effectively about code in person
- Learn to code more effectively
- Write a musical fugue generator using my own Python library
I’m having fun, and based on the above, it’s going well! I’m getting a lot of value out of the mentorship program.
On the other hand, while the progress is real, it is also behind schedule. I’ve expended a significant portion of my total project time designing these fundamentals, and there are real and significant challenges ahead in terms of generating actual music.
It will be a careful balancing act moving forward, but for now, I am maintaining my original objectives and holding on to my original concept of how this thing should be made. If time constraints catch up to me and push me to make difficult decisions, may it be all the more interesting to see the results!