Generative music using Reason part 1

An autonomous system

Richard Bultitude · Nov 15, 2013

1.1 Introduction

In this blog series I aim to inspire users of any level by explaining the techniques used to create generative music in Reason. I will mainly focus on the generation and arrangement of sound purely using CV/gate, employing Reason as a kind of modular synthesizer.

Expect some tips and tricks that I have learned over the years, which can be applied to other production styles and which might spark a new idea for combining outputs and devices.

I have provided Reason files (version 6.5.1) where possible so that you can follow the process easily and develop your own arrangements using them as templates.

The series starts in baby steps so that newcomers to Reason don’t get confused. Part 1 introduces CV/Gate and the basic arrangements which allow us to create rhythms without the use of any sequencers. More experienced users might want to skim read this part and await Part 2.

I will continue to write more pieces beyond this series and should I find any more techniques or tools that can take this thinking further I will be sure to blog about them. I also plan to release a series of generative tracks as free and open source downloads replete with notes and rendered audio.

Comments are welcome and I will do my best to answer any questions.

1.2 Principles

In order to create generative music in Reason we first need to get to grips with the basic principles of an autonomous system. In programming, an autonomous system is one in which different elements interact without external intervention, performing actions that are part of their own behaviour. In a visual program, for example, facets of these elements might change over time, such as their position, colour or distance from each other. What causes these changes might be part of their own behaviour or the consequence of an interaction with other elements in the system. In this way the system evolves, and this process of evolution is what can be used to generate something we might find pleasing, interesting or both.

What makes autonomy in creative technology interesting is the marriage of automated tasks and unpredictability. Artists such as Brian Eno, Keith Fullerton Whitman and Autechre have successfully managed to strike a balance between order and chaos by creating systems that generate sounds but that can be fine-tuned to deliver something sonically interesting and at times very beautiful.

I wouldn't mind if everyone had to use the same three tools to make music because ultimately it’s down to your imagination.

Sean Booth, Autechre

1.3 Our tools

It is not possible to program our own behaviours in Reason (as it is in Max/MSP, SuperCollider or Bidule), but there is a lot more to Reason than first meets the eye. One very important feature it offers under the bonnet is CV/Gate, and with it we can use all the devices available to us to turn Reason's rack into a modular synthesiser.

1.3.1 CV/Gate

CV/Gate (Control Voltage and Gate) in Reason is modelled on the analogue system of the same name used in keyboards and drum machines from around 1960 to 1980. CV is used to control things like pitch and velocity; Gate is used to control note on/off.

This technology pre-dates MIDI and understandably has its own limitations, but it also offers some very neat benefits. For our purposes, CV and Gate signals will be used to trigger sounds or automate other aspects of the devices they are assigned to.
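
If you are more comfortable thinking in code than in voltages, here is a tiny Python sketch of the idea. It has nothing to do with Reason's internals; it simply pictures CV as a continuous value and Gate as an on/off message, which is the mental model we will rely on throughout this series.

```python
# A rough mental model of CV and Gate as values over time.
# Plain Python for illustration only; this is not how Reason works internally.

from dataclasses import dataclass

@dataclass
class Step:
    cv: float    # continuous value, e.g. pitch or velocity (normalised 0.0 to 1.0)
    gate: bool   # note on/off

# The gate says *when* to trigger, the CV says *how* (here: how hard).
pattern = [
    Step(cv=1.0, gate=True),   # accented hit
    Step(cv=0.4, gate=True),   # quieter hit
    Step(cv=0.0, gate=False),  # silence
    Step(cv=0.7, gate=True),
]

for i, step in enumerate(pattern):
    if step.gate:
        print(f"step {i}: trigger drum at velocity {step.cv:.1f}")
    else:
        print(f"step {i}: no trigger")
```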

1.3.2 LFO

Our allies in this will be Low Frequency Oscillators (LFOs). These components output a wave or pulse at regular intervals and do so independently of the sequencer. This means we can create a system that lives and breathes on its own, without human interaction, and avoid the traditional method of sequencing MIDI. Let us begin…

1.4 Basic setup

Let's get started by generating a simple beat using an LFO CV signal and a drum machine. Create a device that has LFO CV & Gate outputs, such as the NN-19, and a Kong Drum Designer. Name the NN-19 "controller one" for clarity, and in the Kong add some drum sounds. For the sake of this demo I'm going to add some synth drums by selecting each drum channel in turn, clicking 'Show Drum and FX' and then selecting one of the drum synths from the drop-down list next to the ON button in the drum module.

I'm going to add a bass drum to channel 1, a snare to channel 2 and a hi-hat to channel 3. Name the channels appropriately to avoid confusion later on.

Fig 1.1 Creating drum sounds in Kong

Now that I have my drum sounds ready I'm going to turn my attention to the NN-19, whose LFO will be the trigger for our sounds. On the back of the NN-19 (press Tab on your keyboard to see the back of the equipment rack), at the top right, you will see a socket called LFO in the Modulation Output area. If you can't easily see this because of all the cable clutter, then press K; this will make all but the selected device's cables translucent. Click and drag from the LFO socket down to the Gate In on Kong's Channel 3 (see Fig 1.2). I have also provided a Reason file which contains this setup.

DL: LFO-CV-to-Kong.reason

Fig 1.2 LFO-CV-to-Kong

You should now hear the hi-hat sound being triggered repeatedly. What tempo this hat plays at depends on the rate of the LFO, so if you flip back to the front rack view and play with that dial you'll hear the tempo change drastically. The change is so great because the LFO is not, by default, in sync with the tempo of the track and so offers a huge range, with 0 being the equivalent of around 4 BPM and 127 being something beyond 250 BPM. So, in order to create something with a solid tempo, I will turn on LFO Sync using the button next to the rate dial.
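
To see why syncing matters, here is a small Python sketch of the underlying arithmetic. It is purely illustrative and does not model the NN-19's actual rate curve; it just contrasts a tempo-synced division with a free-running rate in Hz, assuming 4/4 time with quarter-note beats.

```python
# Illustrative arithmetic for tempo-synced vs free-running LFO rates.
# The NN-19's 0-127 rate dial is not modelled here.

def synced_period_seconds(bpm: float, division: str) -> float:
    """Seconds per LFO cycle for a tempo-synced rate such as '1/8'."""
    numerator, denominator = division.split("/")
    beats_per_cycle = 4 * int(numerator) / int(denominator)  # assuming 4/4, quarter-note beats
    return (60.0 / bpm) * beats_per_cycle

def free_period_seconds(rate_hz: float) -> float:
    """Seconds per cycle for a free-running LFO specified in Hz."""
    return 1.0 / rate_hz

print(f"{synced_period_seconds(bpm=125, division='1/8'):.2f} s per cycle, locked to the song tempo")
print(f"{free_period_seconds(rate_hz=2.0):.2f} s per cycle, regardless of the song tempo")
```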

Hopefully you can already see what is so interesting about this approach: the program is generating sound without any input from external devices and without the sequencer playing. We are entering a world where sequencers are not necessary.

1.5 A note about CV and LFO waveforms

At this stage it is worth explaining some basics about how LFOs work with CV. As an LFO signal is essentially a wave, its shape affects the velocity of the sound it triggers. If you play around with the waveforms of the LFO you will hear significant differences in the volume of the sound that is triggered; if I change from the default triangle waveform to the square waveform, for example, the hi-hat sound gets significantly louder. This can be used to our advantage in that it gives us more control over the accent of the triggered sound.
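
If you like, you can picture why the shape matters with a few lines of Python: sample each wave shortly after the start of its cycle and compare the levels. This is a deliberate simplification, not a description of how Reason derives the gate velocity, but it shows why a square wave tends to hit harder than a triangle.

```python
# A hedged illustration of why the LFO's shape changes the apparent velocity
# of the triggered hit. The sampling point is made up for illustration.

def triangle(phase: float) -> float:
    """Triangle wave: level rises to a peak mid-cycle (phase and level in 0..1)."""
    return 1.0 - abs(2.0 * phase - 1.0)

def square(phase: float) -> float:
    """Square wave: full level for the first half of the cycle, then zero."""
    return 1.0 if phase < 0.5 else 0.0

def sawtooth(phase: float) -> float:
    """Falling sawtooth: starts at the peak and decays across the cycle."""
    return 1.0 - phase

# Sample each wave just after the start of the cycle, roughly where a gate
# triggered from that wave would pick up its level.
for name, wave in [("triangle", triangle), ("square", square), ("sawtooth", sawtooth)]:
    print(f"{name:9s} level just after trigger: {wave(0.05):.2f}")
```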

1.6 LFO synch

As you may have found playing around with LFOs in Reason, they are not automatically in perfect synchronicity even if they are set to tempo synch, as they may well not have been started at the same cue point. To demonstrate this problem I set up two separate NN-19s with their LFOs both set to tempo synch and the same rate (1/8). The problem arises when I toggle the tempo synch for either one of them. By interfering with the regularity of the pulse I effectively move the start point of the trigger, making the two beats slip out of phase. I have uploaded a Reason document to illustrate this (two-cv-signals.reason). You will notice that on launch they will be in perfect synch, as both waves start from the same point (or angle), so uncheck the synch button on one of them and click it again and you should see what I mean: it is very difficult to get them back in perfect synch once they are out of phase.

DL: two-cv-signals.reason
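
Here is the phase problem in miniature, as a Python sketch with made-up numbers: two LFOs running at exactly the same rate, one of which was restarted a fraction of a second later. They never line up again on their own.

```python
# Two LFOs at the same rate can still be out of phase if one was (re)started
# later. Purely illustrative; the numbers are arbitrary.

rate_seconds = 0.5      # both LFOs cycle every half second
lfo_a_offset = 0.0      # started on the beat
lfo_b_offset = 0.13     # re-synced mid-bar, so it starts 130 ms late

triggers_a = [round(lfo_a_offset + n * rate_seconds, 2) for n in range(4)]
triggers_b = [round(lfo_b_offset + n * rate_seconds, 2) for n in range(4)]

print("A fires at:", triggers_a)   # [0.0, 0.5, 1.0, 1.5]
print("B fires at:", triggers_b)   # [0.13, 0.63, 1.13, 1.63], always 130 ms behind
```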

1.7 Building rhythms

A solution to the synch issue and a great way of building rhythms is to use one master LFO which controls all the various devices and sounds. The easiest way to make this happen is to use the Spider CV merger/splitter. This handy utility device allows us to duplicate the signal many times and apply it to anything we like. So, in our original Reason file, create a Spider CV merger/splitter, disconnect the CV from the Kong and connect the LFO from the NN-19 to the Spider’s Split A socket, then drag a cable from the adjacent socket back to Channel 3 on the Kong.

Now you're back where you were before, only with all the other outputs from the Spider at your disposal, giving us duplicate signals to plug into other sockets. Drag a cable from the next one along on the Spider to Channel 2 on the Kong and you'll hear the snare sound at the same time as the hi-hat. The Spider also offers us an inverted signal, so we can use this to trigger the snare on the off-beat. See Fig 1.3 below and the downloadable Reason file:

DL: LFO-to-Spider-to-Kong.reason

Fig 1.3 LFO-CV-to-Spider-to-Kong

As you can hear in the example the snare sound is very quiet. This is because I am using the sawtooth waveform which starts with a peak and ends with a trough. If you choose the inverted sawtooth waveform on the NN-19 you’ll hear a louder snare sound and much quieter hi-hat.
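
The Spider's inverted output is easiest to picture as a mirror image of the original signal. The little Python sketch below (illustrative only, with levels normalised to 0..1) shows why the inverted copy of a falling sawtooth peaks late in the cycle, which is what puts a sound on the off-beat and also explains the velocity difference described above.

```python
# Why the inverted output gives you the off-beat: wherever the original LFO
# is high, the inverted copy is low, and vice versa. Illustrative only.

def sawtooth(phase: float) -> float:
    """Falling saw: peaks at the start of the cycle, decays to zero."""
    return 1.0 - phase

def inverted(level: float) -> float:
    """A conceptual stand-in for the Spider's inverted output."""
    return 1.0 - level

for phase in (0.0, 0.25, 0.5, 0.75):
    original = sawtooth(phase)
    print(f"phase {phase:.2f}: original {original:.2f}, inverted {inverted(original):.2f}")
```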

It's worth noting at this stage that we can duplicate these signals as many times as we like by chaining together Spiders: just keep taking a spare line out and plugging it into one of the split sockets. Within a single Spider you can also take a line from Split A and plug it into Split B, giving you another two regular signals and one inverted one. So take one of the Spider outputs and plug it into Channel 1, and use the inverted one for Channel 2 on Kong. See Fig 1.4 and the Reason file below.

Fig 1.4 LFO-CV-to-Spider-split-to-Kong

DL: LFO-to-Spider-split-to-Kong.reason

Now we should have a steady beat which uses all three synth drums and is generated by the LFO from the NN-19. However, this isn't a very nice beat, so to take it a little further I'm going to add another device and use its LFO to generate a different rhythm.

So, on to our new device: this time I'll set the synch to 1/16 and assign it to the hi-hat. Rename the original controller "1/4 controller" and the new one "1/16 controller", then rename the Spider to "1/16 spider", so you know which beat it is associated with. Now we have two different rhythmic sources and a slightly more complex setup.

DL: two-controllers.reason

As mentioned earlier any new signal we create is likely to be out of synch with our main controller to begin with, so press play (and then stop) or re-launch the Song to ensure they are all in synch.

Try turning the volume of the hi-hat down a little and controlling (or automating) the snare decay. It should start to sound like a steady techno beat albeit one that’s not very dynamic, so let’s work on that.

1.8 Mixing in a little chaos

There are lots of interesting ways you can add more natural variance to what we have using CV from LFOs. One of the most exciting involves harnessing the power of the Random (or sample and hold) LFO shape. This is either of the bottom two shapes listed on the NN-19 (see orange highlight in Fig 1.5 below).

Fig 1.5 lfo-random-highlighted

Applied to something like the hi-hat sound, it gives us a seemingly unpredictable and wonderfully varied rhythm that is still in time with the other percussion sounds. Now, if I create another controller (an NN-19 with the synch set to an 8-beat rate), assign it to a new drum sound (say another hi-hat on Channel 4 of our Kong) and set its LFO to random, we can start to layer a nice beat. Let's add another drum machine; this time, playing around with the different properties of the drum sounds will really bring it to life. Listen to my example here or download the Reason file.

DL: simple-cv-techno.reason
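
For the curious, here is a rough Python sketch of what a sample-and-hold "random" LFO is doing: it still steps in strict time, but the level it holds on each step is random, which is why the hits vary in accent yet stay on the grid. The threshold below is an invention for the sake of the printout, not how Reason decides whether a gate fires.

```python
# Sketch of a sample-and-hold ("random") LFO stepping in time with the tempo.
# The threshold is made up purely so the example prints hits and rests.

import random

random.seed(1)                 # fixed seed so the printout is repeatable

steps_per_bar = 16             # e.g. a 1/16 synced LFO in 4/4
levels = [random.random() for _ in range(steps_per_bar)]

threshold = 0.5                # pretend a hit only registers above this level
for i, level in enumerate(levels):
    result = "hit" if level > threshold else "rest"
    print(f"1/16 step {i:2d}: level {level:.2f} -> {result}")
```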

You will notice I have used a duplicate signal of the random LFO to automate one of the Scream FX parameters in the mixer’s auxiliary Effects. Hopefully you’re starting to see the possibilities opening up.

So far we have just been using the NN-19 to supply our control voltage, but there are other devices with more wave shapes on offer that we can take advantage of. In the above example I swapped an NN-19 for a Malström and had a play around with the wave types. You'll notice that it offers a lot more variety.

1.9 CV merge

As you may have noticed, the Spider CV device also allows you to merge CV signals. This lets us produce rhythms more efficiently using one sound or channel. In the example below (Fig 1.6 and the Reason file CV-merge.reason) I have hooked up two independent CV sources (two NN-19s, one using a 1/4-beat LFO and one using a 1/8-beat LFO) and plugged them into the first two inputs in the Spider's merge section.

The LFO wave shape is really important here: you need two signals with the same phase, otherwise you will experience cancellation. This happens when two wave shapes with opposite start points are merged: they cancel each other out and create a neutral signal which doesn't trigger anything.

Fig 1.6 CV-merge

DL: CV-merge.reason
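
A quick Python sketch of the merge idea, with everything normalised to 0..1 and nothing modelled on the Spider's actual circuitry: adding a faster LFO to a slower one gives a useful composite pattern, while adding a phase-opposed copy flattens the signal completely, so nothing gets triggered.

```python
# Merging CV signals by simple addition. Illustrative levels only.

def triangle(phase: float) -> float:
    """Unipolar triangle wave, phase and level both in 0..1."""
    return 1.0 - abs(2.0 * (phase % 1.0) - 1.0)

quarter = [triangle(n / 8) for n in range(8)]      # slow LFO (one cycle over 8 steps)
eighth  = [triangle(n / 4) for n in range(8)]      # twice as fast
inverse = [1.0 - level for level in quarter]       # phase-opposed copy of the slow LFO

merged_rhythm    = [a + b for a, b in zip(quarter, eighth)]    # a useful composite pattern
merged_cancelled = [a + b for a, b in zip(quarter, inverse)]   # flat: every value is 1.00

print("quarter + eighth :", [f"{x:.2f}" for x in merged_rhythm])
print("quarter + inverse:", [f"{x:.2f}" for x in merged_cancelled])
```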

By now you should have a decent understanding of how the CV signal from any device's LFO can be used to trigger sounds on other devices, and how the system can be configured to make regular beats that have some variance. Next we'll take the idea much further and explore ways in which CV signals can be used to automate other aspects of our Reason studio while still avoiding the use of the sequencer.

1.10 Moving on

In Part 2 I will show how we can incorporate other devices, building on our current knowledge. We can create really interesting music using modular synthesis, but before we move on there are a couple of things I want to cover:

1.10.1 CV signals

So far we have only used CV/Gate to trigger sounds, but to take things further sonically we will need to use CV as a modulation signal. CV can be used to control aspects of our chosen sounds in the same way you might use automation in the sequencer. An example of this would be using a CV signal to control the filter frequency of a target device.
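
As a rough picture of CV-as-modulation, here are a few lines of Python sweeping a hypothetical filter cutoff with a normalised LFO signal, much as drawn automation would. The parameter names and ranges are made up for illustration; they are not Reason values.

```python
# CV used as a modulation signal rather than a trigger: an LFO-style value
# sweeps a hypothetical filter cutoff. Values are illustrative only.

import math

base_cutoff_hz = 800.0     # the filter's own setting (made-up value)
sweep_depth_hz = 600.0     # how far the CV is allowed to push it

for step in range(8):
    phase = step / 8
    cv = (math.sin(2 * math.pi * phase) + 1) / 2    # LFO level normalised to 0..1
    cutoff = base_cutoff_hz + sweep_depth_hz * cv   # CV sweeps the cutoff, like drawn automation
    print(f"step {step}: cv {cv:.2f} -> cutoff {cutoff:6.1f} Hz")
```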

1.10.2 Trim

CV trim allows us to control the amount of modulation the CV signal applies to our destination parameter. In many cases you will want to alter the amount of modulation your CV signal applies. So, taking one of the examples above, we might want to modulate the filter frequency of a synth sound (in Subtractor, for example) but only apply this sweeping effect gently. To alter the amount of the signal that is applied, flip the rack and use the trim dial next to the parameter you want to control. The tooltip will tell you what the parameter is called, as depicted in Fig 1.7 below.

Fig 1.7 CV-trim
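
Conceptually, trim is just a scaling factor between the incoming CV and the parameter it lands on. The sketch below uses made-up normalised values and is not a model of Reason's actual scaling; it only illustrates the "how much of this signal gets through" idea.

```python
# Trim as a simple scaling factor: 0.0 means the CV has no effect,
# 1.0 applies it in full. Illustrative normalised values only.

def apply_cv(base_value: float, cv_level: float, trim: float) -> float:
    """Trim scales how much of the incoming CV reaches the parameter."""
    return base_value + cv_level * trim

base_cutoff = 0.5      # the parameter's own setting, normalised to 0..1
cv_level = 0.4         # the incoming modulation signal

for trim in (0.0, 0.25, 1.0):
    print(f"trim {trim:.2f}: modulated value {apply_cv(base_cutoff, cv_level, trim):.2f}")
```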

1.11 Conclusion

What we should have at the end of Part 1 is a good grasp of how CV/Gate signals can be used to trigger sounds and automate parameters and how, by using this system, we can avoid the sequencer and let the machines do the work for us.

I hope all of you Reason users new to these ideas have found this a straightforward introduction to an interesting setup, and that the more advanced users will bear with me before I move on to the real juice.

Part 2, Building Complexity, will reveal some techniques which give us more control over our sounds, the use of audio in our system and some of the ideas we can apply to create truly generative music.
