Max MSP Basics

Contents (click to jump to section):

What is Max/MSP?
Working with audio
Numbers, messages and lists
Programme flow and control
Making patches simpler
Playing audio samples
Simple MIDI control
Enveloping and cross-fading
Designing the user interface
Soundfile playback and timing
Refining soundfile playback control and timing
More elegant approaches to additive synthesis
A better way to create polyphony: poly~
Breaking out: physical I/O and Arduino
Working with acoustic instruments
Audio processing
Further audio manipulation
Jitter
Jitter II
Gen~
OSC
Javascript in Max
Algorithmic composition

 

What is Max/MSP?

 

Max/MSP (often just called ‘Max’) is a ‘multimedia programming environment’ which will allow you to create pretty much any kind of music or audio software you can think of. It can also handle video using a built-in extension called ‘Jitter’.

To get more of an idea of what Max can do, visit the website www.cycling74.com and click on the ‘projects made with Max’ link.

 

Making your first ‘Patch’

 

A programme in Max is called a ‘patch’ (or ‘patcher’). This is because it is made by connecting (or ‘patching’) graphical objects together on the screen. To create a new patch, select File>New Patcher (⌘N). This creates the window in which you will make your patch.

The patcher has two modes: EDIT MODE for editing the patch (creating objects, making connections), and LOCKED MODE for actually using the patch. If you want to press buttons, move sliders and so on, you need to lock the patch with ⌘E.

Now double-click anywhere in the window. The object palette appears. If you hover over each object, you will see its name. (There are many more objects than these – but these are the most common, basic ones.) When you’ve made an object, you can resize it, drag it around the screen, and cut, copy and paste it. If you drag it while holding alt, you get a copy.

As well as the objects shown in the palette, there are many more objects. To create these you must use an object box, which is a ‘blank’ box into which you type an object name. Objects are really small programs which you put together to make larger programs.

 

Elements

There are several different elements that go to make up a patch. The message box is simply a container for any piece of text or a number, which is sent out when the box is clicked.

 

A Max object carries out some sort of function on the data passing through it.

An MSP object works similarly to a Max object but at a far higher processing speed (audio rate) and is therefore more suited to direct handling of audio data. It looks different, and has a ~ sign at the end of its name to remind you that it is MSP and not Max.

 

Arguments can be added to an object to tell it how to behave. For instance, a cycle~ object will generate a sine wave, but adding the argument 440 will make the sine wave oscillate at 440 Hz.

A comment box lets you comment on your patch. Comments can tell you, or other people working on the patch, how it is put together, which is very helpful for fault-finding and revision.

Objects can be connected together with patch cords. Each object has inlets at the top, and outlets at the bottom. You make a patch cord by dragging from the outlet of an object. When you drag the cord near the inlet of another object, a hint appears telling you about that inlet. When you release the mouse, the connection is made. You can only connect outlets to inlets (i.e. bottom to top). Different objects have different functions, and different numbers of inlets and outlets depending on their function.


 

Order of Execution

 

By default Max works from right to left and top to bottom across the patcher. In many cases this means differences of only fractions of a millisecond, but in some cases it can be crucial.

 

HELP!

 

All objects have a built-in help file. To see it, alt-click on the object. The help files are not just manual pages, but working patches in themselves. You can unlock them, see what’s going on, and copy the contents and paste them into your own patch. THIS IS VERY USEFUL!

Each help file also has an open object reference link at the top right, which takes you to a more comprehensive manual page for that object. It also has a See Also link at the bottom right, which shows objects that do similar things to the one you’re looking at.

 

Keeping things neat

 

It is important to try to keep your patch neat. Things can get very messy and tangled if you’re not careful, and then finding and fixing problems can be a real nightmare. To align objects and patch cords nicely, select them and type ⌘Y. Commenting your patch is also very important.

 

SAVE!

 

Always save your work! Always do this often!

 

Have a go at building this and hit the toggle….

[image: this week’s patch unlocked, with examples at the bottom]

[image: the same patch locked and formatted in presentation view]

* * *

Working with audio

 

Max and MSP

Max/MSP is really two parts. Max is the part that handles numbers, messages, MIDI information and other data. MSP handles audio signals. (There is also a third part called Jitter which handles video signals, not covered in this module. Note: Max/MSP is often just referred to as ‘Max’ for short!)

Max and MSP are used together seamlessly in ‘Max/MSP’, but it’s often helpful to understand the distinction. For example, the manuals for Max and MSP are separate. Also, MSP objects use a lot more CPU (computing power) than Max objects, and knowing that can help you write programs that don’t make the computer work as hard.

The most obvious difference is in making connections. Max connections carry numbers and other data, whereas MSP connections carry audio signals. Max numbers and messages travel at a slower rate intended for MIDI-type events (the ‘scheduler rate’), whereas MSP signals run at the much faster audio sample rate.

You can easily tell the difference between Max and MSP connections when building your patch. Max connections are simple black lines (which you can colour) but MSP connections are thicker stripy lines.

You can also tell the difference between Max and MSP objects. MSP objects always have the symbol ‘~’ at the end of their name. Sometimes that distinction is crucial to avoid confusion. For example, the Max object cycle is completely different from and unrelated to the MSP object cycle~. However, Max/MSP helps you get it right, because it only lets you make the right kind of connections. For example, you can’t connect a signal cable to cycle, because it is not an audio object.


Some MSP audio objects

cycle~ a sine-wave oscillator

scope~ an oscilloscope for looking at signals

ezdac~ a simple audio output object, with graphic on/off button

gain~ a graphic-based signal level control

*~ a multiplication object for audio signals (NB: * is for numbers)

spectroscope~ a spectral signal scope

 

Some Max objects

message a simple container for any kind of data

int an integer number

float a floating point number

slider a graphic fader control for numbers

line a ramp (or envelope) generator. (Also line~ for signals)


A simple patcher using cycle~ to generate a sine wave. Pitch can be controlled by clicking and/or dragging the three objects at the top of the patch.

Numbers, messages and lists

A lot of what Max/MSP does is to do with numbers. Numbers can be musical parameters, MIDI notes, control messages, audio signals – in fact, almost anything. So understanding how to work with these numbers is really important.


Number boxes

Number boxes are the workhorse of Max/MSP. When we send a number into a number box, it is displayed in the box, and also sent out of the outlet. This allows us to see what is going on in our patch. You can also use number boxes as controls for the user, as their contents can be changed by scrolling with the mouse, or by typing in values directly.

 

Ints and floats

There are two kinds of number boxes: int and float. Int uses integers and float uses floating point numbers. If we send a floating point number into an int box, the decimal part is discarded (the number is truncated, not rounded).

 

Messages

Messages are pieces of text. A number box doesn’t respond to most messages. However, if we send the message ‘set 5’ to a number box, the box will be set to the number 5 BUT THE NUMBER WILL NOT COME OUT OF THE OUTLET. This will be very important to remember later on!

Variables

We can use variables in messages using the $ sign. When we include the symbol $1 in a message, then send a number into the message, the number replaces the $1 symbol, and the whole message comes out of the message outlet. For example, if we make a message box containing the text “My house has $1 mice” and send the number 5 into it, the message “My house has 5 mice” will be produced. (We can check this using the print object, which prints its output to the Max window.)
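If you are used to text-based languages, the $1 mechanism behaves a little like string substitution. A rough Python sketch of the idea (the function name is ours, not part of Max):

    def message_box(template, value):
        # the incoming value replaces $1, and the whole
        # completed message is sent out of the outlet
        return template.replace("$1", str(value))

    print(message_box("My house has $1 mice", 5))
    # -> My house has 5 mice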

Symbols

A ‘symbol’ in Max/MSP is one ‘nugget’ of information, which may have a number of different elements within it. This is a really, really important idea to grasp. The following example should help: if we make the message “My house has $1 mice and $2 bats” we can substitute two different numbers. BUT if we connect two number boxes, it won’t work. Each number box will substitute only the $1 argument, because each number is a different symbol (a symbol containing one number). To make the two numbers substitute $1 and $2 respectively, we need to make a single symbol containing the two numbers. We do this by putting the two numbers into one message box. So, A MESSAGE CONTAINING TWO OR MORE NUMBERS IS ONE SYMBOL.

Lists

It is possible for a message to contain more than one symbol, if we use commas. A comma separates one symbol from another, and creates what is called a ‘list’. The elements in a list are separate symbols, and get sent sequentially, one after the other, whereas elements in a single symbol get sent together.

Pack

Sometimes we have two or more number boxes, but we want them to behave like a single symbol (for example, to substitute for the arguments $1 and $2 in a message). We can make the two numbers into one symbol by using the object pack. We can put as many numbers into pack as we want – the arguments to pack set the number (and type) of its inlets, so pack 0 0 gives two int inlets. If we want pack to handle floating point numbers, we should supply floats (e.g. pack 0. 0.) as the initial arguments. (NOTE: a feature of pack is that the symbol we are creating only gets output when input is received at the leftmost inlet. If we want to create an output when any input is received, we must use the related object pak.)

Unpack

There is a corresponding object unpack which does the opposite of pack. Its input is a single symbol containing a number of elements, and its output is those elements sent out of separate outlets. Using a combination of pack and unpack can be very convenient, because we can send many different values down one single connection: we pack them together at the start, send them down one connection, then unpack them again at the end. Pack and unpack can handle messages as well as numbers.

Avoiding ‘Stack overflow’

Sometimes we might want to connect objects in a loop. For example, you might have a slider connected to a number box, and you want the user to be able to use either the slider or the number box to control the patch: moving the slider should change the number, and changing the number should move the slider. BUT when we connect objects in a loop, we get what is called a ‘stack overflow’. When this happens Max/MSP will stop working until we get it going again. The way to avoid this is to use the ‘set’ message. When we put ‘set’ before a number, we change the value of an object WITHOUT CAUSING THE VALUE TO BE OUTPUT. This avoids the endless loop.
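Here is a minimal Python sketch of why the ‘set’ message breaks the loop (the class and method names are ours; in Max the equivalent is sending set 5 rather than 5):

    class Control:
        # stands in for a slider or a number box
        def __init__(self):
            self.value = 0
            self.partner = None

        def send(self, value):
            # normal input: store the value AND pass it on
            self.value = value
            if self.partner:
                self.partner.set(value)   # 'set': silent update
                # if this called self.partner.send(value) instead,
                # the two objects would call each other for ever:
                # a stack overflow

        def set(self, value):
            # 'set' input: store the value, output nothing
            self.value = value

    slider, number = Control(), Control()
    slider.partner, number.partner = number, slider
    slider.send(42)   # both now hold 42, and the chain stops there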

Send and receive

Sometimes patches get very tangled, with lots of cables. We can avoid this by using the objects send and receive (or s and r). For example, if we create the object ‘send gaga’ and another object ‘receive gaga’, we create an invisible connection. Anything we send into ‘send gaga’ will be output from ‘receive gaga’. (NOTE: there are also signal versions of send and receive. These are called send~ and receive~, but CANNOT be abbreviated to ‘s~’ and ‘r~’.)

Continuous controllers and pack

Pack can be used for continuous controller data. (These are very fast sequences of numbers that create the impression of a continuously changing shape, such as the loudness envelope of a sound.) So, for example, if we were using line to create a continuous loudness curve, we could pack this data along with other data, and then unpack it again to separate it out from that other data.

Lists as input to line

Because elements in a list get sent one after the other, line can take a list as input to create a continuous controller with a more complex shape in time. For example, we can say “0, 100 1000, 50 1000, 200 1000”, which will create a continuous line that goes from 0 to 100 in 1 second, then 100 to 50 in 1 second, then from 50 to 200 in 1 second. Note carefully where the commas come. The message is a list of four symbols, and three of the symbols contain pairs of numbers. Each pair is a value and a time.

* * *

Programme flow and control

In a computer programme (which is what a Max/MSP patch is), events and actions are connected together in particular orders, and in cause-and-effect relationship – one thing leads to another. This is called ‘flow of control’ (or ‘control flow’). Max/MSP’s visual metaphor gives a very clear picture of the flow of control. An important control object is trigger (which can be abbreviated to t). Trigger can make a number of events happen in response to any event arriving at its inlet, and it allows you to determine the order of the triggered events. A kind of opposite of trigger is select (or sel). This looks for specific events, and produces a bang when it sees one. (select can look for multiple events!)

 

Debugging your flow of control

Debugging means finding and fixing problems. Sometimes it is not completely clear where the flow of control in a patch is – which events are happening in what order. This can make finding problems difficult. To get round this we can use ‘watchpoints’ and ‘breakpoints’. We add these to connections using the right-click menu. Watchpoints allow us to see what is going on in a connection. There is a special Watchpoints window we can use to see this information. There is also a Debugger window that will show us more information – but to use this we need to enable debugging (in the Debug menu). Breakpoints are similar, but they actually stop the patch working, freezing it in time so we can look at what’s going on. You have to use ⌘U to step the patch forward to the next breakpoint. Breakpoints also appear in the Watchpoints and Debugger windows.



Event order revisited

Many problems in Max patches are to do with event order. For example, in maths the sum (2 * 3) -1 is not the same as 2 * (3 – 1): the order of the calculation makes a big difference. Similarly in Max/MSP event order often makes a big difference. In Max/MSP events happen from right to left, and from top to bottom. And when events are connected in a chain, the whole chain of events is executed right to the end, before any other events are executed. This is the case even if the chain wanders around on the screen from left/right or up/down. What counts is the position on the screen of the first object. Once that object has been triggered, the whole chain will execute. If there are branches in the chain, the rightmost branch is executed first. For this reason it is important to keep your patch well organised.

Using line~ to make envelopes

In this example we need to give all the sine tones the same attack/decay ‘envelope’. We do this using line~, which produces a signal ramp going smoothly between 0.0 and 1.0, representing the quietest and loudest parts of the sound. We can control the shape of the ramp by sending a message box to line~. For example, the message 0, 1 500 0 2000 means ‘start at 0.0, then go gradually to 1.0 over a period of 500ms, then go back to 0.0 over 2000ms’. This whole shape is called the ‘envelope’, and it is generated as soon as we click on the message box.

Now, if we take the output of line~ and multiply it with our audio signal (using the *~ object), the loudness of our signal will change: it will start off silent, then get louder over 500ms, then get quieter again over 2000ms, finishing as silence. We call 500ms the attack time and 2000ms the decay or release time.

If we want the attack and decay times to be variable, we could change the message to 0, 1 $1 0 $2. Then we could put two number boxes into pak, and send this into the message box. One problem, though, is that the ramp would be triggered every time we change the number boxes. To prevent this, we could send the output of pak into the right inlet of another message box so that the two numbers are ‘stored’ there. Then we can send this message into the first message box (the one with $1 and $2). Now the envelope is only triggered when we click on the second message box (the one with the two numbers).

[image: the envelope patch unlocked, and the same patch in presentation mode]
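For those who like to see the numbers, here is a rough Python sketch (not Max code – the function name is ours) of what line~ does with a breakpoint message such as 0, 1 500 0 2000: jump to the start value, then ramp through each (target, time) pair at sample rate:

    def envelope(start, segments, sr=44100):
        # segments is a list of (target, time_ms) pairs
        out = [float(start)]
        for target, time_ms in segments:
            n = max(1, int(sr * time_ms / 1000.0))
            step = (target - out[-1]) / n
            out += [out[-1] + step * (i + 1) for i in range(n)]
        return out

    # attack to 1.0 over 500 ms, release to 0.0 over 2000 ms,
    # like the message "0, 1 500 0 2000"
    env = envelope(0.0, [(1.0, 500), (0.0, 2000)])
    # multiplying this ramp sample-by-sample with the audio
    # (the job of *~) imposes the loudness envelope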

 

Making patches simpler

Patches can quickly become very complex. An important way to control this is encapsulation. Encapsulation allows us to place complex parts of our patch inside another object called patcher (or just p). For example, suppose we want a sine generator with frequency and amplitude controls, and stereo phasing. This has quite a few bits to it.


We do this by selecting the parts of our patcher we want to encapsulate and typing shift-⌘-E. This puts everything into a new object, to which we can add a name if we wish (we don’t have to, but it makes things clearer). If we encapsulate a group of objects which are connected to objects we are not encapsulating, the new p object will automatically be given inlets and outlets to preserve these connections. If we double-click the p (in locked mode) we can see inside it. You will notice that the inlets and outlets are represented by special inlet and outlet objects. These are how the contents of the p communicate with the rest of the patch. You can label them using the ‘comment’ field in their info pages, so that when you hover the mouse over them you get a message telling you what kind of data you should connect. You can also use send and receive to get data into and out of a p object, without using inlet or outlet objects.

Object-Oriented Programming (OOP)

If we think things through properly, we can make p objects that represent a complete ‘chunk’ of our programme’s function. This makes things simpler, because we can then forget about what’s inside the p object – we just have to know that it works. (This is an example of ‘Object-Oriented Programming’ or OOP.) Let’s take the example of the sine generator we made for Exercise 2. Each sine can be thought of as a simple task: we supply a frequency and an amplitude, and we get out a sine wave as a stereo signal, with the right side 1Hz higher in frequency than the left. So we can encapsulate the whole process of generating the sine wave, ending up with a p object that has just two inputs (frequency and amplitude faders) and two outputs (the stereo signal). We could call it p sine_machine.

Before and after encapsulation

In our sine generator object, we needed 16 of these objects. We could do this by simply copying the p sine_machine object 16 times. NOTE: If we do this, any changes we make in one p sine_machine object will not affect any other p sine_machine objects, even though they have the same name.

Abstractions

This p sine_machine object is actually quite useful. We can see how we might want to use it again and again in other patches. To do this, we can save it as a separate patcher, which we can use in other patchers just as if it was a standard Max object. An object like this is known as an abstraction. Simply open the p sine_machine patcher, then choose save, and give it a unique name (for example, EW_sine_machine). Provided we save it in the Max search path (more on that below) we can then use this new object in any patch simply by creating a new object box called EW_sine_machine.

Abstractions with arguments

With standard Max objects, we can use arguments (for example, cycle~ 440 is the cycle~ object with the argument 440). Abstractions can also take arguments, just like normal objects. To do this we put the symbols #1, #2 etc. in our abstraction wherever we want the arguments to be used (so #1 will take argument 1, #2 argument 2, and so on). These symbols will then be replaced with the arguments supplied when the abstraction is used. For example, we could make it so that the initial frequency can be specified using an argument. We could do this by supplying the symbol #1 as the argument to cycle~ inside our abstraction (so, we make the object cycle~ #1 inside our abstraction, then save it). When we use the abstraction in our patch, we could supply the argument 440 (so, we type EW_sine_machine 440). Inside the abstraction, the #1 automatically gets replaced by 440, so the cycle~ object now reads cycle~ 440. (You can double-click on the abstraction within your patch to check that this is the case.) In this way you can build up your own personal library of useful objects. You can also download other people’s abstractions from the internet.

VERY IMPORTANT: File preferences and the search path

Abstractions are separate files from our main patch, so when we use them Max needs to know where to find them so that it can load them properly. Max looks in certain places on the computer, and these places are specified in the ‘search path’. You can check the search path by choosing Options > File Preferences… and you can edit it by adding folders in this window. It is a really, really good idea to add your own personal Max folder (where you keep your work) to the search path.

 

VERY, VERY, VERY IMPORTANT: Include dependencies when you build your patch

If you make your own abstractions, remember that your patch will not work without them!

* * *

Playing audio samples


groove~ and buffer~

 

There are a number of objects designed for playing back sound files and audio samples. Each one works differently, and is designed with different purposes in mind. The best one for playing back shorter samples on a keyboard is called groove~.

 

Like many of the other sampling and sound file objects, groove~ always needs another object called buffer~. buffer~ is where the sound is actually stored; groove~ is the object that plays the sound back by accessing buffer~. In effect, buffer~ represents a place in the computer’s memory (RAM) where the sound is stored.

 

buffer~ takes two main arguments. The first is the name of the buffer~. This name can be anything (for example drum_loop007). We will use the name to link the buffer~ and the groove~ – they do not need to be joined by a patch cord. The second argument to buffer~ is the name of the sound file we want to load. This can be any AIFF or WAV file, with up to four channels, but it must be in the Max/MSP search path (which you can set by going to Options > File Preferences). You can click on the buffer~ object to see the waveform of the sound it contains. To use groove~, we simply supply as an argument the name of the buffer~ we want to access (for example, drum_loop007). The second argument to groove~ is the number of output channels we want (typically two, for stereo!).

 

Making groove~ work

 

groove~ has two main inputs, which both go into the first inlet. The first input is an int or float, which starts the playback. This number represents where in the audio file to begin playing: 0 means ‘play from the beginning’; 1000 means ‘play from 1 second (1000 milliseconds) in, missing out the first second’.

 

The second input to groove~ is a signal. This represents the speed of playback. 1 means normal speed, 2 means double speed, 0.5 means half speed, and so on. You can vary this signal in real time. (NOTE: it is the fact that the two inputs are different types – signal and float/int – which means we can send them to the same inlet to do different things.)

 

Playing backwards

 

If we supply -1 (or any negative number) as the speed signal, the sound will play backwards. BUT we can’t then start the playback at the beginning of the file (0 ms) – that would mean ‘start at the beginning and play backwards’, which clearly isn’t going to work! Instead we need to give the end of the sound (in ms) as the playback start point. To do this we need to know how long the sound is, and we can find that out using info~.

 

Using info~ to get buffer~ information

 

The argument to info~ is the name of a buffer~ object (in this case, drum_loop007). If we bang info~ it gives us all sorts of information about the sound contained in the buffer~, including how long it is. Once we have obtained this number, we can use it as the playback start point for groove~, together with a negative speed signal. The sound will then play from the end to the beginning.
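A small Python sketch of the position arithmetic involved (the function name is ours): playback position moves at ‘speed’ milliseconds per millisecond of real time, which is why backwards playback must start at the end of the file:

    def playback_position(start_ms, speed, elapsed_ms, length_ms):
        # clamp to the ends of the file
        return max(0.0, min(length_ms, start_ms + speed * elapsed_ms))

    length = 2000                                         # from info~
    print(playback_position(length, -1.0, 500, length))   # 1500.0
    # half a second into a backwards pass begun at the file's end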

 

Looping

 

Because groove~ is intended mainly for sampling-keyboard type applications, it has a built-in loop function. You can set the loop start and end points (in ms) by supplying them to the second and third inlets. Looping is turned on and off by sending the message loop 1 or loop 0 to the first inlet (1 means ON, 0 means OFF – so you can also use loop $1 with a toggle).

 

Changing buffer~s

 

If we want access to several samples at once (for example, to have different sounds on different tracks, or at different parts of the keyboard), we just use several buffer~s. We can change which buffer~ a groove~ object refers to by using a message such as set drum_loop009. We can do the same with the info~ object, to get information on different buffer~s.

 

One nice trick is to use an on-screen menu to change sounds. To do this we use a umenu object. In the info window of umenu you can type a list of menu items separated by commas (for example drum_loop007, drum_loop009). When you make a selection, the item’s text gets sent out of the MIDDLE outlet of umenu. We can send this output to a message box containing set $1, and send that to both groove~ and info~. Then, by changing the menu selection, we change the buffer~ these objects refer to.

 

Loading new samples

 

There is no fixed relationship between the name of a buffer~ and the sound file it contains. We can change the sound file at any time by sending the message replace, which opens a file selector so we can choose a new sound file. For example, we could load the sound file big_explosion007.aif into the buffer~ drum_loop009. NOTE: this means that when we choose the menu item drum_loop009, we will now hear the sound big_explosion007. This is an important point – the name of the buffer~ is NOT necessarily the name of the sound file it contains.

* * *

Simple MIDI control

 

What is MIDI?

 

MIDI is a communications protocol for music. It enables music devices, such as sound modules, music keyboards and computers, to ‘talk’ to each other. It was originally developed to send information about a musician’s performance on a piano-style keyboard, and the protocol reflects this: for example, the loudness of a note is called ‘velocity’, because it represents how fast the key is pressed down. MIDI is now used much more widely, however – for example, for communication between a software sequencer and a software sampler, or for conveying fader movements from a hardware mixer control surface to a computer-based Digital Audio Workstation (DAW). MIDI is beginning to be replaced by other, more powerful protocols, such as OSC (Open Sound Control), but is still the most widely used at the moment.

 

VERY IMPORTANT: MIDI does not ‘contain’ any sound – it is just information about ‘notes’ and other musical data. MIDI is like a musical score: useless without an instrument to play it on. The ‘sound’ of MIDI comes from a MIDI device, which may be hardware (a sound module, for example), or software (a software-based sampler on the computer). You therefore cannot ‘listen’ to a MIDI file without some device to play it on. It may seem that you can listen to a MIDI file (for example, MIDI files that you can find on the internet), but in reality these files are using your computer as the instrument on which to play their information. SO: MIDI DATA DOES NOT REPRESENT SOUND!

 

midiin and midiout

 

STILL IMPORTANT: MIDI information is individual numbers, not audio signals, so it is handled using Max objects, not MSP objects. Max provides a number of objects for handling MIDI. The simplest way to get MIDI into and out of Max is to use the objects midiin and midiout.

 

To use these objects we have to specify how the MIDI is actually getting in and out of the computer. If you double click on midiin or midiout you will see a list of possible ‘ports’ (ways into and out of the computer) and MIDI ‘devices’ (bits of hardware that send or receive MIDI). What appears in this list depends on how the computer has been set up using the Audio MIDI Setup application (Mac).

 

AU DLS

 

For simple testing of MIDI patches we don’t need an external sound module, since there is one built in to the computer. On the Mac this is called AU DLS. Its sounds are not very good, but it’s ok for quick testing. Select AU DLS as your output device to use it.

MIDI channels

 

MIDI is designed so we can have a complex set-up with different instruments playing on different tracks independently. To stop the data for the different instruments getting mixed up, each data stream is assigned to a different channel. You can think of this like a TV channel: all the TV channels arrive at your house at the same time, but you can choose which channel to actually watch. In the same way each MIDI device can be set to ‘watch’ MIDI data on a particular channel, and ignore the rest.

 

Handling MIDI data

 

If we select an input port for midiin, then play an attached MIDI keyboard, a complex series of numbers appears at midiin’s output. This is a MIDI data stream, and it is made up of 7-bit data values (that is, numbers in the range 0 to 127). This stream contains all the MIDI data being sent, on all channels – including things like pitch wheels and volume pedals – and so can be complex to manage. To help us there are special objects which deal only with specific MIDI messages. We would normally use these instead of midiin and midiout.

 

notein and noteout

 

Two such objects are notein and noteout. These are very similar to midiin and midiout, and like them we have to double click them to set their ports. But they only respond to MIDI note information, and ignore everything else. notein has three outlets, all producing numbers in the range 0 – 127. These are the pitch of the note (also called ‘note number’), the loudness of the note (also called the ‘velocity’) and the MIDI channel.

 

kslider and nslider

 

It can be hard to make sense even of this limited set of data, but there are more objects to help us. One of these is kslider, which is a keyboard-like display. This can take input directly from notein and display it. You can also ‘play’ it using the mouse. Another is nslider, which shows a simple form of musical notation.

 

noteout works like notein in reverse: it has three inlets, for pitch (note number), velocity and channel. The leftmost input (pitch) triggers the note, which is not always convenient. To make things easier, you can instead send both pitch and velocity as a list to the leftmost inlet.

 

Note-off nightmare

 

You will notice that there is no way to specify the length of a note. This is because of the way MIDI works – remember it was originally intended to be used while playing a keyboard, so it works in terms of note-on events (a key is pressed, the sound starts) and note-off events (the key is released, the note stops). The length of the note would be determined by the keyboard player, holding down the key for a certain length of time. Therefore you cannot specify the length of a note in the note data.

 

A note-off event is commonly sent as a note-on event with velocity zero. If we start a note C3 with velocity 64 on channel 1, that note will play for ever until we send C3, velocity 0, channel 1. If we send velocity 0 for note D3 instead of C3, or send it on channel 2 instead of 1, that’s no good – our original note will stay ‘stuck’ on. This makes managing MIDI notes really hard when we are doing it ourselves from within Max, because we have to have a way of remembering which notes are on, and turning them off at the right moment.
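As a sketch of the bookkeeping involved (Python, with helper names of our own – this is what makenote, below, spares us):

    held = set()   # (pitch, channel) pairs currently sounding

    def note_on(pitch, velocity, channel, send):
        send(pitch, velocity, channel)
        held.add((pitch, channel))

    def note_off(pitch, channel, send):
        send(pitch, 0, channel)        # velocity 0 = note-off
        held.discard((pitch, channel))

    def all_notes_off(send):
        # emergency sweep for stuck notes
        for pitch, channel in list(held):
            note_off(pitch, channel, send)

    note_on(60, 100, 1, print)   # 60 100 1 - note starts
    note_off(60, 1, print)       # 60 0 1   - same pitch, same channel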

 

To help us with this, Max has the makenote object. It allows us to specify a note duration, and takes care of sending the note-off messages itself. We can supply velocity and duration arguments to makenote and just send it pitches; or we can send velocity and duration to its inlets; or we can send a pitch/velocity/duration list to the left inlet.

 

Program change messages

 

Another very useful object is pgmout. This sends a program change message, which tells a MIDI device to play a different sound. Like midiout and noteout, pgmout has to be set to a certain port or device. We send pgmout a number to change the sound played by that device. Most devices can play more than one sound at a time, on different channels, and we can tell pgmout which channel we want to change the sound on. So, we could use sound 5 on channel 1, sound 14 on channel 2, and so on. Then when we send note information on these different channels, they will play with different sounds.

 

Using umenu

 

We have used umenu before to select different samples to play. In the same way we can use umenu to select input and output devices, and to set programs and sounds. This is much more convenient for the user. We can connect the first outlet of these umenus to our MIDI objects (such as notein and pgmout).

 

For selecting programs and sounds, we have to populate the umenu manually: that is, we have to find out that sound number 1 on the device is ‘piano’ (for example) and write ‘piano’ into the menu, then put ‘strings’ as the second umenu item, and so on.

 

For selecting MIDI devices and ports, we can populate our umenu automatically. To do this we use the midiinfo object, which we connect to the input of umenu. If we then send midiinfo a 1, it will fill our umenu with a list of input devices and ports, and if we send it a 0 or -1, it will give us a menu list of output devices.


 

* * *

Enveloping and cross-fading

 

As we have seen before, an envelope describes the way a sound changes over time. Up to now we have used loudness envelopes, but you can have envelopes controlling other things as well.

 

Line~

 

The objects for creating envelopes are line and line~. line~ has the advantage that it can create multi-stage envelopes – that is, the line can go up and down several times in different ways (up to 128 segments).

 

line~ is triggered with a message box, which in turn could be triggered by note information from a keyboard or sequencer – so every time we play a note, we also get an envelope. The message gives a starting value followed by a comma. After that, we put pairs of numbers: a value and a time in milliseconds. The value is ramped to gradually over the given time. To make the parts of the envelope variable, we can use $1, $2, $3 etc. instead of fixed values, and supply the values through a pak object. One problem with this approach is that line~ gets triggered whenever we adjust one of the variables. To prevent this, we can store the values until we are ready to trigger them: we send the output of pak into the right inlet of a message box, then bang that message box to trigger line~.

Curve~

 

Sometimes we don’t want straight lines, but curves. This is especially true with loudness envelopes. Our perception of loudness is not linear, but logarithmic. (Every 6dB of extra gain doubles the sound pressure level or voltage, while every 10dB of extra gain roughly doubles the perceived loudness.) This means that to achieve an even fade-in or fade-out, we have to use a logarithmic curve. We also need curves if we want to crossfade one sound with another: if we used straight lines, we would hear a dip in loudness in the middle of the crossfade. A crossfade using curves that keep the combined power constant avoids this, and is called an ‘equal power crossfade’.

 

The curve~ object is like line~, except that it can generate curves. Instead of pairs (value and time) we must supply triples (value, time and curve shape). The curve value lies between -1.0 and +1.0, with 0 giving a straight line, and negative and positive values bending the ramp in opposite directions. For example, to do a 500ms crossfade, we could fade out the old sound using 1, 0 500 0.5 and at the same time fade in the new one with 0, 1 500 -0.5.
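The numbers behind the dip are easy to check. A short Python sketch (our own function names, not Max code) comparing linear and equal-power gain curves:

    import math

    def linear_gains(x):        # x = crossfade position, 0.0 to 1.0
        return 1.0 - x, x

    def equal_power_gains(x):   # one common equal-power shape
        return math.cos(x * math.pi / 2), math.sin(x * math.pi / 2)

    for x in (0.0, 0.5, 1.0):
        a, b = linear_gains(x)
        c, d = equal_power_gains(x)
        # perceived level follows power, i.e. the sum of squared gains
        print(f"x={x}: linear {a*a + b*b:.2f}, equal-power {c*c + d*d:.2f}")

    # at the midpoint the linear fade sums to 0.5 (about -3 dB, an
    # audible dip) while the equal-power fade stays at 1.0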

Designing the user interface

 

The User Interface or UI

 

Up to now we have been making patches that we have been using ourselves. But Max/MSP is a great way to make software for others to use. When we do that, we need to consider how we design the ‘look and feel’ of the User Interface or ‘UI’ (that is, the various buttons and sliders on the screen, and how they all work together to control the patch).


Because Max/MSP is graphical, it is easy to change the appearance of many aspects, and arrange the UI just how we want it.

 

Some hints and tips on UI design

 

  • Don’t assume a knowledge of Max/MSP – your patch should be usable by someone who has never worked with Max/MSP. It should also be usable in ‘locked’ mode only.

 

  • Be consistent – for example, if you use a graphic button for one switch, don’t use a toggle for another

 

  • Word your messages and labels clearly – we all know how frustrating it is when a program says ‘Exception raised line x20df45, error -3878’. Say what you mean! Use the correct but simplest music technology terms (‘output level’ is better than ‘amplitude’)

 

  • Use colour appropriately – don’t go crazy with rainbow colours. Choose a colour scheme and stick to it. Look at commercial software to get some ideas. Make sure text contrasts clearly with its background: light text on a dark background, or dark text on a light one.

 

  • Make the interface intuitive – the user should be able to guess how to do things a lot of the time

 

  • Don’t clutter up the screen – a simple screen is more attractive and easier to use than a cluttered one

 

  • Group functions together – functions that belong together should be near each other on the screen (for example, ‘play’ and ‘record’)

 

  • Give the user feedback – the user should be able to see what is going on when they use the software, through things like number displays, signal meters and the appearance of buttons (pressed/unpressed)

 

  • Get someone else to test your patch – things may seem obvious to you, but what about someone who doesn’t know how you have made the patch? Can they use it easily? Ask them to tell you how they found it.

‘Color’ messages

 

Nearly all objects can be coloured using the standard Max/MSP color messages (NOTE THE AMERICAN SPELLING!). Color messages give values for Red, Green, Blue and Alpha. The first three are the amounts of each colour in the mix, and you can make any colour by varying those amounts. Alpha is the transparency or ‘see-through’ value of the colour. This format for colours is known as RGBA, and you can use either floating point values (between 0. and 1.) or integers (between 0 and 255). Floating point gives you finer colour gradations than the 256 steps per channel of the integer format.

 

There are various messages to set the colour of various components of your objects. For example bgcolor followed by RGBA values sets the colour of the background, bordercolor the colour of the border, knobcolor… well, you get the idea. Check the object reference for details on each object’s colour messages.
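Converting between the two RGBA formats is simple division by 255. A quick Python sketch (the helper name is ours):

    def to_float_rgba(r, g, b, a):
        # 0-255 integers -> 0.-1. floats
        return tuple(round(v / 255.0, 3) for v in (r, g, b, a))

    print(to_float_rgba(51, 102, 204, 255))   # (0.2, 0.4, 0.8, 1.0)
    # so bgcolor 0.2 0.4 0.8 1.0 is the same opaque blue as the
    # integer version; an alpha below 1.0 would make it see-through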

 

Textbutton

 

The textbutton object is very useful for building user interface controls. Unlike a simple message box or ‘bang’ button, it can change colour when it is clicked on, and even when the mouse is hovering over it. You will often use a textbutton to bang a (hidden) message box. A textbutton can also act as a toggle (and it can change its appearance when toggled).

 

Panel

 

A panel is just a rectangle that can be coloured in various ways, and given shadows on its edges so it appears raised or depressed. Panels are very useful for grouping controls together and making a professional looking interface.

 

Emulating real hardware – pictslider and pictctrl

 

Software is often easier to use if graphic elements look like real studio gear. For example, if your output control looks like a real fader on a mixing desk, the user is more likely to guess what it is for. We can do this by using the pictctrl and pictslider objects. These can be linked to picture files showing a particular fader or knob design, and you can even design these yourself using graphics software such as Photoshop or GraphicConverter. You need to know the right format for these pictures, though – consult the help files for pictctrl and pictslider. You can even base your images on photos of real gear taken from the Web. (hhb.co.uk is a great site for large images of studio gear – click on ‘catalogue’.)

 

Note: pictslider has two axes, so you can use it like a 2D joystick control. But it is also the object you need to make a fader. To do this you would turn off ‘Horizontal tracking movement’ in the info; then you have a fader that goes up and down, but not left to right.

 

Locking the background

 

Another useful tip is to put some objects in the background. For example, you often want to edit elements which lie on top of a panel without moving the panel itself. To do this, select the panel, and choose Arrange->Include in Background. Now when you choose View->Lock Background the position of the panel will be locked. Choose View->Unlock Background to free it up.

Presentation Mode

 

Max/MSP has a special mode called ‘Presentation Mode’. The idea of this is that objects can be in different positions in the presentation from where they are when you are editing the patch. This is useful because sometimes you need things more spread out when editing your patch, and closer together when the user is using it. You also sometimes need to lay things out one way for the user, and another way to make the working of the patch clear while you are building it.

 

To go into Presentation Mode, click the small blackboard icon at the bottom of the patcher window. At first all your objects will disappear – but that’s because you don’t yet have any objects in the presentation. To place objects in the presentation, select an object and choose Object->Add to Presentation. The object will now appear with a pink border. If you move it in normal mode, then change to Presentation Mode, the object will jump back to its presentation position. The same applies if you move it in Presentation Mode. Working like this can be very powerful – but you need to remember which mode you are in when moving things around!

User interfaces: colours, keyboards and mice.

To create successful patches it is important that they are as simple and intuitive as possible for the end user. This can be understood as a dialogue between what comes out of the computer and what goes into it. At this stage we will be using the monitor as the means of getting information out of the computer.

The background of a patch is defined by the bgcolor object. It has 4 inlets accepting RGBA values, either as integers between 0 and 255 or as a list of floats between 0. and 1. In this patcher, loadmess sends an initial value of 1 to all the number boxes (this is redundant given the arguments to bgcolor, but is included for versatility). pak then combines the outputs of the number boxes into a list of values for bgcolor.

The panel object works in a superficially similar way. The main difference is that it has more parameters and is therefore controlled by messages as well as numbers. The background colour of the panel is defined with a pak list, but that list must be prepended with the message bgcolor. Other parameters are set by a message followed by an integer variable signified by $1, as in border $1.

Colour and shape are very useful tools especially when they are reactive to input and can feed back information to the user.

The keyboard is a simple controller found on almost every computer. It can be monitored with the key object.

 

Likewise the mouse can be monitored with the mousestate object. mousestate requires a bang to report the position of the mouse. In this patcher the metro object sends a bang every 200ms, so the mouse position is reported 5 times a second. The mousestate object has a variety of modes; here we use mode 2, where position is measured relative to the top-level window. The outputs are position and delta values, as well as button on/off.

It is now possible to know where the user is clicking; however, for some situations a ubutton may be more appropriate. The ubutton sends mouse location and button state when clicked, making simple click control a lot easier. Here it is used in combination with the spacebar (key 32) to change and restore the background colour of our window.

 

Class exercise:

Create a patcher with a red panel. The panel will turn blue when the spacebar is pressed and turn red when the letter r is pressed. When the mouse is clicked the background of the patcher will change colour depending on the horizontal position of the mouse and opacity (alpha) depending on its vertical position.

Soundfile playback and timing.

Encapsulation, abstraction and pianos

 

When constructing a patch there may be many areas of duplication. In some cases it may be simple and effective to copy and paste areas of your patch into other configurations, but this is not always very practical. Parts of your patch can be encapsulated as a subpatch with the patcher object; these subpatches can be copied and pasted within the main patch, and altered and reworked as separate entities while remaining inside the main patch. Further to this, patches can be saved as abstractions. These objects are available to use inside any patch, as long as they are in Max’s search path, and using them is almost indistinguishable from using standard Max objects. But editing an abstraction changes all the instances of it in all your patches, so you need to be careful not to break something. You must also be aware of which objects are abstractions you have created and which come with Max: if you transfer your patch to another machine without copying your abstractions across, it will not work.

 

This can be easily visualised in the case of a piano. The strings are activated by hammers. Under normal playing conditions we clearly never need more than ten hammers at once (one per finger). A normal piano essentially consists of copied hammers all the way across the strings, working in parallel. It would be technically possible to build a working piano that used only ten (or maybe fewer) hammers, provided each hammer could be called into place as the appropriate key was struck.

 

bpatcher

 

Sometimes we want to make something like a patcher or an abstraction, but with visible controls, like buttons or faders. We want the whole thing – including the buttons and faders – to be available for use in all our patches. This is done with bpatcher.

 

To make a bpatcher, we simply make a normal Max patch, hiding the elements we don’t want to see, and arranging the buttons and controls how we want them. If we want to get data into and out of the bpatcher, such as messages from ‘play’ or ‘stop’ buttons, or the values of faders, we can do this either with inlet and outlet objects, or by using send and receive (or send~ and receive~). Then we save this patch in the Max search path.

 

Now, to use the bpatcher we have created, we make a new patch and create a bpatcher object. In the info inspector we enter the pathname of our previously saved patch (or use the ‘choose’ button). Now we can see the previous patch ‘inside’ the bpatcher window in our new patch. You can drag it around inside the bpatcher by using <shift><command><click-drag>.

 

Playing soundfiles

 

MAKE SURE DSP IS TURNED ON!

 

Up to now you have used groove~ with buffer~ to play back sounds. This is fine for shorter sounds such as samples. To play back longer sounds (like whole albums) we use sfplay~. Whereas buffer~ loads the whole sound into the computer’s memory for groove~ to play back, sfplay~ plays the sound continuously from the hard disk, so we are not limited by the computer’s memory. If we want sfplay~ to be stereo, we need to supply the argument 2. For four-channel interleaved files it would be 4, and so on.


 

We could use the open message to load a file, but in fact most users want something simpler and more immediate. For this we can use dropfile. This is an invisible rectangle – when the user drags a file onto it, it outputs the full pathname of that file. We then just send this to sfplay~, putting ‘open’ at the beginning of the message. We do this by passing it through the prepend object with the argument open (so, prepend open).

 

To control playback of sfplay~ we use the messages 1 (play), 0 (stop), pause, and resume (resume playing after pause). We just have to bang these messages with whatever control buttons we have created. These could come from inside a bpatcher, using send/receive objects or outlet objects in the bpatcher.

 

Timing

 

It would be nice to know where we are in our soundfile. sfplay~ has an output for this, which is a ‘timing signal’ or ‘sync signal’. This gives the playback position in milliseconds – but note that it is a signal, not an int or float, hence the striped cable. This means the timing is sent continuously. For a time display, however, we don’t really need it to be continuous – updating every 10th of a second will do. To do this we can use snapshot~. When we bang snapshot~ it gives us a readout of the signal, so if we use a metro to do this (say metro 100) we get the timing on a regular basis.

 

Having a timer in ms is not very intuitive, so here is the conversion from first principles.

 

• The first thing we want to know is how many hours have elapsed. 1 hour = 60 minutes = 3600 seconds = 3,600,000 ms. So if we divide the signal by 3600000 we get hours.

 

• The part after the decimal point is minutes (fractions of an hour). To get rid of the integer and be left with the decimal part, we subtract the integer from the float. There is a clever way to do this: expr $f1-$i1. We send the same number into both inlets, but the right-hand one is forced to be an int, so when we do the subtraction, only the part after the decimal point is left. This gives fractions of an hour, so we multiply by 60 to get minutes.

 

• Now we have another leftover part after the decimal point, so we do the same again (expr $f1-$i1) and multiply by 60 to get seconds.

 

• Finally, we take the part after the decimal point again, and multiply by 10 to get 10ths of a second.

 

Now we can send all these results to our display. To do this we need to make the numbers into a formatted message. We can use sprintf, which comes from C and C++ programming. It allows us to assemble a message formatted just as we want (so it’s like printing). We use sprintf %i : %i . %i (which prints three ints, separated here by colons and a full stop). Then we send that to a textedit object for our display. We could colour this to look like a neon CD display, or whatever style we want.
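The whole conversion, written out in Python for checking (the function name is ours; the patch does the same thing with expr and multiplication):

    def ms_to_display(ms):
        hours_f = ms / 3600000.0             # total hours as a float
        hours = int(hours_f)
        minutes_f = (hours_f - hours) * 60   # the job of expr $f1-$i1
        minutes = int(minutes_f)
        seconds_f = (minutes_f - minutes) * 60
        seconds = int(seconds_f)
        tenths = int((seconds_f - seconds) * 10)
        # like sprintf: colons between the ints, tenths after the stop
        return f"{hours}:{minutes:02d}:{seconds:02d}.{tenths}"

    print(ms_to_display(5025300))   # -> 1:23:45.3 (float rounding can
    # occasionally nudge the final digit, just as it can in the patch)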

* * *

Refining soundfile playback control and timing

 

sfinfo~

 

With groove~ and buffer~ we were able to use info~ to get information about the buffer (for example, its duration). When we are working with sfplay~ to play sound files from disk, we can use the similar object sfinfo~. To do this we send sfinfo~ the message open <filename>, which is the same message we send to sfplay~. It can be helpful to use sfinfo~ to display the current file’s number of channels and duration to the user, possibly using message boxes.

 

FF and REW

 

With most playback devices (CD players, mp3 players, sequencers etc.) we have fast-forward and rewind functions for moving quickly through the sound file or track. We can achieve this with sfplay~ by using the seek message. The message seek followed by an argument means ‘go to that point (in ms) in the file and start playing’, so seek 0 will play from the beginning. To skip forwards or backwards while playing, we just need to take the current playback position, add or subtract a constant value (say 500ms), and use the new value in a seek message. The current playback position is output from the sync outlet of sfplay~ (provided that we supply three arguments to sfplay~: the number of channels – two if working in stereo – a buffer size of 0, and the number of sync outlets, 1, as covered last week). The playback position is a signal, which we convert to a float using number~ (right-hand outlet).

 

Scrubbing control

 

Sometimes it can be useful to have a ‘scrubbing’ control, whereby we can drag a pointer through the file to move to any point in it. This is handy both for practical and for more creative applications. To do this we could use a horizontal slider as the control. We need the length of the slider to represent the whole soundfile, so we divide the current playback position (obtained from the sync output) by the total file duration (obtained from sfinfo~). That gives the playback position as a fraction of the total (0.0 = beginning, 1.0 = end). Then we multiply this by the range we have given the slider (say 1000). When we’ve worked out this value, we can feed it directly to the slider using the set $1 message, and the slider position will then reflect the current playback position. If we want to grab the slider to change the position, we simply reverse the process: the slider value divided by 1000.0 (or whatever the slider range is) gives us a new fraction of the total duration. We multiply this by the actual total duration, and send the new value to sfplay~ as a seek message.
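The two directions of the mapping in Python (the names and the 0–1000 slider range are our assumptions):

    SLIDER_RANGE = 1000.0

    def position_to_slider(position_ms, duration_ms):
        # sync outlet -> value for the slider's 'set $1' message
        return int(position_ms / duration_ms * SLIDER_RANGE)

    def slider_to_seek(slider_value, duration_ms):
        # grabbed slider -> argument for a 'seek' message to sfplay~
        return slider_value / SLIDER_RANGE * duration_ms

    duration = 180000                              # a 3-minute file
    print(position_to_slider(90000, duration))     # 500 (halfway)
    print(slider_to_seek(750, duration))           # 135000.0 ms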

* * *

More elegant approaches to additive synthesis.

 

Harmonics and partials

 

All sounds can be thought of as being made up of mixtures of sine tones. We call this the ‘spectrum’ of the sound, because it is a bit like splitting white light into its component colours. In theory, we can simulate any sound by adding sine tones together, although in practice we need very many sine tones (at least several hundred), and very complex control over their loudness over time. But we can still do quite a lot with just a few sine tones.

 

As we saw last semester, adding sine tones together to build up a spectrum is called additive synthesis. Each sine tone is called a ‘partial’, with the lowest frequency partial called the ‘fundamental’. In harmonic sounds (like instruments or voices) the frequencies of the partials relate to that of the fundamental by whole numbers (2* fundamental, 3* etc.) This is called a ‘harmonic spectrum’, and the partials are sometimes called ‘harmonics’. If we don’t use whole numbers we have an ‘inharmonic’ spectrum, rather like a bell. We should not call the individual sine tones ‘harmonics’ in this case – because they are not!

 

Early experiments with recorded sound showed that when we hear real-world sounds it is not so much the mixture of partials that gives the sound its characteristic ‘timbre’, but the way those partials change over time. So to make realistic, vibrant and ‘alive’ sounds we need to control not just the frequency of each partial, but also its amplitude over time.

 

Generating partials


 

We now know how to make a subpatch and save it as an ‘abstraction’, which can take arguments, and which we can use over and over again. Since we want to make sounds with many partials, it makes sense to design an abstraction to generate one partial, then re-use this for however many partials we want. (NOTE: when making this abstraction, it’s fine to end the name with ~ if we want. This reminds us that it is a signal based object.)

 

The parameters we will supply to our abstraction are:

  • Fundamental frequency in Hz.

  • Multiplier (how many times higher our current partial is than the fundamental, e.g. with a fundamental of 440 Hz, a multiplier of 2 would give a partial of 880 Hz.)

  • Amplitude envelope (a series of breakpoint pairs for line~.)

 

The fundamental frequency will simply come from the note we want to play.

 

The multiplier can be supplied as an argument, but we also want to be able to change it later via an inlet. To achieve this we use a * object and give it the changeable argument #1, and we also attach an inlet to the right inlet of the * object. The #1 will be replaced with whatever argument we supply when we create the abstraction, but if a new value appears at the inlet later on, the argument value will be overridden with the new value.

 

The amplitude will come from an envelope generator which will provide a list of value/time pairs. We can feed these into a line~ object.

 

The output of the abstraction will be a sine tone. We could send this out using a send~ object. If all abstractions use the same send~ and receive~ arguments, they will be mixed together (which is what we want). If we want to, we can make the abstraction stereo, with two sine generators in it. In this case we would send the outputs via two send~ objects with different names.

 

When we have made our abstraction, we can put however many we want in a parent patch, and supply them with arguments. If we give them simple whole-number multipliers to start off with (say 1, 2, 3, 4, 5, 6 etc.) we will generate a harmonic sound. If we use non-whole numbers (1 2.3 4.5 6.6 6.9 etc.) we will get an inharmonic sound, rather like a bell.
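As a plain C sketch of the same idea (illustrative values, not the Max API), one output sample of a bank of partials is just a sum of sines:

#include <math.h>
#include <stdio.h>

#define SR 44100.0
#define TWO_PI 6.283185307179586

int main(void) {
    double f0 = 440.0;                                  /* fundamental in Hz */
    double mult[6] = {1.0, 2.0, 3.0, 4.0, 5.0, 6.0};    /* try 1, 2.3, 4.5... for a bell */
    double amp[6] = {1.0, 0.5, 0.33, 0.25, 0.2, 0.17};  /* fixed amplitudes for brevity */

    for (int n = 0; n < 10; n++) {                      /* first ten output samples */
        double t = n / SR, sum = 0.0;
        for (int p = 0; p < 6; p++)
            sum += amp[p] * sin(TWO_PI * f0 * mult[p] * t);
        printf("%f\n", sum);
    }
    return 0;
}

In the patch, of course, each partial’s amplitude is shaped over time by its envelope rather than being fixed.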

 

Drawing envelopes

 

To control the amplitude, we can use a function object. This allows us to draw a trajectory, and use this to send values to the line~ object in our partial generator. In our function object the x-axis will be the length of our note in ms. We can set this with the setdomain $1 message ($1 being the variable representing how many ms we want the function to run for). Now if we bang the function, it will send out x y pairs to line~ and thus control the amplitude of our sine.

 

Playing

 

Now all we need to do is add the ability to ‘play’ the sounds. To do this simply we can use notein; the mtof object then converts MIDI note numbers into frequencies. Functionality can be added by including an on-screen keyboard and MIDI port selection.


FM synthesis.

 

As we have seen, additive synthesis is achieved by adding sine tones together. But we need many sine tones for sounds to be interesting. Slightly paradoxically, FM synthesis can create complex sounds in a much simpler way. The basic idea is to take a sine wave oscillator, and use another sine wave to continuously change (or modulate) the frequency of the first (hence ‘frequency modulation’). Because the controlling oscillator is also at audio frequency, we get ‘interference patterns’ between the two signals, which can be heard as additional partials. The pattern of partials created depends on the ratio between the two frequencies. A simple ratio (e.g. 1:2) creates a simpler, harmonic sound. A complex ratio creates a complex inharmonic sound.

 

FM terminology

 

There are two sine waves. The first is called the ‘carrier’ (the basic pitch of our note) and the second is referred to as the ‘modulator’ (this changes the frequency of the carrier wave, and determines the timbre or tone colour). The third parameter is the ‘modulation index’. This is the amount by which the second sine wave changes the frequency of the first.

 

Making an FM patch

 

To make an FM patch, first we create a cycle~ for the carrier, with a gain~ to control the level, and a dac~ for output. This is the basic frequency of our note. But we are going to modulate this frequency, which means we add or take away a small amount from it. NOTE: we are not adding to or taking away from the signal output by the cycle~ – we are changing the frequency of the cycle~, not its amplitude.

 

To do this we create a second cycle~. This is the modulator. Its output will be summed with the frequency of the first, so the frequency of the first will vibrate up and down. But we need to control how much of the modulator is added, because this makes a big difference. This is the ‘modulation index’ (basically the level of the modulator signal). To do this we need a *~ object at the output of the modulator cycle~, and a flonum to control it.

 

When we change the carrier frequency, the ‘note’ changes, but we do not want the timbre to change: we want the ratio between the carrier and modulator to remain constant. We do this by multiplying the carrier frequency by a ‘harmonicity ratio’ to get the modulator frequency. If we use (say) a harmonicity ratio of 2, we guarantee that the modulator is always double the frequency of the carrier, no matter what note the carrier is playing.

 

In order to change the values continuously without clicks or glitches, we need to use signals rather than numbers. So we can put sig~ after all the numbers, to turn the numbers into signals.

 

We may also need one more modification. To get an even timbre, we need more modulation at high frequencies, and less at low. To do this we multiply the modulation index by the modulation frequency. This means we can use small numbers for the mod index, rather than huge ones. It also means we get more modulation as we play higher notes. This is not strictly essential for FM synthesis but can be very useful.
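Putting those pieces together, here is a per-sample FM voice as a plain C sketch (the variable names are illustrative; the patch does the same job with cycle~, *~ and +~):

#include <math.h>
#include <stdio.h>

#define SR 44100.0
#define TWO_PI 6.283185307179586

int main(void) {
    double fc = 220.0;              /* carrier frequency in Hz (the "note") */
    double h = 2.0;                 /* harmonicity ratio: modulator = h * carrier */
    double index = 3.0;             /* modulation index (a small number) */
    double fm = h * fc;             /* modulator frequency */
    double cphase = 0.0, mphase = 0.0;

    for (int n = 0; n < 100; n++) {
        double mod = sin(TWO_PI * mphase);    /* the modulator cycle~ */
        double freq = fc + index * fm * mod;  /* deviation = index * mod frequency */
        double out = sin(TWO_PI * cphase);    /* the carrier cycle~ */
        cphase += freq / SR; cphase -= floor(cphase);
        mphase += fm / SR;   mphase -= floor(mphase);
        printf("%f\n", out);
    }
    return 0;
}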

 

Envelopes and shaping

 

Last time we used function and line~ to create envelopes. We can do the same here, to control the timbre. With many real instruments, the timbre changes over time. So we can use another function to control the modulation index.


First steps in polyphonic synthesis.

 

Polyphonic FM synthesis

 

It would be great if our FM patch could play more than one note at a time. It would also be good if it could play loud and quiet, and if the timbre changed depending on whether we play loud or quiet (i.e. if we could have more or less modulation index depending on how hard we play the keys).

 

Making the subpatch

 

First, we need to turn our monophonic FM synthesiser into a subpatch. We will need one subpatch for each simultaneously sounding note. The subpatch input will be a MIDI note pair (pitch, vel). We follow this with unpack 0 0 so that we can get the pitch and velocity separately. When a pitch arrives, we need to bang the envelopes to make them trigger, so we use trigger i bang, or t i b. The bang comes first, and bangs both envelopes; then the int goes through and connects to mtof, so that the note number becomes a frequency.

 

The envelope duration and harmonicity ratio will come from our main patch, so we can use send and receive objects for these values. And instead of a DAC for output, we will use a patch output (the DAC will be in the main patch). Also we don’t need a gain fader (that will be in the main patch).

 

As before, we will have two envelopes using function objects, one for the amplitude envelope and one for the modulation index. The note duration (received from the main patch) will feed a setdomain message to the amplitude envelope function. But we don’t want to have to draw these envelopes for each instance of our subpatch. Say we had 32-note polyphony, with 32 subpatches – it would take a long time to draw 64 envelopes in all those objects! It would be much better if we could have amplitude and mod-index functions in the main patch, and feed these through to all our subpatches. We need two receive objects, receive amp-env to receive amplitude envelope data and feed it to the subpatch amplitude function, and receive index-env for the modulation index data going to the modulation index function. Would it be possible to just feed the values through without the function objects?

 

Velocity

 

The final element in the subpatch is velocity control. We want greater amplitude and more modulation index when we play louder. The way to do this is to divide the velocity by 127. (so that it becomes a value between 0 and 1 – note the decimal point after the 127 to prevent integer rounding). Then we multiply this value by the output of the line~ objects which are driven by the functions.
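A quick C illustration of why that decimal point matters:

#include <stdio.h>

int main(void) {
    int velocity = 64;                 /* a typical MIDI velocity */
    printf("%d\n", velocity / 127);    /* integer division: prints 0 */
    printf("%f\n", velocity / 127.0);  /* float division: prints 0.503937 */
    return 0;
}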

Making the main patch

 

The main patch is relatively easy. We just need as many instances of our subpatch as we want to play simultaneous notes (or ‘voices’). Let’s say 4 in this case (it could be 32 or 128, but probably fewer than 10, as you only have a maximum of 10 fingers!) Our input will be notein – from an external keyboard or MIDI sequencer. We can also have a kslider object connected to makenote to use for on-screen playing (mainly for testing). Both of these will give us a pitch, velocity list.

 

The main difficulty of triggering polyphonically is keeping track of which notes are on and which are off. New notes might begin before old ones have finished. How do we remember which notes are on, so we can be sure to turn them off when they are supposed to finish? Luckily there is an object that can do this for us – poly. Arguments are the number of voices we want (that is simultaneous notes), and a voice stealing flag. We can just use 1 as the flag for now, so we put poly 4 1. (IMPORTANT: this is poly without a ~)

 

The inputs to poly are our pitch and velocity. The outputs are voice number, pitch and velocity.

We need to turn this into a list, by using pack 0 0 0. We can now use an object called route. This looks at a list and matches the first element with its arguments. If there is a match, the rest of the list comes out of the corresponding outlet. Since the first element in each list is going to be the voice number, we can give route the arguments 1 2 3 4. route will then look at the voice number of each note, and send the rest of the list (the pitch and velocity) out of the appropriate outlet. The number of route arguments should therefore match the number of voices given to poly. We can then send each output to a different instance of our subpatch (it will be unpacked inside the subpatch).
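Behind the scenes, poly’s job is simple bookkeeping. A toy C sketch of the idea (illustrative, not poly’s actual code): each note-on takes the first free voice, and the pitch is remembered so the matching note-off releases the right one.

#include <stdio.h>

#define VOICES 4

static int voice_pitch[VOICES];        /* 0 = voice free */

static int note_on(int pitch) {
    for (int v = 0; v < VOICES; v++)
        if (voice_pitch[v] == 0) { voice_pitch[v] = pitch; return v + 1; }
    return 0;                          /* no free voice: steal or drop */
}

static int note_off(int pitch) {
    for (int v = 0; v < VOICES; v++)
        if (voice_pitch[v] == pitch) { voice_pitch[v] = 0; return v + 1; }
    return 0;
}

int main(void) {
    printf("note 60 on  -> voice %d\n", note_on(60));
    printf("note 64 on  -> voice %d\n", note_on(64));
    printf("note 60 off -> voice %d\n", note_off(60));
    return 0;
}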

All the outputs of our subpatches can just go to a gain fader, and then a DAC.

 

Setting the amplitude and mod-index envelopes

 

We send one global harmonicity ratio to all the subpatches. We also want to draw our amplitude and mod-index envelopes in the main patch, and have them copied to all the functions in the subpatches. We do this by sending the message dump to our main functions. This causes all the data to come out of the dump outlet of the function (the third outlet), and we can send this to send amp-env (and the other one to send index-env).

 

In order to get the dump to happen, we actually need to send two messages. First, we need to send the message clear to clear all the data from the subpatch function. We can send this through the send object. Next, we need to send the message dump to the main patch function. Finally, we need to connect the dump outlet to the send object as well.

 

What we want is for any change in the main function to cause an identical change in the subpatch function. So we use the bang outlet of the main function (this just bangs whenever we change the main function). We use this bang with a trigger object, using the arguments dump and clear. When we edit the main function, a bang is created, and this triggers FIRST the clear message, THEN the dump message.

 

A better way to create polyphony: poly~


The power of poly~

 

Last time we looked at using poly (without a ~) to distribute notes to separate subpatches, each of which created one ‘voice’ of sound. This is fine if we want a few notes at a time, but if we want lots (say, 16 or 32), it will quickly become unwieldy.

 

To solve this, we have the poly~ object. This is similar to poly, in that it takes care of playing many notes at a time. But unlike poly, poly~ actually contains the sound making subpatch within it.

 

Making the subpatch

 

To use poly~ we need a subpatch – poly~ won’t do anything on its own. We can use our monophonic FM patch from last time, with a few modifications.

 

The first difference is that instead of an inlet and outlet, we need the objects in and out~. (in does not have a ~, because it will receive note information; out~ DOES have a ~ because it will output a signal. Both in and out have versions for data and signals, with or without ~.) Our in and out~ objects need to be numbered. We use in 1 and out~ 1.

 

The second modification is that we can make the patch stereo. We do this by adding (say) 1 to the frequency of the final note, and using this as the left hand channel. Because we now have two signal outlets, we need out~ 1 AND out~ 2.

 

Voices and instances

 

poly~ works by creating instances of our subpatch (an instance is a ‘copy’ of the subpatch that exists within poly~). The first argument to poly~ is the name of our subpatch, and the second argument is the number of voices we want. To make extra voices, we simply increase this number.

 

thispoly~

 

One of the big advantages of poly~ is that each instance is only active when playing a note. This helps to save CPU power. To tell poly~ whether or not an instance is playing, we use the thispoly~ object, and connect the output signal of the subpatch to it. If the signal is present, poly~ knows that this voice is busy playing a sound. But if the signal is zero, poly~ knows that this voice is silent, and therefore is available to play any new notes that come along.

 

The main patch

 

poly~ takes care of voice stealing – that is, it keeps track of which instances of our subpatch are currently sounding, and which are available to play new notes. poly~ will do this if we precede our note information with the word ‘note’, which tells poly~ it is a note event. We do this by passing the note list through a prepend object (so, prepend note puts the word ‘note’ in front of the note list).

 

As long as we do this, we need only one poly~ object to play as many notes as we like. N.B. our poly~ has one inlet and two outlets. This is because we have in 1, out~ 1 and out~ 2 in our subpatch. Different numbers of inlets and outlets in the subpatch mean different numbers on the poly~ object.


Breaking out; physical I/O and Arduino.

Introduction

 

Eventually you will find that you want to control events in the real world from Max. Max/MSP responds very well to standard computer input systems such as the keyboard, mouse and MIDI, and can output via the display, print window or audio. However this can feel rather limited when it comes to performance or interaction.

 

There are a number of interfaces/controllers available, many of which are quite expensive and inflexible. The Arduino series of boards provides a cheap (around £22) way of creating interfaces to mediate between the real world and Max.

http://www.arduino.cc/

Arduino is an open-source electronics prototyping platform based on flexible, easy-to-use hardware and software. It’s intended for artists, designers, hobbyists, and anyone interested in creating interactive objects or environments. The board can sense the environment by receiving input from a variety of sensors and can affect its surroundings by controlling lights, motors etc. The micro-controller on the board is programmed using the Arduino programming language (similar to C++) and based on the ‘Processing’ IDE. Arduino projects can be stand-alone or they can communicate with software running on a computer.

The Arduino (Uno in this case) is a micro-controller board based on the ATmega328. It has 14 digital input/output pins (of which 6 can be used as PWM outputs), 6 analog inputs, a 16 MHz crystal oscillator, a USB connection, a power jack, an ICSP header, and a reset button. This means that you can program it to function autonomously from its own chip, or you can use the USB link into Max/MSP.

 

Getting things talking


 

The Arduino board relies upon a routine sent to the main chip to tell it what to do. This can be coded in the Arduino IDE, compiled and then sent via USB to the board. Arduino sketches work in two parts: a setup() section and a loop() section. Roughly put, the setup section defines initial conditions and then the loop runs (in a loop!) until you tell it to stop. This simple piece of code will make an LED blink if connected to pin 13 and ground:
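A minimal version of that sketch (the classic Arduino ‘Blink’):

void setup() {
  pinMode(13, OUTPUT);     // pin 13 drives the LED
}

void loop() {
  digitalWrite(13, HIGH);  // LED on
  delay(1000);             // wait one second
  digitalWrite(13, LOW);   // LED off
  delay(1000);
}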

Having fun

 

Once the board and interface have been set up it is possible to monitor the pins in a digital (1 or 0) or analogue (continuous) manner, as well as sending to them. From this point most things are possible, depending upon how good you are with hardware electronics and programming. A lot can be achieved in creating different inputs from thermostats, accelerometers, dynamos etc., as well as outputs to stepper motors, lights, fans and many, many less predictable things!

 

  • DROP!

 

DROP! is an interactive sound toy I put together for a dance festival over Easter. When someone drops a marble (in this case a steel bearing) down the run, it triggers audio samples from the computer. The pieces which trigger the audio are wired into the system so that when the setup of the run is changed, so is the order and timing of the sound, thus making a reconfigurable piece of music and fun!

The music is a mix of 11 short audio extracts. These range from close microphone recordings of marbles doing things in the studio, such as rattling in a bowl, or going down the marble run. One percussive sound was made by taking the sound of a marble dropped on a board, slowing it down four times and then modelling the reverberant acoustic of York Minster around it. Other sounds are more mainstream, ranging from FM synthesis (the default setting for all good sci-fi film scores!) to a more dance-y sounding loop created in Reaktor. The audio was balanced and mixed in 4-channel surround sound to enhance the motion and movement of the tracks. The sounds of a bicycle bell and a duck have been thrown in for good measure.

Things happen in the work when the marble rolls over two pieces of aluminium tape and completes a circuit, effectively closing a switch. With the aid of some home-made electronics and an Arduino circuit board-chip housed in an old Chinese takeaway box, these messages are sent via USB, luck and fairy power, to the computer. In the computer, the interface which decodes all these signals, brings up pictures of the active part of the marble run and plays back the audio has been written in Max/MSP.


 

  1. Set up:

 

The board needs to be wired up so that the 5V supply goes to the switch, i.e. the sides of the marble run. This then connects to pin 2.

 

Pin 2 then connects via a 10kΩ resistor to ground, thus making a pull-down circuit. When the marble closes the switch the chip should read a signal on pin 2.

 

This is then duplicated across the board.

 

I have avoided pins 0 and 1 purely for ease of programming in that 1 and 0 will come up a lot with the signals!

  2. Code:

Using the Arduino IDE the code can be uploaded to the chip:

 

/*
Marble Run Prog v1 26/3/13

Reads inputs from digital pins 2-12. Setup requires 10k resistors with one leg to ground, the other leg to one digital pin per resistor. A switch connects the digital side of each resistor to the 5V pin. Output is sent to the serial (USB) port ONLY when a switch is closed. If a switch is closed a message ("pin2" if it is the pin two switch which has been closed) will be sent to the serial output.
*/

// variables will change:
int pin2 = 0;   // variables for reading the pin status
int pin3 = 0;
int pin4 = 0;
int pin5 = 0;
int pin6 = 0;
int pin7 = 0;
int pin8 = 0;
int pin9 = 0;
int pin10 = 0;
int pin11 = 0;
int pin12 = 0;

void setup() {
  // open serial usb connection
  Serial.begin(9600);

  // initialize the pins as inputs:
  pinMode(2, INPUT);
  pinMode(3, INPUT);
  pinMode(4, INPUT);
  pinMode(5, INPUT);
  pinMode(6, INPUT);
  pinMode(7, INPUT);
  pinMode(8, INPUT);
  pinMode(9, INPUT);
  pinMode(10, INPUT);
  pinMode(11, INPUT);
  pinMode(12, INPUT);
}

void loop() {
  // read the state of each switch:
  pin2 = digitalRead(2);
  pin3 = digitalRead(3);
  pin4 = digitalRead(4);
  pin5 = digitalRead(5);
  pin6 = digitalRead(6);
  pin7 = digitalRead(7);
  pin8 = digitalRead(8);
  pin9 = digitalRead(9);
  pin10 = digitalRead(10);
  pin11 = digitalRead(11);
  pin12 = digitalRead(12);

  // check if a pin has connected; if so print to usb.
  if (pin2 == HIGH) {
    Serial.println("pin2");
  }
  if (pin3 == HIGH) {
    Serial.println("pin3");
  }
  if (pin4 == HIGH) {
    Serial.println("pin4");
  }
  if (pin5 == HIGH) {
    Serial.println("pin5");
  }
  if (pin6 == HIGH) {
    Serial.println("pin6");
  }
  if (pin7 == HIGH) {
    Serial.println("pin7");
  }
  if (pin8 == HIGH) {
    Serial.println("pin8");
  }
  if (pin9 == HIGH) {
    Serial.println("pin9");
  }
  if (pin10 == HIGH) {
    Serial.println("pin10");
  }
  if (pin11 == HIGH) {
    Serial.println("pin11");
  }
  if (pin12 == HIGH) {
    Serial.println("pin12");
  }
}

  3. Max interface:

 

Using the Max serial object we can then easily communicate with the board and make a visually useful interface for members of the public.

Working with acoustic instruments

Acoustic instruments can be used in a variety of ways within Max/MSP. The first and most obvious is to use Max to extend the possibilities of a given instrument or ensemble, in terms of pitch, timbre or canonic ideas. Another method of working is to make the Max patch a dynamic system and change parameters as part of the piece, be that through simple triggering of events with the space bar or mouse, or more sophisticated analysis of the audio input. To draw this concept out even further, it is even possible to (almost) completely ignore the sonic properties of the instruments and simply use the incoming data to drive other processes within the computer.

Audio Input

The first concern therefore must be getting audio into the system. Audio settings can be verified or changed by opening the DSP status window. As we saw last year this can be done with ezadc~ but (rather unsurprisingly) the adc~ object can do the job rather better albeit in a slightly more complex way.

The adc~ object can be supplied with arguments defining which audio inputs are sent to its outlets. Generally these will be as they appear on the sound card but if in doubt check the I/O Mappings dialogue in the DSP Status window.

System latency


A smaller I/O vector size may reduce the inherent delay between audio input and audio output, because MSP has to perform calculations for a smaller chunk of time. On the other hand, there is an additional computational burden each time MSP prepares to calculate another vector (the next chunk of audio), so it is easier overall for the processor to compute a larger vector. In practice you will often find there is a trade-off to be had between a very responsive live audio input and a very slow computer. This becomes problematic when using a mouse as part of a performance, or sourcing from hard disk as well. In short, there are many interrelated factors that can be improved by the correct I/O size, but the systems involved are so diverse that trial and error is often the quickest way to get it right!

Practicalities

Audio processing is computationally expensive, so it can be very useful to turn it off when not needed. It can also be very useful to be able to mute the input in the rough and tumble of live events. This can be done simply by sending a 1 or 0 message to the adc~ object; there are various ways of doing this – a toggle, an int or a message box spring to mind. These can be linked to more intuitive methods, such as connecting the toggle object to the space bar of the qwerty keyboard.

Getting more information

  • adstatus

The adstatus object can quickly and easily give accurate readings of the current audio setup. There are many arguments that can be used, such as driver. This gives us the current audio driver and permits it to be changed. switch allows us to switch (!) dsp on and off.

  • meter~

meter~ is perhaps one of the most underrated objects. In this context it is incredibly useful. If one instance is placed directly after adc~ in the signal chain, audio in can be verified even if speakers etc. are turned off, or if there happens to be a fault further on in your patch. It can also give you an indication of input levels. Other instances of meter~ can be placed throughout the signal chain, the most obvious place being just before (in parallel with) the dac~ object.

Processing

Audio can then be processed just like any other information. The only real limit is your imagination.

Output

Sound can be sent out via the dac~ object. N.B. Max can be rather temperamental about having more than one instance of the dac~ object in a patch; if in doubt, use send~ objects rather than attempting to have one sound card in two simultaneous states!

Monitoring

When using Max in a creative sense monitoring what is going on is often very important. If working with a performer consider what may be most helpful for them. Would a headphone feed be most appropriate? This would also eliminate some of the potential problems of audio feedback. In which case would you need the unprocessed sound or the processed sound, a mix of both or perhaps processed in the left ear and unprocessed in the right? Would it be helpful for the performer to see the screen and if so, how should that information be displayed…?

Class exercise:

Create a Max patch that will take an audio input, display the incoming audio with the spectroscope~ object and then delay the signal using the tapin~ and tapout~ objects, before sending it out to headphones.

Audio processing

Audio information can be processed in many creative ways in Max, both offline and in a performance environment. There are many combinations and manners in which this can be done.

A few processing ideas

(many of the more technical details have been appropriated from http://www.cycling74.com/docs/max5/refpages/msp-ref as they are simply statements of fact.)

  • Volume: On the simplest level an incoming sound can be boosted or attenuated using *~ or gain~. This can be done manually with a number box or fader. Alternatively it could be automated with a list or a function object.

 

  • Delay: A simple delay line can be made with the tapin~ and tapout~ objects. tapin~ receives a signal and copies it into a delay line. Using tapout~ objects, you can read from the delay line at various delay times. You must connect the outlet of a tapin~ object to the tapout~ objects you want to use with the delay line. Note that this is not a signal connection, since no signal travels between the objects. It is merely a way to indicate that the objects share the same delay memory.

  • Sampling, recording and playback: Recording of live audio can be achieved in a number of ways. For saving small amounts of audio, objects such as buffer~ can be very useful, and when used in conjunction with objects such as groove~ or play~ they offer a lot of control over parameters such as playback speed, direction etc.

When dealing with larger amounts of audio data or pre-existing audio files objects such as sfrecord~ and sfplay~ may be more useful. sfplay~ plays AIFF, NeXT/SUN(.au), WAVE, and Raw Data files of 1-32 tracks from disk. To play a file, you send sfplay~ the open message, then send it a 1 to start and a 0 to stop. open takes an argument to specify a filename in the search path. You can also create additional cues with the preload message. These can reference other files, all of which are simultaneously accessible. The open message sets the “current” file: the one that plays back from the beginning when 1 is sent and is used as the default for the preload message. sfplay~ can also connect to the cues defined in an sflist~ object. Since multiple sfplay~ objects can reference the same sflist~, this allows you to store a global list of cues.

  • More exotic things such as comb filtering: There are many objects that take on specific roles which could be programmed from the ground up but are built in and thus save a lot of time. comb~ is one such object.

 

comb~ mixes the current input sample with earlier input and/or output samples, according to the formula:

y[n] = a·x[n] + b·x[n − (D·R/1000)] + c·y[n − (D·R/1000)]

where R is the sampling rate (in samples per second) and D is the delay time in milliseconds. (A small C sketch of this formula appears after this list.)

 

  • Externals: External objects, such as the fiddle~ pitch-analysis object or the yafr reverb object, extend the built-in set. Provided these are in Max’s search path they can be placed like any other object.

 

  • Other processors: Max can make use of other processing plug-ins as a result of the vst~ object. vst~ loads a real-time VST plug-in and uses its audio processing in MSP. Some plug-ins have their own editing window, which is visible when you double-click on the object. Otherwise, double-clicking on the object displays a default parameter editing window. The number of signal inputs and outputs default to 2, but the number required by the plug-in may be less than that. If you want to specify a larger number of inputs and outputs, you can supply them as optional arguments.
    Audio plug-ins loaded into a vst~ object can be synchronized by enabling the global transport (choose
    GlobalTransport from the Extras menu and click on the global transport’s Activate button).
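As promised above, a direct C implementation of the comb~ formula (the gain and delay values here are illustrative):

#include <stdio.h>

#define SR 44100
#define MAXDELAY 44100

static double x[MAXDELAY];   /* past input samples  */
static double y[MAXDELAY];   /* past output samples */

int main(void) {
    double a = 0.5, b = 0.4, c = 0.4;        /* the a, b, c gains in the formula */
    double D = 10.0;                         /* delay time in ms */
    int d = (int)(D * SR / 1000.0);          /* D*R/1000: the delay in samples */
    int w = 0;                               /* circular write index */

    for (int n = 0; n < 2000; n++) {
        double in = (n == 0) ? 1.0 : 0.0;    /* an impulse as test input */
        int r = (w - d + MAXDELAY) % MAXDELAY;   /* index d samples in the past */
        double out = a * in + b * x[r] + c * y[r];
        x[w] = in;
        y[w] = out;
        w = (w + 1) % MAXDELAY;
        if (out != 0.0) printf("%d %f\n", n, out);  /* the decaying echoes */
    }
    return 0;
}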

A final thought

The use of processing on live audio can be incredibly powerful. The objects and ideas shown above are only the tip of a very large iceberg. What needs careful consideration from this point is how you creatively deploy these processes. It is not enough to simply switch something on and leave it in the manner of a guitar ‘stomp box’ – otherwise we would just buy one!

Further audio manipulation

 

I am sitting in a room: Alvin Lucier, 1969

First recorded at Electronic Music Studio Brandeis University in 1969.

 

I am sitting in a room is one of composer Alvin Lucier’s best known works. It features Lucier recording himself narrating a text, and then playing the recording back into the room, re-recording it. The new recording is then played back and re-recorded, and this process is repeated. Since all rooms have characteristic resonant frequencies, the effect is that certain frequencies are emphasized as they resonate in the room, until eventually the words become unintelligible, replaced by the pure resonant harmonies and tones of the room itself.

‘In its repetition and limited means, I am sitting in a room ranks with the finest achievements of Minimal tape music. Furthermore, in its ambient conversion of speech modules into drone frequencies, it unites the two principal structural components of Minimal music in general.’ —Strickland (2000)

I am sitting in a room different from the one you are in now. I am recording the sound of my speaking voice and I am going to play it back into the room again and again until the resonant frequencies of the room reinforce themselves so that any semblance of my speech, with perhaps the exception of rhythm, is destroyed. What you will hear, then, are the natural resonant frequencies of the room articulated by speech. I regard this activity not so much as a demonstration of a physical fact, but more as a way to smooth out any irregularities my speech might have.

 

Using Max/MSP, a computer, a microphone and a speaker, build a system capable of delivering the work described above. In performance the system needs to be as transparent as possible, i.e. you should aim for the least possible intervention from the user.

Jitter

The third element of the Max/MSP/Jitter environment is Jitter(!). Jitter is mainly focused on handling and manipulating video, in a similar manner to the relationship between MSP and audio. Jitter objects can be easily recognised by the jit. prefix in the object name and by their patch cords, which are green and black by default. At Jitter’s heart is the idea of a visual matrix. Within the matrix there are a number of cells: the diagram below shows a 6x4x1 matrix, in that it has 6 cells across, 4 down and 1 plane – hence very low-resolution black and white.

The next patcher shows working in different resolutions with 4 planes, and expands the matrix concept. The object is named jit.matrix; the arguments label the instance and describe the number of planes, data type and matrix size (x, y). It is fed from a jit.qt.movie object which steps through each frame in sequence each time it receives a bang.

When allocating memory for the numbers in a matrix, Jitter needs to know the extent of each dimension—for example, 320×240—and also the number of values to be held in each cell. In order to keep track of the different values in a cell, Jitter uses the idea of each one existing on a separate plane. Each of the values in a cell exists on a particular plane, so we can think of a video frame as being a two-dimensional matrix of four interleaved planes of data.

Using this conceptual framework, we can treat each plane (and thus each channel of the colour information) individually when we need to. For example, if we want to increase the redness of an image, we can simply increase all the values in the red plane of the matrix, and leave the others unchanged.
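A plain C sketch of that red-boost idea (illustrative, not the Jitter API): treat the frame as a 2D array of 4-value cells and scale plane 1 only.

#include <stdio.h>

#define W 320
#define H 240

static unsigned char frame[H][W][4];  /* planes: 0 = alpha, 1 = red, 2 = green, 3 = blue */

int main(void) {
    for (int yy = 0; yy < H; yy++)
        for (int xx = 0; xx < W; xx++) {
            int r = frame[yy][xx][1] * 3 / 2;  /* boost red by 50% */
            frame[yy][xx][1] = (unsigned char)(r > 255 ? 255 : r);  /* clip to 8 bits */
        }
    printf("red plane scaled; other planes untouched\n");
    return 0;
}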

The normal case for representing video in Jitter is to have a 2D matrix with four planes of data—alpha, red, green, and blue. The planes are numbered from 0 to 3, so the alpha channel is in plane 0, and the RGB channels are in planes 1, 2, and 3.


So far we have seen Jitter work in combination with Max, but it can also be linked to MSP for some very creative effects. The patch below uses Max/MSP/Jitter in two ways. On the left, a drum loop is fed through the sigmund~ external to provide a spectral analysis. This analysis is used to feed the jit.matrix. This information is then rendered in two main ways: firstly as a 4-plane matrix of AHSL (alpha, hue, saturation, lightness) data, and secondly as ARGB (alpha, red, green, blue) data, which is then split into its component planes. The right-hand side of the patcher uses the amplitude of the audio file to define the playback position of the movie.


Class exercise: Build a Jitter patch where audio can be synthesised from a MIDI keyboard, with a four-plane Jitter matrix visualisation of the sound.

Jitter II

This patcher uses movies to create and edit audio ‘on the fly’.

The user can choose a video file, and either step through it frame by frame or play it back at anywhere between 1 and 99 frames per second. Using the green slider the user can specify which area of the moving image is used for synthesis, a display of which is shown on the right of the patcher. Output can be specified by a pull-down menu: either using the image to control an oscillator bank, or using it to filter a user-defined audio file.

Key Elements

  • jit.submatrix references a sub-region of the input matrix without copying data; arguments such as @dim or @offset define its dimensions.

  • jit.iter iterates (!) through all the cells of a matrix, sending a max message or list for each cell out the object’s left outlet. If the input matrix has only one plane of data, the message is a number. Otherwise, it is a list containing one list item per plane of data. The jit.iter object also sends a list of ints out its middle outlet that contains the current cell coordinates.

  • oscbank~ is a non-interpolating oscillator bank with signal inputs to set oscillator frequency and magnitude. Its possible arguments are: the number of oscillators; the number of samples across which frequency smoothing is done; the number of samples across which amplitude smoothing is done; and the size, in samples, of the sine-wave lookup table used by the oscbank~ object (the default is 4096). Since oscbank~ uses uninterpolated oscillators, you can choose to use a sine table of larger size at the expense of CPU.

  • pfft~ is designed to simplify spectral audio processing using the Fast Fourier Transform (FFT). In addition to performing the FFT and the Inverse Fast Fourier Transform (IFFT), pfft~ (with the help of its companion fftin~ and fftout~ objects) manages the necessary signal windowing, overlapping and adding needed to create a real-time Short Term Fourier Transform (STFT) analysis/resynthesis system. The number of inlets on the pfft~ object is determined by the number of fftin~ and/or in objects in the enclosed subpatch. Patchers loaded into a pfft~ object can only be given signal inlets by fftin~ objects within the patch. fftin~ provides a signal input to a patcher loaded by a pfft~ object; it won’t do anything if you try to use it anywhere other than inside a patcher loaded by the pfft~ object. Where the pfft~ object manages the windowing and overlap of the incoming signal, fftin~ applies the windowing function (the envelope) and performs the Fast Fourier Transform.

 

Outputs: left outlet: This output contains the real values resulting from the Fast Fourier transform performed on the corresponding inlet of the pfft~. This output frame is only half the size of the parent pfft~ object’s FFT size, because the spectrum of a real input signal is symmetrical and therefore half of it is redundant. The real and imaginary pairs for one spectrum are called a spectral frame.

middle outlet: This output contains the imaginary values resulting from the Fast Fourier transform performed on the corresponding inlet of the pfft~. Like the left outlet, this output frame is half the size of the parent pfft~ object’s FFT size.

right outlet: A stream of samples corresponding to the index of the current bin whose data is being sent out the first two outlets. This is a number from 0 to (frame size – 1). The spectral frame size inside a pfft~ object’s subpatch is equal to half the FFT window size.

  • cartopol~ will take any given signal as a cartesian coordinate and output the polar conversion of that signal. The left outlet gives the magnitude (amplitude) of the frequency bin represented by the current input signals. The right outlet gives the phase, expressed in radians, of the frequency bin represented by the current input signals. poltocar~ is the inverse of this function.
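The conversion itself is one line of maths each way; a C sketch with made-up bin values:

#include <math.h>
#include <stdio.h>

int main(void) {
    double re = 0.5, im = -0.5;            /* real/imaginary parts of one FFT bin */
    double mag = sqrt(re * re + im * im);  /* magnitude (amplitude), as cartopol~ gives */
    double phase = atan2(im, re);          /* phase in radians */
    printf("mag %f, phase %f\n", mag, phase);
    /* poltocar~ inverts this: re = mag * cos(phase); im = mag * sin(phase); */
    return 0;
}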

The pfft~ paxers.maxpat subpatch

Gen~


Why Use Gen?

General

  • You want to create processes that can’t be efficiently achieved with ordinary Max/MSP/Jitter objects

  • You want to program visually at a low level while getting the performance of compiled C or GLSL code

  • You want to use a concise text based expression language (codebox) rather than visual programming or coding in GLSL

  • You want to avoid having to compile separate Windows and Macintosh versions (and, in the future, 64-bit application binaries)

  • You want to design new algorithms and see or hear them immediately

  • You want to design an algorithm that can run on the CPU or GPU, on Windows and Mac

Examples

  • arbitrary new oscillator and filter designs using single-sample feedback loops with gen~

  • reverbs and physical models using networks of short feedback delays with gen~

  • sample-accurate buffer~ processing such as waveset distortions with gen~

  • efficient frequency-domain processing such as spectral delays using gen~ inside pfft~

  • custom video processing filters as fast as C compiled externals with jit.pix, and graphics card accelerated with jit.gl.pix

  • geometry manipulation and generation with jit.gen

  • particle system design with jit.gen

  • iso-surface generation with distance fields in jit.gen

Performance improvements

  • consolidation of chained MSP operators or jit.ops and other MSP/Jitter objects that can be combined into one meta-object

  • replacement for jit.expr with performance and interface improvements

  • You want to be able to have a simple way to make use of the GPU for image processing both in visual and textual form

Gen refers to a technology in Max representing a new approach to the relationship between patchers and code. The patcher is the traditional Max environment – a graphical interface for linking bits of functionality together. With embedded scripting such as the js object, text-based coding became an important part of working with Max, as it was no longer confined to simply writing Max externals in C. Scripting, however, still didn’t alter the logic of the Max patcher in any fundamental way, because the boundary between patcher and code was still the object box. Gen represents a fundamental change in that relationship.

The Gen patcher is a new kind of Max patcher where Gen technology is accessed. Gen patchers are specialized for specific domains such as audio (MSP) and matrix and texture processing (Jitter). The MSP Gen object is called gen~. The Jitter Gen objects are jit.gen, jit.pix and jit.gl.pix. Each of these Gen objects contains within it a Gen patcher. While gen patchers share many of the same capabilities, each Gen object has functionality specific to its domain. For example, Gen patchers in gen~ have delay lines while Gen patchers in jit.gen have vector types.

Gen patchers describe the calculations a Gen object performs. When you’re editing a Gen patcher, you’re editing the internal calculations of the Gen object. In order to make use of the computations described in its Gen patcher, a Gen object compiles the patcher into a language called GenExpr . GenExpr bridges the patcher and code worlds with a common representation, which a Gen object turns into target code necessary to perform its calculations. gen~, jit.gen, and jit.pix transparently generate and compile native CPU machine code on-the-fly, while jit.gl.pix does the same for GPU code (GLSL). When working with Gen objects, you’re writing your own custom pre-compiled MSP and Jitter objects without having to leave Max.

Creating a Gen Patch

  • Click in a blank space in your unlocked patcher window and type “n” (new) to create a new object box with a cursor. Type in the name of the Gen object you want to create – gen~ , jit.gen , jit.pix or jit.gl.pix . The object will appear.

  • Double-click on the object you just created to open its Gen patcher window. You’ll see that your patch includes two inlets and one outlet by default.

 
 

 

The Gen Patcher Window

Like the regular Max patcher window, the Gen patcher window contains a number of buttons on the toolbar that you can use to perform regular patching tasks. You will recognize some of them from the Max patcher window.

The Lock/unlock button toggles the locked state of the patcher window.


The Patcher Windows button lets you open a new view of the patcher window .


The New Object button duplicates the act of typing an “n” – it creates a new blank object box with a cursor, ready to be named.


The Show Grid/Hide Grid button shows or hides the grid.


The Reset button will reset the current Gen code compilation to its default values.


The Compile button is used to manually compile the patch in the Gen window. The button is greyed out unless you have disabled Auto-compilation and then added an operator or a new connection to your patch.


The Disable Auto-Compile/Enable Auto-Compile button is used to toggle autocompilation of the patch in the Gen patcher window. By default, autocompilation is on so that you can hear and see the results of your patching as you work.


The Show Status Bar/Hide Status Bar button shows or hides the Status Bar.


Patching in Gen

Gen patchers look similar to Max patchers, but there are a few important differences:

  • Although they share a collection of common operators, the sets of objects (or “operators”) available in Gen patchers in the gen~ (Gen audio) and Gen Jitter domains are different. This is also true of GenExpr.

  • There are no messages. All operations are synchronous, rather like MSP patching. Because of this, there are no UI objects (sliders, buttons etc.). However the param operator can be used to receive message-rate controls from the normal Max world. There is no need to differentiate hot and cold inlets, or the order in which outlets ‘fire’, since all objects and outlets always fire at the same time.

  • There are no send and receive connections to the outside world in a Gen patcher. Gen patchers are connected to the outside world through the in , out , and param operators. In gen~ , there are some additional operators such as history, data and buffer that are controllable with messages to gen~ . See the gen~ section for the details.

  • The usual distinction between int and float numbers does not apply to Gen patchers. At the Gen patcher level, everything is a 64-bit floating point number.

  • The codebox is a special operator for Gen patchers, in which more complex expressions can be written using the GenExpr language.

Gen patchers can be embedded within the gen~ , jit.gen , etc. object, or can be loaded from external files (with .gendsp or .genjit file extensions respectively) using the @gen attribute of gen~ , jit.gen , etc. objects.

Auto-compile

By default, the compilation process occurs in the background while you are editing, so that you can see or hear the results immediately. This auto-compilation process can be disabled using the ‘Auto-Compile’ toggle in the Gen patcher toolbar. Compilation can also be triggered using the hammer icon in the Gen patcher toolbar or any codebox toolbar.

Enabling and Disabling Auto-compilation in a Gen Patcher

  • Click on the Auto-compile button in the Gen patcher window to disable autocompilation. When you do, the circle in the button will turn white and change to an Enable Auto-Compile button.

  • When you add a new operator to your patch or make a new connection, the Compile button will become active in the Gen patcher toolbar.

  • Click on the Compile button to compile the current version of the patch. You’ll see/hear the results, and the Compile button will be greyed out until you add another operator or connection.

 

Gen Operators

Gen operators represent the functionality involved in a Gen patcher. They can exist as object boxes in a patcher or as functions or variables in GenExpr code. They are the link between the patcher and code worlds.

Gen operators take arguments and attributes just like Max objects, but these are purely declarative. Since there is no messaging in Gen patchers, the attribute value set when the operator is created does not change. Attributes are most often used to specialize the implementation of the process the operator represents (such as setting a maximum value for param using the @max attribute.)

In many cases, the specification of an object’s argument effectively replaces the corresponding inlet. This is possible in Gen because there is no messaging and all processing is synchronous. For example, the + operator takes two inputs, but if an argument is given only one input needs to be specified as an inlet.

 
 
 
 

An inlet with no connected patchcord uses a default value instead (often zero, but check the inlet assist strings for each operator). An inlet with multiple connections adds them all together, just as with MSP signal patchcords:

Standard Operators

Many standard objects behave like the corresponding Max or MSP object, such as all arithmetic operators (including the reverse operators like !- , !/ etc.), trigonometric operators ( sin , cosh , atan2 etc.), standard math operators ( abs , floor , pow , log , etc.), boolean operators ( > , == , && (also known as and ) etc.) and other operators such as min , max , clip (also known as clamp ), scale , fold , wrap , cartopol , poltocar etc. In addition there are some operators in common with GLSL ( fract , mix , smoothstep , degrees , radians etc.) and some drawn from the jit.op operator list ( >p , ==p , absdiff etc.).

There are several predefined constants available ( pi , twopi , halfpi , invpi , degtorad , radtodeg , e , ln2 , ln10 , log10e , log2e , sqrt2 , sqrt1_2 and the same in capitalized form as PI , TWOPI etc), which can be used in place of a numeric argument to any operator.

 
 

Argument Expressions

For all objects that accept numeric arguments (e.g. [+ 2.] or [max 1.]) argument expressions can be used in their place. Argument expressions are simple statements that evaluate to a constant value. Many gen operators can be used as argument expressions, particularly the math operators (sqrt, cos, …). Argument expressions can help simplify gen patchers where all that is needed is the calculation of a constant that isn’t pre-defined such as 3*pi/2. For example, in the patch below:


there is a multiply operator with an argument of 10*pi. In the code sidebar, we can see that it’s evaluated to the proper constant value. Similarly, the scale operator has four arguments, one of which is an argument expression, sqrt(2).

Send and Receive

send and receive within gen patchers can be used to connect objects without patchcords. In gen patchers, send and receive can only be used locally. They will not connect to send and receive objects in other gen patchers or gen subpatchers. send and receive take a name argument that determines connectivity.


There can be multiple send and receive objects with the same name without issue. If there are multiple send objects with the same name, they will be summed just as if multiple patchcords were connected to the same inlet. If there are multiple receive objects with the same name, they will all receive identical input from their corresponding send objects.

Subpatchers and Abstractions

Subpatchers and abstractions in Gen objects behave practically identically to standard Max subpatchers and abstractions. In Gen objects, subpatchers are created with the gen operator. If the gen operator is given the name of a gen patcher as an argument, it will use it to set the titlebar of the subpatcher.


Abstractions, as with standard Max abstractions, are instantiated by creating an object with the name of the gen file to load as the abstraction. For example, if an operator named differential is created, Gen will look for the file differential.gendsp with gen~ and differential.genjit with the Jitter Gen objects. Instantiating abstractions this way is shorthand for setting the file attribute on the gen operator. For example, creating an operator differential is equivalent to gen @file differential.


Subpatcher/Abstractions and Parameters

Just like normal gen patchers, gen subpatchers and abstractions can also contain parameters. When used in subpatchers and abstractions, parameters behave like named inlets with default values. If nothing is connected to a parameter in a subpatcher or abstraction, the parameter will be a constant and its value will be its default.


In the above example, the subpatcher has a parameter scale with a default of 1. In the subpatcher’s sidebar, we see this represented in the GenExpr code as

 

Param scale(1.);

However, in the parent gen patcher, the parameter gets converted into a constant because nothing is connected to the parameter. The first line in the parent patcher’s GenExpr sidebar reads:

 

scale_1 = 1.;

which is the default value of the scale parameter.

Since subpatcher and abstraction parameters don’t create their own inlets to connect objects to, there is a special operator called setparam that can be connected to any inlet for this specific purpose. setparam connects all of its inputs to a named parameter in a subpatcher or abstraction. It requires an argument specifying the name of the parameter to connect to.

When setparam is connected to a parameter, the parameter changes from being a constant to a dynamic variable equivalent to the value at the input of the setparam object.


Notice that the code in the parent subpatcher has changed from a constant to:

 

setparam_1 = in2;

in 2 is connected to the inlet of the setparam object, so the scale parameter takes on that value.

The gen~ Object

The gen~ object is specifically for operating on MSP audio signals. Unlike MSP patching however, operations in a Gen patcher are combined into a single chunk of machine code, making possible many more optimizations that can make complex processes more efficient, and allow you to design processes which must operate on a per-sample level, even with feedback loops.

Working in gen~ opens up scope to design signal processes at a lower level, even per-sample. Because of this, many operators take duration arguments in terms of samples (where the equivalent MSP objects would use milliseconds).

gen~ Operators

In addition to the standard Gen operators , which are often similar to the equivalent MSP objects (such as clip , scale , minimum , maximum , etc.), many of the operators specific to the gen~ domain mirror existing MSP objects to make the transition to gen~ easier. There are familiar converters ( dbtoa , atodb , mtof , ftom , mstosamps , sampstoms ), oscillators ( phasor , train , cycle , noise ), and modifiers ( delta , change , sah , triangle). In addition there are some lower-level operators to avoid invalid or inaudible outputs ( isnan , fixnan , isdenorm , fixdenorm , dcblock ).

A global value of samplerate is available both as an object, and as a valid value for an argument of any object.


History

In general, the Gen patcher will not allow a feedback loop (since it represents a synchronous process). To create a feedback loop in gen~ , the history operator can be used. This represents a single-sample delay (a Z-1 operation). Thus the inlet to the history operator will set the outlet value for the next sample (put another way, the outlet value of the history operator is the inlet value from the previous sample). Multiple history operators can be chained to create Z-2, Z-3 delays, but for longer and more flexible delay operators, use the delay operator.
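The pattern history enables is easy to see in plain C: keep one variable holding the previous output sample. Here is a sketch of a simple one-pole lowpass built from exactly that single-sample feedback loop (the coefficient and impulse input are illustrative):

#include <stdio.h>

int main(void) {
    double history = 0.0;                        /* the z^-1 state */
    double a = 0.9;                              /* feedback coefficient */
    for (int n = 0; n < 8; n++) {
        double x = (n == 0) ? 1.0 : 0.0;         /* impulse in */
        double y = (1.0 - a) * x + a * history;  /* mix input with previous output */
        history = y;                             /* becomes 'previous' for next sample */
        printf("%f\n", y);
    }
    return 0;
}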

http://cycling74.com/docs/max6/vignettes/gen/images/gen-05.png
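Although the following is JavaScript rather than gen~ code, it models the kind of single-sample feedback that history makes possible: a one-pole smoothing filter in which each output sample feeds back into the calculation of the next. The variable prev plays the role of the history operator.

// Conceptual model of single-sample feedback (a one-pole lowpass).
function smooth(input, coeff) {
    var out = [];
    var prev = 0;               // plays the role of a history operator
    for (var i = 0; i < input.length; i++) {
        var y = prev + coeff * (input[i] - prev);
        out[i] = y;
        prev = y;               // this sample's output is next sample's history
    }
    return out;
}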
 

A history operator in a Gen patcher can also be named, making it available for external control, just like a param parameter.

Delay

The delay operator delays a signal by a certain amount of time, specified in samples. The maximum delay time is specified as an argument to the delay object. You can also have a multi-tap delay by specifying the number of taps in the second argument. Each tap will have an inlet to set the delay time, and a corresponding outlet for the delayed signal.

Note that the delay operator is not currently supported in GenExpr.

The delay operator can be used for feedback loops, like the history operator, if the @feedback attribute is set to 1 (the default). The @interp attribute specifies which kind of interpolation is used (a conceptual sketch of a delay line follows the list below):

  • none or step: No interpolation.

  • linear: Linear interpolation.

  • cosine: Cosine interpolation.

  • cubic: Cubic interpolation.

  • spline: Catmull-Rom spline interpolation.
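As a rough conceptual model of what the delay operator does (JavaScript again, not gen~ code), a delay line is a circular buffer: each incoming sample is written at the current position, and the output is read back from some number of samples behind it.

// Conceptual ring-buffer delay line (maximum delay = size samples).
function DelayLine(size) {
    this.buf = [];
    for (var i = 0; i < size; i++) this.buf[i] = 0;
    this.pos = 0;
}

// Write one sample, then read the sample from delaySamps ago.
// delaySamps must be less than the buffer size.
DelayLine.prototype.process = function (input, delaySamps) {
    this.buf[this.pos] = input;
    var readPos = (this.pos - delaySamps + this.buf.length) % this.buf.length;
    this.pos = (this.pos + 1) % this.buf.length;
    return this.buf[readPos];
};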

Data and Buffer

For more complex persistent storage of audio (or any numeric) data, gen~ offers two objects: data and buffer, which are in some ways similar to MSP’s buffer~ object. A data or buffer object has a local name, which is used by various operators in the Gen patcher to read and write the data or buffer contents, or get its properties.

Note that the data and buffer objects are not currently supported in GenExpr, including all associated operators.

http://cycling74.com/docs/max6/vignettes/gen/images/gen-06.png
 

Reading the contents of a data or buffer can be done using the peek, lookup, wave, sample or nearest operators. The first argument for all of these operators is the local name of a data or buffer. They all support single- or multi-channel reading (the second argument specifies the number of channels, and the last inlet the channel offset, where zero is the default).

All of these operators are essentially the same, differing only in defaults of their attributes. The attributes are:

@index specifies the meaning of the first inlet:

  • samples: The first inlet is a sample index into the data or buffer.

  • phase: Maps the range 0..1 to the whole data or buffer contents.

  • lookup or signal: Maps the range -1..1 to the whole data or buffer contents, like the MSP lookup~ object.

  • wave: Adds extra inlets for start/end (in samples), driven by a phase signal between these boundaries (0..1, similar to MSP’s wave~ object).

@boundmode specifies what to do if the index is out of range:

  • ignore: Indices out of bounds are ignored (return zero).

  • wrap: Indices out of bounds repeat at the opposite boundary.

  • fold or mirror: Indices wrap with palindrome behavior.

  • clip or clamp: Indices out of bounds use the value at the bound.

@channelmode specifies what to do if the channel is out of range. It has the same options as the @boundmode attribute.

@interp specifies what kind of interpolation is used:

  • none or step: No interpolation.

  • linear: Linear interpolation.

  • cosine: Cosine interpolation.

  • cubic: Cubic interpolation.

  • spline: Catmull-Rom spline interpolation.

The nearest operator defaults to @index phase @interp none @boundmode ignore @channelmode ignore.

The sample operator defaults to @index phase @interp linear @boundmode ignore @channelmode ignore.

The peek operator defaults to @index samples @interp none @boundmode ignore @channelmode ignore.

The lookup operator defaults to @index lookup @interp linear @boundmode clamp @channelmode clamp.

The wave operator defaults to @index wave @interp linear @boundmode wrap @channelmode clamp.

Accessing the spatial properties of a data or buffer object is done using the dim and channels operators (or the outlets of the data or buffer object itself), and writing is done using poke (non-interpolating replace) or splat (interpolating overdub).

Briefly, data should be thought of as a 64-bit buffer internal to the gen~ patcher (although an MSP buffer~ can be copied into it), and buffer should be thought of as an object which can read and write external buffer~ data. The full differences between data and buffer are:

  • A data object is local to the Gen patcher, and cannot be read outside of it. On the other hand, a buffer object is a shared reference to an external MSP buffer~ object. Modifying the contents in a Gen buffer is directly modifying the contents of the MSP buffer~ object it references.

  • The data object takes three arguments to set its local name, its length (in samples) and number of channels. The buffer object takes an argument to set its local name, and an optional argument to specify the name of an MSP buffer~ object to reference (instead of using the local name).

  • Setting the gen~ attribute corresponding to a named data object copies in values from the corresponding MSP buffer~, while for a named buffer object it changes which MSP buffer~ is referenced. The buffer object always has the size of the buffer~ object it references (which may change). The data object has the size of its initial definition, or the size of the buffer~ object which was copied to it (whichever is smaller).

  • The data object always uses 64-bit doubles, while the buffer object converts from the bit resolution of the MSP buffer~ object (currently 32-bit floats) for all read and write operations, and may be less efficient.

Technical notes

All operations in gen~ use 64-bit doubles.

The compilation process for gen~ Gen patchers and GenExprs includes an optimization that takes into account the update rate of each operator, so that any calculations that do not need to occur at sample rate (such as arithmetic on the outputs of param operators) instead process at a slower rate (determined by the host patcher vector size) for efficiency.

Jitter Gen Objects

There are three Gen objects in Jitter: jit.gen, jit.pix, and jit.gl.pix. The jit.gen and jit.pix objects process Jitter matrices, similarly to jit.expr. The jit.gl.pix object processes textures and matrices just like jit.gl.slab. The jit.gen object is a generic matrix processing object that can handle matrices with any planecount, type and dimension. jit.pix and jit.gl.pix, on the other hand, are specifically designed for working with pixel data. They can handle data of any type, but it must be two-dimensional or less and have at most four planes.

Jitter Operators

Coordinates

Jitter Gen patchers describe the processing kernel for each cell in a matrix or texture. As the kernel processes the input matrices, a set of coordinates is generated describing the location of the current cell. These operators are just like the ones in jit.expr: norm, snorm, and cell, with the dim operator giving the dimensions of the input matrix. norm ranges from [0, 1] across all matrix dimensions and is defined as norm = cell/dim. snorm ranges from [-1, 1] across all matrix dimensions and is defined as snorm = cell/dim*2-1. cell gives the current cell index.

Vectors

Since Jitter matrices represent arrays of vector (more than one plane) data, all Gen operators in Jitter can process vectors of any size, so once created, a Gen patcher works equally well on any vector size. The basic binary operators +, -, *, /, and % can take vector arguments, as in [+ 0.5 0.25 0.15], which creates an addition operator adding a three-component vector to its input. The param operator can also take vector default values, as in [param 1 2 3 4]. Parameters can have up to 32 values in jit.gen and 4 values in jit.pix and jit.gl.pix.

http://cycling74.com/docs/max6/vignettes/gen/images/gen-09.png
 

The vec operator creates vector constants and packs values together in a vector. It takes default arguments for its components and casts all of its inputs to scalar values before packing them together.

http://cycling74.com/docs/max6/vignettes/gen/images/gen-10.png

 
 

The swiz operator applies a swizzle operation to vectors. In GLSL and similar shading languages, vector components can be accessed by indexing the vector with named planes. For example in GLSL you might see

 

red = color.r

or

 

redalpha = color.ra

or even

 

val = color.rbbg

This type of operation is referred to as swizzling. The swiz operator can take named arguments using the letters r, g, b, a, as well as x, y, z, w, in addition to numeric indices starting at 0. The letters are convenient for vectors with four or fewer planes, but for larger vectors numeric indices must be used. The compilation process automatically checks any swiz operation, so arguments indexing components beyond the size of the vector being processed are clamped to the size of the vector. (A JavaScript sketch of the idea appears below.)

http://cycling74.com/docs/max6/vignettes/gen/images/gen-11.png
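As promised above, here is a hypothetical JavaScript sketch of what swizzling does to a plain array (Gen does this natively; the function is purely illustrative):

// Illustrative swizzle: named components r,g,b,a (or x,y,z,w) map to
// indices 0-3; out-of-range indices are clamped, as in the Gen compiler.
function swiz(vec, pattern) {
    var names = { r: 0, g: 1, b: 2, a: 3, x: 0, y: 1, z: 2, w: 3 };
    var out = [];
    for (var i = 0; i < pattern.length; i++) {
        var idx = names[pattern.charAt(i)];
        if (idx === undefined) idx = parseInt(pattern.charAt(i), 10);
        if (idx > vec.length - 1) idx = vec.length - 1; // clamp to vector size
        out.push(vec[idx]);
    }
    return out;
}

// e.g. swiz([10, 20, 30, 40], "rbbg") -> [10, 30, 30, 20]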
 

In addition, there are the basic vector operations for spatial calculations. These are length , normalize , cross , dot , and reflect .

Sampling

Sampling operators are among the most powerful features of Jitter Gen patchers. A sampling operator takes an input and a coordinate in the range [0, 1] as arguments, returning the data at the coordinate’s position in the matrix or texture. The first argument always has to be a Gen patcher input, while the second argument is an N-dimensional vector whose size is equal to the dimensionality of the input being processed. If the coordinate argument is outside of the range [0, 1], it is converted to a value within the range [0, 1] according to its boundmode function. Possible boundmodes are wrap, mirror, and clamp, where wrap is the default.

http://cycling74.com/docs/max6/vignettes/gen/images/gen-12.png
 

The two sampling operators in Jitter Gen patchers are sample and nearest. The sample operator samples values from a matrix using N-dimensional linear interpolation. The nearest operator simply grabs the value from the closest cell.
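The difference between the two is easiest to see in one dimension. The following JavaScript sketch is illustrative only (the real operators work on N-dimensional matrices and textures); it shows nearest and linear sampling of an array at a normalized coordinate, with wrap as the boundmode.

// 1-D illustration of nearest vs. linear sampling (wrap boundmode).
function wrap01(x) {
    return x - Math.floor(x);       // fold out-of-range coords into [0, 1)
}

function sampleNearest(data, coord) {
    var pos = wrap01(coord) * (data.length - 1);
    return data[Math.round(pos)];   // value of the closest cell
}

function sampleLinear(data, coord) {
    var pos = wrap01(coord) * (data.length - 1);
    var i = Math.floor(pos);
    var frac = pos - i;
    var j = Math.min(i + 1, data.length - 1);
    return data[i] * (1 - frac) + data[j] * frac; // linear interpolation
}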

Geometry

Jitter Gen patchers include a suite of objects for generating surfaces. These include most of the shapes available in the jit.gl.gridshape object. Each surface function returns two values: the vertex position and the vertex normal. The geometry operators are sphere , torus , circle , plane , cone , and cylinder .

Color

There are two color operators in Jitter Gen patchers. They are rgb2hsl and hsl2rgb . They convert between the Red Green Blue color space and the Hue Saturation Luminance color space. If the input to these objects has an alpha component, the alpha will be passed through untouched.

jit.gen

The jit.gen object is a general-purpose matrix processing object. It compiles Gen patchers into native machine code representing the kernel of an N-dimensional matrix processing routine. It follows the Jitter planemapping conventions for pixel data, with planes 0-3 as the ARGB channels. jit.gen can have any number of inlets and outlets, but the matrix format for the different inputs and outputs is always linked. In other words, the matrix format (planecount, type, dimensions) of the first inlet determines the matrix format for all other inputs and outputs. jit.gen makes use of parallel processing, just like other parallel-aware objects in Jitter, for maximum performance with large matrices.

How a matrix is processed by jit.gen depends on the planecount, type, and dimension of the input matrices. In addition, there is a precision attribute that sets the type of the processing kernel. The default value for precision is auto, which adapts the type of the kernel to the input matrix type. In auto mode, the following mapping between input matrix type and kernel processing type is used:

  • char maps to fixed

  • long maps to float64

  • float32 maps to float32

  • float64 maps to float64

Other possible values for the precision attribute are fixed, float32, and float64. Fixed precision is the only setting that doesn’t correspond to a Jitter matrix type. It specifies a kernel type that performs a kind of floating-point calculation with integers, using a technique called fixed-point arithmetic. It’s very fast and provides more precision than 8-bit char operations without incurring the cost of converting to a true floating-point type. However, fixed-point calculations have more error, which can sometimes be visible when using the sampling operators. If there are noticeable artifacts, simply increase the internal precision to float32.

jit.pix

The jit.pix object is a matrix processing object specifically for pixel data. When processing matrices representing video and images, jit.pix is the best object to use. Internally, data is always in RGBA format. If the input has fewer than four planes, jit.pix will convert it to RGBA format according to the following rules:

  • 1-plane, Luminance format, L to LLL1 (Luminance for RGB and 1 for Alpha)

  • 2-plane Lumalpha format, LA to LLLA (Luminance for RGB and Alpha for Alpha)

  • 3-plane RGB format, RGB to RGB1 (RGB for RGB and 1 for Alpha)

  • 4-plane, ARGB format, ARGB to RGBA (changes the order of the channels internally)

The output of jit.pix is always a 4-plane matrix in ARGB format, which is the standard Jitter pixel planemapping. Like jit.gen, jit.pix compiles Gen patchers into C++ and makes use of Jitter’s parallel processing system. jit.pix also has a precision attribute that operates exactly the same way as it does in jit.gen.

jit.gl.pix

The jit.gl.pix object is a matrix and texture processing object specifically for pixel data that operates just like jit.gl.slab. The only difference between the two is that jit.gl.pix compiles its patcher into GLSL while jit.gl.slab reads it from a shader file. Like jit.pix, jit.gl.pix uses an internal RGBA pixel format.

Technical notes

Numerical Values in the Kernel

All numerical values in Jitter Gen patches are conceptually floating point values. This is the case even for fixed precision kernels. It is particularly important to remember this when dealing with char matrices. All char matrices are converted to the range [0, 1] internally. On output, this range is mapped back out to [0, 255] in the char type. A char value of 1 is equivalent to the floating point value of 1/255.

http://cycling74.com/docs/max6/vignettes/gen/images/gen-jit-01.png
 

When using the comparison operators (==, !=, <, <=, >, >=), it’s particularly important to keep in mind the floating-point nature of Gen patcher internal values, because of their inherent imprecision. Instead of directly testing for equality, for example, it’s more effective to test whether a value falls within a certain small range (epsilon). In the example above, the absdiff operator calculates how far a value is from 1/255 and then the < operator tests to see if it’s within the threshold of error.
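The same idea in JavaScript (a sketch of the pattern, not Gen code): to test whether a float pixel value equals a given char value, compare against an epsilon of half a char step instead of testing equality directly.

// Epsilon comparison instead of exact floating-point equality.
function equalsCharValue(x, charVal) {
    var target = charVal / 255;            // char 0..255 -> float 0..1
    var epsilon = 0.5 / 255;               // half a char step of tolerance
    return Math.abs(x - target) < epsilon; // absdiff followed by <
}

// equalsCharValue(0.00392, 1) -> true, since 0.00392 is within half a
// char step of 1/255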

jit.pix vs. jit.gl.pix

For the most part jit.pix and jit.gl.pix will behave identically, despite one being CPU-oriented and the other GPU-oriented. The differences come down to how matrix inputs are handled by jit.pix versus how texture inputs are handled by jit.gl.pix. All of the inputs to jit.pix adapt in size, type, and dimension to the left-most input; as a result, all input matrices within a jit.pix processing kernel will have the same values for the cell and dim operators. In jit.gl.pix, inputs can have different sizes, and the values for the cell and dim operators are calculated from the properties of the left-most input (in1). A future version may include per-input cell and dim operators, but for now this is not the case.

Since the sampling operators take normalized coordinates in the range [0, 1], differently sized input textures will still be properly sampled using the norm operator, since its value is independent of varying input sizes. However, in jit.gl.pix the sample and nearest operators behave differently than in jit.pix: how a texture is sampled is determined by the properties of the texture itself, so sample and nearest behave the same in jit.gl.pix. To enable nearest sampling, set the @filter attribute to nearest; for linear interpolation, set @filter to linear (the default).

OSC

An Introduction

Open Sound Control (OSC) is a protocol for communication among computers, sound synthesizers, and other multimedia devices that is optimized for modern networking technology. Bringing the benefits of modern networking technology to the world of electronic musical instruments, OSC’s advantages include interoperability, accuracy, flexibility, and enhanced organization and documentation.

This simple yet powerful protocol provides everything needed for real-time control of sound and other media processing while remaining flexible and easy to implement. Features include:

  • Open-ended, dynamic, URL-style symbolic naming scheme

  • Symbolic and high-resolution numeric argument data

  • Pattern matching language to specify multiple recipients of a single message

  • High resolution time tags

  • “Bundles” of messages whose effects must occur simultaneously

  • Query system to dynamically find out the capabilities of an OSC server and get documentation

There are dozens of implementations of OSC, including real-time sound and media processing environments, web interactivity tools, software synthesizers, a large variety of programming languages, and hardware devices for sensor measurement. OSC has achieved wide use in fields including computer-based new interfaces for musical expression, wide-area and local-area networked distributed music systems, inter-process communication, and even within a single application. Uses include:

  • Sensor/Gesture-Based Electronic Musical Instruments

  • Mapping non-musical data to sound

  • Multiple-User Shared Musical Control

  • Web interfaces

  • Networked LAN Musical Performance

  • WAN performance and Telepresence

  • Virtual Reality

For more background information see http://opensoundcontrol.org

An example of ‘raw’ OSC code

This example opens a connection which listens for messages on port 7770 and sends from port 7779, then uses MIDI to play middle C at mezzo forte.

 

send("osc:open", 7779, 7770)

 

send("osc:message", :m, {128 60 127 64})

 

Practical uses with Max/MSP/Jitter

 

OSC can be used to transmit information to and from Max in a variety of ways. As the list of applications above shows, the possible uses are almost endless; on a practical level, however, OSC can be thought of as something like a massively improved MIDI protocol, facilitating communication between vast numbers of digital devices. So… use your imagination!

iPhone, OSC and Arduino.

Useful resources for interfacing OSC to Max

http://cycling74.com/?toolbox-tag=osc-2 Cycling 74’s OSC toolbox

http://cnmat.berkeley.edu/downloads University of California CNMAT resource page.

http://www.deecerecords.com/projects#kinectsynapse A Kinect to Max interface (thrown in as a bit of fun to see where you can start to take these ideas)

A Practical Demonstration

In the following example an iPhone is used to control the inputs to Max. This is purely because it was the technology available (i.e. the phone which I happen to own); many other interfaces are available, ranging from Wii controllers to weather stations.

  • Interfacing

There are many ways to link between iOS and Max. Some of the easiest solutions are simple ‘client–server’ programs which deal with most of the under-the-bonnet issues for you, such as TouchOSC, MRMR or Cycling 74’s own interface, c74. The majority of these systems rely upon a wifi link and a dynamically assigned IP address, which in many cases is the most complex part of the user end of the system!

The example below was implemented via TouchOSC.

Firstly upload the app to the phone. This is available for iOS and Android at http://hexler.net/software/touchosc Then make the links between Mac and phone.


  1. Turn on wifi and select ‘Create Network’; name it something recognisable, e.g. ‘ed’. Go to Advanced and note the IP address in Network Preferences.

Anticlockwise from top left: ‘Create a new network’ Mac screenshot; the resulting screen with IP information; setting up TouchOSC to join the network.


  2. On the iPhone, go to wifi Settings and join that network. Go to settings in TouchOSC and manually type in the IP, e.g. 169.xxx.x.x

From this point the iPhone should send OSC data to Max via wifi. Now we need to set it up in Max…


 

This is the ‘basic’ Max patch built for TouchOSC. It uses udpreceive and udpsend, combined with arguments based on the port and (wifi-assigned) IP address of the phone, to send and receive data. There are a number of preset interfaces as well as an editor; for the purposes of this lecture I have used one of the simplest and made a copy of it in Max. Below is a screen shot of the on-screen Mac user interface showing a real-time visualisation of the phone state.
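Messages from TouchOSC arrive out of udpreceive as Max messages such as /1/fader1 0.5. As a hedged sketch of how a js object could route them (the address and the ‘volume’ mapping below are hypothetical; messagename and arrayfromargs are standard Max js facilities):

// Route OSC-style messages arriving from [udpreceive 7770].
outlets = 1;

function anything() {
    var address = messagename;           // e.g. "/1/fader1"
    var args = arrayfromargs(arguments); // e.g. [0.5]
    if (address == "/1/fader1") {
        outlet(0, "volume", args[0]);    // hypothetical parameter mapping
    } else {
        post("unhandled OSC address:", address, "\n");
    }
}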


Over the page is a screen shot of the patch in patcher mode. Whilst in Max terms this is a very simple application of OSC, it opens up many possible avenues…

Javascript in Max

What is JavaScript and why do you need it?

JS is a text-based language that exists within Max. JS is a common scripting language used to automate aspects of larger programs (Mozilla, Flash, etc.). Some of these programs include extensions specific to the host program; Max is one of these.

JS is:

  • Compact

  • Easier to understand in some situations

  • A bit faster at doing some things

  • Able to talk directly to some parts of Max, Jitter, etc.

  • Capable of programming tricks that Max can’t easily do (recursion, multi-variable loops and so on)

Mostly, Max is what we call a high-level language: it’s really good at generating audio and video, keeping things synced to numerous clocks at the same time, and so on. BUT when it comes to things like mathematical calculations and loops, all those boxes and strings can be a bit tedious.

Exercise 1:

  1. Open a new max patcher

  2. Save it somewhere convenient as jstest.maxpat

  3. Create a new Max object called js jstest (1 in, 1 out)

  4. Look at the Max window – there is an error because Max can’t find the file

  5. Double-click the js object, then type the following:

 

autowatch = 1;

function bang() {
    post("hello world");
    post();
}

  6. Hit cmd-S to save.

  7. Close the editor window.

  8. Now attach a bang to the js box.

  9. Hit the bang and look at the Max window.

 

Exercise 2:

Build this in a js object. How do you make it work and what does it do?

autowatch = 1;
inlets = 2;

var memory = [0, 0];

function msg_int(val) {
    if (inlet == 0) {
        // store and calculate
        memory[0] = val;
        bang();
    } else {
        // store only
        memory[1] = val;
    }
}

function bang() {
    outlet(0, memory[0] + memory[1]);
}

Exercise 3:

The two Max patches below create lists of n harmonics based on a given frequency. On the left is the awkward Max workaround; on the right is the significantly faster (in operational terms) js implementation.

 

See over for js harmonics code.

autowatch = 1;
inlets = 2;

/* Calculates the first n harmonics of a given
root note in Hz and outputs them as a list. */

setinletassist(0, "root frequency (int/float)");
setinletassist(1, "number of harmonics (int)");
setoutletassist(0, "first n harmonics of input freq (list)");

// global variables
var frequency, num;

function msg_int(val) {
    if (inlet == 0) {
        frequency = val;
        bang();
    } else {
        num = val;
    }
}

function msg_float(val) {
    if (inlet == 0) {
        frequency = val;
        bang();
    } else {
        num = Math.round(val); // can't have non-integral harmonics
    }
}

function list() {
    // takes first two members of list from either inlet
    frequency = arguments[0];
    num = Math.round(arguments[1]); // can't have non-integral harmonics
    bang();
}

function bang() {
    var harmonics = [];
    for (var i = 0; i < num; i++) {
        harmonics[i] = frequency * (i + 1);
    }
    outlet(0, harmonics);
}

Exercise 4:

The two Max patches below create a pseudo-random MIDI melody. Note the reversal of the order of outlets in the code to match Max’s right-to-left convention.

 

autowatch = 1;
outlets = 3;

setinletassist(0, "bang");
setoutletassist(0, "pitch (int)");
setoutletassist(1, "velocity (int)");
setoutletassist(2, "duration (int)");

// ======================= randnote.js ==============
// this function takes a bang and returns a random pitch from out1
// and a random velocity from out2, duration from out3
// ==================================================

function bang() {
    var pitch, velocity, duration;
    pitch = Math.round(Math.random() * 24) + 48;           // random pitch 48-72
    velocity = Math.round(Math.random() * 32) + 96;        // random velocity 96-128
    duration = 125 * (Math.round(Math.random() * 8) + 1);  // 125-1125 ms in 125 ms steps
    // post(pitch, velocity, duration);
    // post();
    outlet(2, duration);  // rightmost outlet first (Max right-to-left order)
    outlet(1, velocity);
    outlet(0, pitch);
}

function msg_int() {
    post("randnote only likes bangs!");
}

Algorithmic composition

Algorithms (or, at the very least, formal sets of rules) have been used to compose music for centuries; the procedures used to plot voice-leading in Western counterpoint, for example, can often be reduced to algorithmic determinacy. The term is usually reserved, however, for the use of formal procedures to make music without human intervention, either through the introduction of chance procedures or the use of computers.

Many algorithms that have no immediate musical relevance are used by composers as creative inspiration for their music. Algorithms such as fractals, L-systems, statistical models, and even arbitrary data (e.g. census figures, GIS coordinates, or magnetic field measurements) are fair game for musical interpretation, especially in the field of sonification. The success or failure of these procedures as sources of ‘good’ music largely depends on the mapping system employed by the composer to translate the non-musical information into a musical data stream.

There is no universal method to sort different compositional algorithms into categories. One way to do this is to look at the way an algorithm takes part in the compositional process. The results of the process can then be divided into 1) music composed by computer and 2) music composed with the aid of computer. Music may be considered composed by computer when the algorithm is able to make choices of its own during the creation process.

Another way to sort compositional algorithms is to examine the results of their compositional processes. Algorithms can either 1) provide notational information (sheet music) for other instruments or 2) provide an independent way of sound synthesis (playing the composition by itself). There are also algorithms creating both notational data and sound synthesis.

However, one of the most common ways to categorise compositional algorithms is by their structure and the way they process musical data. One of the most detailed divisions consists of six partly overlapping models:

  • Mathematical models

Mathematical models are based on mathematical equations and random events. The most common way to create compositions through mathematics is stochastic processes. In stochastic models a piece of music is composed as a result of non-deterministic methods. The compositional process is only partially controlled by the composer, who weights the probabilities of random events. Prominent examples of stochastic algorithms are Markov chains and various uses of Gaussian distributions. Stochastic algorithms are often used together with other algorithms in various decision-making processes.

Music has also been composed through natural phenomena. These chaotic models create compositions from the harmonic and inharmonic phenomena of nature. For example, since the 1970s fractals have also been studied as models for algorithmic composition.

As an example of deterministic compositions through mathematical models, the On-Line Encyclopaedia of Integer Sequences provides an option to play an integer sequence as music. (It is initially set to convert each integer to a note on an 88-key musical keyboard by computing the integer modulo 88, at a steady rhythm. Thus A000027, the natural numbers, equals a chromatic scale.)
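A minimal sketch of that mapping in JavaScript (assuming key 0 of the 88-key keyboard corresponds to MIDI note 21, the lowest note of a standard piano):

// Map an integer sequence onto an 88-key keyboard (integer modulo 88).
function sequenceToMidi(seq) {
    var notes = [];
    for (var i = 0; i < seq.length; i++) {
        notes.push(21 + (seq[i] % 88)); // MIDI 21 = A0, the lowest piano key
    }
    return notes;
}

// The natural numbers (A000027) give a rising chromatic scale:
// sequenceToMidi([1, 2, 3, 4]) -> [22, 23, 24, 25]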

  • Knowledge-based systems

One way to create compositions is to isolate the aesthetic code of a certain musical genre and use this code to create new similar compositions. Knowledge-based systems are based on a pre-made set of arguments that can be used to compose new works of the same style or genre. Usually this is accomplished by a set of tests or rules requiring fulfilment for the composition to be complete.

  • Grammars

Music can also be examined as a language with a distinctive grammar set. Compositions are created by first constructing a musical grammar, which is then used to create comprehensible musical pieces. Grammars often include rules for macro-level composing, for instance harmonies and rhythm, rather than single notes.

  • Evolutionary methods

Evolutionary methods of composing music are based on genetic algorithms. The composition is built by means of an evolutionary process. Through mutation and natural selection, different solutions evolve towards a suitable musical piece. Iterative action of the algorithm cuts out bad solutions and creates new ones from those surviving the process. The results of the process are supervised by the critic, a vital part of the algorithm that controls the quality of the created compositions.

  • Systems which learn

Learning systems are programs that have no given knowledge of the genre of music they are working with. Instead, they collect the learning material by themselves from the example material supplied by the user or programmer. The material is then processed into a piece of music similar to the example material. This method of algorithmic composition is strongly linked to algorithmic modelling of style, machine improvisation, and such studies as cognitive science and the study of neural networks.

  • Hybrid systems

Programs based on a single algorithmic model rarely succeed in creating aesthetically satisfying results. For that reason algorithms of different types are often used together, to combine their strengths and diminish their weaknesses. Creating hybrid systems for music composition has opened up the field of algorithmic composition and also created many brand-new ways to construct compositions algorithmically. The only major problem with hybrid systems is their growing complexity and the need for resources to combine and test these algorithms.

An Example using a First Order Markov chain

 

For simplicity, we use the notes of ‘Happy Birthday’ as a musical example:


C4 C4 D4 C4 F4 E4
C4 C4 D4 C4 G4 F4
C4 C4 C5 A4 F4 E4 D4
Bb4 Bb4 A4 F4 G4 F4

A zero-order Markov chain considers only the probability of each note occurring. If we were to use Happy Birthday as source material for our algorithmic composition, we can see that the notes occur with the following distribution (a sketch of a generator based on this follows the list):

  • C4 8 times – 32%

  • D4 3 times – 12%

  • E4 2 times – 8%

  • F4 5 times – 20%

  • G4 2 times – 8%

  • A4 2 times – 8%

  • Bb4 2 times – 8%

  • C5 1 time – 4%
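Here is the promised sketch of a zero-order generator in JavaScript: each note is drawn independently, weighted by the distribution above.

// Zero-order Markov generation: notes drawn independently, weighted
// by how often each occurs in the source melody.
var distribution = [
    { note: "C4", p: 0.32 }, { note: "D4", p: 0.12 },
    { note: "E4", p: 0.08 }, { note: "F4", p: 0.20 },
    { note: "G4", p: 0.08 }, { note: "A4", p: 0.08 },
    { note: "Bb4", p: 0.08 }, { note: "C5", p: 0.04 }
];

function pickNote() {
    var r = Math.random();
    var cumulative = 0;
    for (var i = 0; i < distribution.length; i++) {
        cumulative += distribution[i].p;
        if (r < cumulative) return distribution[i].note;
    }
    return distribution[distribution.length - 1].note; // rounding guard
}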

A first-order Markov chain bases the next note on the current note and a list of probabilities for the notes that can follow it. These probabilities are stored in a State Transition Matrix (STM). Here is an STM for our Happy Birthday example, listing each note and the probabilities of subsequent notes:

 

Each row lists the current note followed by the possible next notes and their probabilities (notes not listed in a row have probability 0):

  • From C4: C4 0.375, D4 0.25, F4 0.125, G4 0.125, C5 0.125

  • From D4: C4 0.66, Bb4 0.33

  • From E4: C4 0.5, D4 0.5

  • From F4: C4 0.25, E4 0.5, G4 0.25

  • From G4: F4 1

  • From A4: F4 1

  • From Bb4: A4 0.5, Bb4 0.5

  • From C5: A4 1.0

The patch is implemented over the page; a JavaScript sketch of the same logic appears below. At present it operates only in the pitch domain, although other parameters such as duration and velocity could of course be stored and used.
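As a hedged JavaScript sketch of the same logic (the stm object below encodes the table above; each row maps the current note to the probabilities of the possible next notes):

// First-order Markov generation from the Happy Birthday STM.
var stm = {
    "C4":  { "C4": 0.375, "D4": 0.25, "F4": 0.125, "G4": 0.125, "C5": 0.125 },
    "D4":  { "C4": 0.66, "Bb4": 0.33 },
    "E4":  { "C4": 0.5, "D4": 0.5 },
    "F4":  { "C4": 0.25, "E4": 0.5, "G4": 0.25 },
    "G4":  { "F4": 1 },
    "A4":  { "F4": 1 },
    "Bb4": { "A4": 0.5, "Bb4": 0.5 },
    "C5":  { "A4": 1.0 }
};

// Choose the next note given the current one, weighted by its STM row.
function nextNote(current) {
    var row = stm[current];
    var r = Math.random();
    var cumulative = 0;
    var note;
    for (note in row) {
        cumulative += row[note];
        if (r < cumulative) return note;
    }
    return note; // guard against rounding error in the row sums
}

// Generate a 16-note melody starting from C4:
var melody = ["C4"];
for (var i = 0; i < 15; i++) {
    melody.push(nextNote(melody[melody.length - 1]));
}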


 

Thanks to Mr R Garrett, Prof. A Lewis & Cycling74.
