BACKGROUND ON THIRD EYE ORCHESTRA’S CLEPSYDRA
In June 2017, Hans Tammen’s Third Eye Orchestra performed CLEPSYDRA, a score produced from electronic sample-and-hold procedures. The following description of the procedures leading to the materials used in this composition is from a series of blog posts on NewMusicUSA.
The piece is supported by New Music USA, made possible by annual program support and/or endowment gifts from New York City Department of Cultural Affairs, Mary Flagler Cary Charitable Trust, Helen F. Whitaker Fund, Aaron Copland Fund for Music, Carl Jacobs Foundation, and New York State Council on the Arts.
The underlying idea of Clepsydra is to draw on electronic music experience to write for chamber ensemble, and “Sample And Hold” techniques provide rich material to work with.
“Sample And Hold” is a well-known technique in synthesizer music, in which one “samples” a voltage and “holds” it for a certain amount of time. What does that mean? Since the amount of voltage in a synthesizer can correspond to pitch, we can use this technique to create melodies. One does that by sampling voltages at specific intervals and sending them into an oscillator. The oscillator’s pitch will correspond to that voltage, so we can transcribe the results and start composing with the material created.
In the upper part of the graphic you can see the Basic Building Blocks of the sample-and-hold technique: a low frequency oscillator (LFO) on the left provides voltages to the sample-and-hold (S+H) unit, while a timing oscillator on top “pings” the S+H unit at specific intervals. The voltages are sent into the voltage controlled oscillator (VCO) and turned into pitch.
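The basic building blocks can be sketched in a few lines of Python. This is a minimal simulation, not the author’s actual patch: the LFO frequency, clock rate, and the 1 V/octave pitch convention (common on modular synthesizers, with middle C as the reference) are all illustrative assumptions.

```python
import math

def lfo(t, freq=0.3):
    """Sine LFO producing a control voltage in the range -1..+1 V."""
    return math.sin(2 * math.pi * freq * t)

def sample_and_hold(signal, clock_times):
    """Sample the signal at each clock 'ping' and hold that voltage."""
    return [signal(t) for t in clock_times]

def volts_to_hz(v, base_hz=261.63):
    """1 V/octave convention: each volt doubles the frequency (base = middle C)."""
    return base_hz * 2 ** v

# The timing oscillator pings the S+H unit every 0.25 seconds.
clock = [i * 0.25 for i in range(8)]
held_voltages = sample_and_hold(lfo, clock)
pitches = [volts_to_hz(v) for v in held_voltages]
```

Each held voltage becomes one sustained pitch until the next ping, which is exactly the stepped melody a hardware S+H unit produces.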
However, to get workable material we need to go beyond these basic building blocks. So let’s talk about the lower part of the graphic, because that’s “What I Actually Need”.
Usually the timing oscillator “pings” the S+H unit at regular intervals, but it doesn’t have to be that way. I can ping it in a rhythmic fashion by using sequencers, modulate the timing LFO with another oscillator to create variations in timing, or ping it in an entirely random way.
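Both alternatives to a regular clock can be sketched as follows; the sequencer pattern and the random-gap ranges are hypothetical values chosen for illustration.

```python
import math
import random

def lfo(t, freq=0.3):
    """Sine LFO providing the input voltage."""
    return math.sin(2 * math.pi * freq * t)

# Rhythmic pinging: a sequencer pattern of inter-ping intervals (in seconds).
pattern = [0.25, 0.25, 0.5, 0.125, 0.375]

def rhythmic_clock(pattern, repeats=2):
    """Turn a repeating pattern of gaps into a list of ping times."""
    t, times = 0.0, []
    for gap in pattern * repeats:
        times.append(t)
        t += gap
    return times

def random_clock(n, mean_gap=0.25, seed=1):
    """Ping at entirely irregular intervals around a mean gap."""
    rng = random.Random(seed)
    t, times = 0.0, []
    for _ in range(n):
        times.append(t)
        t += rng.uniform(0.5 * mean_gap, 1.5 * mean_gap)
    return times

held_rhythmic = [lfo(t) for t in rhythmic_clock(pattern)]
held_random = [lfo(t) for t in random_clock(10)]
```

Because the same LFO is sampled at different moments, the two clocks yield different melodies from identical source material.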
Second, more possibilities emerge when I modulate the LFO that provides the input voltages to the S+H unit. The type of melodies we get depends on the waveforms fed into the S+H unit and how far both LFOs are out of sync. We can also manipulate the parameters in real time, or modulate another LFO onto the LFO providing the input.
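One way to modulate the input LFO with another oscillator is phase modulation, sketched below; the carrier and modulator frequencies and the modulation depth are assumed values, and real patches might instead modulate frequency or amplitude.

```python
import math

def lfo(t, freq):
    """Plain sine LFO."""
    return math.sin(2 * math.pi * freq * t)

def modulated_lfo(t, carrier_freq=0.3, mod_freq=0.07, mod_depth=0.5):
    """Input LFO whose phase is bent by a second, slower LFO."""
    phase = (2 * math.pi * carrier_freq * t
             + mod_depth * math.sin(2 * math.pi * mod_freq * t))
    return math.sin(phase)

# Sample both versions at the same regular clock to compare the melodies.
clock = [i * 0.25 for i in range(16)]
plain = [lfo(t, 0.3) for t in clock]
wobbled = [modulated_lfo(t) for t in clock]
```

Sampling the two signals at the same clock shows how a small modulation of the source LFO drifts the resulting melody away from the unmodulated one.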
It is of course a little bit more complicated to turn these pitches into meaningful material. The resulting pitches do not conform to a specific tuning system, so one has to adjust them. I use a module that constrains (“attenuator”) and adjusts (“offset”) the resulting pitches to a range one can work with. A “quantizer” will then move each individual pitch to the closest note on the tempered scale (either chromatic or a scale I may choose).
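The attenuator/offset/quantizer stage can be sketched like this. The attenuation, offset, and C-major scale are illustrative choices, and this toy quantizer snaps only within the octave, where a hardware quantizer may behave differently at octave boundaries.

```python
def attenuate_and_offset(voltage, attenuation=0.5, offset=0.5):
    """Scale the raw S+H voltage into a usable range, then shift it."""
    return voltage * attenuation + offset

def quantize(voltage, scale=(0, 2, 4, 5, 7, 9, 11), base_midi=60):
    """Snap a 1 V/octave voltage to the nearest note of a scale (default C major)."""
    semitones = voltage * 12  # 1 V = 1 octave = 12 semitones
    octave, within = divmod(semitones, 12)
    nearest = min(scale, key=lambda deg: abs(deg - within))
    return base_midi + int(octave) * 12 + nearest

# A handful of raw bipolar S+H voltages, processed into MIDI note numbers.
raw = [-0.83, 0.12, 0.47, -0.31, 0.95]
midi_notes = [quantize(attenuate_and_offset(v)) for v in raw]
# midi_notes -> [62, 67, 69, 64, 71]: all notes land in C major near middle C
```

The attenuator keeps the melody inside a playable range; the quantizer guarantees every note belongs to the chosen scale.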
The notes would all play legato, and I get better transcribing results if they’re a bit shorter, so a combination of envelope generator and voltage controlled amplifier (VCA) allows me to adjust the length of the notes.
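The envelope-plus-VCA idea reduces to multiplication: the envelope shapes the note’s amplitude over time, so a short decay makes the note end well before the next ping. The attack and decay times below are assumed values.

```python
def envelope(t, attack=0.01, decay=0.15):
    """Simple attack/decay envelope: rises linearly, then falls linearly to zero."""
    if t < attack:
        return t / attack
    t -= attack
    return max(0.0, 1.0 - t / decay)

def vca(signal_level, env_level):
    """A VCA is just a multiplier: the envelope scales the note's amplitude."""
    return signal_level * env_level

# Sample the envelope over a 0.25 s note window at 1 ms steps.
levels = [vca(1.0, envelope(i / 1000)) for i in range(250)]
```

With these settings the note is silent after 0.16 s even though the window lasts 0.25 s, which is exactly the shortening that makes transcription easier.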
Theoretically one could multiply this setup from the VCO on – if the VCOs are set to respond differently to the S+H’s output, one can create chords. However, if you’re using hardware synth modules, it’ll get expensive quickly, since you need the entire row of hardware modules for every single voice. I’d be better off recording multiple lines if I want chords or polyphony.
The next range of options is introduced by the software for transcribing the output. I use a combination of pitch detection in Max/MSP and my ears, recording in Ableton Live, and scoring in Sibelius. I can manipulate the parameters of the transcription: its tempo can be set differently from the synthesizer’s tempo, and the rhythmic quantization can be varied as well. Setting the parameters so that several notes are tracked during the same time period results in them being stacked on top of each other – another way of creating chords. Some lines I will transcribe by ear (“hand”), because humans perceive the outcome differently than the computer.
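The stacking effect of rhythmic quantization can be illustrated with a toy onset quantizer; this is not the Max/MSP pipeline, just a sketch in which the grid size and the detected note events are assumptions.

```python
from collections import defaultdict

def quantize_onsets(events, grid=0.25):
    """Snap each (onset_seconds, midi_note) event to the nearest grid slot.

    Notes landing in the same slot stack into a chord.
    """
    slots = defaultdict(list)
    for onset, note in events:
        slot = round(onset / grid)
        slots[slot].append(note)
    return {slot * grid: sorted(notes) for slot, notes in sorted(slots.items())}

# Detected notes with slightly messy onsets (e.g. from pitch tracking).
events = [(0.02, 60), (0.26, 64), (0.27, 67), (0.74, 72)]
score = quantize_onsets(events)
# score -> {0.0: [60], 0.25: [64, 67], 0.75: [72]}: the two notes near
# 0.25 s fall into the same slot and become a chord
```

A coarser grid merges more onsets into the same slot, so widening the quantization is one way to turn a fast monophonic line into chords.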
The goal is then to produce enough material to start composing. S+H lines often sound simple and random, but by introducing all those variations one should have enough material at hand to make music…
LIVE AT ROULETTE 2017: CLEPSYDRA
Live at Roulette 2017. With Shelley Hirsch (voice), Dafna Naphtali (live sound processing, voice), Sarah Bernstein (vio), David Soldier (vio), Jason Hwang (vla), Tomas Ullrich (cello), Ned Rothenberg (cl, bcl), Michael Lytle (cl, bcl), Briggan Krauss (as, baritone sax), Josh Sinton (contra bass clarinet, baritone sax), Ursel Schlicht (p), Gordon Beeferman (organ), Shoko Nagai (moog), Nick Didkovsky (g), Satoshi Takeishi (perc), Hans Tammen (composition, binary conducting). Video recording by Carlton Bright and Amanda Shaperson, audio recording by Brendan Reilly & Sarah Peterson at Roulette.