Cycles Per Instruction combines free improvised performance with software programs that the band wrote to create sound in novel ways. When all is said and done, the computer ends up being a fourth improvising member of the band, fed by data and expressing sound via algorithms.
"Listening to netcat is a stimulating reminder that your brain organically creates electricity" - Loren Chambers
The Internet is an Apt Motherfucker
This piece combines improvisational playing on cello, synth, and drums with three main technological components. The first component is a purpose-built synthesis/sequencer program*. The piece opens with this program layering a base motif 64 times, each copy at a random time offset, creating a blurred, textural reference to the original motif that varies with each performance. The second component is a generative Markov model of phoneme sequences derived from Wikipedia and a collection of scientific papers*. We use the model to generate novel, incoherent speech sounds. The third component is a sentiment-aware model of statements of preference, derived from people's actual statements of preference on the internet*. We use the model to generate positive/negative sentiment couplets, recited in synthesized speech.
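The motif-layering idea can be sketched in a few lines. This is a hypothetical illustration, not the band's actual program (which lives at github.com/usrbinnc): a motif is a list of (onset, pitch) events, and each of the 64 copies is shifted by a uniformly random offset before the events are merged.

```python
import random

def layer_motif(motif, copies=64, max_offset=2.0, seed=None):
    """Layer time-shifted copies of a motif into one blurred texture.

    motif: list of (onset_seconds, midi_pitch) events. Each copy of the
    motif is shifted by a random offset in [0, max_offset]; merging and
    sorting all copies yields the smeared version of the original line.
    Data layout and parameters are assumptions for this sketch.
    """
    rng = random.Random(seed)
    events = []
    for _ in range(copies):
        offset = rng.uniform(0.0, max_offset)
        events.extend((t + offset, p) for t, p in motif)
    return sorted(events)
```

Because the offsets are drawn fresh each time, the texture differs with every performance, which matches how the piece opens.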
This piece combines human improvisers with custom software* that generates sound by analyzing real-time internet traffic. Our goal is to fuse computer network communication with human communication. We capture network traffic and send it to custom software that converts it into MIDI command messages. These MIDI messages then drive software synthesizers that create the sounds on the recording. Each synthesizer has a specific, human-configured sound and set of tunings, but the timing and individual note selection are dictated by the timing and trajectory of packets moving through the network. We improvise with the computers by live-mixing the network synthesizers, as well as on our acoustic/electric instruments.
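A packet-to-MIDI mapping along these lines might look like the sketch below. The specific mapping is hypothetical (the band's real converter is at github.com/usrbinnc): here the destination port picks a degree of a human-configured scale and the packet size sets the note velocity, while note timing would come from packet arrival times and so is not modeled.

```python
def packet_to_note(size, port, scale=(0, 2, 4, 7, 9), base=48):
    """Map one captured packet to a (MIDI note, velocity) pair.

    size:  packet length in bytes
    port:  destination port of the packet
    scale: human-configured tuning (default: a major pentatonic)
    base:  lowest MIDI note of the playable range

    All choices here are illustrative assumptions, not the band's
    actual mapping.
    """
    degree = port % len(scale)
    octave = (port // len(scale)) % 3          # confine notes to three octaves
    note = base + 12 * octave + scale[degree]
    velocity = min(127, max(1, size // 12))    # larger packets play louder
    return note, velocity
```

Feeding a stream of sniffed packets through a function like this, and sending the results as `note_on` messages to a soft-synth, is the basic shape of driving synthesizers from network traffic.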
Approximating the Circumference of the Earth
This piece is a structured improvisation for cello, synth, and chango. The chango* is a novel computer musical instrument that uses computer vision to convert patterns of light into patterns of sound. The chango player associates a distinct tone with each region of the video frame, and the light intensity in a tone's region dictates its volume. Selectively illuminating parts of the frame plays tones and tone clusters, with sound intensity proportional to the light intensity.
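The light-to-volume mapping can be sketched as follows. This is a simplified stand-in for the chango software (the real instrument is at github.com/usrbinnc): a grayscale frame is split into vertical strips, one tone per strip, and each strip's mean brightness becomes that tone's volume.

```python
def frame_to_volumes(frame, regions=4):
    """Convert a grayscale video frame into per-tone volumes.

    frame:   a 2D list of pixel brightness values in 0..255
    regions: number of vertical strips, one tone per strip

    Returns one volume in 0.0..1.0 per region, proportional to the
    mean brightness of that region. The strip layout and the linear
    brightness-to-volume curve are assumptions for this sketch.
    """
    width = len(frame[0])
    strip = width // regions
    volumes = []
    for r in range(regions):
        pixels = [row[c] for row in frame
                  for c in range(r * strip, (r + 1) * strip)]
        volumes.append(sum(pixels) / (len(pixels) * 255.0))
    return volumes
```

Running this on each frame of a live camera feed and using the results as per-oscillator gains gives the behavior described above: light up a region, and its tone sounds louder.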
Hardware: 3x MacBook Pro, 1x misc. laptop, Alesis Ion synth, cello, drums
Software: Chango*, NI Kontakt 5 soft-synths, Rax, Wireshark, /usr/bin/nc
* Our custom source code is freely available at github.com/usrbinnc
released 18 April 2014
Trevor Spencer for analog recording, mixing, and mastering.
Tony Fader for inspiration, data, and software on The Internet is an Apt Motherfucker.
Andrew J.S. for art design.
Table & Chairs for releasing experimental music.
Racer Sessions for pushing us to start this project, and Café Racer for hosting the sessions every week.
The author of the original netcat tool, Gerald Combs for starting the Wireshark project, and all the contributors to open-source software.