// title: Auditory Scene Analysis, a SuperCollider companion (Chapter 1)
// author: mjf352
// description:
// This is an estimation of every audio-based experiment from "Auditory Scene Analysis, Chapter 1" in the form of SuperCollider code. The purpose is to provide others with an interactive tool to explore their own auditory perception. Edit lines of code, change speeds, alter frequencies, discover, push, and challenge what you (and your ears!) are capable of. I hope to avoid a tacit and purely theoretical acceptance of "Auditory Scene Analysis". Rather, I wish to situate "Auditory Scene Analysis" as an active, emic inquiry by placing both experimental and experiential agency in the hands of the listener.
//
// Please note that, unlike the (exceptionally excellent) previous examples from Bregman and Ahad, this SuperCollider code is not organized by auditory phenomena. Rather, it works as a true reading companion, following "Auditory Scene Analysis" page by page, start to finish. Many of the technical details documented in the PDF booklet and audio examples are also ignored; instead, the slightly ambiguous descriptions offered in "Auditory Scene Analysis" are followed. The purpose here, as always, is to open an exploratory, interactive field through which to discover auditory perception. If this companion is read as intended, "Auditory Scene Analysis" should take on an ethos and effect similar to an Alvin Lucier piece, wherein the roles of performer, composer, listener, and analyst exist as fluid, overlapping concepts and not separate, discrete categories.
//
// Each entry below is listed first by the page number on which it appears in "Auditory Scene Analysis". It is then given a direct quote from "Auditory Scene Analysis" describing the experiment and/or auditory phenomenon the code is meant to represent. Finally, a brief one- or two-sentence summary of the auditory question being asked is written down. Throughout each entry, comments guide the reader through initializing, running, and modifying different sections. The code is written to prioritize the modification of some parameters over others and is therefore inherently biased. Readers are encouraged to keep this bias in mind and to modify the code in any way they see fit, within and (especially) beyond the commented suggestions.
// code:
/*
Auditory Scene Analysis, a SuperCollider companion
Chapter 1, the Auditory Scene
by Miles Jefferson Friday

This is an estimation of every audio-based experiment from "Auditory Scene Analysis, Chapter 1" in the form of SuperCollider code. The purpose is to provide others with an interactive tool to explore their own auditory perception. Edit lines of code, change speeds, alter frequencies, discover, push, and challenge what you (and your ears!) are capable of. I hope to avoid a tacit and purely theoretical acceptance of "Auditory Scene Analysis". Rather, I wish to situate "Auditory Scene Analysis" as an active, emic inquiry by placing both experimental and experiential agency in the hands of the listener.

Previous audio examples from Bregman and Ahad:
http://webpages.mcgill.ca/staff/Group2/abregm1/web/downloadsdl.htm?fbclid=IwAR2MqzU6XaMmfhamOU22e0UsbCaJYYYTkQbUHgrF6N28D4atrrDXvo1RPAU

Previous PDF booklet from Bregman and Ahad:
http://webpages.mcgill.ca/staff/Group2/abregm1/web/snd/ASA-Demo-Booklet9V4.pdf

Please note that, unlike the (exceptionally excellent) previous examples from Bregman and Ahad, this SuperCollider code is not organized by auditory phenomena.
Rather, it works as a true reading companion, following "Auditory Scene Analysis" page by page, start to finish. Many of the technical details documented in the PDF booklet and audio examples are also ignored; instead, the slightly ambiguous descriptions offered in "Auditory Scene Analysis" are followed. The purpose here, as always, is to open an exploratory, interactive field through which to discover auditory perception. If this companion is read as intended, "Auditory Scene Analysis" should take on an ethos and effect similar to an Alvin Lucier piece, wherein the roles of performer, composer, listener, and analyst exist as fluid, overlapping concepts and not separate, discrete categories.

Each entry below is listed first by the page number on which it appears in "Auditory Scene Analysis". It is then given a direct quote from "Auditory Scene Analysis" describing the experiment and/or auditory phenomenon the code is meant to represent. Finally, a brief one- or two-sentence summary of the auditory question being asked is written down. Throughout each entry, comments guide the reader through initializing, running, and modifying different sections. The code is written to prioritize the modification of some parameters over others and is therefore inherently biased. Readers are encouraged to keep this bias in mind and to modify the code in any way they see fit, within and (especially) beyond the commented suggestions.

Citation:
Auditory Scene Analysis: The Perceptual Organization of Sound
By: Albert S. Bregman
ISBN (electronic): 9780262269209
Publisher: The MIT Press
Published: 1990
*/

//boot the server before running the examples below (s.quit shuts it down, s.reboot restarts it)
s.boot;
s.quit;
s.reboot;

/*
page 14

"old-plus-new-heuristic...the principle of exclusive allocation in which the allocation of the continuing spectral components to the first sound interferes with their being allocated to the second sound"

Is B1 (sharing spectral components with A) heard as being a continuation of A?
Is B2 (not sharing spectral components with A) heard as being a continuation of A?
*/
//load synthDefs A, B1, and B2
(
~rel = 5.0;
~partialsA = Array.series(16, 0.5, 0.5);
~ampA = Array.series(16, 1, -0.04);
~partialsB1 = Array.series(16, 2.0, 0.5);
~ampB1 = Array.series(16, 1, -0.04);
~partialsB2 = Array.rand(16, 0.2, 9.0);
~ampB2 = Array.series(16, 1, -0.04);
~exclusive_allocation = {
	inf.do{
		~a = SynthDef(\A, {
			var sig, env, stereo;
			sig = SinOsc.ar(500 * ~partialsA, 0, 0.15 * ~ampA);
			env = EnvGen.kr(Env.perc(releaseTime: ~rel), doneAction: 2);
			sig = sig * env;
			stereo = Splay.ar(sig);
			Out.ar(0, stereo);
		}).add;
		~b1 = SynthDef(\B1, {
			var sig, env, stereo;
			sig = SinOsc.ar(500 * ~partialsB1, 0, 0.15 * ~ampB1);
			env = EnvGen.kr(Env.perc(releaseTime: ~rel), doneAction: 2);
			sig = sig * env;
			stereo = Splay.ar(sig);
			Out.ar(0, stereo);
		}).add;
		~b2 = SynthDef(\B2, {
			var sig, env, stereo;
			sig = SinOsc.ar(500 * ~partialsB2, 0, 0.15 * ~ampB2);
			env = EnvGen.kr(Env.perc(releaseTime: ~rel), doneAction: 2);
			sig = sig * env;
			stereo = Splay.ar(sig);
			Out.ar(0, stereo);
		}).add;
		0.1.wait;
}}.fork;
)

//run ab_continuation, wherein A and B do share spectral components
(
~ab_continuation = {
	inf.do{
		'\A'.postln;
		~a.play;
		~b1.stop;
		~rel.wait;
		'\B'.postln;
		~a.stop;
		~b1.play;
		~rel.wait;
	};
}.fork;
)

//stop ab_continuation, wherein A and B do share spectral components
~ab_continuation.stop;

//run ab_not_continuous, wherein A and B do not share spectral components
(
~ab_not_continuous = {
	inf.do{
		'\A'.postln;
		~a.play;
		~b2.stop;
		~rel.wait;
		'\B'.postln;
		~a.stop;
		~b2.play;
		~rel.wait;
	};
}.fork;
)

//stop ab_not_continuous, wherein A and B do not share spectral components
~ab_not_continuous.stop;

//change the spectral components for A, B1, and B2 and the rate at which they change
(
~rel = 0.5;
~partialsA = Array.series(16, 0.5, 2.5);
~ampA = Array.series(16, 1, -0.06);
~partialsB1 = Array.series(16, 4.0, 1.0);
~ampB1 = Array.series(16, 1, -0.04);
~partialsB2 = Array.rand(16, 0.2, 10.0);
~ampB2 = Array.rand(16, 1, -0.04);
)

//stop all routines
(
~ab_continuation.stop;
~ab_not_continuous.stop;
~exclusive_allocation.stop;
)
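//a hedged aside, not from the original: post the partial frequencies (in Hz) behind A, B1, and B2
//to see exactly which spectral components A and B1 share, and how B2's randomized partials avoid them
(
(500 * ~partialsA).postln;                         //partials of A
(500 * ~partialsB1).postln;                        //partials of B1
(500 * ~partialsB2).round(1).postln;               //partials of B2 (re-randomized whenever the loader block is re-run)
(500 * ~partialsA).sect(500 * ~partialsB1).postln; //the components A and B1 share
)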
//end example

/*
pages 14-15 (figure 1.7)

"Another case of exclusive allocation...the listener's task was to decide on the order of target tones...CCCFAFCC...tones tend to group perceptually with those that are nearest to them in frequency"

An exploration of exclusive allocation and the capturing of tones into one auditory stream. Does the addition of C and C&F to A and B change perceived forms of belongingness and order?
*/

//load synthDefs for A (~aay), B (~bee), F, and C
(
~speed = 0.2;
~aay_freq = 800;
~bee_freq = 1000;
~f_freq = 500;
~c_freq = 300;
~abcf_synth = {
	inf.do{
		~aay = SynthDef(\aay, {
			var sig, env;
			sig = SinOsc.ar(~aay_freq, mul: 0.5);
			env = EnvGen.kr(Env.perc(releaseTime: ~speed), doneAction: 2);
			sig = sig * env;
			Out.ar([0, 1], sig);
		}).add;
		~bee = SynthDef(\bee, {
			var sig, env;
			sig = SinOsc.ar(~bee_freq, mul: 0.5);
			env = EnvGen.kr(Env.perc(releaseTime: ~speed), doneAction: 2);
			sig = sig * env;
			Out.ar([0, 1], sig);
		}).add;
		~f = SynthDef(\f, {
			var sig, env;
			sig = SinOsc.ar(~f_freq, mul: 0.5);
			env = EnvGen.kr(Env.perc(releaseTime: ~speed), doneAction: 2);
			sig = sig * env;
			Out.ar([0, 1], sig);
		}).add;
		~c = SynthDef(\c, {
			var sig, env;
			sig = SinOsc.ar(~c_freq, mul: 0.5);
			env = EnvGen.kr(Env.perc(releaseTime: ~speed), doneAction: 2);
			sig = sig * env;
			Out.ar([0, 1], sig);
		}).add;
		0.1.wait;
}}.fork;
)

//run A and B only
(
~ab = {
	inf.do{
		'\aay'.postln;
		~aay.play;
		~bee.stop;
		(~speed + 0.1).wait;
		'\bee'.postln;
		~aay.stop;
		~bee.play;
		(~speed + 0.1).wait;
	};
}.fork;
)

//stop A and B only
~ab.stop;

//run A, B, and F
(
~abf = {
	inf.do{
		'\f'.postln;
		~f.stop;
		~f.play;
		(~speed + 0.1).wait;
		'\aay'.postln;
		~f.stop;
		~aay.play;
		(~speed + 0.1).wait;
		'\bee'.postln;
		~aay.stop;
		~bee.play;
		(~speed + 0.1).wait;
		'\f'.postln;
		~bee.stop;
		~f.play;
		(~speed + 0.1).wait;
	};
}.fork;
)

//stop A, B, and F
~abf.stop;

//run A, B, F, and C
(
~abfc = {
	inf.do{
		'\c'.postln; ~c.stop; ~c.play; (~speed + 0.1).wait;
		'\c'.postln; ~c.stop; ~c.play; (~speed + 0.1).wait;
		'\c'.postln; ~c.stop; ~c.play; (~speed + 0.1).wait;
		'\f'.postln; ~c.stop; ~f.play; (~speed + 0.1).wait;
		'\aay'.postln; ~f.stop; ~aay.play; (~speed + 0.1).wait;
		'\bee'.postln; ~aay.stop; ~bee.play; (~speed + 0.1).wait;
		'\f'.postln; ~bee.stop; ~f.play; (~speed + 0.1).wait;
		'\c'.postln; ~f.stop; ~c.play; (~speed + 0.1).wait;
		'\c'.postln; ~c.stop; ~c.play; (~speed + 0.1).wait;
	};
}.fork;
)

//stop A, B, F, and C
~abfc.stop;

//change the speed of the sequence and the pitch of A, B, C, and F
(
~speed = 0.03;
~aay_freq = 400;
~bee_freq = 1500;
~f_freq = 350;
~c_freq = 300;
)

//stop all routines
(
~abcf_synth.stop;
~abfc.stop;
~abf.stop;
~ab.stop;
)
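//a hedged aside, not from the original: the same C-C-C-F-A-B-F-C-C loop as ~abfc above, rewritten as
//a Pbind (the construction used for the streaming example further below) so that order, pitches, and
//tempo can be varied in one place; the \abfc_tone SynthDef and ~abfc_pattern are names of my own
(
SynthDef(\abfc_tone, {
	arg freq = 440;
	var sig, env;
	sig = SinOsc.ar(freq, mul: 0.5);
	env = EnvGen.kr(Env.perc(releaseTime: 0.3), doneAction: 2);
	Out.ar([0, 1], sig * env);
}).add;
)
//play the loop; C=300, F=500, A=800, B=1000 by default, and dur sets the tempo
~abfc_pattern = Pbind(\instrument, \abfc_tone, \freq, Pseq([300, 300, 300, 500, 800, 1000, 500, 300, 300], inf), \dur, 0.3).play;
//stop the loop
~abfc_pattern.stop;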
//end example

/*
page 16

"an interesting illusion could be created when tones were sent to both ears of a listener over headphones. The listener was presented with a continuously repeating alternation of two events...A and B alternated, each ear was presented with a sequence of high and low tones"

Do tones from A and B sound as a single sound bouncing back and forth between the ears?
Do perceived tones alternate between sounding high pitched and low pitched as they bounce between ears?
*/

//load synthDefs for event a and event b
(
~speed = 1.0;
~pitch_seq1 = [360, 720];
~pitch_seq2 = [720, 360];
~a_b_bounce = {
	inf.do{
		~event_a = SynthDef(\auditory_assignment_a, {
			var sig1, env;
			sig1 = SinOsc.ar(~pitch_seq1, mul: 0.3);
			env = EnvGen.kr(Env.perc(releaseTime: ~speed), doneAction: 2);
			sig1 = sig1 * env;
			Out.ar(0, sig1);
		}).add;
		~event_b = SynthDef(\auditory_assignment_b, {
			var sig2, env;
			sig2 = SinOsc.ar(~pitch_seq2, mul: 0.3);
			env = EnvGen.kr(Env.perc(releaseTime: ~speed), doneAction: 2);
			sig2 = sig2 * env;
			Out.ar(0, sig2);
		}).add;
		0.1.wait;
}}.fork;
)

//play the sequence of A then B
(
~event_a_b_seq = {
	inf.do{
		~event_a.play;
		~speed.wait;
		~event_b.play;
		~speed.wait;
	};
}.fork;
)

//stop the sequence
(
~event_a_b_seq.stop;
)

//change the speed of the sequence and the pitches of A and B
(
~speed = 3.0;
~pitch_seq1 = [360, 720];
~pitch_seq2 = [720, 360];
)

//stop all routines
(
~event_a_b_seq.stop;
~a_b_bounce.stop;
)

//end example

/*
pages 17-18 (figure 1.8)

"One auditory phenomenon with a direct parallel in vision is the auditory streaming effect...the six tones, shown in figure 1.8, formed a repeating loop that was cycled over and over...the faster the sequence is presented, the greater is the perceptual segregation of high and low tones"

How do speed and pitch affect how you hear the sequence of tones? Do you hear 142536, 123456, or some other grouping? Is it impossible to focus your attention on both streams at the same time?
*/

//load synthDef for the sequence of tones
(
SynthDef(\gestalt_grouping, {
	arg freq = 440, gate = 1;
	var sig, env;
	freq = freq.postln;
	sig = SinOsc.ar(freq, 0, 0.05);
	env = EnvGen.kr(Env.perc(0.005, releaseTime: 0.3), doneAction: 2);
	sig = sig * env;
	Out.ar([0, 1], sig);
}).add;
)

//play the sequence of tones; adjust the frequencies and speed within the Pbind
~auditory_stream = Pbind(\instrument, \gestalt_grouping, \freq, Pseq([200, 800, 220, 880, 240, 960], inf), \dur, 0.1).play; //experiment with different durs and Pseq freqs

//stop the sequencing pbind
~auditory_stream.stop;
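//a hedged aside, not from the original: the same Pbind with a slower dur, for comparing a slow
//presentation (more likely heard as one rising-and-falling line) against the fast one above (more
//likely to segregate into a high stream and a low stream); ~auditory_stream_slow is my own name
~auditory_stream_slow = Pbind(\instrument, \gestalt_grouping, \freq, Pseq([200, 800, 220, 880, 240, 960], inf), \dur, 0.4).play;
//stop the slower version
~auditory_stream_slow.stop;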
//end example

/*
pages 27-28 (figure 1.8)

"The phenomenon of masking. Masking occurs when a loud sound covers up or drowns out a softer one...what is more, even if the softer sound is physically removed during the brief loud sound, it is still heard as continuing through the interruption...the illusion of continuity"

Under what circumstances does the illusion of continuity exist? What pitch ranges, speeds, amplitudes, (ir)regularities does it require?
*/

//load synthDef, a glissandoing sine tone plus random noise bursts
(
SynthDef(\masking, {
	arg glissrange = 100, glissrate = 0.05, burstrate = 0.1, noiselength = 0.2;
	var sig1, sig2, env, sigs;
	sig1 = SinOsc.ar(LFPar.kr(glissrate, mul: glissrange, add: glissrange * 2), mul: 0.055);
	sig2 = WhiteNoise.ar(LFSaw.ar(burstrate, mul: 0.5, add: 0.5));
	env = EnvGen.kr(Env.linen(sustainTime: 1.2, releaseTime: noiselength), Dust.kr(burstrate), doneAction: 0);
	sig2 = sig2 * env;
	sigs = sig1 + sig2;
	Out.ar([0, 1], sigs);
}).add;
)

//play the glissandoing sine tone and random noise bursts while also assigning the synth to "a"
a = Synth(\masking);

//change the glissrate, the length of the noise bursts, and the burstrate
a.set(\glissrate, 2.0, \noiselength, 1.0, \burstrate, 0.5);

//stop the glissandoing sine tone and random noise bursts
a.free;

//load synthDefs of a glissing sine tone and regular noise bursts where the sine tone is gated
(
~noisespeed = 1.0;
~sinespeed = 4.0;
~glissrange = 100;
~glissrate = 0.05;
~illusion_regular = {
	inf.do{
		~sinegliss = SynthDef(\sinegliss, {
			var sig, lfp, env;
			lfp = LFPar.kr(~glissrate, mul: ~glissrange, add: ~glissrange * 2);
			sig = SinOsc.ar(lfp, mul: 0.005);
			env = EnvGen.kr(Env.circle([0.0, 0.9, 0.89, 0.0], [0.0, ~sinespeed, 0.0, ~noisespeed]));
			sig = sig * env;
			Out.ar([0, 1], sig);
		}).add;
		~noiseburst = SynthDef(\noiseburst, {
			var sig, env;
			sig = WhiteNoise.ar(0.7);
			env = EnvGen.kr(Env(levels: [0, 0.8, 0.79, 0], times: [0.0, ~noisespeed, 0.0]), doneAction: 2);
			sig = sig * env;
			Out.ar([0, 1], sig);
		}).add;
		0.1.wait;
}}.fork;
)

//play the sequence of glissing sine tone and regular noise bursts where the sine tone is gated
(
~sinegliss.play;
~regular_masking = {
	inf.do{
		~sinespeed.wait;
		~noiseburst.play;
		~noisespeed.wait;
}}.fork;
)

//stop the sequence of glissing sine tone and regular noise bursts where the sine tone is gated
(
~regular_masking.stop;
s.freeAll;
~illusion_regular.stop;
)

//play only the gated, glissandoing sine tone
~sinegliss.play;

//free all nodes to stop the glissing sine tone
s.freeAll;

//change the length of the noise bursts, length of the sine tones, rate of glissando, and range of glissando
(
~noisespeed = 0.1;
~sinespeed = 1.0;
~glissrange = 300;
~glissrate = 2.0;
)

//replay the sequence of glissing sine tone and regular noise bursts where the sine tone is gated
(
~sinegliss.play;
~regular_masking = {
	inf.do{
		~sinespeed.wait;
		~noiseburst.play;
		~noisespeed.wait;
}}.fork;
)

//stop the sequence of glissing sine tone and regular noise bursts where the sine tone is gated
(
~regular_masking.stop;
s.freeAll;
~illusion_regular.stop;
)
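//a hedged aside, not from the original: audition a single noise burst on its own (with the sine
//tone silent) to confirm that any tone heard "through" the bursts above is illusory
~noiseburst.play;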
//end example

/*
pages 29-31 (figure 1.16)

"the perceptual decomposition of simultaneous sounds...a repeating cycle formed by a pure tone A alternating with a complex tone that has two pure-tone components, B and C."

Does one hear the sequence as a repetition of tones A and B, accompanied by the pure (not complex) tone C? Or, does one hear the sequence as tone A accompanied by the single complex tone, B and C? How do parameters like frequency, synchrony, and speed change this?
*/

//set and load synthDefs for A, B, and C
(
~speed = 1.0;
~offset = 0.0;
~freqA = 900; //set frequency of A here
~freqB = 600; //set frequency of B here
~freqC = 200; //set frequency of C here
~try = {
	inf.do{
		0.1.wait;
		~sigA = SynthDef(\sigA, {
			arg onset = 0.05;
			var sigA, env;
			sigA = SinOsc.ar(~freqA, mul: 0.2);
			env = EnvGen.kr(Env.perc(onset, releaseTime: 0.3), doneAction: 2);
			sigA = sigA * env;
			Out.ar([0, 1], sigA);
		}).add;
		~sigB = SynthDef(\sigB, {
			arg onset = 0.05;
			var sigB, env;
			sigB = SinOsc.ar(~freqB, mul: 0.2);
			env = EnvGen.kr(Env.perc(onset, releaseTime: 0.3), doneAction: 2);
			sigB = sigB * env;
			Out.ar([0, 1], sigB);
		}).add;
		~sigC = SynthDef(\sigC, {
			arg onset = 0.05;
			var sigC, env;
			sigC = SinOsc.ar(~freqC, mul: 0.2);
			env = EnvGen.kr(Env.perc(onset, releaseTime: 0.3), doneAction: 2);
			sigC = sigC * env;
			Out.ar([0, 1], sigC);
		}).add;
		0.1.wait;
	}
}.fork;
)

//run tones A, B, and C in a sequence
(
~a_b_c_sequence = {
	inf.do{
		'\sigA'.postln;
		~sigA.play;
		~speed.wait;
		'\sigB'.postln;
		~sigB.play;
		~offset.wait;
		'\sigC'.postln;
		~sigC.play;
		~speed.wait;
	};
}.fork;
)

//change the default frequencies, speed, and B-to-C onset offset for A, B, and C
(
~speed = 3.0;
~offset = 0.5;
~freqA = 1200;
~freqB = 400;
~freqC = 200;
)

//stop the sequence of A, B, and C and the loading routine
(
~a_b_c_sequence.stop;
~try.stop;
)

//end example

//note: the examples that require an infant in a room have been omitted

s.quit;
//end chapter one