
ARCH 21ST-03 SPACES FOR MEDIATION
Land Acknowledgement Journal and Final Drawing
042723

This course was mostly research-based: we were each asked to choose a piece of land to acknowledge and research over the course of the semester. I chose Bushwick, NY. I care about this land because of the music and culture located in that area today. During my research I found that the Lenape Indigenous tribe lived on that land first. Sound became a huge part of my research because it was an allied and familiar medium through which to study both the Lenape people and the present-day techno and music scene.

I then began drawing connections in sound between the Lenape people and the current inhabitants of Bushwick. The rhythm and minimalism of the heartbeat-rate drumming and vocal use in the indigenous songs of the Lenape people relate directly to techno. The ritual practice of performance and dance to simple repeating rhythmic beats and vocal drones directly parallels the ritual practice of attending raves and listening to techno. Both are celebrations: the Lenape people are celebrating the land and the environment, and I am celebrating music, meditation, and ritual practice itself, in parallel with the Lenape people. My research into various forms of sonic practice for presenting this work brought me to granular synthesis because of its relation to nature, generation and generativeness, an initial state of being, and evolution over time.

Granular synthesis reminds me of grains of sand. The wind blows grains of sand across the ground to create a new portrait there; recorded audio is a trim, a cut, a sample of that moment in time.
* representing sound visually, as sand and dust
    * in relation to minimalism and minimalistic moves within composition
Granular synthesis reminds me of evolution. You start with a set of genes: that is your audio, your initial state, your first generation. As you stretch that initial audio over time, the longer it is stretched, the more the audio adapts to itself and inevitably evolves into a new form of being.
Granular synthesis reminds me of raindrops in a storm.
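The stretching-over-time idea above is the core of granular time-stretching: chop the source into small windowed grains, then write them back overlapped at a wider spacing than they were read. A minimal sketch (pure Python, illustrative values, not the patch used in the performance):

```python
import math

def granular_stretch(signal, grain=256, hop=64, stretch=2.0):
    """Time-stretch by overlap-adding windowed grains.

    Grains are read from the input every `hop` samples but written to
    the output every `hop * stretch` samples, so the same material is
    spread over a longer duration: the initial state evolves.
    """
    # Hann window so overlapping grains cross-fade smoothly
    window = [0.5 - 0.5 * math.cos(2 * math.pi * i / (grain - 1))
              for i in range(grain)]
    out = [0.0] * (int(len(signal) * stretch) + grain)
    read, write = 0, 0
    while read + grain <= len(signal):
        for i in range(grain):
            out[write + i] += signal[read + i] * window[i]
        read += hop                   # step through the source...
        write += int(hop * stretch)   # ...but write further apart
    return out
```

With `stretch=2.0` the output is roughly twice as long as the input while each individual grain keeps its original pitch and texture.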

I presented my work as an installation and sonic performance in which I placed real grass between all the electronic musical instruments (my modular synthesizer and speakers) and the ground, creating a filter between the electronic and the organic: a filter between the new (grass as regrowth and generativeness) and the land itself. I processed and manipulated existing authentic Lenape music using Teletype, a live-coding Eurorack module with its own syntax: voltage followed from the amplitude of the incoming Lenape songs was translated into binary, whose values were converted to digital pulses and slewed voltages that triggered and manipulated the granular algorithm's controls. This performance was not only a presentation of my land acknowledgement and research but a ritual practice in itself, and a representation of the history of music culture through creative computation.
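The amplitude-to-binary-to-pulses chain described above can be sketched outside of Teletype. A hypothetical illustration (names and bit depth are my own, not the performance patch): quantize an envelope-followed amplitude to a small integer, then fire a trigger whenever a chosen bit of that integer is set.

```python
def env_to_triggers(envelope, bits=4, bit=0):
    """Quantize an amplitude envelope to small integers and emit a
    trigger whenever a chosen bit of the quantized value is set,
    loosely mirroring Teletype's BGET-style bit tests."""
    peak = max(envelope) or 1.0
    triggers = []
    for v in envelope:
        q = int(v / peak * (2 ** bits - 1))   # e.g. 0..15 for 4 bits
        triggers.append((q >> bit) & 1)       # test one bit
    return triggers
```

Different bit indices give different, related rhythms from the same source amplitude, which is what lets one incoming song drive several granular controls at once.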

According to Chance RISD Wintersession 2023
Digital Media

Init State (Binaural)



All sound sources in this work are composed of simple waveforms and white noise. The piece was performed on February 8th in the Digital Media building at RISD, in second-order ambisonics on an 8-channel Genelec speaker array using Reaper, the IEM plug-in suite, and a Korg nanoKONTROL. All recordings of the piece are binaural, for stereo listening. A Monome Teletype provided all binary pattern and rhythm generation, and a Grid was used as an interface for controlling various Teletype operators.

Teletype Code:

#1
I2M.N# 1 36 127

#2
I2M.N# 2 37 127

#3
I2M.N# 3 38 127

#4
I2M.N# 4 39 127
I2M.N# 4 43 127

#5
DEL RND 2000: $ ? B 8 7
L 1 4: $ 6
L 1 4: TR.TIME I RRND 20 120
PROB 80: BRK
L 1 4: CV.SLEW I 800
L 1 4: CV I V RND 10

#6

#7
CROW.AR 2 0 / LAST 5 J V 10
J + 1 G.FDR.N 3
PROB 80: I2M.N# 6 63 127

#8
CROW.AR 1 0 / LAST 5 J V 10
J + 1 G.FDR.N 2
PROB 40: I2M.N# 6 41 127

#M
M + J / BPM PRM 4; T + T 1
A SCL IN 0 V 10 0 15
L 1 4: TR.P ? BGET A I I 0
B BGET A % T + 1 G.FDR.N 1
L 1 4: $ ? BGET A I I 0

#I
M.ACT 1; PARAM.SCALE 20 300
G.FDX 0 0 0 1 8 3 0 0 4 1
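The M script's rhythm idea (SCL IN 0 V 10 0 15 into BGET, driving TR.P) is: scale a control voltage to a 4-bit word, then read that word's bits as four trigger gates. A minimal Python sketch of just that idea, with illustrative names:

```python
def binary_rhythm(value, steps=4):
    """Read the bits of `value` (0..15 for 4 steps) as a trigger
    pattern: trigger i fires when bit i of the scaled value is set,
    as the M script does with SCL and BGET."""
    return [(value >> i) & 1 for i in range(steps)]
```

Turning one knob therefore sweeps through sixteen related four-voice rhythms rather than sixteen unrelated ones, which is what makes the binary patterning musical.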




Grid Ops Binary Left Shift Rotating




Interactive performance for According to Chance Wintersession at RISD
Jan 2023




Teletype Code:

#1
EV + 1 G.FDR.N 0: I2M.N# J K A
J PN * B 2 0; D ? PN B 0 3 0
B G.BTN.V 8
C * B 2

#2
EV + 1 G.FDR.N 0: I2M.N# J K A
J PN * B 2 1; K PN ? B 0 1 1
IF < Z 4: CV 1 V 5
DEL * LAST 2 2: CV 1 V 0

#3
EV + 1 G.FDR.N 0: I2M.N# J K A
J PN * B 2 2; K PN ? B 0 1 2
IF < Z 4: CV 2 V 5
DEL * LAST 3 2: CV 2 V 0
CV 3 N + 65 % T G.FDR.N 11
CROW.AR 1 0 / LAST 1 4 V 5

#4
EV + 1 G.FDR.N 0: I2M.N# J K A
J PN * B 2 3; K PN ? B 0 1 3
CV 4 N + 36 % T 9
CV.OFF 4 N + 12 G.FDR.N 13
CROW.AR 2 0 / LAST RND 2 4 V 5

#5
EV + 1 G.FDR.N 0: I2M.N# J K A
J PN * B 2 4; K PN ? B 0 1 4

#6
EV + 1 G.FDR.N 0: I2M.N# J K A
J PN * B 2 5; K PN ? B 0 1 5

#7
EV + 1 G.FDR.N 0: I2M.N# J K A
J PN * B 2 6; K PN ? B 0 1 6
Z G.FDR.N 14

#8
EV + 1 G.FDR.N 0: I2M.N# J K A
J PN * B 2 7; K PN ? B 0 1 7

#M
T + T 1; X G.FDR.N 11; CV 4 O
M + + 20 * G.FDR.N 9 20 J
J << G.FDR.N 10 RND Z
Y BGET X % T G.FDR.N + 12 K
TR.P ? Y ? Z 1 4 ? TOSS 2 3
K RND * G.FDR.N 13 4; $ 7

#I
G.FDX 0 0 0 4 1 0 0 0 1 8
A 127; T 0; M.ACT 1; $ 1
G.BTN 8 4 0 1 1 1 0 1
G.FDR 9 4 1 1 7 1 0 0
G.FDX 10 5 0 1 8 1 0 0 5 1
PARAM.SCALE 0 10

#P
8    8    8    8
1    1    1    1
0    0    0    0
63    63    63    63

10    36    1    36
10    48    2    37
10    41    3    38
10    58    4    39
10    40    5    40
10    49    6    41
10    42    7    42
10    39    8    43



Untitled



Performance for Of Sound and Vision Wintersession in collaboration with Haram Lee
RISD
Jan 2022



“As planned, we made a project that incorporated most of the hardware available to us and used Max as a giant hub for all the sound to be modulated by and to pass through. By incorporating a network connection through udpsend/udpreceive, we could each control the other's patches using our own modulation sources. Max was also used for all the visuals in the performance. Femi made a patch that uses the velocity of movement seen through the webcam to control the Serge modular, and I had a visual patch that was controlled by the sounds coming from the Pulsar-23. Max was also used for live-sampling all the sounds coming from the hardware, modulating them, then resampling them, and so on. All in all, Max was used in many different ways, including but not limited to mixing sounds, sampling sounds, making visuals, providing utility for signal control/routing, connecting Femi's and my patches, and modulating/signaling sounds for the hardware.”

All sound sources come from the Eurorack modular, the Serge modular, and Elektron drum machines, synths, and samplers. All visual sources come from Max/MSP.
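Max's udpsend/udpreceive objects, used above to link the two patches, speak the OSC wire format: a null-padded address string, a type-tag string, then the arguments. A minimal packer for a one-float message, as an illustration (the address and value here are hypothetical, not from the performance):

```python
import struct

def osc_message(address, value):
    """Pack a minimal OSC message with one float argument, the wire
    format Max's udpsend/udpreceive objects exchange."""
    def pad(b):
        # OSC strings are null-terminated and padded to 4-byte bounds
        return b + b"\x00" * (4 - len(b) % 4)
    return pad(address.encode()) + pad(b",f") + struct.pack(">f", value)
```

Sending such packets over plain UDP is why two independent Max patches on one network can modulate each other with almost no glue code.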



CHARACTERS



RISD
May 2021
Performance for Sound Synthesis Spring Semester



I have been really interested in creating immersive and complex textures with the Serge, and I wanted to focus on this idea of maximality by using feedback paths and cross-modulation of signals (feeding back a wave multiplier, or cross-patching oscillators through their frequency-modulation inputs) instead of a random source such as noise or sample-and-hold, to have more control over the patch overall. This effectively let all of the individual sounds, or characters, communicate in an easier and more efficient way, allowing a balance between chaos and each sound's primary source. I also wanted to give every individual sound its own space, so that the patch would not be so dense.

For my final project I wanted to focus on the idea of characters and on synthesizing organic, controlled randomness. I used two CRT TVs as a visual representation of these characters: one visualizes the Serge analog synthesizer and the other visualizes SuperCollider, to show the communication between analog and digital. I performed this patch live because I felt it was important to interact with these characters in real time, in an improvisational way.
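The cross-modulation idea above, two oscillators each steering the other's frequency instead of being steered by a noise source, can be sketched digitally. A toy illustration with made-up values (not a model of the Serge patch):

```python
import math

def cross_fm(steps=1000, sr=8000.0, f1=110.0, f2=113.0, depth=50.0):
    """Two sine oscillators whose phase increments include the other
    oscillator's previous output (cross-patched FM): complex, chaotic
    but fully deterministic motion with no random source."""
    p1 = p2 = 0.0
    y1 = y2 = 0.0
    out = []
    for _ in range(steps):
        p1 += 2 * math.pi * (f1 + depth * y2) / sr  # osc 1 bent by osc 2
        p2 += 2 * math.pi * (f2 + depth * y1) / sr  # osc 2 bent by osc 1
        y1, y2 = math.sin(p1), math.sin(p2)
        out.append(y1)
    return out
```

Because the modulation source is the other voice rather than noise, the chaos stays correlated with the material, which is the "controlled random" quality described above.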



HYPERENCODER





RISD
May 2021



Performance for Spatial Audio Spring Semester

In this work I experiment with mapping synthesized sounds into an imaginary, impossible space, using and abusing 3rd-order ambisonic spatialization techniques. I wanted to focus on the conversation between organic and synthesized sounds, and on creating the organic from the synthetic. All sounds in this piece were synthesized. The piece was originally performed on the 25.4 hemispherical speaker array at RISD; the performance was recorded on 24 channels and decoded to stereo binaural.
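Ambisonic panning encodes a mono source into spherical-harmonic channels rather than speaker feeds; the piece used 3rd-order tools, but the idea is clearest at first order. A sketch of a horizontal first-order B-format encode (W, X, Y), for illustration only:

```python
import math

def encode_fo(sample, azimuth):
    """First-order horizontal B-format encode of a mono sample at a
    given azimuth (radians): W is omnidirectional (conventionally
    scaled by 1/sqrt(2)), X and Y are figure-eight components."""
    w = sample / math.sqrt(2)
    x = sample * math.cos(azimuth)
    y = sample * math.sin(azimuth)
    return (w, x, y)
```

Because position lives in these channel weights rather than in any speaker layout, the same encoded scene decodes to a hemisphere array, to 8 channels, or to binaural, which is how one performance yields both the 24-channel and the stereo versions.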

Click HERE for the album attached to this project.


Hardware:
Modular synth, 25.4 speaker array.

Software: Orca, Cassetter, Reaper, Max MSP
Click HERE for the full performance.