
Virtual Reality Digital Audio Workspace

I will be joining the VR DAW project in Fall 2019 with Vincent Olivieri and Theresa Jean Tanenbaum. Watch this space for updates on my contributions.

Of Music and Lasers

Oct 2017

The laser is reflected off a mirror that is affixed to the skin of a balloon. The balloon is stretched over a bowl that contains a speaker playing the ensemble's live audio.


Performed by the Creative Practices ensemble at the University of California, Irvine under the direction of Nicole Mitchell.

Christian Darais

Ryan Miller

Austin Lopez

Tyler Shelton

New Music Controllers Workshop

Fall 2017

I had the privilege of instructing the New Music Controllers Workshop (a semester course) at BYU. The curriculum included the Arduino framework, basic programming skills, continuous and discrete control, and Max patch design. Students were required to produce their own instruments, which they demonstrated at the end of the semester.

Tests were run with several different plate materials; this is how the interesting sonic properties of the metal plates were discovered.

This is a very simple construction of a Chladni plate. A modified PVC cap is fixed to a long bolt that supports the metal plate. The PVC cap is then fixed to a speaker. Future experiments will include audio feedback and feeding more complex audio signals through the plate.
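To give a sense of how such signals might be generated, here is a minimal Python sketch for driving the plate's speaker with pure tones. It is not part of the project itself; the numpy and sounddevice packages and the listed frequencies are my own assumptions for illustration.

# Hypothetical tone generator for exciting the plate; frequencies are placeholders.
import numpy as np
import sounddevice as sd

SAMPLE_RATE = 44100  # samples per second

def drive_plate(frequency_hz, duration_s=5.0, amplitude=0.5):
    """Play a sine tone through the default output (wired to the plate's speaker)."""
    t = np.arange(int(SAMPLE_RATE * duration_s)) / SAMPLE_RATE
    tone = amplitude * np.sin(2 * np.pi * frequency_hz * t)
    sd.play(tone, SAMPLE_RATE)
    sd.wait()  # block until the tone finishes playing

# Step through a few frequencies and watch which ones settle the sand into patterns.
for f in (120, 345, 680, 1030):
    drive_plate(f)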

The Chladni Plate

From Harvard Natural Sciences:

"A Chladni plate consists of a flat sheet of metal, usually circular or square, mounted on a central stalk to a sturdy base. When the plate is oscillating in a particular mode of vibration, the nodes and antinodes that are set up form complex but symmetrical patterns over its surface. The positions of these nodes and antinodes can be seen by sprinkling sand upon the plates; the sand will vibrate away from the antinodes and gather at the nodes."

Audio Dome

July 2016

The AudioDome is an Arduino device that uses five photocells and a piezo sensor.

All sensors send continuous information, though the piezo is intended for discrete events (threshold trigger).
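For a rough idea of how the sensor data can be handled on the computer side, here is a small Python sketch using pyserial. The port name, baud rate, comma-separated line format, and threshold are all assumptions for illustration, not details of the actual AudioDome firmware.

# Hypothetical host-side reader; assumes the Arduino prints
# "p1,p2,p3,p4,p5,piezo" as integers on each line at 9600 baud.
import serial

PORT = "/dev/ttyUSB0"   # placeholder; depends on the machine
PIEZO_THRESHOLD = 200   # placeholder; tune to the sensor

with serial.Serial(PORT, 9600, timeout=1) as ser:
    while True:
        line = ser.readline().decode(errors="ignore").strip()
        if not line:
            continue
        try:
            *photocells, piezo = [int(v) for v in line.split(",")]
        except ValueError:
            continue  # skip malformed lines
        # Photocells: continuous controls (scale 0-1023 readings to 0-127).
        cc_values = [int(v * 127 / 1023) for v in photocells]
        # Piezo: also continuous, but used as a discrete threshold trigger.
        if piezo > PIEZO_THRESHOLD:
            print("trigger!", cc_values)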

The AudioDome was built during the New Music Controllers workshop at CCRMA.


Does a CADP environment change the way we think about performing music?


   The question frequently presented to participants of the CADP seminar is, “How does a cross-adaptive environment change the way we think when performing music?” I maintain that this question, while in no way nefarious, is misguided in its internalist assumptions regarding skill acquisition.

Skill acquisition is commonly modelled as the process of intellectually amassing specific instructions and rules. Mentally represented actions are then played out physically according to the mind’s design. However, this internalist model breaks down under the scrutiny of modern science and philosophy regarding the embodied, external, and enactive nature of cognition, which invalidates the Cartesian dualism of mind and body. Hubert Dreyfus explains in detail an externalist model for skill acquisition. Mastery is not attained through the enumeration and recall of highly specific rules and experiences. Rather, it arises through generalized embodied interactions and experiences. The body attunes itself to a task through the drive to attain a maximum grip on it. While initial instructions and discussions can greatly aid in the learning process, it is ultimately through attunement via bodily experience that skill is acquired. The intellectualization of these skills is reflective, rather than premeditative.


    Brandtsegg discusses a similar view of skill acquisition in terms of flow or automation:


“There is…a common concept among musicians, [describing] when the music flows so easily [it is] as if the instrument is playing itself. Being in the groove, in flow, transcendent, totally in the moment, or other descriptions may apply.  One might argue that this phenomenon is also [the] result of training, muscle memory, gut reaction, instinct. These are in some ways automatic processes. Any fast human reaction relies in some aspect on a learned response, processing a truly unexpected event takes several hundred milliseconds. Even if it is not automated to the same degree as a delay effect, we can say that there is not a clean division between automated and contemplated responses. We could probably delve deep into psychology to investigate this matter in detail, but for our current purposes it is sufficient to say automation is there to some degree at this level of human performance as well as in the instrument itself.”


    The phenomenologist Aron Gurwitsch concisely describes the process:


“What is imposed on us to do is not determined by us as someone standing outside the situation simply looking on at it; what occurs and is imposed are rather prescribed by the situation and its own structure; and we do more and greater justice to it the more we let ourselves be guided by it, i.e., the less reserved we are in immersing ourselves in it and subordinating ourselves to it. We find ourselves in a situation and are interwoven with it, encompassed by it, indeed just ‘absorbed’ into it”.


    The question, “How does a cross-adaptive environment change the way we think when performing music?” assumes musical performance to be the hylomorphic act of mentally representing cross-adaptive relationships, planning actions, and then physically enacting represented actions onto physical material. Here I propose a different question that allows for an externalist model of skill acquisition: How does a cross-adaptive environment change the way we perform? This question allows for reflective mental representation, not premeditative. It favors the accomplished action and the acquired skill, while avoiding the pitfalls of internalist cognitivism.

    Presumably, answers to the question in its initial form would look something like: my thoughts are x, y, and z when performing in a cross-adaptive environment. These answers are entirely reflective, though they pose as having been known in the moment. Dreyfus recounts a study performed by the Air Force that revealed major discontinuities between flight instructors’ self-analysis of their techniques and the reality of those techniques:


“Air Force psychologists studied the eye movements of the instructors during simulated flight and found, to everyone's surprise, that the instructor pilots were not following the rule they were teaching, in fact their eye movements varied from situation to situation and did not seem to follow any rule at all. They were presumably responding to changing situational solicitations that showed up for them in the instrument panel thanks to their past experience. The instructor pilots had no idea of the way they were scanning their instruments and so could not have entertained the goal of scanning the instruments in that order.”


    How useful is it, then, to recount what we assume to have been our thoughts and techniques while performing music, if intellectual reflection on skill is uncertain and likely inaccurate? Is it more useful to ask the suggested question: How does a cross-adaptive environment change the way we perform? This question opens the present research to effective analysis, including the review of recordings and both qualitative and quantitative assessment of changes in performance behavior. The result is accurate and meaningful analysis; analysis that does not subject itself to the inconsistencies of time-distorted reflection.



An Externalist Process for Developing Cross-adaptive MIDI Environments



    Ligeti’s CADP seminar exposed participants to new compositional and performative territories. Given that participants had no prior experience with intentionally cross-adaptive environments, it comes as no surprise that composing for one was difficult and uncertain. When composing, there is a strong impulse to intellectualize every aspect of cross-adaptivity. Acting as performer of the same composition only increases the need to obtain a maximum grip on the environment. Attempting to obtain this grip mentally, without the physical, embodied experience of interacting with the imagined environment, can only result in stressful, fruitless hours of work that produce less interesting writing and composing. Attempting a cross-adaptive composition without physically experiencing the intended MIDI relationships is akin to learning how to ride a bicycle by reading a book. A person cannot imagine what they have not experienced (I make this statement tersely, but qualify it by stating that the sublime is still accounted for in the externalist model of cognition). This is why it is important to set up, and attempt to interact with, sound and hardware as early in the cross-adaptive writing process as possible. Composing a piece becomes clear and present after interacting with the physicality of the thing.

The need for simplicity in a cross-adaptive environment cannot be overstated. There are certainly circumstances where higher complexity is desirable, but in general, over-complicating a cross-adaptive environment results in less interesting performance and composition. Brandtsegg addresses this:


“With complex crossadaptive mappings, the intellectual load of just remembering all connections can override any impulsive musical incentive. Now, after doing this on some occasions, I begin to see that as a general method perhaps this is not the best way to do it.”


    Put this way, it is apparent that more interesting music would likely embody a performer’s impulsive musical incentives. Too many cross-adaptive relationships, according to Brandtsegg, will impede these incentives and dull the work. Compositions that arose from the CADP seminar appear to support this analysis. The interest and meaningfulness of a piece, it seems, sprout from simple relationships and support intentionality without representation.


Future


    Both Brandtsegg’s and Ligeti’s cross-adaptive projects unearth interesting areas of performance research and skillful action. To continue this discovery, it is vital to establish and maintain a common definition and understanding of the term cross-adaptive. Increased understanding of the process needs to come through enactive interaction and physical experience with cross-adaptive environments. Only then can accurate, empirically grounded analysis aid in the dissemination of the knowledge gained through the cross-adaptive experience.


Figure 1: cadp.MIDI_Energy.amxd captures continuous behavior and maps it to a single value at the composer/performer's discretion. Device created by Kevin Anthony, 2019.

Figure 2: cadp.LogicGate.amxd and cadp.LogicListener.amxd allow for the creation of standard or custom logic gates which open or close a MIDI stream on an Ableton Live MIDI track based on the presence of MIDI sensed by listeners. Device created by Omar Costa Hamido and Kevin Anthony, 2019.
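As a rough illustration of the LogicGate/LogicListener idea (in plain Python rather than Max for Live, with names of my own choosing), the sketch below treats each listener as a boolean "MIDI present" state and passes a track's notes through only when a chosen logic function over those states is true.

# Toy model of gating a MIDI note stream with a logic function over listener states.
def and_gate(states):
    return all(states.values())

def or_gate(states):
    return any(states.values())

def gate_stream(notes, listener_states, logic):
    """Pass the note stream through only while the gate condition holds."""
    return notes if logic(listener_states) else []

# Example: open this track only while both listeners sense incoming MIDI.
states = {"listener_A": True, "listener_B": False}
print(gate_stream([60, 64, 67], states, and_gate))  # -> [] (gate is closed)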

Cross Adaptive Data Processing with Lukas Ligeti

2018 - 2019

The Cross Adaptive Processing as Musical Intervention project led by Øyvind Brandtsegg, which ran from 2016 to 2018, used “digital audio analysis and processing techniques...to enable [sonic] features of one sound to inform the processing of another.” The Cross-adaptive Data Processing (CADP) seminar led by Lukas Ligeti (University of California, Irvine, 2019) adapts this concept for use with MIDI data rather than audio signal analysis and processing. This writing will discuss the definition and use of the term cross-adaptive, analyze the translation of cross-adaptivity into the MIDI domain, and propose an externalist paradigm for describing the cross-adaptive performer experience.


Cross-adaptive as a Term


   Determining whether audio signal processes are being cross-adapted proves to be a relatively easy task, but it is not free of complications, and the term itself is not entirely defined. A common understanding of what is meant by cross-adaptive is imperative for this discussion. Currently, the term denotes a breadth of environments, and there remains little consensus among peers on its definition, particularly in the general sense of the term.

   The goal of a cross-adaptive environment when dealing with audio signal data is to allow one audio signal to alter or control another. The most superficial way to create this alteration or control is to add a different, simultaneous sound arbitrarily. Those familiar with cross-adaptive methods will readily disagree with identifying two simultaneous sounds as cross-adapting one another, arguing that the two sounds are not informing each other in any clear way. It may be asked, do additively combined sounds qualify as cross-adaptive? Considering Brandtsegg's above definition of cross-adaptivity, do the sonic features of one sound inform the processing of the other when they sound simultaneously? When it comes to additive relationships in a direct sense, yes. The totality of the sonic features of one sound informs the complete processing of the other. When the sample data or waveform of the additive, resultant sound is analyzed, it is clear that a complete transformation has occurred. Why, then, is there no perceived influence of one sound on the other from the human perspective, and why is there a shared hesitancy to label additive transformations as cross-adaptive? Possible answers may require a full discussion of psychoacoustics and the human experience, which will not be attempted here. Suffice it to say that the sensori-audio experiences of the human body allow for parsing additive sonic transformations without cognitive reflection.

   This, together with the hesitancy that surfaces when attempting to qualify additive sonic transformations as cross-adaptive, helps to further clarify what is meant by the term. It reveals a gradient of cross-adaptivity, with additive transformations (e.g. two simultaneous sounds) inhabiting the infinitely broad limit of the term, and fragmented, psychoacoustically ambiguous sonic feature-control (e.g. the spectral flux of one audio signal controlling the reverberation gain of another) inhabiting the readily specifiable limit. Notice the inverse relationship between the true amount of cross-adaptivity and the clarity of cross-adaptive presence: as the number of influential sonic features narrows, and as the influential features themselves narrow, the clarity of a cross-adaptive presence increases.

   Thus, describing a specific relationship as only being either cross-adaptive or not cross-adaptive results in low-resolution analyses of the circumstantial realities. A more accurate approach is to describe the cross-adaptivity of the thing in terms of levels, amounts, and presence. This distinction is vital in describing a translation of Brandtsegg’s methods from audio signal relationships to MIDI signal relationships.

Translating Cross-adaptivity to the MIDI Domain

   Consider a two-performer environment. Performer-A is improvising on a MIDI controller using arbitrary, pitched samples in a sampler. Performer-B is transposing Performer-A’s pre-sampler MIDI note numbers with a MIDI slider. The question could be asked, “Is this cross-adaptive?” Let us attempt to apply a translation (to MIDI) of Brandtsegg’s definition. Do the MIDI features of one MIDI stream inform the MIDI processing of another? Similar to the additive audio example explained previously, Performer-B’s control might be said to be entirely cross-adapting, as their MIDI stream is entirely informing the MIDI processing of Performer-A. A hesitancy arises, however, when attempting to call this environment cross-adaptive. Performer-B can be said simply to have direct control over Performer-A’s MIDI stream without any cross-adaptiveness present. The reason for this hesitancy is clear when taking into account the inverse relationship between 1) the true amount of an environment’s cross-adaptivity and 2) the clarity of cross-adaptive presence. Because Performer-B is directly controlling Performer-A’s MIDI stream, there appears to be little or no cross-adaptivity present. How, then, does one create an inherently cross-adaptive MIDI environment?
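To make the example concrete, here is a toy Python rendering of the setup (my own illustration; the mapping range and function names are assumptions): Performer-B's CC slider value is converted to a semitone offset that is applied to Performer-A's note numbers before they reach the sampler.

# Hypothetical mapping of a 0-127 CC value to a transposition of Performer-A's notes.
def cc_to_semitones(cc_value, max_shift=12):
    """Map 0-127 to a shift of -max_shift..+max_shift semitones."""
    return round((cc_value / 127) * 2 * max_shift) - max_shift

def transpose(note_numbers, cc_value):
    shift = cc_to_semitones(cc_value)
    return [min(127, max(0, n + shift)) for n in note_numbers]

# Performer-A plays a C major triad; Performer-B's slider sits near the top.
print(transpose([60, 64, 67], cc_value=117))  # -> [70, 74, 77], ten semitones up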

   In the sonic domain, cross-adaptive environments are created using feature extraction. This allows a sonic behavior to be captured and repurposed, in some sense, as a control. Clarity of this sonic cross-adaptive presence increases by having highly specified sonic features control highly apparent sonic results. Mimicking this in the MIDI domain requires similar techniques. Feature extraction for MIDI data, however, varies greatly from that of audio data (which has a strong precedent). Current research regarding “feature extraction” from MIDI data is focused on methods and applications for machine learning. As such, there are existing methodologies for extracting features like key signature, chord, chord progression, tempo, etc. These features may certainly be repurposed as controls for processing MIDI data, but they are not sensitive to all genres and playing styles. Ligeti’s CADP seminar necessitates features conducive to more improvisatory playing styles that do not always adhere to assumed MIDI functionality. As such, the aforementioned MIDI features do not lend themselves well to the project. Instead, the CADP project uses several Max for Live devices which extract a range of highly specified to moderately generalized MIDI features for use as controls. Techniques in these devices include extracting a continuous value (0-127) based on note frequency (how many incoming MIDI notes arrive per stretch of time), note velocity, note number, or the length of held MIDI notes (see Fig. 1). Discrete events are also extracted, which serve as event triggers. These include monitoring for specific note numbers, generalized monitoring of MIDI presence, and monitoring/filtering of MIDI presence within a specified range (see Fig. 2).
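As a rough sketch of one of these continuous extractions, here is the note-frequency idea in plain Python rather than Max for Live. The window length and note ceiling are assumptions, not values taken from the cadp devices.

# Hypothetical note-density extractor: counts recent note-ons and scales to 0-127.
import time
from collections import deque

class NoteDensity:
    def __init__(self, window_s=2.0, max_notes=20):
        self.window_s = window_s    # how far back to count note-ons (seconds)
        self.max_notes = max_notes  # note count that maps to the full value of 127
        self.onsets = deque()

    def note_on(self, timestamp=None):
        """Register a note-on and return the current 0-127 density value."""
        now = time.time() if timestamp is None else timestamp
        self.onsets.append(now)
        # Drop onsets that have fallen out of the window.
        while self.onsets and now - self.onsets[0] > self.window_s:
            self.onsets.popleft()
        return min(127, int(len(self.onsets) / self.max_notes * 127))

# Each incoming MIDI note-on would call tracker.note_on(); the returned value
# can then be routed, like a CC, to whatever parameter the performer chooses.
tracker = NoteDensity()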

   It is worth pointing out that the concepts of feature extraction and of cross-adaptivity as a spectrum are only articulated reflectively. During the development and research process for the CADP seminar, explorations were driven by intuition rather than intellectualized methods. This is highly appropriate, even preferable, given the explorative nature of the seminar. Participants in the seminar and its development phase began with a very rudimentary understanding of what a cross-adaptive environment is, whether MIDI or sonic. It is only through attempting casual performance and skilled interaction with the developed Max for Live devices that a more internalized understanding of exemplary cross-adaptivity in the MIDI domain calcifies.



What is next?

Although SoundPainter is performance-capable in its current state, I will be making it a stand-alone application.


Check back in 2020 for updates!

Max Patch

The SoundPainter Max project (download demo files) receives OSC messages from KinectV2-OSC through Matthew Web's Python script. These messages allow Max to track the location of several body points (x, y, z coordinates in meters). This system uses only the left hand.
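For readers curious how such messages can be received outside of Max, here is a minimal Python sketch using the python-osc package. The address pattern and argument layout are placeholders; the actual messages depend on KinectV2-OSC and the routing script, but each relevant message carries a joint's x, y, z position in meters.

# Minimal OSC receiver; "/lefthand" and port 9000 are placeholder values.
from pythonosc import dispatcher, osc_server

left_hand = [0.0, 0.0, 0.0]

def on_left_hand(address, x, y, z, *rest):
    """Store the most recent left-hand position (meters)."""
    left_hand[:] = [x, y, z]

disp = dispatcher.Dispatcher()
disp.map("/lefthand", on_left_hand)

server = osc_server.ThreadingOSCUDPServer(("127.0.0.1", 9000), disp)
server.serve_forever()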

When the record state is triggered, a function plots a series of points along the motion path made by the left hand. A sound clip from the audio source is also recorded.

This sound clip is then saved in a file path determined by a Max-for-Live granulator.

After the path and audio clip are both recorded, the patch enters a playback state. While in this state, the patch continuously outputs the point on the motion path closest to the left hand as a MIDI CC value, which is mapped to a corresponding location on the granulator in Ableton Live.

Volume is also ramped by proximity to the closest point.
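A simplified Python sketch of this playback logic follows. It assumes the recorded path is stored as a list of (x, y, z) points; the maximum distance used for the volume ramp is an arbitrary value of my own.

# Find the recorded path point nearest the hand, report its position along the
# path as a 0-127 value, and ramp volume down with distance.
import math

def closest_point(path, hand):
    """Return (index, distance) of the path point nearest the hand position."""
    best_i, best_d = 0, float("inf")
    for i, p in enumerate(path):
        d = math.dist(p, hand)
        if d < best_d:
            best_i, best_d = i, d
    return best_i, best_d

def playback_values(path, hand, max_distance=0.5):
    i, d = closest_point(path, hand)
    cc = int(i / max(1, len(path) - 1) * 127)   # position along the recorded path
    volume = max(0.0, 1.0 - d / max_distance)   # ramp down as the hand moves away
    return cc, volume

path = [(0.0, 1.2, 2.0), (0.1, 1.3, 2.0), (0.2, 1.4, 2.1)]
print(playback_values(path, hand=(0.12, 1.31, 2.02)))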

DOWNLOAD PROJECT DEMO 

This Max project is NOT in release condition, though anyone is free to use or reference the files.

Software

The entire system comprises four applications:

KinectV2-OSC by Andrew McWilliams.

Max 7 by Cycling 74.

Ableton Live by Ableton.


A Python script for OSC routing written by Matthew Web (I originally used OSCRouter by ETC Labs, but after an OS update the data began to lag by several seconds).

How does it work?

SoundPainter begins with simple input from the Kinect and an audio signal.


The Kinect requires a USB 3 port. An i7 processor is recommended. 


A wireless microphone connected to a simple audio interface allows the performer to record vocal samples, though the audio signal can come from any source.

Sound Painter

2015-2016

What is SoundPainter?

SoundPainter is a composition-instrument.


The system allows a performer to place audio in a three-dimensional space, and then explore the sound with their hand. 


In the summer of 2015, I became obsessed with the idea of molding sound in a physical space. Several months later, SoundPainter had become a functional system stable enough for live performance.


It was chosen to represent the BYU Graduate School of Music at the BYU Grad Expo. SoundPainter was voted among the top 5 displays. 

Beat Detection

June 2019

An experimental approach to beat detection using Cycling 74's Max software. 


We take a pseudo-Fourier-transform approach to detecting the implied tempo of a tabla performance.
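The sketch below gives a rough sense of the idea in numpy rather than Max (the tempo range, step size, and example onsets are my own placeholders): detected onset times are treated as an impulse train, a Fourier-style magnitude is evaluated at each candidate tempo, and the tempo whose frequency lines up most strongly with the onsets is taken as the implied beat.

# Rough tempo estimate: project an onset impulse train onto complex exponentials
# at candidate beat frequencies and keep the strongest one.
import numpy as np

def implied_tempo(onset_times_s, bpm_candidates=np.arange(60, 181, 1.0)):
    onsets = np.asarray(onset_times_s)
    best_bpm, best_mag = None, -1.0
    for bpm in bpm_candidates:
        f = bpm / 60.0  # beats per second
        mag = abs(np.exp(-2j * np.pi * f * onsets).sum())
        if mag > best_mag:
            best_bpm, best_mag = bpm, mag
    return best_bpm

# Onsets roughly every half second should favor a tempo of about 120 BPM.
print(implied_tempo([0.0, 0.51, 1.0, 1.49, 2.01, 2.5]))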
