Issue 2



Still image from Ghost Line by Jeff Snyder, adapted for virtual performance on Zoom by Ross Wightman. Performed by members of CCAMACC, 2020.


Why A Remote Laptop Ensemble?
Ross Wightman

In the week leading up to the 2020 spring break, CCAM hosted Jeff Snyder as part of my new Sound Art series. Snyder is a composer, improviser, and instrument-builder, and a leading figure in the experimental music community. I had invited him to perform with the Yale Laptop Ensemble, directed by Konrad Kaczmarek, where I am a teaching fellow. We focus on creating music via computers, combining programming, design, and synchronization into an experiential live performance. We had already rehearsed one of Snyder’s pieces, called Opposite Earth; he was the ideal participant.

Snyder is also the director of the Princeton Laptop Orchestra (PLOrk), and I suggested that he speak to the class earlier that day. He presented recordings of pieces he has written for PLOrk, including Ghost Line, which he wrote for Sideband, another laptop ensemble that he and Kaczmarek both perform in. That afternoon, we all rehearsed together. At the event, the full power of a laptop ensemble was put on display, making complete use of the Leeds Studio space. We set up OSC networking over a local router and enveloped the audience with a multichannel, spatialized speaker array. We visually augmented these sounds with multiple projectors, showing moving images of planets orbiting in concentric rings, controlled by a JavaScript conductor patch. We premiered two pieces: Condensate, written by me, and Spinning Plates, written by Kaczmarek.

After that week, Yale would not return to in-person teaching or meeting for the rest of the semester, and the Yale Laptop Ensemble would not have the opportunity to play together again in the same room. Despite this setback, we decided to figure out how to perform remotely.

Starting with PLOrk in 2005 (founded by Dan Trueman and Perry R. Cook), the ‘LOrk’ concept of laptop orchestras or ensembles was conceived as a chamber music group made up largely or exclusively of performers playing laptop computers. Compositions often ask for both computer-generated audio and visuals. They also rely on some kind of network protocol-based coordination to conduct the performers, pass audio streams between computers, or pass control information (streams of data that affect the way sound is generated in any given computer or instance of audio software). Traditionally, ensemble members are equipped with individual speakers (in the case of PLOrk, six-channel hemispherical speakers) so that each performer makes sound localized to their own position, in the same way that an acoustic instrumentalist would. Kaczmarek, who was a member of PLOrk during his graduate studies at Princeton with Trueman, brings a similar approach to his laptop ensemble course at Yale. He uses a hodgepodge of Bluetooth and portable speakers so that each member can amplify their individual sound, rather than mixing the ensemble down to a central stereo mix. Since laptop ensembles depend on software to produce sound and create instruments (typically programs such as Max/MSP, SuperCollider, and ChucK), ensemble members often fill many roles as software designers, technicians, composers, improvisers, and performers, creating work that blurs the lines between what is considered a piece of music, an instrument, or a methodology.
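To make the idea of control information concrete, here is a minimal sketch in SuperCollider (one of the environments named above), not code from any particular ensemble: a conductor machine sets a player's pitch over the local router. The address, port, and message name are hypothetical.

```supercollider
// Player's laptop: a simple voice whose pitch is set remotely
// by incoming OSC control data.
(
s.waitForBoot {
    ~voice = { |freq = 220| SinOsc.ar(freq, 0, 0.1) ! 2 }.play;
    OSCdef(\pitch, { |msg|
        ~voice.set(\freq, msg[1].midicps); // msg[1] is a MIDI note number
    }, '/ensemble/pitch');
};
)

// Conductor's laptop: push a new pitch to the player.
// 192.168.1.23 is a hypothetical address on the local router;
// 57120 is sclang's default listening port.
NetAddr("192.168.1.23", 57120).sendMsg('/ensemble/pitch', 67);
```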

Ghost Line by Jeff Snyder, adapted for virtual performance on Zoom by Ross Wightman. Performed by members of CCAMACC, 2020.

It would be tempting to think that, of all musical groups, a laptop ensemble might adapt easily, even seamlessly, to a remote setting at a time when our work, school, and socialization have been relegated to online settings. However, after the end of spring break and the setting-in of our new remote reality, Kaczmarek and I attempted this adaptation and began to realize the challenges, and opportunities, that came with transforming a laptop ensemble into a remote configuration.

As Zoom became the Yale-adopted teleconferencing platform of choice, Kaczmarek and I started to find ways to hack it together with programs already used in the ensemble, primarily Max/MSP. We first had to address internal system audio routing: Zoom offers no way to gather the signals from a microphone and from Max/MSP into one stream without screen-sharing, which cannot be done by everyone simultaneously during a call. By creating aggregate devices in Audio MIDI Setup and using BlackHole (a macOS virtual audio driver), we were able to achieve this readily. But while BlackHole itself is zero-latency (it adds no extra time to the signals it passes), Zoom adds latency that is undesirable in time- and pulse-oriented music.
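For readers who want to try this routing from a text-based environment rather than Max/MSP, the same trick works anywhere you can choose an audio output device. A sketch in SuperCollider, assuming BlackHole is installed under its default device name:

```supercollider
// Point SuperCollider's output at the BlackHole virtual driver.
// Zoom's microphone is then set to an aggregate device combining
// the real mic and BlackHole, so voice and synthesized sound
// travel together without screen-sharing.
Server.default.options.outDevice = "BlackHole 2ch"; // default name; check Audio MIDI Setup
Server.default.reboot;
```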

It would be tempting to think that, of all musical groups, a laptop ensemble might adapt easily, even seamlessly, to a remote setting.


We decided to set the issue of latency aside. Instead, we focused our efforts on passing video individually, the same way we had passed audio (without screen-sharing), and on creating music that wasn’t concerned with precise rhythmic synchronization. The Jitter and OpenGL-based video rendering capabilities within Max allowed us to create audio-reactive visual work, which we then had to pass into our video stream in Zoom. To do this, we worked with two main applications: CamTwist, to route a video stream into Zoom, and Syphon, to route the video stream from Max into CamTwist using a group of Max externals called Syphon for Jitter.

This setup allowed us to present our multimedia Max/MSP patches to Zoom as a webcam and microphone, taking over our video tile and playing our live-generated audio simultaneously. While this was exciting, it only worked on macOS (BlackHole, Syphon, and CamTwist are all macOS-exclusive). Kaczmarek compiled setup sheets for students in the ensemble (most of them Mac users) so that they could install and connect the necessary software to join us in performance. There were attempts to find a comparable solution on Windows using a program called Spout instead of Syphon, but as the end of the semester approached and our class meetings became non-compulsory, we focused our energy on advancing these experiments with a voluntary student cohort that happened to be entirely Mac-based. By semester’s end, we had found a mostly satisfying means of performing multimedia work as a socially-distanced laptop ensemble, but two problems remained: the audio latency and less-than-desirable audio fidelity native to Zoom, and the convoluted, somewhat inaccessible setup procedure.

Out of all this was born the CCAM Audio Composition Collective, or CCAMACC. I wanted to put together a summer audio working group that would extend the work initiated with Kaczmarek and address these issues. As a CCAM studio fellow, I reached out to two other fellows, Dakota Stipp and Liam Bellman-Sharpe, both recent alums of the Sound Design program at the Yale School of Drama; Matthew Suttor, a professor in that program who is very active at CCAM; and Roxanne Harris, a member of the 2020 Yale Laptop Ensemble who majors in Computer Science and minors in Music at Yale College. Stipp is a brilliant computer programmer. He and I have worked together extensively, collaborating with members of the School of Art on an installation/performance piece for the Yale Cabaret titled I=N=T=E=R=F=A=C=E. Bellman-Sharpe is a great composer and sound designer who makes really interesting work with Ableton Live and Max/MSP. Both Bellman-Sharpe and Stipp studied with Suttor, who has extensive chops as an electronic musician, often working with analog synthesizers and technology to create theatrical works as well as concert music that blends electric and acoustic instruments. Harris has been making really great interactive work with motion capture and SuperCollider, and collaborated with Kaczmarek and me toward the end of the semester in our early efforts to get the laptop ensemble configured for remote performance. “Each member of CCAMACC brings a different perspective on electronic music making, programming, improvisation, and how the various tools we adopt influence and inform the ways we make and think about music,” says Kaczmarek.

We spent a few weeks brainstorming technical possibilities for hacking Zoom and sharing different audio-streaming platforms and tools. The problems we addressed were latency; Zoom’s low audio quality and fidelity; lack of access to tools outside of traditional studio settings (microphones, controllers, instruments, and so on); and the limitations of each person’s individual setup, whether computer hardware or available CPU. It quickly became clear that the most pressing issues were experiential.

Bellman-Sharpe's biggest concern was how to recreate energy. “The introduction of network or computer-based latency makes it impossible to synchronize properly with other performers,” he explained. “It makes it impossible to rely on micro-rhythm as an expressive device. But also, other kinaesthetic phenomena that one relies on for coordination (body language, eye contact, or a hard-to-define ‘vibe’ present in the group) are also interfered with or done away with entirely.” Suttor also mentioned physicality: “The tools have changed dramatically, but the instinctual desire for physical action and resulting sound between composer and musical instrument, whether it be a piano or a computer, real time or not, remains.” Stipp’s attention was drawn to unity: “How does one describe the experience of making music with an ensemble? Of breathing the same air before singing a chord? How can one possibly expect to recreate said experience virtually?” For her part, Harris said: “With so many unknowns regarding remote audio collaboration, I was intrigued to explore ways in which existing technologies could be modified and transformed to suit odd and specific needs. Facilitating conversation between software, network, and local was of special interest to me.”

Since all of the group members are Mac users with varying degrees of proficiency in Max/MSP and Ableton Live, we adopted these as our current means of experimenting. Since Ableton has native support for Max (through its Max for Live integration), there were interesting ways in which we could combine the programs to meet our needs. Stipp suggested that we check out AudioMovers, which allowed us to stream audio to each other at low latency and high fidelity within Ableton. Once we started using it, we had one member collect all of our audio streams from AudioMovers into Ableton on individual tracks. This let us find an appropriate group balance and mix it down to a monitor track that each of us could reference in order to hear the group playing together, approximating the monitor-mix solution often used by conventional amplified ensembles. It ensured that everyone was hearing the same thing, and allowed us to get a clean-sounding recording of any performance or experiment.

It quickly became clear that the most pressing issues were experiential.


Early in the summer, we tested different ways of musically interacting with each other by sending MIDI and control data rather than sound. Figuring out how to send MIDI and OSC data to each other presented a new set of advantages and disadvantages. As an advantage, sending streams of numbers is much less taxing on a computer than sending constant audio-video streams. It also gave us the intriguing ability to remotely control each other’s gear, controlling sound generation on someone else’s machine rather than just sending them sound. The disadvantage, however, was security. We had to set up port forwarding rules in our routers and share them with each other, allowing data to pass through our individual firewalls and into Max (this is not particularly safe, and should only be done with people you trust!). The ability to connect this way added another layer of glue that could unify the ensemble, and opened up possibilities such as having one member play a chord on a MIDI keyboard in one home while a Moog synthesizer plays that same chord remotely in another home.
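As an illustration of the kind of remote control this enabled (a sketch, not our actual Max patches; the IP address and message name are hypothetical), a SuperCollider process behind a forwarded port can relay incoming OSC notes to a hardware synth over MIDI:

```supercollider
// Receiver (the home with the synth): relay OSC notes to hardware MIDI.
// Assumes the router forwards UDP port 57120 to this machine;
// again, only open ports for people you trust.
(
MIDIClient.init;
~synthOut = MIDIOut(0); // first MIDI output, assumed wired to the Moog
OSCdef(\remoteNote, { |msg|
    var note = msg[1], vel = msg[2];
    if(vel > 0) { ~synthOut.noteOn(0, note, vel) } { ~synthOut.noteOff(0, note) };
}, '/remote/note');
)

// Sender (another home): 203.0.113.7 is a hypothetical public IP.
~friend = NetAddr("203.0.113.7", 57120);
[48, 55, 60, 64].do { |n| ~friend.sendMsg('/remote/note', n, 100) }; // a chord
```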

From the beginning of this experimental process, I have been interested in the great potential of the webcam as an instrument. Through workshopping different pieces with this ensemble and trying out different hardware setups (MIDI controllers, joysticks, and so on), I discovered that there is something satisfying about limiting yourself to the laptop itself, with no external hardware connected, and exploring the QWERTY keyboard, webcam, and trackpad as the sole interfaces. To use the webcam as a source of control data, Kaczmarek and I had tried out PoseNet, “a machine learning model which allows for real-time human pose estimation in the browser.” This allowed us to track motion from our webcams into Max/MSP and then parse those data streams, mapping them to whatever audio or video processing parameters we wanted. It was interesting to work with, because a user could send their Max/MSP audio out to Zoom while an unprocessed Zoom video feed controlled the audio. Additionally, using a selected capture area of the desktop (via CamTwist) as an input for PoseNet opened up the ability to screen-capture someone else’s video stream in Zoom and use it to affect parameters of a Max patch.
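Our mappings lived in Max/MSP, but the idea translates to any OSC-aware environment. A sketch in SuperCollider, assuming some bridge (a browser page or Max patch, hypothetical here) forwards PoseNet's wrist position, normalized to the 0-1 range, as an OSC message:

```supercollider
// Map pose-estimation data onto synthesis parameters.
// Assumes a bridge sends '/pose/wrist x y' with values in 0..1.
(
~drone = { |cutoff = 800, rate = 2|
    RLPF.ar(Saw.ar([55, 55.5], 0.2), cutoff, 0.2) * SinOsc.kr(rate).range(0.3, 1)
}.play;
OSCdef(\wrist, { |msg|
    var x = msg[1], y = msg[2];
    ~drone.set(
        \cutoff, x.linexp(0, 1, 200, 8000), // left-right sweeps the filter
        \rate, y.linlin(0, 1, 0.5, 12)      // up-down speeds the tremolo
    );
}, '/pose/wrist');
)
```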

Since continuing a laptop ensemble online was the original goal of this project, I have been interested in adapting previously-written laptop ensemble pieces for this new remote setting. I thought of Snyder’s visit and presentation at CCAM in March, when he showed Ghost Line, the work he wrote for Sideband. I was drawn to adapting this piece because it uses webcam input as a mode of performance and audio control without directly tracking motion to generate streams of data (as we had explored with PoseNet). In his own words, Snyder describes Ghost Line as “using the webcams as instruments, with the pixel data from the cameras interpreted as audio waveforms. The performers alter the sound by moving within the frame, or by processing the video stream (altering the x and y resolution, adjusting the focus, or changing the speed or direction of the image scan). Resonant just-tuned sonorities devolve into aggressive clusters of noise, producing a masterful mix of patient harmonic changes and dense, frenetic timbral shifts.”
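The piece does this in Max/Jitter, but the core idea, scanning a row of pixels as a single-cycle waveform, can be sketched in a few lines of SuperCollider. The pixel row here is a placeholder array; in practice it would be refilled continuously from live camera frames.

```supercollider
// Pixels-as-waveform sketch: treat one row of a camera frame as
// a single-cycle wavetable. ~row stands in for 256 grayscale
// pixel values (0..1) extracted from video elsewhere.
(
s.waitForBoot {
    var row = Array.fill(256, { |i| sin(i / 256 * 2pi) * 0.5 + 0.5 }); // stand-in "pixels"
    ~buf = Buffer.loadCollection(s, row * 2 - 1); // recenter to -1..1
};
)

// Scan the row at 110 Hz; in the piece, moving in frame or
// refocusing the camera rewrites the data and morphs the timbre.
(
{
    var phase = Phasor.ar(0, 110 * BufFrames.kr(~buf) / SampleRate.ir, 0, BufFrames.kr(~buf));
    BufRd.ar(1, ~buf, phase, interpolation: 4) ! 2 * 0.1
}.play;
)
```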

I overhauled the Max patch for Ghost Line with Snyder’s blessing, stripping away the parts that were not relevant to our configuration (namely, anything involving local network communication) and working out how the piece needed to be altered for remote performance. As with many laptop ensemble pieces, there is a ‘conductor patch’ that controls the more global parameters of the music, and ‘player patches’ that each instrumentalist uses to perform. I created a new conductor patch that used our port forwarding rules to send control data from the conductor to the player patches, effecting global pitch-parameter shifts as the original piece does via the local network. In the player patches, we used our Syphon-CamTwist setup to get our individual video streams into Zoom; created [vst~] objects in Max that allowed us to transmit our individual audio streams to each other via AudioMovers; and mapped controls originally meant for external Logitech joysticks onto our laptop keyboards and trackpads. I feel that this version of the piece unifies the most successful aspects of the group’s experimentation: many of our approaches to remote performance work together to retain the best elements of the piece while adding new richness to the production.
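Concretely, the remote conductor logic reduces to broadcasting one control value to every player's forwarded address. A sketch in SuperCollider (hypothetical IPs and message name, standing in for the Max conductor patch):

```supercollider
// Remote conductor sketch: broadcast a global pitch shift to
// every player's forwarded port. Addresses are hypothetical.
(
~players = [
    NetAddr("203.0.113.7", 57120),
    NetAddr("198.51.100.12", 57120),
    NetAddr("192.0.2.41", 57120)
];
~setGlobalPitch = { |semitones|
    ~players.do { |p| p.sendMsg('/ghostline/pitch', semitones) };
};
)

~setGlobalPitch.(7); // move the whole ensemble up a fifth
```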

A virtual performance of Spinning Plates by Konrad Kaczmarek, performed by CCAMACC, 2020.

While all of this work was engaging, and felt especially relevant with no in-person performances on the books for an unforeseeably long period of time, it was honestly a hacky, lofty, and at times frustrating experiment that yielded a small amount of music-making relative to the extended periods of experimenting and troubleshooting. In his seminal paper “Why A Laptop Orchestra?,” Trueman discusses the genesis of the laptop ensemble, framing its strengths and weaknesses against those of its acoustic elder relative. While the paper makes an excellent case for pursuing a laptop ensemble, his closing point is that: “While high-speed networking has enabled a certain amount of distance communication and collaboration, there is little doubt that it is a sorely incomplete substitute for the kinds of interaction that occur when musicians are together in one place. One of the most exciting possibilities afforded by the laptop orchestra is its inherent dependence on people making music together in the same space.”

While I wholeheartedly agree that making music together ‘IRL’ is unbeatable, I like to think of this venture as more than merely “a sorely incomplete substitute” and instead as a further step in the direction of creating compelling multimedia performance remotely.

In a time when our interactions have been sequestered into two-dimensional webcam streams, virtual spaces for music-making have become a necessity for musicians yearning to play and collaborate once again. The corporeality and spatiality of an ensemble are disrupted, replaced by a gallery of 2D frames on a screen. In interfacing with a computer, we as laptop performers have new potential for how we present our work: audio-reactive video streams, manipulation of sound based on camera input, and avatar-like representations of ourselves. Work created for a remote laptop ensemble is inherently multimedia. Finding ways to create new work for this remote setting, as well as adapting pre-existing works for laptop ensemble, has been the impetus for CCAMACC. It is my hope that through sharing our work, it can be used or adapted to become accessible to a wider variety of performers and audiences.

Ross Wightman is a double bassist and composer, a fellow at CCAM, and a recent graduate of the Yale School of Music. As a double bassist, he focuses on the performance of contemporary music, specifically microtonal music, and seeks to bring these rarely performed sounds to wider audiences while commissioning new works. As a composer, his practice focuses on electro-acoustic multimedia composition and instrument building. He has performed with contemporary music ensembles such as Ghost Ensemble, Ensemble Mise-en, and Callithumpian Consort, and at international festivals including the Darmstadt Internationale Ferienkurse für Neue Musik, the Lucerne Festival, the New York City Electroacoustic Music Festival, and the Bang on a Can Summer Festival. In his touring solo project, Water Feature, he improvises with feedback and noise sound sources on his self-made electroacoustic instrument. Drawing on a deep love of film, his work often incorporates video art, which he creates by digitally sampling video and circuit-bending analog video equipment, as seen and heard in his solo musique concrète audiovisual project, Gravepact. This past year he was commissioned to compose an audiovisual work titled Touch System for Ran Blake as part of the ReVox Installation, commemorating the 50-year anniversary of OHAM at CCAM. At the Yale Cabaret, he worked in collaboration with members of the Yale Schools of Art and Drama on an interdisciplinary experimental theater performance/installation titled I=N=T=E=R=F=A=C=E, where he co-wrote, performed, and built electroacoustic instruments and scenery. For more information, please visit rosswightman.net and follow him on Instagram.