The Musical Web: Weird Experiments with Music on the Internet

August 18, 2023 Spring 2023

The Musical Web was a 10-week online course led by Chloe Alexandra Thompson and me during the spring 2023 semester at the School for Poetic Computation. Both Chloe and I have musical practices that integrate computer programming and custom software development, and we have recently been interested in how creative coding languages are starting to more fully integrate sound in web-based contexts. We wondered how the accessible nature of web-audio works, which are quickly distributable to large numbers of people, can change the way we think about, share, make, and listen to music in a telematically consumed world. We were also curious about how these new audio tools, now in the hands of web programmers and net artists, offer the sonic arts a new pool of capable composers and musicians. The course was an opportunity to dive headlong into some of these technologies, to discover and speculate on the new forms that could be realized in this exciting and, in many ways, new territory.

The course material we put together bridged basic web programming knowledge with some common strategies in music composition (a term we defined liberally and unpacked throughout our time together), and included techniques like sound collage, synthesis, and spatialization using 3-dimensional environments. We also looked at some uniquely “webby” aspects of web-based programming, such as communicating with real-time data streams (APIs, for the initiated) and “sockets,” a means by which multiple users can participate in a shared experience (think chatrooms or massively multiplayer online games). These database-driven and networked approaches were particularly interesting sites to explore: they let us go beyond the experimental musical forms we were used to, into territory that felt unique to the class and more connected to a social fabric.
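To make the “sockets” idea concrete: the core trick in a shared musical experience is that every client applies the same stream of parameter-update messages, so all screens and speakers converge on the same state. This is a minimal sketch of that pattern in plain JavaScript; the message shapes and names (`applyUpdate`, `setTempo`) are illustrative, not from the course materials.

```javascript
// Shared state for a hypothetical multi-user piece. In a real app, each
// message would arrive over a WebSocket; here we just replay a message log.

function createSharedState() {
  return { tempo: 120, users: [] };
}

// Apply one update message to local state, returning a new state object.
function applyUpdate(state, msg) {
  switch (msg.type) {
    case "join":
      return { ...state, users: [...state.users, msg.user] };
    case "setTempo":
      return { ...state, tempo: msg.value };
    default:
      return state; // ignore unknown messages
  }
}

// Two clients replaying the same log end up with identical state,
// which is what gives the piece its sense of co-presence.
const log = [
  { type: "join", user: "ada" },
  { type: "join", user: "lou" },
  { type: "setTempo", value: 96 },
];
const clientA = log.reduce(applyUpdate, createSharedState());
const clientB = log.reduce(applyUpdate, createSharedState());
console.log(clientA.tempo, clientA.users.length); // 96 2
```

The pure-function shape (state in, state out) is a design choice that makes the synchronization logic testable without a server; wiring it to a real socket library is a separate concern.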

What was most exciting about the experience, however, were the unexpected contributions made by so many of the students. What we hadn’t considered going into the class was the wide variety of backgrounds the participants would come from. While many of the students of course had some type of musical interest, few had any traditional musical training, which often led to more “out there” and less classically “musical” results (a good thing). Participants also came from a wide variety of technological backgrounds: there were students who had almost no programming experience, and some who had written code for big tech companies. While this technical gap made certain aspects of teaching more difficult, it made the work that came from the cohort all the more diverse. The less technically experienced participants were also free of some creatively impeding baggage, such as defaulting to “tried and true” patterns, or always having to write clean, performant code.

The first few classes were focused on finding common ground in the musical language and the various software environments we would be using during the course. These initial classes showed us how esoteric and inaccessible so much of the language we use to discuss electronic music making can be, and encouraged us to find analogies for sometimes difficult algorithmic processes, allowing a maximum number of voices to participate in the conversation. Finding clear ways to communicate complex musical and technical concepts without assuming previous knowledge is one aspect of the class I think we could continue to build on. Class usually began with a show-and-tell period. These were opportunities to show or perform works in progress and to get feedback from the group. We encouraged people to share their “failures” as well as their successes; talking about the difficult parts allowed others to share strategies and solutions for moving the work forward. As teachers, we found that these were some of the most revealing parts of the class, where we were able to see what was making sense and what needed more conversation.

One of the subjects we returned to most often during the course was the distinction between tool, instrument, and musical composition. We created an environment where each of these approaches was valued equally, and in the final projects these lines were pleasantly and positively blurred. The class culminated with a showing and performance of participant projects, all of which can be found at the showcase website programmed by participant Kate Grant. I’ll touch on a few of the projects here in more detail, but they’re all great; check them out.

“Re-arranging” by Ali Akhtar uses the A-Frame library to create a navigable 3-dimensional virtual space filled with ambient sound sources and models of electric guitars. Visitors to the site are prompted to “pick up” and throw the guitars around the space, which results in suspenseful stringed sounds in an act that feels liberating and fun. The position of the navigator’s point-of-view camera adjusts the level of the various sound sources, creating a dynamic, ever-changing mix that subverts our expectations of a spatially or temporally fixed listening experience.

Ali Akhtar’s 3-dimensional, interactive sound environment. Visitors to Akhtar’s world can “pick up” and throw sound sources around the space, creating new spatial mixes that are determined by virtualized physical movement.
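The idea of a camera-position-driven mix can be sketched with a few lines of math. Below is a hedged illustration, not Akhtar’s actual code: each source’s gain falls off with its distance from the listener, using the same inverse-distance formula the Web Audio `PannerNode` uses in its “inverse” distance model. The parameter names (`refDistance`, `rolloff`) mirror that API; the scene coordinates are made up.

```javascript
// Euclidean distance between two points in 3D space.
function distance(a, b) {
  const dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
  return Math.sqrt(dx * dx + dy * dy + dz * dz);
}

// Inverse-distance attenuation: gain is 1 at refDistance and falls off
// beyond it; rolloff controls how quickly sources fade with distance.
function gainForSource(listener, source, refDistance = 1, rolloff = 1) {
  const d = Math.max(distance(listener, source), refDistance);
  return refDistance / (refDistance + rolloff * (d - refDistance));
}

// A nearby guitar dominates the mix; a distant one becomes ambience.
const listener = { x: 0, y: 0, z: 0 };
const near = { x: 1, y: 0, z: 0 };
const far = { x: 9, y: 0, z: 0 };
console.log(gainForSource(listener, near)); // 1
console.log(gainForSource(listener, far)); // ≈ 0.111
```

In a browser, these gains would be applied per source every frame as the camera moves, which is exactly what makes the mix feel spatial rather than fixed.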

Maxine de las Pozas’ untitled work presents the visitor with a representation of a computer keyboard. Samples are triggered on keypresses in a non-linear pattern she describes as “searching for interaction.” Like this piece, several of the projects leveraged the ability to combine visual interfaces and game-like interactivity to create musical experiences that were immediately playable with little instruction. Grace Chang’s piece “Magnetic Sonification,” for example, virtualizes the familiar act of arranging text-label refrigerator magnets, each of which corresponds to the playback of a sound file, to create an interactive piece that is simultaneously charming and compositionally imaginative.

Maxine de las Pozas’ untitled piece is inspired by a dream about unruly “gear.” Users are encouraged to discover special sample pads by activating the instrument.

Grace Chang’s “Magnetic Sonification” uses a familiar yet atypically musical interface to guide new sonic arrangements.

Other works, such as Bailey Manning’s “You’ve Got Mail!,” had more minimal user interfaces that evoked a classic Web 1.0 style. This piece in particular made use of the p5.speech library, a JavaScript abstraction that taps into the system’s text-to-speech voices through the browser. I love the idea that the piece sounds different depending on the type of computer it is played back on. Other projects used the browser’s capabilities as a multimedia canvas to work through and display their own archival audio-visual material. Stephen Anderson’s “Realake,” for example, is a musical embodiment of his research on the diverse topographies of the Great Salt Lake, and includes a recorded spoken-word composition with processing techniques learned in class. Tatianna Overton’s “overwhelmed by possibilities” seems to relay to the visitor the anxiety recording artists feel when having to finalize a piece of music. The interface for her contribution features five buttons that correspond to separate parts of a complete song. Visitors to the site may click these buttons at any time to unmute the tracks and create a unique version of this meditative sound piece.

Tatianna Overton’s “overwhelmed by possibilities” creates an interface for mixing multiple stems of a song. The piece reminds us of the subtle ways we can use the web to expand existing sound practices into interactive, living experiences.
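The stem-mixer mechanic behind a piece like this is simple to sketch: each button toggles one stem’s gain between muted and audible. The following is a speculative illustration in plain JavaScript, not Overton’s code; the stem names and function names are invented for the example.

```javascript
// Hypothetical stem names; in the real piece these would be audio files.
const STEMS = ["drums", "bass", "keys", "vocals", "texture"];

// Every stem starts muted (gain 0), so the piece begins in silence.
function createMixer(stems) {
  const gains = {};
  for (const s of stems) gains[s] = 0;
  return gains;
}

// Clicking a button flips one stem between muted (0) and unmuted (1).
function toggleStem(gains, stem) {
  return { ...gains, [stem]: gains[stem] === 0 ? 1 : 0 };
}

// A visitor unmutes two stems, producing one of many possible versions.
let mix = createMixer(STEMS);
mix = toggleStem(mix, "drums");
mix = toggleStem(mix, "vocals");
// In the browser, each value would drive a GainNode on that stem's audio.
console.log(mix.drums, mix.bass, mix.vocals); // 1 0 1
```

Because every combination of unmuted stems is a valid mix, each visit yields a slightly different version of the song, which is the expressive point of the interface.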

Networked projects like Kate Grant’s “YouAreAmI” and Izzie Colpitts-Campbell’s “Mobile Choir” make use of “socket” programming to bring a sense of co-presence into the typically solo, screen-based listening experience. The work is activated when two or more people join the website and participate in remote viewing and interaction. Parameters of these works are synchronized across independent nodes in the network, creating a kind of decentralized, persistent experience. Experiments like these make it clear that the web can be a compelling tool for distributing aural experiences in ways that extend and rethink the standard musical release.

Kate Grant’s “YouAreAmI” page allows multiple remote users to gather at the same “site.” Participants are encouraged to type in a description of their mood, which is translated into tones using a machine learning API and broadcast to the rest of the participants.

We are grateful to Sam Tarakajian and Bomani McClendon, who visited our classes as guests and shared their works with us. These visits inspired the network-based pieces in particular and demonstrated new perspectives in web-enabled sonic art. We are also very thankful to Cycling ’74 for their generous support in the form of Max software licenses for the duration of the class. We relied heavily on Max not only to create the actual audio engine for many of the projects using their RNBO environment, but also to demonstrate compositional and synthesis techniques using a visual graph. A special shoutout goes to the p5.js community for the work they have done in making user-friendly documentation and examples; their resources provided a sturdy foundation for working with sound in JavaScript on the web. Additionally, we’d like to thank the organizers of SFPC, who have for years cultivated and grown a curious and bold community, without whom we wouldn’t have had such a skilled and exciting cohort to share in our exploration at the outer edges of music and the internet.