Sweep To The Fiddle, Then Pan Out To The Whole

The nature of human song was changed by electric amplification. A whisper or throaty croon into the microphone now suffices, no projection or push from the diaphragm needed, a radical break from millennia of performance in which unaided lungs had to fill places of worship, palaces, and concert halls. Just as the modern piano’s sound was partly born from the vastness of symphony halls, the breathy notes and throaty growls of contemporary popular music have as their parents the furnaces of electric power stations.

Because we have an abundant choice of music, albums and tracks are set into competition with one another for our attention. The loudest ones usually win, even if we think we have no preference for loudness. Our brains consistently judge louder music as better. More, our brains also prefer music that has had its quiet passages cranked louder. Producers therefore increase the amplitude of every part of the music, turning the variable loudness of a piece into what they call a brick wall, a final product in which every moment of the track is boosted to the highest level possible. The resulting sound file on a computer screen shows a tall and unvarying wall of intensity instead of the rises and falls in volume of most live music. The overall impression is of louder, more present music. Two infamous examples are the albums Californication by the rock band the Red Hot Chili Peppers and Death Magnetic by the heavy metal band Metallica. Streaming services, though, have pushed back against this loudness war. These platforms automatically adjust volumes to avoid jarring changes in loudness between tracks.

Always Forever Now

This removes some of the incentive to push up amplitude on recordings. Vinyl works against the loudness war too: a stylus cannot track a groove cut at relentless maximum amplitude, and the digital version of an album is often produced as if for vinyl, hearkening back to a world where the sounds of recorded music came from the physical motion of industrial diamond on rotating plastic.

Earbuds and lightweight headphones, too, make new forms of sonic space. The evidence is here in my desk drawer, a tangle of earbuds and headphones accumulated over the years. Each one has poor sound quality, delivering the outlines of music but not its subtleties. Low and high frequencies are mostly absent. Ambient noise penetrates the thin foam or plastic and washes out quieter sounds. And so on my flimsy 1980s headphones, music that arrived on cassettes from friends after multiple rounds of copying sounded pretty much as good as the original cassette. The bootleg culture of cassette tape copying and, later, the early popularity of highly compressed digital audio files, many also pirated, were made possible in part by the low quality of earbuds and small headsets. The devices we poked onto or into our ears created a new sonic space and, as it always has, music changed according to the particular demands and possibilities of the space. The intimacy of headphones changes the relationship between music and listener too. Technology mediates this relationship, as it does in the analog world.

These Are My Twisted Words

Singers now whisper directly through our earbuds and headphones. Billie Eilish’s “Bad Guy” is a conspiratorial murmur. She’s right there, her lips to our ears. Joe South’s “Games People Play” is reverberant, distant. He’s on a stage with his band, the sound seeming to flow out into an audience. The same tiny speakers that carry Eilish’s whispers so intimately lop off the depth and blur the inflections of the violins, organ, and drums on South’s track. The plastic capsules in our ear canals have changed the form of music. For the first time in the long history of musical evolution, the technology weakens the link between musical form and the sonic qualities of physical space.

One effect will be a closer relationship between audience, musicians, and composers. When performers work in a space mismatched to their music, they fight against the room’s acoustics, as if trying to push their sounds, and thus their feelings and ideas, through a headwind. Tuning a room to the particular needs of the music therefore strengthens connections among artists and audiences. The flight that I heard in Elena Pinderhughes’s flute music was one example.

Can You Hear Me Now?

She played her flute from the stage, but the music drifted and swooped through the performance space in service of narrative and emotion. Composer and electronic music pioneer Suzanne Ciani, interviewed after using a Meyer sound system at Moogfest, put these possibilities into context. Wherever dance is participatory, rather than watched by a seated audience, these new audio systems will allow music to move along with human bodies. From ballrooms to clubs, composers and performers can now make music dance, literally. This builds on a link established hundreds of millions of years ago, when our fish ancestors first evolved inner ears that detect both motion and sound, a design that we and all other vertebrates have inherited.

But spatialized sound technologies also offer an opportunity to understand traditional instruments in new ways. When we hear a violin, guitar, or oboe from across a room, we receive an integrated sound that flows from the instrument’s entire surface and volume. But when we bring an ear close to the instrument, we discover that its sound has a topography. Might we now, as part of the narrative of an instrumental piece, travel across the varied terrain of a violin’s belly, the bore of a flute, or the surfaces of a piano? The form of an instrument and the form of music can now converse not only through time, a single dimension, but within the three dimensions of space.

Our ears could also be given what live musicians have: a position on the stage. Sit with the violas. Fly to the brass at just the right moment. Pause for a moment between the bass and the banjo at a bluegrass concert, then, as the music demands, sweep to the fiddle, then pan out to the whole. Such compositions would bring to the concertgoing experience some of the same spatial dynamics as walking in a forest or through a sound installation in an art gallery. Moving through an ecological community is an experience in which sound has form and texture within space. The same is true when sound is used as a sculptural form in gallery spaces or outdoors.