We’re premiering a fantastic new piece called Visible by Elevate composer-in-residence Julie Herndon at our season-opening concert, and it takes an unusual format. The performers read the piece off a video screen instead of traditional music notation. We asked Julie to tell us about the piece, where she found her inspiration, and why she chose video artist Patricia Robinson as her collaborator.
Visible combines data from a UN report on gender equality with biological cellular processes. What inspired you to combine these ideas?
It’s all about perspective! Looking at massive global data made me think of the other side of the spectrum: microscopic activity. What’s interesting is that biology is described in familiar terms, like the “mother” cell and “daughter” cells that become “sisters.” It made me think about all the narratives happening around us that we can’t see, either because we (humans) are too small to perceive them, or because we are too large.
A few months ago, I went to Patricia Robinson’s show at b4bel4b in Oakland and was struck by a piece she did on the gender gap. She was projecting data into what looked like moving webs, and my impression when I saw it was that it made the statistics seem alive, like living organisms breathing on screen. I asked her if she would be willing to work with me on a project, along the same lines as her gender gap piece, and she agreed!
So, as we were working on this piece together, we were pinning images of things that interested us. I was following a line of associations and wound up on biology websites, looking at images of different kinds of cells, and watching YouTube videos taken from microscopes. The way the cells looked, interconnected but independent, was so similar to how we were conceiving of the data. I followed that thread and looked even more closely at their similarities. The text about mitosis used language eerily similar to the UN Women’s reports that Patricia had based her original project on, so I started weaving them together.
The piece uses video projection, both for the performers and the audience. Can you describe in a few words how this works and why you chose this approach?
The performers are playing a version of the video with some guidance on how to interpret what they see. They’re filtering the information, responding to it as it happens.
What is the difference between what the audience sees and what the performers see?
The performers have a set of instructions and a notational grid that dictates how they play the video. The audience is seeing the same thing, but without the grid.
Screenshots from Visible: performer’s version at the top, audience version at the bottom.
How much flexibility in interpretation do you give to the performers in the notation? Are there rhythms and dynamics and phrasing, or is it more open than that?
The performers have some flexibility with how they play the score. They see the dots moving between staff lines and they get to interpret: am I going to play a G or an A here? Am I going to trill between them? Or am I going to play a G-sharp because it’s between the two? So the precise pitches are approximated. However, rhythm/timing is more precise. As soon as a point appears, they play it as quickly as possible and hold it until new material appears.
Phrasing is addressed differently in the two sections of the piece. In the first half, the players group notes together that appear together and diminuendo on the last note until new material appears. In the second half, performers play pitches for the length of a breath. Dynamics follow the size of the point, growing larger and smaller as they rotate around the orb.
Overall, contour is dictated by the video, but the specifics are chosen by the performers. When new material appears high above the staff, they must play high; if it is low on the staff, they must play low. But again, there is flexibility as to the exact pitches.
This makes the players interpreters of the data, just like we are when we read information. We decide what is important. We filter out what we’re going to pay attention to, how exactly we’re going to remember it (e.g. “Are our bodies 90% water or 99% water?”). Things get fuzzy, but we retain the gist of the information. The players are doing that same process.
Given the non-traditional notation in the score, do you expect this piece would sound recognizably the same in each performance, or is the idea to create something entirely new?
A little bit of both. Because the video is fixed, the gestures, timing, and structure will be similar in each performance. The performers are not just improvising—they’re playing from a score. But some of the notes and energy will absolutely differ depending on the day.
You talk about “sonifying data.” Can you explain what that means?
“Sonifying data” could also be called “sounding out information.” We’re exposed to so much—it’s easy to get our hands on data about just about anything: the weather, the stock market, how many steps we took in a day… Sonifying data is just taking that information and finding a way to make sound about it. So, that involves organizing and prioritizing the data, and then setting rules about when and how to make sound based on what you want to emphasize.
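Sonification in this general sense can be sketched in a few lines of code. The sketch below is purely illustrative and is not the method used in Visible: it applies one simple rule, rescaling each point of a data series onto a pitch in a scale, so higher values sound higher. The scale, the sample data, and the mapping rule are all assumptions for the sake of the example.

```python
# Hypothetical data-sonification sketch (not the method used in Visible):
# map each value in a data series to a pitch in a scale.

C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]  # MIDI note numbers, C4 to C5

def sonify(values, scale=C_MAJOR):
    """Rescale each data point to an index into the scale,
    returning one MIDI pitch per value (higher data = higher pitch)."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1  # avoid division by zero for flat data
    pitches = []
    for v in values:
        idx = round((v - lo) / span * (len(scale) - 1))
        pitches.append(scale[idx])
    return pitches

# Example input: daily step counts, as mentioned above
steps = [4200, 5100, 4800, 7000, 6500]
print(sonify(steps))
```

The interesting compositional decisions live in the rule itself: which scale to use, whether timing follows the data or the performer, what gets emphasized and what gets filtered out.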
Is sonifying data something you do often? What appeals to you about the idea of taking non-musical information and turning it into musical instructions for performers?
This is my first data sonification project! However, I’ve done many open scores (graphic and text) and this was, for me, a similar process. Graphic and text scores are all about setting priorities and leaving space for improvisation and interpretation.
In a graphic score I might tell a performer how to interpret certain parts of a graphic, for example, play light colors quietly, dark colors loudly. I might tell them how to approach the graphic, for example, “start in the upper left corner and move down to the bottom of the page at your own speed.” But I may not tell them what notes to play or what particular rhythms to make.
By leaving out that information, the piece becomes about something else. It becomes about interpretation; it becomes about the players’ own language with their instrument. And that is what I love about it. I supply the narrative, and ask the performers to appear as characters, playing themselves in the story.
It seems to me that when you sonify data, a big part of the task is figuring out rules and priorities that will guide your composing. In a sense, then, it’s not so different from what classical composers did when they chose to write in prescribed forms like the sonata or da capo aria. I’m wondering, therefore… Is the data itself important to the composition, or is it more of a vehicle to stimulate creativity and set up a path for the piece to take shape within?
You’re right that the data supplies a set of rules in the same way a traditional form would. It suggests handling the material in a certain way. I would even go so far as to say it is impossible to make any piece without some sort of “vehicle.” You have to start somewhere, even if it is that you only know you are writing for flute, or you know you have a piece of plywood with which to construct something. That initial departure point is essential to the piece, even if the piece grows into something else (e.g. you wind up buying 2-by-4s to support your plywood).
But it seems you’re also asking, if we could wax philosophical: is form inherent to content? And this makes me think from a performer’s standpoint, because performers are interpreters of form. We could ask, is it important to the pianist that she is playing a late-romantic sonata? One would hope that this would influence how she interprets and leads the listener through the narrative of the piece. However, she must make it far more than that, addressing also the sonic, expressive, musical content. The form and content need each other that way. Similarly, in Visible, the data is an integral and formative part of the piece, but in the end, the piece becomes about more than just the data.