Ocular Scores Tools

Ocular Scores is being developed in collaboration with designer Joseph Browne. The project is in residency at Matralab, Concordia University, Montréal, Québec, Canada, directed by Sandeep Bhagwati.
Ocular Scores Version 1: Capturing Gestures
We are using Max/MSP for sound analysis, specifically the pipo.yin pitch-estimation object.
We import a sound file, analyze it, and send the data on frequency, amplitude, and onsets to TouchDesigner, which draws the result as a single line.
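The mapping from analysis data to a single drawn line can be sketched as follows. This is an illustrative assumption, not the project's actual TouchDesigner network: it assumes analysis frames of (time, frequency, amplitude) and maps time to x and log-frequency to y.

```python
import math

def frames_to_polyline(frames, width=1280, height=720,
                       f_min=40.0, f_max=4000.0):
    """Map analysis frames (time_s, freq_hz, amplitude 0..1) to points
    of a single drawing line: time -> x, log-frequency -> y.
    The frame format and ranges are assumptions for illustration."""
    if not frames:
        return []
    t_end = frames[-1][0] or 1.0
    points = []
    for t, freq, amp in frames:
        x = (t / t_end) * width
        # log scale so octaves are evenly spaced on the y-axis
        f = min(max(freq, f_min), f_max)
        y = height * (1.0 - math.log(f / f_min) / math.log(f_max / f_min))
        points.append((x, y, amp))  # amplitude could drive line thickness
    return points
```

In a sketch like this, amplitude is carried along with each point so the renderer can vary the line's weight with loudness.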
The final graphical score will be built from three or four layers of information, each of which can be removed at will. The first layer is the drawing created by TouchDesigner. The second layer is a traditional music staff on which frequencies and rhythms are mapped. Many notation programs already exist; the challenge is to find a simple way to import music staves that can be coordinated with the drawing along the x-axis (time). The third layer provides further technical explanation of the specific sound to be produced, through images, fingerings, or other specific instructions.
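Mapping frequencies onto the staff layer amounts to converting a frequency to its nearest tempered pitch and then to a vertical staff position. A minimal sketch, with the reference line (E4, bottom line of the treble staff) chosen purely for illustration:

```python
import math

def freq_to_midi(freq_hz):
    """Nearest equal-tempered MIDI note for a frequency (A4 = 440 Hz)."""
    return round(69 + 12 * math.log2(freq_hz / 440.0))

def staff_offset(midi_note, ref_midi=64):
    """Diatonic steps above/below a reference line (default E4); each
    step is half a staff space. The reference is an assumed choice."""
    # map chromatic pitch class to diatonic letter index (C D E F G A B)
    letter = [0, 0, 1, 1, 2, 3, 3, 4, 4, 5, 5, 6]
    steps = lambda m: (m // 12) * 7 + letter[m % 12]
    return steps(midi_note) - steps(ref_midi)
```

For example, A4 (440 Hz) sits three diatonic steps above E4, and middle C sits two steps below it.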
Many parameters are still controlled by hand in order to produce a drawing that is coherent with the sound qualities we feel are important to express.
See the image to the left, which shows all the parameters we can control in TouchDesigner.
Ocular Scores Version 2: Live Input
In this prototype, the analysis data is produced inside TouchDesigner. We can load a sound file, as in Version 1, or analyze the audio input from a live performer. The images are created in real time on a continuously scrolling image: it is as if the pen were stationary on the right-hand side of the page while the paper is pulled from right to left.
This prototype still lets us create still images that capture a gesture of 8 seconds or less (the size of the current window). The advantage of this second prototype is that we now have white space (silence) when there is no sound.
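The scrolling window and the silence behavior can be sketched together: new analysis columns enter at the right edge, older columns slide left, and columns below an amplitude threshold stay blank. The column rate and threshold below are assumed values, not the project's settings.

```python
from collections import deque

class ScrollingScore:
    """Fixed-width window: the newest column enters at the right edge
    and older columns slide left, like paper pulled under a stationary
    pen. Column rate and silence threshold are illustrative."""

    def __init__(self, seconds=8, columns_per_second=30, silence=0.02):
        self.window = deque(maxlen=seconds * columns_per_second)
        self.silence = silence
        self.full_score = []  # every column ever drawn, for the long export

    def push(self, freq_hz, amplitude):
        # below the threshold we draw nothing: white space for silence
        column = None if amplitude < self.silence else (freq_hz, amplitude)
        self.window.append(column)
        self.full_score.append(column)
```

The deque holds only the visible 8-second window, while `full_score` accumulates everything for the long exported score described below.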
A number of parameters affect the scaling and character of the image, and they can be changed on the fly to shape the image as it is created.
In this prototype, the continuous recording of the images creates a long score that can be saved and exported as a very long JPEG. We can bring this long score into other scrolling-score software for future performances.
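The export step can be sketched with no image library at all by writing a plain PGM file (the project itself exports a JPEG; PGM is used here only because Python's standard library can write it directly). Column format and frequency range are assumptions carried over from the sketches above.

```python
def export_long_score(columns, height=64, path="long_score.pgm"):
    """Write accumulated score columns as one very wide grayscale image.
    Each column is None (silence -> white) or (freq_hz, amplitude);
    amplitude darkens a pixel whose vertical position tracks frequency.
    All parameters here are illustrative assumptions."""
    width = len(columns)
    rows = [[255] * width for _ in range(height)]  # white background
    for x, col in enumerate(columns):
        if col is None:
            continue  # silence stays white
        freq, amp = col
        y = int(height - 1 - min(freq / 4000.0, 1.0) * (height - 1))
        rows[y][x] = int(255 * (1.0 - amp))  # louder -> darker
    with open(path, "w") as f:
        f.write(f"P2\n{width} {height}\n255\n")  # plain PGM header
        for row in rows:
            f.write(" ".join(map(str, row)) + "\n")
    return path
```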
This tool can be used to create musical scores or to document a performance so that it can be performed again. The long score can also serve as a visual representation of a piece for analyzing its larger structural elements.
Ocular Scores Version 3: Performance Application
Version 3 is designed so that the composer interacts with the graphic score as it is created in a performance context. Using filters programmed in advance as a series of presets, the composer builds a dynamic score and manipulates it in real time: choosing shapes for the musical gestures, applying delays, shifting the tessitura of a gesture, adding color and distortion, and changing the size and scale of the shapes.
Version 3 offers the same parameters as the previous versions, but adds presets that the composer can trigger during a live performance to manipulate the shapes.
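Triggering a preset can be sketched as merging a named parameter set over the current drawing parameters. The preset names and parameter fields below are invented for illustration; they are not the project's actual TouchDesigner parameters.

```python
# Illustrative preset table; names and fields are assumptions.
PRESETS = {
    "thin_line": {"thickness": 1, "color": (0, 0, 0), "distortion": 0.0},
    "red_smear": {"thickness": 6, "color": (255, 0, 0), "distortion": 0.4},
    "high_tess": {"tessitura_shift": 12, "scale": 1.5},
}

def trigger_preset(current_params, name):
    """Return the live drawing parameters with a preset merged on top,
    so the composer can switch looks mid-performance without touching
    parameters the preset does not name."""
    updated = dict(current_params)
    updated.update(PRESETS[name])
    return updated
```

Because the merge only overrides the fields a preset names, unrelated settings (such as a delay already in effect) carry through unchanged.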
We have different colors for the shapes: the goal is no longer to interpret a sound as an image but to create a performance environment.
We can delay the drawing of the image, and we can use a low-frequency oscillator (LFO) to shape the phrasing of the musical gestures by periodically removing the image entirely. In Version 3 2×1, two performers play from one projection surface; in Version 3 2×2, two performers each perform from their own projection surface.
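The LFO that periodically removes the image can be sketched as a time-varying visibility gate. The rate and duty cycle below are assumed values for illustration:

```python
def lfo_visibility(t_seconds, rate_hz=0.25, duty=0.5):
    """Gate driven by a low-frequency oscillator: the image is shown
    during the first `duty` fraction of each cycle and hidden for the
    rest, phrasing the gestures by blanking the score periodically.
    Rate and duty cycle are illustrative assumptions."""
    phase = (t_seconds * rate_hz) % 1.0
    return phase < duty
```

At 0.25 Hz the cycle is four seconds long, so with a 50% duty cycle the score is visible for two seconds and blank for two.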