Earth Symphony

with Philippe Tortell (scientist), Jonathan Girard (conductor), and contributions from scientists, musicians and students of the University of British Columbia

Earth Symphony explanation (for CBC)

Related: IceCoreWalk: take a 35-minute walk and listen to this sonification of 800,000 years of climate from ice core data

Background:

Chris Chafe is the director of Stanford University’s Center for Computer Research in Music and Acoustics (CCRMA) and approaches the practice of sonification from a background in computer-generated musical composition, using algorithms in the sculpting of musical detail. In much the same way, sonification uses datasets to generate sounds that can lead to a different or deeper understanding of patterns and processes in the sonified data. From global economic trends and atmospheric CO2 changes to seemingly mundane events such as the ripening of fruit, sonification provides a means of gaining new perspectives on data through listening. He is a Peter Wall International Visiting Research Scholar whose project for Spring 2020 is an “Earth Symphony” built from datasets contributed by UBC scientists and musicians. The collaboration celebrates the 50th anniversary of Earth Day.

The COVID-19 quarantine postponed our project. Here’s a small test of what will become the coda of the Earth Symphony, which will feature data showing dramatic reductions in air pollution (this test uses nitrogen dioxide levels from Wuhan and Milan from late Nov-19 to early Mar-20) and is scored for quartet (clarinet, clarinet, trumpet, cello). I hope the final version can include a noticeable effect on carbon dioxide; as of Jun-20, we haven’t yet seen a leveling. But the music is still being written…!

Sonification: Making Data Sound

Computers and music have been mingling their intimate secrets for over 50 years. These two worlds evolve in tandem, and where they intersect they spawn entirely novel practices. One of these is “sonification,” turning raw data into sounds and sonic streams to discover new relationships within the dataset by using a musical ear. This is similar to data visualization, a strategy that reveals new insights from data when it is made for the eye to perceive as graphs or animations. A key advantage of sonification is sound’s ability to present trends and details simultaneously at multiple time scales, allowing us to absorb and integrate this information the same way we listen to music.
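
To make the idea concrete, here is a minimal sketch of one possible data-to-sound mapping: a short series of values (placeholder numbers, not the project's datasets) is mapped linearly onto a two-octave pitch range and rendered as a sequence of sine tones in a WAV file. This is only an illustration under those assumptions, not the method used in the Earth Symphony or the workshop tools.

```python
# Minimal sonification sketch: map a data series to pitches and write a WAV.
# Uses only the Python standard library; the data values are hypothetical.
import math
import struct
import wave

SAMPLE_RATE = 44100
NOTE_SECONDS = 0.4

# Hypothetical data series to sonify (e.g., one measurement per year).
data = [315.0, 320.5, 331.1, 338.7, 354.4, 369.5, 389.9, 414.2]

def value_to_frequency(value, lo, hi, f_lo=220.0, f_hi=880.0):
    """Linearly map a data value onto a frequency range (two octaves here)."""
    t = (value - lo) / (hi - lo) if hi > lo else 0.0
    return f_lo + t * (f_hi - f_lo)

def tone(freq, seconds):
    """Generate one sine tone as 16-bit samples, with short fades to avoid clicks."""
    n = int(SAMPLE_RATE * seconds)
    fade = int(0.01 * SAMPLE_RATE)
    samples = []
    for i in range(n):
        amp = 0.5
        if i < fade:
            amp *= i / fade
        elif i > n - fade:
            amp *= (n - i) / fade
        samples.append(int(32767 * amp * math.sin(2 * math.pi * freq * i / SAMPLE_RATE)))
    return samples

lo, hi = min(data), max(data)
pcm = []
for v in data:
    pcm.extend(tone(value_to_frequency(v, lo, hi), NOTE_SECONDS))

with wave.open("sonification.wav", "wb") as out:
    out.setnchannels(1)       # mono
    out.setsampwidth(2)       # 16-bit samples
    out.setframerate(SAMPLE_RATE)
    out.writeframes(struct.pack("<%dh" % len(pcm), *pcm))
```

Richer sonifications replace the simple pitch mapping with choices of scale, timbre, rhythm and layering, which is where the musical ear comes in.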

Chris Chafe’s sonification workshops lead off with a discussion of the practice and application of sonification in a wide array of disciplines, drawing on his own extensive experience in the field. Using examples from a variety of datasets, he shows how sonification can lead to the creation of innovative musical pieces and to a deeper understanding of many natural and human-influenced phenomena. The presentations are followed by a hands-on tutorial, where participants have an opportunity to work with their own data and explore sonification as a new way of approaching it. Participants in hands-on tutorials should plan on bringing a laptop with either Firefox or Chrome installed. See https://ccrma.stanford.edu/~cc/workshop for an introduction.