Thanks for coming to my site!
I gave a quick Ignite talk at Eyeo this week.
- Slides: https://github.com/auremoser/stereo_signals
- References/Creds: http://urli.st/sr3
- Website: www.stereosemantics.com
- Old website under the projects nav option
Some wiser person once told me never to give a talk that should have been a blog post. To solve for that, I'm doing both; less is more, just less. There's always so much to say in presentations, regardless of the constraints (5-minute time limit, frozen audience, IT issues), and some of those things are best said once you've had the opportunity to reset sanity off-stage. This will be a quick post about what I did for the talk, how I did it, and why. I put the links at the top, so you don't have to read everything.
Read on below!
I started Stereo Semantics a few years ago because I was a radio fiend a bit frustrated with radio options. I like to toggle between human-run radio (East Village Radio, NPR) and algo-radio (Pandora, Rdio, Apple's equivalent) because both offered advantages, while neither was 'complete.' The former had a human narrative running through each show (which I liked), and episodes were usually themed by show genre (metal, indie rock, top 40, _core…). The latter had some personal curation (which I liked) but felt annoying because I seemed to 'like' my way into progressive musical homogeneity, losing the serendipity of being guided by a Beatrice DJ through the heavens of music happiness. I wanted to build a project that bridged the two, and meanwhile up my own music-fu.
At the time, I was working in the library and researching graph databases, ontologies, and other taxonomic cataloguing schemas for grad school, so I wanted to apply that knowledge somehow. My main objective was to learn, and to make a show built on the solid (stereo) meaning-making (semantics) that I always find connects everything in music, as in life. So Stereo Semantics was conceived as a show about the kind of semantic connections you can find between similar things (in this case music, bookended by a source and a cover song), organizing playlists based on networked data about musical entities.
I used to archive the episodes on my website (THIS, yes, this one), with graph maps, track lists, mp3 recordings, and descriptions. I've since migrated that material to its own site, and the aesthetic of the new project pretty much sums up the kind of retrospective and graphic fun I like to have with all side-projects. In season 4, you'll note I have a cohost; he is awesome, his name is Aubergene, and you can find him in the humans.txt.
Stereo-graphs + Spectrograms
On to the imagery!
I'm more of a sci-tech girl, but I like art lots; I wrote my thesis on developing sustainable conservation and emulation programs for New Media and born-digital artworks, so I can appreciate the use of graphics in tech arts enough to want to preserve them in perpetuity. Imagery can be so much more powerful than words and numbers for communicating information; mixing all those things together makes for the most expressive media.
Each episode of Stereo Semantics was based on a visual graph (not like a bar chart; a network/node-edge graph); this was a way to view the connections between nodes (artists, album titles, song titles) via edges (relationships pulled from DBpedia SPARQL endpoints and Wikipedia links), and thereby explain the path from source to cover song based on the episode theme.
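To make the node-edge idea concrete, here's a minimal, self-contained sketch in Python: a toy graph whose nodes mimic the kind of entities RelFinder surfaces, with a breadth-first search tracing the path from source to cover song. The entities are real (Johnny Cash famously covered Nine Inch Nails' "Hurt," with Rick Rubin producing), but the edge list is hand-built for illustration, not pulled from DBpedia.

```python
from collections import deque

# Toy node-edge graph: nodes are artists/songs, edges are shared
# relationships (hand-built here, not queried from DBpedia).
edges = {
    "Hurt (Nine Inch Nails)": ["Trent Reznor"],
    "Trent Reznor": ["Hurt (Nine Inch Nails)", "Rick Rubin"],
    "Rick Rubin": ["Trent Reznor", "Johnny Cash"],
    "Johnny Cash": ["Rick Rubin", "Hurt (Johnny Cash)"],
    "Hurt (Johnny Cash)": ["Johnny Cash"],
}

def path(graph, source, target):
    """Breadth-first search for the shortest chain of connections."""
    queue = deque([[source]])
    seen = {source}
    while queue:
        trail = queue.popleft()
        if trail[-1] == target:
            return trail
        for neighbor in graph.get(trail[-1], []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(trail + [neighbor])
    return None  # no connection found

print(path(edges, "Hurt (Nine Inch Nails)", "Hurt (Johnny Cash)"))
```

On a real episode graph the edges would come from SPARQL results rather than a hand-typed dict, but the path-tracing idea is the same.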
I used RelFinder, an open-source project hosted on SourceForge, to build the graphs. It pulls connections from Wikipedia and DBpedia (the semantic-web Wikipedia), relating artists and songs as well as many other entities based on data points like “birthplace,” “shared record label,” “shared release date,” etc. When I was thinking about the new website (instead of the algorhyth.ms > Projects > Radio sub-site), I thought I should run some experiments with other, more intuitive and simple visualization types, to perhaps collapse the distinction between source and cover song more clearly. I started thinking about the shape of a sound recording like a Rorschach test, and how overlapping the waveforms or spectrogram readings of similar audio files might help make sense of their relationship. So, based on my unrepresentative statistical sample of the source–cover pairs I had in show episodes (4 seasons, about 10 episodes a season, 40 pairs, 80 songs), I started making different spectrograms to see how closely pairs ghosted each other. I also played around with sorting songs by spectrographic amplitude, to see if that arrangement was somehow interesting; turns out it wasn't, but it was fun to do #20%[-ok-maybe-two-weekends]-time.
You can make spectrograms in IPython notebooks, in Audacity, or in Sonic Visualiser. I use all three, but used the last of these for my talk because I wanted to add some math and some of the awesome Vamp plugins to make the imagery more intuitive. Default spectrograms are complex, and it should be noted that I'm not a musicologist: most of what I derived from the spectra was based on noting empirical trends between pairings and the general visual character of the resulting images. Spectrograms give you a frequency distribution, which can help tease out trends and makes for some beautiful imagery, but to be meaningful they need to somehow relate to our own vocabulary for music-making and listening. So I applied a chromagram transform to the default spectrograms because I wanted to map frequency readings to pitch (notes on our known scale of music). Pitch and frequency are related but different, and pitch is more “legible” for humans because it's how we typically understand sound beyond waveforms.
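The frequency-to-pitch mapping at the heart of a chromagram can be sketched in a few lines of Python; this is a toy illustration of the idea (fold each frequency onto the nearest note of the equal-tempered scale, anchored at A4 = 440 Hz), not Sonic Visualiser's actual implementation.

```python
import numpy as np

# Fold a frequency onto the 12-note scale: the core idea of a chromagram.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def pitch_class(freq_hz):
    """Nearest MIDI note number for a frequency, folded to a pitch class."""
    midi = round(69 + 12 * np.log2(freq_hz / 440.0))  # 69 = A4 in MIDI
    return NOTES[int(midi) % 12]

# Synthesize one second of A4 (440 Hz) and recover its pitch via FFT.
sr = 8000                                # sample rate in Hz
t = np.arange(sr) / sr
signal = np.sin(2 * np.pi * 440 * t)
spectrum = np.abs(np.fft.rfft(signal))   # magnitude spectrum
freqs = np.fft.rfftfreq(len(signal), 1 / sr)
peak = freqs[np.argmax(spectrum)]        # dominant frequency

print(pitch_class(peak))  # prints "A"
```

A full chromagram does this for every frequency bin in every time slice, summing energy per pitch class, which is why it reads as "notes over time" instead of raw frequencies.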
Sonic Visualiser also has Similarity and Adaptive Spectrogram plugins, which I recommend for creating composites from a set or series of tracks; these provide, among other things, estimated readings of similarity and clustering (there's good documentation on both, but they often assess similarity based on timbre and chromatic calculations, studying distance between beat spectra and divergence from a sample mean). For learning how to read spectra, I'd recommend this resource, the Sonic Visualiser docs, and some of the friendlier musicology papers online. There's also this dude I found doing some cool stuff with covers, but I'm not sure how his research has progressed post-publication. And some Danish scientists have been studying what makes a song “catchy,” citing the “gapiness” of a song paired with a predictable beat that makes people want to tap their feet (intentional rhyme! reference here); I also mention detecting “emotion” from frequency distribution in my talk, and there's some research on this; you can find an example here. All of this informed my talk; good reads all around.
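As a rough, hand-rolled stand-in for the kind of distance the Similarity plugin estimates, you can compare two tracks' averaged chroma vectors with cosine similarity. The 12-element vectors below (one weight per pitch class, C through B) are invented for illustration; real ones would be averaged out of an actual chromagram.

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two chroma vectors (1.0 = identical profile)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Averaged chroma vectors, one weight per pitch class (C, C#, ..., B).
# All numbers are made up to illustrate the comparison.
source    = [0.9, 0.1, 0.0, 0.7, 0.0, 0.3, 0.0, 0.8, 0.0, 0.2, 0.0, 0.1]
cover     = [0.8, 0.2, 0.1, 0.6, 0.0, 0.4, 0.0, 0.7, 0.1, 0.2, 0.0, 0.1]
unrelated = [0.1, 0.8, 0.0, 0.2, 0.7, 0.0, 0.3, 0.0, 0.9, 0.0, 0.2, 0.0]

print(round(cosine_similarity(source, cover), 3))      # high: shared key profile
print(round(cosine_similarity(source, unrelated), 3))  # low: different profile
```

A cover that stays in the source's key lands close in chroma space even when the timbre is completely different, which is part of why chroma features show up in cover-detection research.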
There are people who do this for (or as part of) a living, and they should be applauded and read above recreational mathletes like myself; I can point you to some if you're curious, just ask. My objective was fun experimentation with a 20% project and an impulse interest. At the end of the day, acoustical quality is a complex cocktail of pitch (frequency-dependent) and harmonic factors; a spectrogram can help you read these, but it isn't a perfect method. The chromagram transform I applied reads the graphical distribution of the spectrogram, often depends on the gain and color scheme, and sometimes skews actual audio analysis. I sprinkled some different spectrogrammatical captures into this post, at various resolutions and color schemes. Beyond the math, I wanted episodes to be visually and semantically clear, coordinated by both automation and human curation, and spectrograms served that purpose for a small batch of my source<>cover pairs.
Some of the other imagery in my presentation pulls from Album Cover art (a nod to the “cover song” theme) and sleeveface meme-ory on the internet. I’m a big fan of memes. I cited all the images here, in case you’re curious.
Here are some reject slides that I didn’t talk about but totally informed my thought process.
- I love cover songs, like some people love the circus, karaoke, other people’s children. Sure, they’re loud, obnoxious…sticky? but I find it curious how polarizing they are conceptually…lots of people hate the idea of a cover, some people love them; I wanted to do something with this split dipole moment.
- I also love spectrograms, and while they are usually chromatically epic, sometimes they also have a steganographic character that is really beautiful and delightful, definitely the appropriate material for a talk on stereo semantics, there just wasn’t enough time :(.
- We have a pretty copy-cozy cover practice as humans these days; most things are shared via the concatenated re-creativity of other people…as is evidenced by this Pinterest post, which cycled through several translations and users until it got to this girl's tattoo board.
- So covering and copying as a way of re:creatively processing information is ancient of days (see: Cicero), but what's more interesting, perhaps, is how ideas travel through several tunnels of interpretation before coming to us. For example, I didn't translate this quote from the Latin; it was published in The Swerve: How the World Became Modern (2012), but I didn't get it there either: it was captioned on Kenyatta Cheese's Final Boss Forum blog, and then sent to me by a friend with a note about the longer passage referencing librarians as ancient Roman copy-slaves. I included it in my talk out of that elaborate context, but the re-creative trajectory of its arrival in my slide deck is still pretty interesting, and common for creative assets these days.
My talk was about different ways to create visual semantics for radio, to tease out connections between musical nodes and similar-but-distinct readings of the same music. Stereo Semantics was a show about cover songs and the connections between their nearest neighbors. Covers have this elegant, rich, and complicated character that's kind of lost until you visualize it, as with so many things.
At this point, open source communities have obliterated the shame of the “cover band.” We're all about open information, sharing, and copy-coding each other's brilliance to build more brilliant things. When you listen to a cover, you're listening to enthusiasm, appreciation, and iteration; all qualities of our most prized software projects and programs. Iterating on an idea, versioning it to perfection, is admirable, not obnoxious. Plus, a track that's often covered is a track that's well-loved, one that's well-learned. Who doesn't love learning? #librarians *this girl*.
We're all a class of cover artists, making meaningful things from other people's meaningful things, building on libraries, viewing “source” on a webpage the way artists view a source and adapt a cover song. Plus, we're pretty fascinated as a species by patterns, by distribution and differential readings over time, so spectrogrammar and graphing seem to suit. Spectrograms are used in heart rate monitors and in seismographs; we use them to track our heartbeats and to probe the pulse of our planet, so why not use them to prioritize our playlists, and pull out the shape, proximity, and phonaesthetics of our favorite sonic clones? Repetition and redundancy are exceptionally common in human history; musicology is not exempt from this, and neither is software development. Finding ways to express that concept, and de-stigmatize it through some simple synaesthesia, is kinda cool. -Grams and -graphs allow us to process visually what we otherwise appreciate aurally, and to develop the visual sensitivities to look at something audible and see why it's beautiful.