The Mouth & Razor FX Racks

If you’ve downloaded my “Vocoder for Push” racks, then you should understand the concept behind these racks: they’re performance-oriented racked instruments that hack the External Instrument and External Audio Effect devices so that the audio routings can be pre-mapped. By default audio input 1 is used as the mic source, but you can change this if you want.

In this Live set there are two Instrument Racks: one for Errorsmith’s “Razor”, and the other for Tim Exile’s “The Mouth”. Both devices require a license and the Reaktor 5 Player, available via nativeinstruments.com.

This is my (nearly) 3-year-old daughter singing through The Mouth – “Autoharmonious” Rack.

Download the LivePack file that has these Instrument/FX racks HERE.

If you like these Ableton racks, please consider donating a dollar!

Composition & Sound design for Breath of Light

Over the last few months, I’ve been composing and creating sound for a really beautiful meditative puzzle game for iOS called Breath of Light.

Made by Melbourne game development company Many Monkeys, Breath of Light is the sort of puzzle game that takes time to master; there is no rushing the process. It’s a slow-paced game that required immersive, meditative music.

The interface is a bit like a Zen garden, where the player must arrange and move objects within the space to allow the flow of energy from one lotus flower to another. The game is set over four seasons, Summer, Autumn, Winter and Spring, each progressively more challenging as the levels evolve.

Each season has its own musical underscore created from a bank of loops designed both to work in any combination and to evolve and develop over time with the gameplay.

It was decided very early on that User Interface (U.I.) sounds should add to the musical score. The way we made this work was to have a series of randomised tones associated with every object and movement within the game. All these sounds were composed to match the tonality of each season’s underscore, which allows every gesture within the gameplay to contribute to an interactive soundtrack.
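The idea above can be sketched in a few lines of Python: every gesture draws a random tone from a pool matched to the current season's tonality, so whatever the player touches stays in key. The tone pools below are invented for the example, not the game's actual note sets.

```python
import random

# Each season's U.I. tones share the tonality of that season's underscore.
# These pools are made up for illustration; the game's real note sets differ.
SEASON_TONES = {
    "Summer": ["C4", "E4", "G4", "B4"],
    "Autumn": ["A3", "C4", "E4", "G4"],
}

def ui_tone(season, rng=None):
    """Pick a random tone for a gesture, always in key with the underscore."""
    rng = rng or random.Random()
    return rng.choice(SEASON_TONES[season])
```

Because every pool is drawn from the underscore's scale, any combination of simultaneous gestures still blends with the music.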

During the development of the soundtrack, I used Ableton Live to create a performable set to play these sounds, both to demo the musical ideas to the guys at Many Monkeys and to test how the U.I. sounds blended with each other. I did this by creating Drum Racks that housed the U.I. sounds, and used scenes to work out the different combinations of underscore loops for each level in the game.

Screen Shot 2015-03-25 at 10.49.30 pm

Once the nuts and bolts of the game sound were finalised, I set about developing a performable live set to record each ‘season’ as a musical composition in its own right. This was done with a little extra help from the wonderful Max for Live Dub Machines audio FX ‘Magnetic’ and ‘Diffuse’, which added extra flow and movement.

Below are the recorded seasons of the Breath of Light soundtrack, available for free download via Bandcamp.

QUADwrangle at Arts Centre Melbourne

The QUADwrangle interactive sound sculpture project at Arts Centre Melbourne is all done and dusted (for now!)

Arts Centre Melbourne’s youth programmer Dan West arranged for this video to be made, which sums up the project’s aims and intentions pretty well.

During the second weekend session we battled a bit to get our awesome sculptural object wired up to the two MaKeyMaKey Arduino boards. But once it was finally working, and in the Fairfax Theatre foyer space, it sprang to life and happily interacted with the general public for five days straight!

There were four stations where people could play the sculpture, each with a sound set that corresponded to the object’s design. The fence section played bass notes, the brick section played harmony, the tiles played melody, and the diamond wall section played rhythms.

IMG_4266

Sound came out of four speakers that were set up in various places within the sculpture: two up high, two down low. The bass sounds and big kick sounds all came out of the low-positioned speakers, and the more melodic and ‘toppy’ rhythmic sounds came out of the high-positioned speakers.

Central to the sculpture making sound were two MaKeyMaKey boards that sent QWERTY key messages to Ableton Live 9. The MaKeyMaKey boards are triggered by making a connection between each board’s earth, which was wired to copper plates on the sculpture, and various other points that were wired up with black conductive paint.

IMG_4282

Have a look at Flick, Ayten and Kelsey demoing it during the construction phase.


Each QWERTY key command was assigned to various ‘gestures’ within Ableton Live 9. I say gestures, because each key command triggered a number of different actions simultaneously, such as launching an audio or MIDI clip at the same time as triggering FX chains and movement between the four speakers.

One example of how this triggering of gestures worked is the letter “R” (see screen grab below). When that point on the sculpture was touched, it sent a capital R, which in turn triggered the key command gestures assigned in Live 9.

Screen Shot 2014-08-29 at 11.44.03 am

“R” was set up to trigger a simple melodic riff, but if held down for long enough it also triggered a ‘resonation FX chain’ that would gradually fade in, making the whole object become awash with a harmonically rich reverb sound. This was accomplished by triggering “dummy clips” on the output channels, which hold no audio information, only automation data. These clips were set to Gate launch mode, so they would only play while held down.
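As a rough sketch of the idea (plain Python standing in for the key handling, nothing from Live's actual API), a single key press fans out into several simultaneous actions, and a long enough hold adds the extra FX gesture. All names here are illustrative:

```python
HOLD_THRESHOLD = 1.0  # seconds held before the resonation FX chain fades in

events = []  # stands in for messages sent to Live

def trigger_clip(name):
    events.append(("clip", name))

def trigger_fx(name):
    events.append(("fx", name))

# each key maps to immediate actions plus an optional held-down action
GESTURES = {
    "R": {
        "on_press": [lambda: trigger_clip("melodic riff")],
        "on_hold": lambda: trigger_fx("resonation chain"),
    },
}

def handle_key(key, press_time, release_time):
    gesture = GESTURES.get(key)
    if gesture is None:
        return
    for action in gesture["on_press"]:
        action()  # everything in on_press fires simultaneously
    if release_time - press_time >= HOLD_THRESHOLD:
        gesture["on_hold"]()  # the held-down extra, e.g. the reverb wash
```

A quick tap fires only the riff; holding the pad past the threshold also brings in the FX chain, matching the Gate-mode dummy-clip behaviour.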

Screen Shot 2014-08-29 at 11.35.18 am

These channels received audio from each of the four sound sources through their respective sends. In this way, at any time we could route any of the different sounds within the sculpture to any or all of the four speakers in any combination.

Below is the Fence Bass being routed to sends B & D, which in turn were being sent to outs 2 and 4, the speakers located near the ground.
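Conceptually this routing behaves like a small mixing matrix. Here is an illustrative Python model (the names and numbers are mine, not taken from the actual set) where each source's four send levels decide which of the four speakers it reaches:

```python
def speaker_mix(sources, send_levels):
    """sources: {name: sample value}; send_levels: {name: [A, B, C, D]},
    one 0.0-1.0 send amount per speaker output. Illustrative only."""
    return [
        sum(sources[name] * send_levels[name][spk] for name in sources)
        for spk in range(4)
    ]

# e.g. the Fence Bass sent only to B & D reaches just outputs 2 and 4
mix = speaker_mix(
    {"fence_bass": 1.0, "tiles": 0.5},
    {"fence_bass": [0.0, 1.0, 0.0, 1.0], "tiles": [1.0, 0.0, 1.0, 0.0]},
)
```

Setting any combination of send levels above zero routes that source to any combination of speakers, which is exactly the flexibility described above.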

Screen Shot 2014-08-29 at 11.37.56 am

We also had a series of field recordings that acted as an underscore for the sculpture, so that the object wasn’t sitting there silently when people were not interacting with it.

This underscore slowly moved around each of the four speakers through send level automation, once again created within a dummy clip.

Screen Shot 2014-08-29 at 12.08.05 pm

This project was really super fun, and it was great to work with the wonderful staff at Arts Centre Melbourne, and the talented and creative young people who recorded the sounds, designed the sculpture and wired it all up to work!

Check out their various soundcloud and web links:

https://soundcloud.com/felicity-yang

https://soundcloud.com/ayten

https://soundcloud.com/jordanoba

https://soundcloud.com/donald-uren

https://soundcloud.com/niconiquo

https://soundcloud.com/connor-black-harry

http://www.khendersondesign.com/

New album… track selection!

I recently talked at one of the Ableton Live User Group meet-ups in Melbourne about how I’ve started the process of creating a new album. I plan to create a series of blog posts about this process as the album evolves, so this first part is all about the initial stage: how and why I choose certain tracks.

Being a long-time devotee of the Beatles, I often have grandiose visions of creating a seamless album like the B side of Abbey Road, where each track flows into the next in an effortless and joyous montage. This is something I first started to seriously try to do with my third album, which I released in 2011.

I’ve been thinking about putting together another release for some time. I originally wanted to do a short EP as a bit of a stopgap, partly because I didn’t think I had enough completed songs, and partly because I’m not sure I really want to go through the challenging and time-consuming process of putting together a full album, promo campaign, and touring live show to support the album.

But I figured I should do some folder housekeeping and try to categorize all the half-finished tracks, or even just loop ideas, that I’ve been playing with and then saving and forgetting.

So, I opened every live set that I had created over the last two and a half years, and attempted to categorize them into newly created folders named:

Not much here, Cinema, Darkwave, Offhop, TV Sync, Weird doof, Remix, Atmospheric twinkles, The ultimate 80’s soundtrack & WP work more

Screen Shot 2014-03-22 at 12.53.00 pm

Once I did this, I realized I had over 50 tracks that I could consider potential Winterpark tracks.

In terms of how I define what a Winterpark track is, and how that is different from one of those other categories… that’s a somewhat difficult thing for me to explain! I probably know more so what is NOT a Winterpark track. They’re not overly dancey, not overly dark in nature.

But to try to find the elements they all seem to share: I guess the common thread for WP tracks is that they’ll generally have some processed guitar on them, they’re melodic, hopefully uplifting, and sonically cinematic.

So, with over 50 unfinished projects, I had to make some further decisions and categorizations, so I created folders called: Start, Middle, End, Interlude & Cannibalize these.

Screen Shot 2014-03-22 at 12.53.14 pm

These were based purely on where I felt each track could potentially go in an album sequence: whether it had enough substance to ultimately be a finished track, whether it felt like something that could open an album, or whether it could close one. If it felt like it was never going to be finished, I’d decide whether it had something in it worth cannibalizing, or perhaps create a short sonic interlude from the various elements that worked.

Due to my current hectic life (work & young bub), only some of these tracks had an arrangement that had been half worked on; most were simply Session View clips. For these songs I’d create a down-and-dirty ‘vibe’ arrangement by arming an audio track to resample onto, triggering clips or scenes as the case may be, and doing some on-the-fly volume mixing until I got something that sort of worked. I would do only one or two re-samplings like this per track, and chucked them in a new folder called “order mixes”.

I also bounced out any arrangements that I’d created. I didn’t really want to concentrate on ‘mixing’ or anything just yet; I just wanted the sound of each track, and hopefully a basic arrangement that showed off its different elements, so I could see if the tracks worked together and flowed as an album.

It’s sort of like sequencing the album before it’s finished. It informs me as to what tracks I need to work on and which ones I don’t, and also helps me decide what sort of intro or outro is needed for each track to transition seamlessly (hopefully!) to the next.

album 4 track order

The video below is the first part of my presentation at the Ableton Live User Group session, which is basically just me talking through what I’ve written about here.

Insights into Vocal editing

I recently talked to a bunch of producers about vocal editing techniques at one of the Enable Music School sessions, and I thought I’d post a little something that I shared with them, and also expand a little more on some of the techniques I use.

As an example I’m using the final track on my Sunday Morning album, called Gotta Sleep Now.

It’s a pretty sparse track, with lots of atmospheric guitar loops, some synth bass, and features the wonderful vocals of Susannah Legge.

I’ve included a Live Set version of this as an instrumental track, with the original vocals, and with the vocals edited and processed, so you can listen to and see the techniques I’m writing about. Please note that this live set, and the audio files within it, are for your education only. If you get all inspired to take those files and make a remix, that is totally fine, just let me know! But I retain all rights to this music.

Screen Shot 2014-02-22 at 12.18.14 pm

Download the set here.

Screen Shot 2014-02-22 at 12.29.13 pm

Screen Shot 2014-02-22 at 12.29.30 pm

This is the vocal processing channel strip I have created. Whilst it may look a little bit overkill, it’s actually a pretty standard vocal strip, with low cut, noise gate, glue compressor, saturation, EQ, and some parallel FX with reverb and delay.

I’ve also created some Macro settings to dial in more or less of certain parts of the vocal sound.

Screen Shot 2014-02-22 at 12.19.46 pm

I love breathy vocals, and I never want to fully get rid of the character they bring to a track, but once you’ve added some vocal FX, like compression and saturation, you’ll really start to notice that the breaths and esses in your vocal track sometimes get too loud.

The technique I use to deal with this is actually a pretty easy and quick workflow in Ableton Live. First go “off grid”, that is, press CMD+4 to lose the snap-to-bar/beat feature, then highlight the region with the breath or ‘ess’ sound and press CMD+E. This will split that region of the vocal performance into a new clip.

Go into that newly created clip and, in the Sample section, lower the volume by between 4 and 10 dB, depending on how loud the breath or ess is, and how much you want to get rid of it.

Once you have reduced the volume of that clip, open up the fades option and cross-fade between the clips; this will give a smoother transition between the two different volumes.
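To show why the crossfade matters, here's a rough numerical sketch of the same move outside Live: drop a breath region by a fixed dB amount, ramping the gain in and out over a short fade so the level change doesn't click. The function and numbers are illustrative, not anything Live exposes.

```python
def db_to_gain(db):
    """Convert a dB change to a linear gain multiplier."""
    return 10 ** (db / 20.0)

def duck_region(samples, start, end, reduction_db=-6.0, fade_len=32):
    """Lower samples[start:end] by reduction_db, crossfading at the edges."""
    out = list(samples)
    gain = db_to_gain(reduction_db)
    for i in range(start, end):
        if i < start + fade_len:
            # fade from unity down to the reduced gain
            t = (i - start) / fade_len
            g = 1.0 + (gain - 1.0) * t
        elif i >= end - fade_len:
            # fade from the reduced gain back up to unity
            t = (end - i) / fade_len
            g = 1.0 + (gain - 1.0) * t
        else:
            g = gain
        out[i] = out[i] * g
    return out
```

Without the two fade ramps, the instant jump between unity gain and the reduced gain would be audible as a tick at each clip boundary, which is exactly what the crossfade in Live smooths over.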

Screen Shot 2014-02-22 at 12.20.36 pm

To add a little extra zest and character to a vocal, I also like to automate the volume levels of a performance using a Utility plugin after the vocal FX chain. In particular, I like to grab the ends of words and bring them up by just a few dB. I find this adds to the intimacy of the vocal performance, and it feels like the singer is right in your ear.

By automating the gain of a Utility, as opposed to the channel fader, you have the ability to keep the fader adjustable later in the mix, whilst keeping your vocal volume automation intact.

I hope you find this useful!

Ableton Live MIDI FX

Here are a couple of MIDI FX racks that I created for the Enable Music Theory for Electronic Musicians session I presented at in December.

The idea is to have a series of MIDI FX racks which allow you to use single keystrokes or button pushes to create lush, evolving chords.

With *Epic Chord Rack* you can switch between major and minor, and increase the harmonic overtones of either the harmony or tonic notes in the chord, as well as use a control to tweak the maximum velocity output, which can come in handy if you start pushing the velocity messages up too high.

ecr

*Note Remover* uses a trick of three Velocity plugins in a row, with various settings, to randomly remove some of the notes that the Chord plugin spits out. The Macros give you the option of having more or fewer notes in the chord, as well as an output velocity level.
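Here's a toy Python sketch of the underlying idea (the rack itself does this with chained Velocity devices set to random drop, not with code): each note from the chord survives with some probability, and at least one note is always kept so the gesture never goes silent.

```python
import random

def remove_notes(chord_notes, keep_probability=0.6, rng=None):
    """Randomly thin out a chord; always keep at least one note."""
    rng = rng or random.Random()
    kept = [n for n in chord_notes if rng.random() < keep_probability]
    return kept if kept else [chord_notes[0]]
```

Raising `keep_probability` gives you fuller chords, lowering it gives sparser ones, which is roughly what the "more or fewer notes" Macro does.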

nr

*Re-Voice* basically ensures that all notes will always sit within about a two-octave range, no matter what the incoming note message is. You can assign the octave range via the Macro.
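The logic amounts to folding any incoming note into a fixed window by shifting it in whole octaves. A minimal Python sketch (illustrative, not the rack's actual implementation; MIDI note 48 = C3 is an assumed default):

```python
def revoice(note, low=48, span=24):
    """Fold a MIDI note into [low, low + span) by octave shifts."""
    high = low + span
    while note < low:
        note += 12  # too low: shift up an octave
    while note >= high:
        note -= 12  # too high: shift down an octave
    return note
```

Changing `low` is the equivalent of the octave-range Macro: the same melody always lands within the chosen two-octave window regardless of which octave it was played in.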

rv

*Racked Scale* is a couple of Scale plugins conveniently grouped together and macro-mapped so you can switch between major and minor scale shapes, transpose your scale anywhere within a two-octave range, and change into any of the diatonic modes within a major or minor scale via the Base Macro.

rs

GRAB THE LIVE SET HERE

If you have any further questions, feel free to write a reply and I’ll do my best to answer! And if you find this useful, and want to support the creation of these FX, please consider donating a dollar!

cheers, Matt

Visual Feedback for Looper on Push

This is a post for the Ableton Live nerds out there…

I’ve discovered a way to provide dedicated visual feedback functionality for the looper device using the Push User page.

First you need to set up the preferences so that Live is sending note information to Push’s user port, and that Push’s user port is set up to send remote messages. This way Push can remote-control Looper, and it also allows for visual feedback in User Mode.

Instead of just a regular audio input, I like to group the External Instrument rack within a Drum Rack, so I can get visual feedback as to where I am in a 4-bar phrase in native Push mode.

The added benefit of using the External Instrument is that you are able to use a MIDI clip to send note and velocity information to the Push user port on channel 1, which then lights up the pads in User Mode.

The ascending notes in the MIDI clip send messages to light up Push’s top 4 rows, just the same as the Drum Rack does, with one row representing one bar.
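As a sketch of the note math involved (assuming the commonly documented user-mode layout, where the 8×8 pad grid spans MIDI notes 36 to 99 counting upwards from the bottom-left pad), the notes a clip would send to light one row per bar could be computed like this:

```python
def row_notes(bar_index):
    """Pad note numbers for one of the top four rows (bar 0 = top row).
    Assumes user-mode pads 36-99, bottom-left upwards, 8 pads per row."""
    top_row_start = 92  # note number of the top-left pad in this layout
    start = top_row_start - 8 * bar_index
    return list(range(start, start + 8))
```

Placing those eight notes per bar in the MIDI clip lights one full row at a time, mirroring the Drum Rack's native phrase display.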

I’ve also created a set of 8 buttons in the bottom left-hand corner that each correspond to a function I want to map to the Looper device: overdub, play, stop, undo, clear, etc.

You can download the Live Set I’m using in the video here.

Screen Shot 2013-10-12 at 3.40.54 PM