Week 12 - Listening and Troubleshooting
For this week's exercise, I will use my track "Bird at My Window" for the subject matter.
OneDrive Link: https://melbournepolytechnic-my.sharepoint.com/:u:/g/personal/s1541884_student_mp_edu_au/EaQbl6gHXVJDi1TaWKnlNA8BBekBL_0N7V69Uyqxl11HaA?e=Fupbps
Listening back to the track, I can hear the vocals are still out of time in different places, and I think they need to be redone. I could even try singing it a little lower by transposing everything down a couple of steps. I also need to EQ some of the elements of the mix to make more room for the vocals, as they sound like they get masked in some sections. The snare in the chorus in particular is pushing the vocal down in an unwanted way. I would like more vocal layers as well.
I'll definitely have to go over all of the tracks looking for any pops, crackles, hiss, or other unwanted noises that may appear on any of the VSTs. I also need to look at the samples I have used on this track, check they don't have too much processing on them, and try to make them blend into the mix/space more realistically.
Another thing I just learnt about was latency, and how different parts of a mix can be out of time, sometimes by just a couple of milliseconds and other times by more. This has made me more aware of how much all of the elements need to be in perfect alignment and pitch; otherwise, anyone with a good ear can tell straight away that something is out. The vocals especially need to be spot on in pitch and timing.
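Out of curiosity I sketched how you could measure that kind of offset in code. This is just a rough idea using numpy/scipy cross-correlation, not something from the class; the file names are made up, and in practice I would still nudge the clips by ear in Ableton.

```python
# Rough sketch: estimating the timing offset between two takes with
# cross-correlation. File names are hypothetical examples.
import numpy as np
import soundfile as sf
from scipy.signal import correlate

ref, sr = sf.read("vocal_reference.wav")   # hypothetical file names
late, _ = sf.read("vocal_double.wav")

# Work in mono for the measurement
if ref.ndim > 1:
    ref = ref.mean(axis=1)
if late.ndim > 1:
    late = late.mean(axis=1)

# Cross-correlate and find the lag with the strongest match
corr = correlate(late, ref, mode="full")
lag = np.argmax(corr) - (len(ref) - 1)

print(f"Offset: {lag} samples ({1000 * lag / sr:.2f} ms)")
# A positive lag means the double lands after the reference;
# nudging the clip earlier by that many samples lines them up.
```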
Overall I am still very happy with this track. It has moved on from the demo stage, is getting closer to release quality, and will be the first single from my DIY release at the end of the 2nd semester this year.
END WEEK 12 - LISTENING AND TROUBLESHOOTING
Week 11 - Mixing Advanced Techniques
For this week's exercise, I chose a track I wrote in last year's Music Production class called Planet Star, which uses heaps of synth. It actually came from a session at MESS, and I ended up tracking drums and guitar for the final version; I'm very happy with the way it came out.
For the exercise, I wanted to use the basic compressor that comes with Ableton to take some of the edge off of the cymbals and hi-hat in the stereo mix. Once the compressor was on the stereo track, I set a fast attack and fast release, then opened the additional settings tab to reveal the EQ settings. In there I set the frequency to be affected to 12.9 kHz with -7 dB of gain reduction, and could see a tiny amount of yellow line skimming across the top of the settings window. At this stage I could also hear the small effect the compression was having on reducing the amount of top end in the cymbals and hi-hat.
[Screenshot: the compressor's EQ settings in Ableton]
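To get the concept straight in my head, here is a rough sketch of the idea behind a compressor whose detector only listens to the top end. It is not Ableton's actual algorithm, and the threshold, ratio, and 10 kHz corner are illustrative numbers I made up, but it shows why the gain reduction ends up following the cymbals and hi-hat.

```python
# Sketch of a compressor with an EQ'd sidechain: the detector only "hears"
# the highs, so gain reduction mostly follows the cymbals/hi-hat.
# NOT Ableton's algorithm; all settings here are illustrative assumptions.
import numpy as np
import soundfile as sf
from scipy.signal import butter, sosfilt

audio, sr = sf.read("stereo_mix.wav")          # hypothetical file
mono = audio.mean(axis=1) if audio.ndim > 1 else audio

# Sidechain: keep only the highs so the detector reacts to cymbals
sos = butter(4, 10_000, btype="highpass", fs=sr, output="sos")
sidechain = sosfilt(sos, mono)

# Simple peak envelope with fast attack / fast release
attack = np.exp(-1.0 / (0.001 * sr))    # ~1 ms
release = np.exp(-1.0 / (0.050 * sr))   # ~50 ms
env = np.zeros_like(sidechain)
level = 0.0
for i, x in enumerate(np.abs(sidechain)):
    coeff = attack if x > level else release
    level = coeff * level + (1 - coeff) * x
    env[i] = level

# Gain computer: 4:1 above a -30 dBFS threshold (made-up values)
thresh_db, ratio = -30.0, 4.0
env_db = 20 * np.log10(np.maximum(env, 1e-9))
over = np.maximum(env_db - thresh_db, 0.0)
gain_db = -over * (1 - 1 / ratio)
gain = 10 ** (gain_db / 20)

# Apply the gain curve to the full (unfiltered) signal
processed = audio * gain[:, None] if audio.ndim > 1 else audio * gain
sf.write("stereo_mix_hf_comp.wav", processed, sr)
```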
I added the Vinyl plugin to the track and used the warp setting, which gave a really nice effect to the synths in the track; it also shaved a whole heap of top end off the cymbals and gave the low end some nice analogue warmth.
[Screenshot: the Vinyl plugin settings]
END WEEK 11 - MIXING ADVANCED TECHNIQUES
Week 10 - Mastering Tools
For this exercise, I used my track "Autumn Leaf" and "Heart of Gold" by Neil Young as the reference track.
The first thing that stood out to me was the style of production, which dates back to the early 70s, so I then found a 2009 remaster of the track. In both mixes there are hardly any hi-hats or cymbals, just a kick and snare for most of the track, and the other instruments are panned either hard left or hard right. The other instruments, like the guitars and harmonica, have a lot of reduction in the high end to take away any harshness and smooth everything out.
For my mix, I didn't want to attempt panning any of my tracks hard left or right as that method isn't used much these days, but I did want to emulate the drum sound from the reference track. Neil Young's vocals on this track were very dry as well, and I had already added effects to my vocals and was happy with them.
My mix had a lot more high end and felt like everything was playing at the same volume throughout. The reference track felt like there was more distinction between the different sections of the song, and the different instruments would intentionally appear and then disappear. Each element also sounded sonically separate while happening at the same time, as if you could hear every instrument individually, whereas in my mix everything appeared in the same place at the same time, with no separation.
I think this might come down to the methods used to record each instrument, maybe the mic choices, and the way they have been EQ'd. It is such a great recording and has definitely stood the test of time. There is also a hi-hat section, but it sounds like it was recorded separately. Definitely a lot of things I can work towards for recording future acoustic tracks.
Note to self: how do you create separation in a mix?
Heart of Gold short-term LUFS: -10.6 (not the remastered track)
Autumn Leaf short-term LUFS: -9.5
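If I wanted to double-check those readings outside the DAW, something like the pyloudnorm library could do it. The short-term figure below is my own rough approximation (integrated loudness over sliding 3-second windows), and the file name is just a placeholder.

```python
# Rough LUFS check with pyloudnorm (ITU-R BS.1770 meter).
# The "short-term" value is an approximation via 3-second windows.
import soundfile as sf
import pyloudnorm as pyln

data, sr = sf.read("autumn_leaf_master.wav")   # hypothetical file name
meter = pyln.Meter(sr)

print("Integrated LUFS:", meter.integrated_loudness(data))

# Approximate max short-term: integrated loudness over each 3 s window
window, hop = 3 * sr, sr
short_term = [
    meter.integrated_loudness(data[start:start + window])
    for start in range(0, len(data) - window, hop)
]
print("Max short-term (approx):", max(short_term))
```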
END JOURNAL WEEK 10 - MASTERING TOOLS
Week 9 - Re-Amping
For this week's exercise, I re-amped the vocals of my track "Ghost in the Darkness" through a really small Peavey amp, and I was really happy with the results.
I started by soloing the vocal tracks in the session and feeding a mono output from the Scarlett 2i2 into my small Peavey guitar amp with a little bit of distortion over the feed. I then placed a RODE NT1 5th Gen condenser mic directly in front of the amp. I had to keep the level on the amp right down, otherwise it would feed back into the mic. I then recorded all the vocal tracks for the arrangement in Ableton.
Once I synced the track with the other vocal takes, I was impressed with how much better the vocals sounded. The new re-amped track had a natural room reverb, which made it blend into the track seamlessly, and the distortion added to the signal made it pop just that little bit more in the mix. It also made the vocal sound a little more natural.
I am very happy with the results and will investigate re-amping further, and how, done correctly, it can be used to improve just about every element of a mix.
The clip above is the chorus of the track, with automation bringing the re-amped vocal track in and out; you may need to use headphones to hear the edits.
END JOURNAL WEEK 9 - RE-AMPING
Week 8 - Referencing Emotional Effects in Recordings
Pick a mood or feeling and note what methods you might use to evoke this feeling?
Shock, awe, excitement.
The sounds I am thinking of for these feelings would be like a runaway train smashing into the end of the line, breaking up into millions of pieces strewn all over the scene, with flames bursting out and explosions echoing into the silent night.
Sounds that could be used to represent that situation would be thunderous drums, heavily distorted guitars, shrieking vocals, metal scraping on metal, and industrial factory sounds with lots of reverb. Also lots of low-end thump and high-end shrill, maybe even a guitar uncontrollably feeding back with that high-pitched shrill that makes everybody cover their ears.
Pick a recording by another artist. Start by describing the mood/feeling/atmosphere(s) it creates and then list how the methods, elements of sound and concrete production decisions support and evoke that mood?
Bjork - Big Time Sensuality
Fun, happy, energetic, toe-tapping, organised, sonically pleasing, movement, playful, carefree, modern.
The tempo and beat (drum production) make this track feel happy and energetic, and are what make you feel like dancing. The lyrical delivery makes me feel happy and playful, broken up in the way only Bjork knows how, sung in a happier mode. The samples used for this track also have a very modern texture, one that has stayed current for about the last 30 years. There are also acoustic sounds added, like the clarinet and the organ (maybe a Hammond), for some acoustic grounding.
END WEEK 8 - REFERENCING EMOTIONAL EFFECTS IN RECORDINGS
Week 7 - Comparing Recorded and Software Sounds
Analyse how real-world and software instruments are used in a chosen recording?
I chose a live performance of Gotye's "Somebody That I Used To Know".
Note which sounds you think are real-world and which are software? The 2-chord guitar riff is a live sampled loop that holds the different sections of the song together. There is a muffled kick that sounds like a real mic'd kick drum sampled into the track with low-pass EQ and compression. There is a live-performed bass line. There is a glass-sounding bell/xylophone instrument that has also been sampled for the live performance of the track with little processing. Vocals are all live and real-time. A synth sound appears after the 1st verse, along with drums that sound live recorded and sampled into the Push. The chorus then brings all elements together, including drums, synth lines, bass, and glass xylophone, while dropping out the guitar sample/loop. The 2nd verse introduces a Hammond organ sound and additional live-tracked drums. The 2nd chorus adds vocal and harmony tracks and all elements again.
Note whether real world or software sounds form the main basis/mood of the piece? The mood of the song rests on real-world sounds that are sampled or looped, which creates an interesting effect. Because we are familiar with the real-world instruments, we associate their natural sounds, but looping them creates a space that is real yet synthetic.
Describe the sounds that don't form the basis of the piece. The main short guitar loop has been EQ'd and compressed to sound like a normal guitar with very few effects, maybe just a tiny bit of gain. The glass bells sound very real, with very little processing, and add to the acoustic sounds of the piece. The synth sounds in the pre-chorus and chorus sound like VST instruments without much processing on them.
This track has a very acoustic sound overall that has been created with loops and samples of real-world instruments, arranged to create a hybrid sound that blends both worlds together in a way that is hidden from the listener.
END WEEK 7 COMPARING RECORDED AND SOFTWARE SOUNDS
Week 6 - Collaboration Roles
Talk to your collaborator for the assessment and note.
What are some production influences you have in common and some that differ? Jay: We both like Metal and have extensive knowledge of back catalogues of metal acts. Andy: Similar taste in production regarding the metal/rock side and some of the classics era. As far as producers go, Adrian Younge is a big influence of mine.
What do you each do that is unique compared to the other one? Jay: I play drums, guitar, and keyboard, can write lyrics and sing, and also have a little experience mixing. Andy: Outside of instrumental ability, I think I have a knack for creating ‘moments’ in songs or ear candy, pursuing potential ideas and seeing what they turn into.
What style of collaboration workflow(s) will you use and why (e.g. live recording, layer by layer, and/or remote)? Jay: I think it would be a mixture of techniques, so tracking guitars and bass at home, while tracking drums and vocals live on campus, and then sending tracks back and forth to work remotely and also for mixing. Andy: Agreed, a mixture of techniques. We may record parts of it live, work remotely and in class on mixes, and likely build layer by layer for some aspects of the song, i.e. guitar chords > add a lead > add bass, etc.
How will you divide and collaborate on the different stages/tasks? Jay: I will definitely track drums, and some guitars, and maybe sing, while Andy may track guitars and bass. I think we have talked about Andy doing the mix (drums), and then I can have a look at mixing the guitars or vocals (not set in stone). Andy: Agree with the above. Keen to be present during the recording of the drums but we are certainly able to track other stuff individually and on campus together. Maybe with the final mixing stage we agree on a role until a certain point then go away and work our own ‘final mixes’ and share them until we come to a consensus.
Who will be the ‘keeper of the DAW session’ and how will the non-keeper have input? Jay: We haven't talked about that yet, but probably whoever has the fastest internet; I don’t mind doing it, or we could both have different versions on our own computers. Andy: This will be a tricky one. I’d like it if we can work in Logic personally. Perhaps one holds the ‘main’ session until we get the actual composition finished, then go from there. We’ll be flexible, and seeing as we’re in several classes together, we can likely make time to work together in real life rather than solely remotely.
END WEEK 6 - COLLABORATION ROLES
Week 5 - Coaching a Great Vocal Performance
Which mic is your favourite on your voice and why? I quite liked the SM7B; it felt like it smoothed out my voice without destroying the natural overtones, and it also had a more focused and controlled low end. As my voice has a lower register, I think this would be the best for my voice. I am also working on pop music, and for pop, smoother is better.
I did like some of the detail in the mids on the ribbon mic, they might help my vocals pop out more in the mix, but I wouldn't know for sure until I heard it in the mix with post. I also liked the pencil mic, it sounded very clear, almost a little harsh, while the Large Diaphragm Condenser sounded truest to me.
I could also hear a lot of room on the ribbon mic and wondered how it would sound in a controlled space. At home I use a Rode NT1 Condenser, it would be interesting to use that in comparison with the mics in this exercise, but I am guessing it would be similar to the condensers we used for the activity, meaning they would be quite smooth, clear and natural sounding.
END WEEK 5 - COACHING A GREAT VOCAL PERFORMANCE
Week 4 - Refining Aural Perception
The track I used for the aural EQ exercise was something I wrote in the mid-year break last year, called Big Breakfast. It's a dancey sort of electropop track.
100Hz Boost: Muddy, warmth, boxy, woody, clunky, bassy, rubbery.
1kHz Boost: Boxy, telephone, closer, louder, whistling sound, more detail, wooden.
10kHz Boost: Glossy, harsh, hissy, shiny, shimmering, clicky, clearer, hollow.
100Hz Cut: Makes the kick and snare easier to hear, clearer, less bassy, not as boxy.
1kHz Cut: Flat, more bassy, less detail, further away.
10kHz Cut: Dull, bass-heavy, less detail, more mids.
(A small code sketch of how these boosts and cuts could be recreated is below.)
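Here is that sketch: the same boosts and cuts done with a standard peaking-EQ biquad (RBJ audio EQ cookbook style). The +/-9 dB amount, the Q of 1.0, and the file name are assumptions on my part, not the exact settings I used in Ableton.

```python
# Peaking EQ biquad (RBJ cookbook) applied at 100 Hz / 1 kHz / 10 kHz.
# Gain, Q, and file names are illustrative assumptions.
import numpy as np
import soundfile as sf
from scipy.signal import lfilter

def peaking_eq(fs, f0, gain_db, q=1.0):
    """Return (b, a) coefficients for a peaking EQ biquad."""
    a_lin = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a_lin, -2 * np.cos(w0), 1 - alpha * a_lin])
    a = np.array([1 + alpha / a_lin, -2 * np.cos(w0), 1 - alpha / a_lin])
    return b / a[0], a / a[0]

audio, sr = sf.read("big_breakfast.wav")   # hypothetical file name
b, a = peaking_eq(sr, 100, +9.0)           # 100 Hz boost; try 1000/10000, or -9 for cuts
boosted = lfilter(b, a, audio, axis=0)
sf.write("big_breakfast_100hz_boost.wav", boosted, sr)
```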
For my reference track, I chose "Firestarter" by The Prodigy, and straight away I noticed the high end was much harsher; with a 10kHz boost it sounded scratchy and whistly, while the low-end (100Hz) boost made things really muddy, out of control, and wobbling all over the place. The low-end cut nearly completely removed one of the wobbling bass synths. The mid boost made the track sound closer while also bringing out more detail in the vocals, synth, and lead synth.
As for the balance between the lows, mids, and highs: I think the really harsh high end brings out more of the lows, and the lower mids are mixed to hold as much detail as possible while staying just under the point of losing control, so adding a 10kHz boost pushes it over and things get out of control. I think it is the same in the mid-range, where they have packed in as much information as possible, so that if it is boosted any further it becomes out of control.
END WEEK 4 - REFINING AURAL PERCEPTION
Week 3 - Sonic relationships – Vocals, Rhythm and Bass
OneDrive Link: https://melbournepolytechnic-my.sharepoint.com/:u:/g/personal/s1541884_student_mp_edu_au/ETTZc0aDPlNFvUX5ONBhX1gBdtJq5UtKB0OXl3pnvQPRnA?e=6nnbxp
I chose the In the Box approach and found this journal prompt rather difficult, due to the drum samples I chose and also the synth bass sound. The track I ended up making was strange, with 2 different drone tracks randomly bending out of pitch; combined with the melodic lead synth and major-sounding vocal, this track definitely turned out nothing like I had planned, but that happens sometimes.
The drum track was quite full as well, with more than just kick and snare, and I think that also made the track not very dancey, since there wasn't much space around the instruments and no syncopation. Maybe that came from some part of me that didn't want to hear syncopation.
I didn't really create a chord sequence either; instead I started dropping out different tracks and experimenting with simplicity. I noticed how your attention was drawn to the basic elements, and you were still being entertained when there were only 2 tracks playing, the drum track and the drone. I have always had lots of elements/layering going on to impress the listener, but this is like a back-to-basics thing. I think that is also where the dancing thing comes from: when you're dancing to the music you are more aware of the rhythmic elements and the voice, and you unintentionally block out the instruments playing the chords to lock into the rhythm.
The reference track was the Radiohead track "Full Stop" from their A Moon Shaped Pool album, where there are very few drums and a droning bass track; you could dance to it, but it would be very difficult. There are also parts that drop out to basic instrumentation and vocals, and again you stay interested. I like this.
This is something I will spend more time on when I get the chance, experimenting with which instruments and/or vocals to drop out to create more layers within my tracks.
END WEEK 3 - VOCALS, RHYTHM AND BASS
Week 2 - Sub bass – Exploring Low Frequencies
How accurate is your system in terms of the ‘sub’ part? I got a pretty even volume across the whole sub signal; the crossover frequency is set to 100Hz, and all of the sounds under that frequency are very clear and punchy. It feels like there's a lot more going on down there than on the sub in the Red Sofa Room.
‘Low cut’ filter EQ at 100Hz, what changes? Beyonce's track only loses a small amount of sub activity after the low cut is applied, and the different sections aren't affected at all. I used Bjork's "Army of Me", and with the low cut the track loses a lot of energy in the verse, but not the chorus. With the low cut there is still enough bass information to represent the track's bass activity, but you can't beat having that extra low end punching through.
‘Hi cut’ filter EQ at 100Hz, what changes? Bjork's "Army of Me": amazing, I can still hear some of the vocals with the high cut, and I can also hear the kick, the synth bass line, and some of the instrumentation. It's like a murky underwater version of the song. The kick seems to be the most dominant sound in the chorus but then falls away behind the bass in the verse.
Beyonce's track: I can hear some vocals, the sub-signal drop, and some other instrumentation. It is strange hearing how the bass elements and the rest of the instrumentation have been built around the drop to use as much sub-frequency as possible.
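For reference, here is a rough sketch of the 100Hz low-cut and hi-cut listening test using plain Butterworth filters. The 4th-order slope and the file name are my own assumptions; the class exercise was done with an EQ plugin, not code.

```python
# Rough 100 Hz low-cut / hi-cut split with Butterworth filters.
# Filter order and file names are illustrative assumptions.
import soundfile as sf
from scipy.signal import butter, sosfilt

audio, sr = sf.read("army_of_me.wav")      # hypothetical file name

low_cut = butter(4, 100, btype="highpass", fs=sr, output="sos")
hi_cut = butter(4, 100, btype="lowpass", fs=sr, output="sos")

sf.write("army_of_me_lowcut.wav", sosfilt(low_cut, audio, axis=0), sr)  # sub removed
sf.write("army_of_me_hicut.wav", sosfilt(hi_cut, audio, axis=0), sr)    # sub only
```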
END WEEK 2 SUB BASS - EXPLORING LOW FREQUENCIES
Week 1 - Production process – Customizing Process to Suit Your Environment
Out of all the production knowledge we recapped today, what felt like your weak link personally?
When the lecturer talked about the concept of resonance and its relation to the size of the waveform, and how that waveform fits into the room, I guessed that this concept has something to do with mastering, and with the way speakers and baffling are arranged within the room to create the clearest, most accurate reproduction of the sound possible. It also suggests that there is a very detailed and exact science behind achieving the best possible outcome for a track or piece of music, and that this knowledge can be passed down from teacher to student. The knowledge of audio waveforms, their sizes, how they bounce, how they resonate through different materials, how they affect each other, how they affect microphone capsules, how they emanate from different instruments, how to best capture them, and then how to best reproduce them covers very exciting scientific ground that I find very interesting and would like to learn more about. For instance, the frequency at which bees' wings vibrate is said to be what causes the wax to form into its hexagonal cells, and to be responsible for the purification of the honey.
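As a quick back-of-envelope check on the "waveform fits the room" idea: wavelength is the speed of sound divided by frequency, and the lowest axial room modes of a dimension L fall at n * c / (2 * L). The 4 m room dimension below is just an example I picked, not a measurement of any real room.

```python
# Back-of-envelope room acoustics: wavelength and axial mode frequencies.
SPEED_OF_SOUND = 343.0   # m/s at roughly 20 degrees C

def wavelength(freq_hz):
    """Wavelength in metres for a given frequency."""
    return SPEED_OF_SOUND / freq_hz

def axial_modes(room_dim_m, count=3):
    """First few axial room mode frequencies for one room dimension."""
    return [n * SPEED_OF_SOUND / (2 * room_dim_m) for n in range(1, count + 1)]

print(f"100 Hz wavelength: {wavelength(100):.2f} m")          # ~3.43 m
print("First axial modes of a 4 m wall-to-wall dimension:",
      [round(f, 1) for f in axial_modes(4.0)])                # ~42.9, 85.8, 128.6 Hz
```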
Talk about an action you’ve taken to address that? I have been spending some time watching YouTube tutorials on how to master tracks: getting LUFS levels on a track so that when you release it and it is played on different mediums, it is not too loud or too soft, and so it can be played anywhere in the world.
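Here is a sketch of that loudness-normalisation step using pyloudnorm. The -14 LUFS target is a figure I have seen quoted for streaming platforms rather than anything official, and the file names are placeholders.

```python
# Sketch: measure integrated loudness and normalise to a target LUFS.
# The -14 LUFS target and file names are assumptions, not a spec.
import soundfile as sf
import pyloudnorm as pyln

data, sr = sf.read("my_track_master.wav")      # hypothetical file name
meter = pyln.Meter(sr)
current = meter.integrated_loudness(data)

target = -14.0                                  # assumed streaming target
normalised = pyln.normalize.loudness(data, current, target)
sf.write("my_track_minus14LUFS.wav", normalised, sr)
```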
There were also concepts around compression that were foreign to me, such as how changing the release settings gives you control over the gain reduction, and what amount of gain reduction works best.
Even though these concepts may seem small relative to the amount of information there is in audio production, for me they are big steps that, once I understand them better, I can hopefully pass on to someone who is ready to learn them as well.
END WEEK 1 - PRODUCTION PROCESS – CUSTOMIZING PROCESS TO SUIT YOUR ENVIRONMENT