Well, it’s been a while. Just over a year ago I took a full-time job and it has pretty much consumed all available free time. So musically, things have definitely been on hiatus. However, I have been keeping an eye out for sales and deals and there has been some re-tooling going on here in the Studio. I didn’t acquire or upgrade these all in one go! It’s been 13 months.
The Lede: Apple devices are applying Automatic Gain Control (AGC) to my digital audio files, and I don’t know why.
Update: Not that simple. Turns out it is VLC on iOS that is the culprit. See additional notes at the bottom of this post.
Back to the original post: I don’t normally listen to digital music streamed through my iPhone, because a) I won’t go near iTunes, and b) I have a couple of perfectly good USB MP3 players, which I’ve used on cycling outings.
Recently we’ve been ripping our audio CDs to FLAC format and loading them onto a Synology NAS, which provides a DLNA/UPnP (Universal Plug and Play) audio server. It works fine, although the available client applications for listening to the files are just okay. VLC seems pretty good, and it is supported on all our computers (Windows, Linux, Mac) and mobile devices (Android, Apple).
This weekend I did some work in the garden, and because I was within our WiFi zone, I decided to try streaming music from the DLNA server using VLC on my new iPhone, an SE 2020.
It sounded like shit.
Now, Apple did a lot of funky stuff in recent iOS versions (14, 15) involving monitoring playback volumes and reducing headphone levels if you exceed recommendations. To be clear, I’m not talking about that. That’s a whole other rant. No, I’m talking about Automatic Gain Control, which used to be a thing on tape recorders to limit audio levels hitting the tape, or on playback (sometimes called “dynamic loudness control”).
It was really noticeable on my latest completed project, the Annulus Suite.
Annulus was stitched together in a Cakewalk project and mastered at -14 LUFS in order to retain the dynamics. I thought the end result was pretty good. I exported it to FLAC, a lossless compressed format, at CD-quality 16-bit depth, and that’s the version I uploaded to BandCamp to make available for purchase – the official release, if you like.
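As a side note, it’s easy to illustrate why a loudness target like -14 LUFS leaves room for dynamics. A true LUFS measurement requires the K-weighting and gating defined in ITU-R BS.1770, so this little sketch uses plain RMS in dBFS as a crude stand-in; the gap between peak and RMS levels (the crest factor) is the headroom that a compressor or AGC eats away.

```python
import numpy as np

def peak_dbfs(x):
    """Peak level in dB relative to full scale."""
    return 20 * np.log10(np.max(np.abs(x)))

def rms_dbfs(x):
    """RMS level in dBFS -- a rough stand-in for loudness (not true LUFS)."""
    return 20 * np.log10(np.sqrt(np.mean(x ** 2)))

# A 440 Hz test tone at half of full scale, one second at 44.1 kHz
sr = 44100
t = np.arange(sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 440 * t)

print(round(peak_dbfs(tone), 1))  # -6.0 (peak)
print(round(rms_dbfs(tone), 1))   # -9.0 (RMS); the 3 dB gap is the crest factor
```

Real music has a much larger crest factor than a steady sine, which is why heavy-handed gain control is so audible.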
So it was very distressing to hear the peaks squashed, any significant bass pulling the levels down, the dynamics chewed away and spat out. It sounded as if the music were being pushed through an audio compressor.
I couldn’t find any obvious documentation about this from Apple (apart from the aforementioned digression about the nanny-state excessive headphone volume control). Where was this effect coming from? Was it the Apple Earbuds? VLC? The iPhone? I did some comparisons, and here are the results:
The FLAC file is located on the Synology NAS, available to stream via DLNA. I used VLC to play the audio file, and recorded the output from various mobile devices via the 3.5mm audio jack, into my ECHO Layla 3G PCI audio interface.
Track 1 is the FLAC file, directly imported into the project.
Track 2 is from my Windows 10 laptop running VLC;
Track 3 is from my iPhone 5c (iOS 9.3.5) running the VLC app;
Track 4 is from my iPad Air 2 (iOS 13.6.1) running the VLC app;
Track 5 is from my iPhone SE 2020 (iOS 15.0.2) running VLC.
Because each device had slightly different levels on the volume control, I normalized each clip for comparison purposes. The Windows playback sounds great and seems unchanged. However, I hope it is obvious to you that the Apple devices are severely messing with the dynamic range on playback. I don’t think there is any practical difference between tracks 4 and 5, but it is interesting that the iPhone 5c running the older OS is still clearly affecting the output dynamics, albeit not as badly as the later devices.
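For anyone who wants a number rather than a waveform picture: one crude way to quantify this kind of squashing is to measure the spread of the short-term level envelope, since an AGC pulls loud passages down toward the quiet ones and narrows that spread. The sketch below uses synthetic noise bursts; the window length and percentiles are arbitrary choices of mine, not any standard metric.

```python
import numpy as np

def envelope_spread_db(x, sr, win_s=0.4):
    """Spread of the short-term RMS envelope, in dB.

    Compression/AGC narrows this spread: loud windows get pulled
    down toward the quiet ones.
    """
    n = int(sr * win_s)
    frames = x[: len(x) // n * n].reshape(-1, n)
    rms = np.sqrt(np.mean(frames ** 2, axis=1))
    rms_db = 20 * np.log10(np.maximum(rms, 1e-9))
    return np.percentile(rms_db, 95) - np.percentile(rms_db, 10)

rng = np.random.default_rng(0)
sr = 8000
quiet = 0.05 * rng.standard_normal(sr * 2)

# Dynamic version: a loud half follows the quiet half
dynamic = np.concatenate([quiet, 0.5 * rng.standard_normal(sr * 2)])
# "AGC" version: the loud half has been pulled most of the way down
squashed = np.concatenate([quiet, 0.1 * rng.standard_normal(sr * 2)])

print(envelope_spread_db(dynamic, sr) > envelope_spread_db(squashed, sr))  # True
```

Normalizing the clips first (as I did for the screenshots) doesn’t change this measurement, since it’s a difference between two levels in dB.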
I’m going to publish this post now, but I’m not done researching. More as it comes to hand.
Update: A couple of further tests today. Stick to one device, but compare apps. I used VLC, Synology’s DS Audio, and FlacBox:
Each track was recorded from the iPhone SE using the same volume level on the iPhone control (one notch below 100%). Again, I’ve normalized the clips for display comparison.
DS Audio and FlacBox do not apply any AGC to the output. It sounds dynamic and great. So… is it just VLC? VLC has always been my “gold standard” on conventional desktops (Linux, Windows) so it is distressing to find that the mobile version appears to be misbehaving.
There ARE EQ controls in the iOS app! You have to be in landscape mode… And guess what? The “Preamp” is above 0 dB by default, and that seems to be causing the ducking effect! I lowered the preamp to 0 dB and my issue is resolved. I’d like to say this is a bug; by default VLC should not be doing any DSP.
I carefully set the EQ to “Flat” and adjusted the preamp to 0 dB. The result:
Okay, it worked. There’s much less compression or ducking going on. I won’t say “none”, because I can’t tell without an extra-careful comparison which I can’t be bothered doing. These controls are hard to find, tricky to adjust, and not persistent between songs! This is not a solution – barely even a work-around.
Also, this thread on the VideoLAN forum shows that there are other folks who have noticed the problem and complained about it, and the “known issue” doesn’t seem to have been addressed competently. Yeah, I said it. Even the “fixed in 4.0” comment doesn’t fill me with confidence that the issue has been identified correctly. This is sad.
Until further notice, I do not recommend using VLC on iOS for audio playback.
Vol.2 is next, and most likely will consist of three individual songs rather than a related suite. I’m currently working on the first of these, ensuring that the melody is pitched right for my voice, and that I’m not plagiarizing myself (or indeed, any other artist). At least, not too obviously. This track I’m working on definitely wants to use a chord sequence from “Paradigm Shift” and I’m not sure that I can let it do that so blatantly.
It’s past time for an update on Studio activity for album#3, I think. I haven’t settled on a final name, yet, but the working title of the album is “Circles”.
As described in the last diary update, I had lyrics for five or so songs, and some melodic ideas. Lyric notebook in hand, I took a couple of long walks up the local hill to find some solitude away from the studio equipment and distractions, and tried singing the verses and choruses at various tempos and pitches, until I found something comfortable that “worked”. Then I recorded them into my iPhone voice memo recorder, for reference. In the process I re-wrote a lot of the lyrics to fit the “right” meter.
Back in the studio, I listened to the results and translated them into a simple piano arrangement with two or three tracks for chords, bass, and some melody. I find a Rhodes electric piano patch is best for this process.
(Okay, I realize as I review what I just wrote, that this is not particularly insightful. It’s not Rocket Science, and represents a typical songwriting process. I’m not special, I get that.)
For the last two months I’ve been working solely on one specific track, taking it from the linear “Idea Bucket” project and building it out to a full sixteen-minute epic, with four or five movements. The piece is called “Annulus” and it is based on a 2017 trip we took with a friend to Oregon, to view the total solar eclipse.
At the risk of getting way too conceptual, the five movements map to the five phases of a total eclipse: First and Second contact; Totality; followed by Third and Fourth contact, as the Moon’s disc passes and overlaps the Sun.
The corresponding musical sections are: Departure I; Arrival; Aperture; Circles; and Departure II.
These sections were all sketched out and re-arranged in the form of pure MIDI tracks of bass, piano, and percussion. Using MIDI instead of trying to record audio allowed me to play around with tempo and arrangements, and even some transposition, as I prepared to record the vocals.
Happy with the arrangement, tempo, and pitch, I recorded all the vocals, lead and harmony. Done!
(This is so different to how I used to create. I would spend hours and hours on the music before ever getting near the microphone to record vocals, only to discover that much of what I’d done had to be removed, or replaced, or rearranged to make space for the vocal lines. I’m going to try and avoid that way of working, in the future.)
To be honest, prior to recording the vocals, I had spent a lot of time with various piano patches and a LOT of reverb, working on some of the instrumental sections.
With the vocals done, I practiced and recorded fretless bass and Chapman Stick. Then, having built up both a familiarity with the music and also some calluses on my fingers, I wiped what I’d recorded and re-recorded the bass parts, better.
Next, drums. I find that bass followed by drums works best: I’m much more likely to develop an interesting rhythm part as the bass is refined, and if I’ve already invested time in the drum tracks, I then have to go back and re-do them. So, bass comes first.
And now we’re caught up, because I “finished” the drum tracks this weekend. I solo’d the Bass and Drum buses and it all sounds pretty locked in.
I have a tendency to overplay (really?!?!) and so I know there will still be some refinement required, mostly removing unnecessary rhythm parts. But it’s a good place to stop for now.
Just for fun I took the Cakewalk project and shrunk it down to show the full extent of the project in full-screen:
There are no guitars, and not much in the way of synths or other fairy-dust adornments. Just Vocals, Bass, Drums, and half-finished Piano tracks.
I’m thinking seriously about breaking up that 16 minute monolith into separate projects, for safety and simplicity and speed. I’m not sure where the breaks go – there are short instrumental bridges between each of the major slabs of composition and so I’d have to allocate them appropriately. Some thought required.
I think I’ll move on to recording guitars. Some of the music currently realized as piano notes will have to come out to make room. There are some parts I hear as “Hammond organ” in my head, so that’s coming up at some point. And definitely some more synth-y fairy dust.
After completing the WestPac Bolero, I decided to produce a cover version of Simple Minds’ “New Gold Dream”, originally released in 1982. I’ve always wanted to do this, and the recent state of the world inspired me to update it for the ’20s. I temporarily uploaded my version to SoundCloud earlier this month – since removed – but now I’ve released it on BandCamp:
I could write a whole blog post about the trick needed to get the OBXa strings slap-back to sound right. (Shorter version: the 4th triplet delay on the synth is not synchronized to the BPM of the song – it’s slightly slower, 114 bpm instead of 123.)
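To put numbers on that (I’m reading “4th triplet” as a quarter-note triplet; if the OBXa patch actually uses a different subdivision the ratio changes, but the arithmetic is the same):

```python
def quarter_triplet_delay_ms(bpm):
    # A quarter note lasts 60000/bpm milliseconds; a triplet fits
    # three even hits into the space of two quarter notes.
    return (60000.0 / bpm) * 2 / 3

# Delay synced to the song's tempo vs. the slightly slower clock:
print(round(quarter_triplet_delay_ms(123)))  # 325 ms
print(round(quarter_triplet_delay_ms(114)))  # 351 ms
```

Roughly 26 ms longer per repeat – not much on the first echo, but each successive repeat falls a little further behind the beat, which a strictly tempo-synced delay can’t reproduce.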
I wasn’t originally planning to include it in Album #3, but it might end up as a bonus track or something, because I am quite proud of it. One thing I did was to make sure that I pitched the backing tracks to my voice before attempting to sing it, and somewhat to my amazement, I was able to lay down all the vocal tracks in two short sessions. Clearly there’s a “pro tip” in here somewhere.
But that’s enough about old music…
I’ve mentioned before about how this next album is going to be a blank slate, with nothing pulled out of the archives and re-worked, and as of now, this is still more-or-less true.
For the songs on Steel Tree and the inevitable sequel, the music has almost always come first. A lot of work then went into refactoring the existing music to accommodate the lyrics, which were applied retroactively. Hopefully that effort paid off, and the results weren’t too awkward.
In the case of Album #3, however, almost the opposite is true: I have lyrics for five songs, and although I hear proto-melodies and harmonic changes in my head, I didn’t have any music committed to a project that directly corresponded to these five compositions. So where does the music come from?
Shortly after I acquired the Novation PEAK synthesizer, I started creating custom patch after patch. Pretty soon I realized I needed somewhere to store musical ideas that developed as I played around with each sound. So I created a Cakewalk project called PEAK_Patch_Demos, with a separate MIDI track for each patch slot, with musical phrases and text notes. Some of these ideas show promise.
In the past, when I’ve experimented with a new piece of software (VST instrument or effect), I’ve created a project to host the plug-in and saved any musical ideas that developed as I experimented. Some of these are pretty interesting.
For some time, I’ve also had a single project containing piano improvisations, where each track is its own little melodic idea, collated and built up over time. I expanded on this and started trawling through the other projects to bring all the ideas into one big “idea bucket” project.
Each idea has its own track, and a choice of instruments to play back on – either strings or a type of piano. Only one track is un-muted at any one time – the tracks are not related to each other.
The Idea_Bucket project made a great starting point for identifying similar ideas that might work together in a single composition.
The next step is to create a second type of “idea bucket”, this one for linear composition: I have instrument tracks set up with drums, bass, piano, Rhodes, and strings. My drum instrument consists of just a cajón and hi-hat, in order to limit distractions: I can create a simple beat, but not get carried away with elaborate percussion fills before basic arrangement decisions like key and time signature are made. This stage is all about finding the right vocal melody, pitch, meter, and tempo, using only just enough instrumentation to establish the feel.
At this point it’s a bit of a dance: The lyrics have to come together with rhythm and melody – there’s some give-and-take there. I have to practice singing the melody against the music – and now the tempo and pitch might have to be adjusted to suit my voice. Choices such as, do I sing that an octave higher? In which case I have to transpose that section down a fifth for comfort. Now I have a problem getting section B to follow section A… etc.
And at the back of my mind, I have this fear that I’m actually ripping off some other artist subconsciously, and if so, will it be too blatant? I’m pretty sure this is all normal creative angst. When it gets too intense, it’s time to go out on the bike with the mp3 player on shuffle and listen to some different artists.
Alas, he doesn’t really like the cover, but generally it is a good review and I appreciate the thoroughness of Theo’s research and his informed evaluation of our offering.
A couple of comments on the review:
I don’t get the similarity between Paradigm Shift and Painting Abstracts, but perhaps I’m too close to the music. Theo’s entitled to his opinion. They both have 7/8 riffs and similar tempos, but apart from that… I’m proud of both pieces.
(Update: I might now understand where Theo’s coming from. The opening statement or phrase in the chorus in both songs is a rising 5th interval, and I acknowledge the similarity, but I am not embarrassed by it. If self-plagiarism were a crime, Neal Morse would be put away for 10 years.)
He’s also not a fan of the Spoken Word verses in The God Program. I respect that. I did try other things early on but kept coming back to it. It’s pretty close to how I heard it in my head, originally, and I just have to plead my lack of ability to realize it in a way that resonates for everyone. I’ve used the growly, pitch-dropped vocal technique before, in the previous album. There’s some continuity in it, but hopefully it is not a “signature”! I don’t intend to use it again.
My brother and I aren’t actively writing together because we live in different parts of the world and have done so for many years. Long-distance collaboration is not something we’ve been able to do. There wasn’t a “falling out” or anything. There’s still music that we wrote together that may see the “light of day”, but I also have more ideas of my own. Hopefully album #3 will happen.
Melodic progressive rock songs and instrumental interludes, a touch of 70’s influence but a product of the dystopian Now.
“Very smooth, hi-tech sounding delivery…” – Chris Jemmett, alt.music.yes
“This guy is awesome.” – Dazed, on the Carvin Forum.
“..on a rare occasion you just have to conclude that the prog world should be feasting upon the birth of a new and promising act. That’s exactly the case with this [first] album.”
– Theo Verstrael, DPRP.net
“I find this new album attractive, [..] slightly less appealing than the 2014 debut. But as that is often the case with great artists, let it not distract you from trying this fine album. Especially those that are interested in bands that play varied, cleverly made, well played and sung [..], this might just be your cup of tea.”
– Theo Verstrael, DPRP.net