Dynamic Grading’s histogram views are an unfamiliar sight in the audio world. In this post, we show how to interpret these graphs to gain valuable insights into your audio’s dynamics.
If you’ve edited photos before, there’s a good chance you’ve seen some variant of a histogram. These graphs are a common way to identify how the low, medium and high brightness parts of an image relate to each other. Or in short: to analyze the dynamics in an image. Often you will find these histograms combined with some means to manipulate the brightnesses in the image, which has a direct effect on the histogram.
It’s this way of visualizing and manipulating the information in an image that inspired Dynamic Grading. Only that instead of the brightness of pixels in an image, it works with the loudness of different sound events in an audio recording.
How Dynamic Histograms Work
In Dynamic Grading, audio dynamics are displayed as so-called dynamic histograms. These graphs represent a statistic of perceived momentary loudness over a given length of time. The easiest way to think about it is via the commonly used level meters you know from DAWs and mixing desks:
Now imagine a really fast-moving level meter like the one on the left, and a superhuman statistician sitting in front of it, looking up the level on the meter really often and keeping a tally sheet of how often she or he reads each possible value over – say – the last 20 seconds. The result is plotted as a bar graph like the one on the right. In the example above, you can see that values around -40 dB have occurred much more often than, for example, values around -50 dB or -16 dB.
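The tally-sheet procedure can be sketched in a few lines of Python. This is a minimal illustration, not Dynamic Grading's actual algorithm: we use simple windowed RMS as a stand-in for perceived momentary loudness, and the window size, bin width and floor level are arbitrary choices for the sketch.

```python
import numpy as np

def dynamic_histogram(samples, sample_rate, window_ms=50.0,
                      bin_width_db=1.0, floor_db=-60.0):
    """Histogram of windowed RMS levels in dB, from floor_db up to 0 dBFS."""
    hop = max(1, int(sample_rate * window_ms / 1000.0))
    levels = []
    for start in range(0, len(samples) - hop + 1, hop):
        window = samples[start:start + hop]
        rms = np.sqrt(np.mean(window ** 2))
        levels.append(20 * np.log10(max(rms, 1e-9)))  # guard against log(0)
    bins = np.arange(floor_db, 0.0 + bin_width_db, bin_width_db)
    counts, edges = np.histogram(levels, bins=bins)
    return counts, edges

# A tone at constant level puts every reading into the same bin,
# i.e. a single sharp peak in the histogram.
sr = 44100
t = np.arange(sr) / sr
tone = 0.1 * np.sin(2 * np.pi * 440 * t)
counts, edges = dynamic_histogram(tone, sr)
peak_bin = edges[np.argmax(counts)]  # around -23 dBFS for this tone
```

Real recordings would of course spread their readings over many bins, producing the broader shapes discussed below.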
This kind of graph tells us a lot about the dynamics of audio recordings. It reveals the dynamic range (the highest and lowest readings), as well as which loudness regions are most prominent and how much the perceived loudness varies (i.e. how dynamic it is).
A sharp peak in the graph hints at a static tone, note or noise with a constant level, while a broad and flat shape means there is a lot of dynamic variation.
A Simple Example
So let’s look at a simple example from the real world. Below you see the dynamic histogram of a synth bass arpeggio pattern. It looks roughly like a smooth bell curve. Thanks to the regular, repetitive pattern in the audio, this is a poster child that helps us establish some rules for identifying the important parts of a histogram.
The bulky maximum of the bell curve tells us where in the dynamic range most stuff is happening. We call that range the “body” of the signal, because this is usually where the most important musical information is located. For most instruments – as in this example – these are the sustain phases of notes. The body range is usually what carries what we perceive as pitch and timbre of an instrument. It also plays a significant role in the perception of overall loudness and the dynamics of the “player”.
Many instruments such as this bass synth feature a significant onset or attack, which is louder compared to the body range, but only for a short time. That’s why in dynamic histograms, you’ll often see a decay towards higher levels. This range is what we’ll refer to as the “punch”. When an instrument features very pronounced and percussive attacks, the punch range will stretch farther out from the body. When attacks are less pronounced, the decay towards higher levels will be shorter.
The third range worth noting lies below the body range. This is where lower-level sound is located, which is why we call it the “floor”. In this floor range, you find the decay of stopped notes, reverberation and echo, as well as the noise floor if there is a significant one. The floor can best be described as the “space between the notes”.
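As a rough sketch of how such ranges could be located automatically, here is one simple heuristic: treat the contiguous region around the histogram's peak whose counts stay above a fraction of that peak as the body, with the floor below it and the punch above it. The threshold value and the heuristic itself are assumptions for illustration, not how Dynamic Grading actually places its markers.

```python
import numpy as np

def split_ranges(counts, edges, threshold=0.2):
    """Rough floor/body/punch split of a level histogram (counts per dB bin).
    The body is the contiguous region around the mode whose counts stay
    above `threshold` times the peak count; floor is below, punch above."""
    peak = int(np.argmax(counts))
    cutoff = threshold * counts[peak]
    lo = peak
    while lo > 0 and counts[lo - 1] >= cutoff:
        lo -= 1
    hi = peak
    while hi < len(counts) - 1 and counts[hi + 1] >= cutoff:
        hi += 1
    return {
        "floor": (edges[0], edges[lo]),
        "body":  (edges[lo], edges[hi + 1]),
        "punch": (edges[hi + 1], edges[-1]),
    }

# Synthetic bell-shaped histogram centered around -40 dB,
# like the synth bass example above.
edges = np.arange(-60.0, 1.0, 1.0)          # 1 dB bins from -60 to 0 dBFS
centers = (edges[:-1] + edges[1:]) / 2
counts = np.exp(-0.5 * ((centers + 40) / 5) ** 2)
ranges = split_ranges(counts, edges)
```

On jagged real-world histograms like the drum bus below, a heuristic this simple would need smoothing first, which is one reason eyeballing the ranges works so well in practice.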
A More Complex Example
Let’s look at a more complex example. Below you see the dynamic histogram of a drum bus. It looks much more jagged, but nevertheless we can roughly identify body, punch and floor ranges here as well.
In this case, several drum instruments (kick drum, snare, hihat) are involved, which together cover a broader dynamic range. Thus, the body range is not as bulky. The floor range is also thicker, because room and instrument decays are more pronounced. Finally, the punch range has a peak of its own, because the main kick and snare hits are louder than the hihats and ghost notes, which sit mostly in the body range.
These are only two examples of a wide variety of audio tracks. When you start using Dynamic Grading on different instrument and vocal recordings, you will encounter patterns very similar to those we’ve seen here. Most real world audio tracks exhibit some flavor of punch, body and floor ranges.
When using Dynamic Grading, the first step is always to identify the punch, body and floor ranges roughly and adjust the source markers accordingly. After doing that, you can start shaping the dynamics to your liking. Read on to learn how!
Tailored Processing For Punch, Body and Floor
By adjusting the target markers for punch, body and floor in Dynamic Grading, you can shape these ranges. For example, you can squeeze a range to compress, or stretch it to expand. As each range contains distinct audio features, you can achieve different outcomes by shaping either.
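The squeeze/stretch idea can be illustrated with a piecewise-linear level remap: momentary levels are mapped from source marker positions to target marker positions, and the difference becomes a gain. This is a conceptual sketch with made-up marker positions; Dynamic Grading's actual marker semantics, gain smoothing and detector are not shown.

```python
import numpy as np

def remap_level(level_db, src_markers, dst_markers):
    """Piecewise-linear remap of a momentary level (dB) from source markers
    to target markers. Squeezing a segment compresses that range; stretching
    it expands it. Returns the gain (in dB) to apply at this level."""
    target = np.interp(level_db, src_markers, dst_markers)
    return target - level_db

# Hypothetical markers: floor up to -45 dB, body -45..-30 dB, punch above.
src = [-60.0, -45.0, -30.0, -10.0]
# Pull the punch ceiling down from -10 to -16 dB -> punch compression.
dst = [-60.0, -45.0, -30.0, -16.0]

punch_gain = remap_level(-10.0, src, dst)   # loud attack: about -6 dB of gain
body_gain = remap_level(-40.0, src, dst)    # body untouched: about 0 dB
```

Stretching a segment instead (e.g. moving the punch target up past -10 dB) would expand that range, which is the mechanism behind all of the shaping moves described next.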
As outlined above, the punch range contains note onsets, transients, and attacks. When attacks stick out too much and you want to tame them, you can compress the punch. This reduces their impact and frees up some headroom. Punch compression also comes in handy when you want to push a signal further into the background.
Conversely, you can expand the punch range when you want to enhance attacks and onsets. This moves a signal more to the front and “in the face” of the listener.
We learned that the body carries the most important musical information. It is the “meat” of an audio track. When you compress the body, you increase its presence: sustains become stronger, and softer notes and ghost notes come up in level, giving them a better chance to cut through the other instruments in a busy mix. Compressing the body range is essential if you want to create a dense, loud mix. Another great use case for body compression is making a vocal recording more intimate by bringing up breathing and other low-level sounds.
By expanding the body range, you can create more space for other instruments. This may also lead to an enhanced feel of “groove”, as instruments become more dynamic.
The “space between the notes” is another range that carries important audio features we want to control when mixing a song. Some of them are wanted; others we’d rather get rid of. A lot of creative expression lies in deciding what to keep and what to eliminate.
The floor range is home to low-level sound such as reverberation, echoes or noise. If you compress here, you can exaggerate roominess and create a more immersive feeling. It’s also great for creating weird artistic effects.
Often it is a very good idea to try expanding the floor range, which cleans up the track by reducing reverberation, noise or other unwanted elements. This is an especially valuable technique if you’re dealing with recordings made in less than ideal rooms.
A Handy Cheat Sheet
To conclude, here’s a handy cheat sheet: an overview of the different ranges, what you typically find in them, and what you can achieve by shaping them.
Dynamic Grading makes it easy to quickly identify the different ranges in a given audio track and modify them all at once to create the effects outlined above. Turning a bland, thin and roomy instrument recording into a punchy, fat and dry track that really ties your mix together is a matter of seconds.