IMPMOOC Lesson 1: Propagation, Amplitude, Frequency and Timbre

Following is my word-heavy explanation and teaching assignment for segment one of Intro to Music Production, taught by Loudon Stearns on http://www.coursera.org.

Propagation deals with the way sound moves through a medium, from source to receiver. Sound is a series of pressure waves traveling through a compressible medium; when those waves reach a receiver like our ears, the brain interprets them as sound. The most common medium we experience sound through is air. When dealing with sound, it helps to think of it as represented by two types of waveform: longitudinal and transverse. A longitudinal wave is the best way to demonstrate HOW a wave actually works in air, by showing compression along a single axis.

Credit: computermusicresource.com

Longitudinal wave showing compression in air

However, the most intuitive way to actually view and USE the information is to transcribe the longitudinal wave into a transverse wave, which lets us view amplitude and frequency over time.

Credit: computermusicresource.com

A transverse wave

Some effects in the DAW that affect propagation are delays and reverbs. This leads into the next portion of this lesson: Amplitude.

Amplitude can be thought of as the strength or power of a signal, or the degree to which it compresses and rarefies the air. Amplitude is represented by the height of the waveform.

Credit: computermusicresource.com

Amplitude is the height of the peaks of a waveform.

We interpret this information as loudness or quietness. Some confusion can arise when MEASURING amplitude, because we can measure it both in the computer AND in the air. The common unit of measure for amplitude is the decibel (dB). When measuring in air we speak of decibels of Sound Pressure Level (dBSPL), and in the computer we speak of decibels relative to Full Scale (dBFS). These numbers are quite different from each other because the decibel is a RELATIVE measure. In air, we set 0 dBSPL at the threshold of hearing and increase from that point up to the loudest sound we can tolerate, the Threshold of Pain. In the computer, zero is set at the LOUDEST level the system can represent, and everything is measured down from that point, so decibel values in the computer are usually negative numbers. Because of this difference, it is necessary to know whether you are dealing with decibels of sound pressure level (in air) or decibels relative to full scale (in the computer).

When working in your particular DAW, there will be a number of dynamics plug-ins (you may or may not already be familiar with these) that control the amplitude of a signal over time. These include expanders, gates, compressors and limiters. Amplitude also comes into play when looking at the dynamic range of a piece of gear like a microphone. A microphone's dynamic range is the range of levels, in decibels, over which it will reproduce sound accurately. A piece of gear brings added limitations, including its noise floor and distortion: the noise floor is the level below which the quietest sounds are overtaken by ambient noise, and distortion happens when a very loud sound cannot be accurately reproduced by the mic.

Dynamic range also describes the range of loudness across an entire composition. If you are math-minded or interested in theory, you can do further research on how logarithms are used to represent intervals and semitones, although that has more to do with frequency ratios. Logarithms are also instrumental in describing equal temperament for tuning and playing chords, although I must admit my knowledge in this area is lacking to the point of non-existence and I am only aware of these as nebulous concepts.
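To make the "zero at the top, negative numbers below" idea concrete, here is a minimal Python sketch (not from the course) that converts a linear sample value, where full scale is 1.0, into dBFS using the standard 20·log10 formula:

```python
import math

def dbfs(sample: float) -> float:
    """Convert a linear sample value (full scale = 1.0) to decibels relative to full scale."""
    return 20 * math.log10(abs(sample))

print(round(dbfs(1.0), 2))  # full scale -> 0.0 dBFS
print(round(dbfs(0.5), 2))  # half amplitude -> -6.02 dBFS
print(round(dbfs(0.1), 2))  # one tenth -> -20.0 dBFS
```

Notice that the loudest representable signal sits at 0 dBFS and every quieter signal comes out negative, exactly as described above; halving the amplitude costs about 6 dB.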

Moving on to frequency is when things get really fun. Frequency is related to pitch, but it describes how a machine would measure a wave, and it is measured in Hertz: 1 Hertz is one vibration (one full cycle) per second. PITCH is related to frequency, but is what the human ear actually perceives.
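A quick Python sketch (my own illustration, with a 44,100 Hz sample rate assumed as in CD audio) shows how a pure tone at a given frequency in Hertz can be generated sample by sample:

```python
import math

SAMPLE_RATE = 44100  # samples per second; CD-quality rate, assumed for this sketch

def sine_wave(freq_hz: float, duration_s: float, amplitude: float = 1.0):
    """Return samples of a pure tone: freq_hz full cycles per second."""
    n = int(SAMPLE_RATE * duration_s)
    return [amplitude * math.sin(2 * math.pi * freq_hz * t / SAMPLE_RATE)
            for t in range(n)]

# Concert A: a 440 Hz tone completes 440 cycles in each second of audio.
tone = sine_wave(440.0, 0.01)
print(len(tone))  # 441 samples for 10 ms at 44.1 kHz
```

One cycle of this 440 Hz tone spans about 100 samples (44100 / 440), which is another way of seeing that frequency is just cycles per unit of time.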

Image credit: physicsclassroom.com

Image shows differing frequencies.

In music, we rarely encounter pure frequencies; more often we encounter sounds from instruments that have energy across multiple frequencies, which leads to the distinct sounds of those specific instruments. In the course, we learned that frequency is closely tied to Timbre, the spectrum of frequencies present in a single instrument's sound.

When we talk about frequency, it is important to know the human range of hearing, generally accepted to be from 20 Hertz to 20,000 Hertz (although in practice 18,000 Hertz is a more reasonable upper limit). Now that we have introduced frequency into the equation, there is a whole host of interesting topics to discuss. Loudon Stearns introduced the terms Phantom Fundamental and Masking at the end of the talk on frequency, and I will provide my best summary of those concepts here. Both concepts are part of what we refer to as psychoacoustics, the study of how humans perceive sound. A phantom fundamental occurs when the upper members of a harmonic series give human hearing the PERCEPTION that the base frequency is present, without that frequency actually being present. In one example, a high-pass filter might be used to remove frequencies below a sound system's capability while the higher harmonics are kept, preserving the sense of the bass note. Masking is what happens when one sound is obscured or overpowered by another sound's presence. This topic is incredibly complex and I won't be able to describe it in detail, but masking heavily affects how humans perceive sound, especially within varied sound environments.
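The phantom fundamental can be demonstrated numerically. In this sketch of mine (not from the course), we sum only the 2nd through 5th harmonics of a 100 Hz tone, so no energy exists at 100 Hz itself, yet the combined waveform still repeats 100 times per second, which is why the ear tends to hear a 100 Hz pitch:

```python
import math

SAMPLE_RATE = 44100  # assumed sample rate for this illustration

def harmonic_stack(fundamental_hz, harmonics, duration_s):
    """Sum equal-weight sine waves at integer multiples of the fundamental."""
    n = int(SAMPLE_RATE * duration_s)
    return [sum(math.sin(2 * math.pi * fundamental_hz * h * t / SAMPLE_RATE)
                for h in harmonics) / len(harmonics)
            for t in range(n)]

# Harmonics 2-5 of 100 Hz (200, 300, 400, 500 Hz): no 100 Hz component,
# yet the composite wave still has a 100 Hz repetition period.
signal = harmonic_stack(100.0, [2, 3, 4, 5], 0.02)
period = SAMPLE_RATE // 100  # 441 samples per 100 Hz cycle
print(abs(signal[0] - signal[period]) < 1e-6)  # True: the wave repeats at 100 Hz
```

The repetition period is what the brain latches onto, which is how small speakers can seem to produce bass notes below their physical range.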

Timbre is the collection of energy at multiple frequencies that makes up the specific sound of each instrument. This is what differentiates musical instruments from one another: two instruments playing the same note at the same loudness will still sound distinctly different. Timbre may also be referred to as the color or tone quality of an instrument. One humorous definition is: “the psychoacoustician’s multidimensional waste-basket category for everything that cannot be labeled pitch or loudness” (McAdams and Bregman 1979, 34; cf. Dixon Ward 1965, 55 and Tobias 1970, 409). The characteristics of each instrument allow us to identify instruments separately, even in ensemble settings where masking may occur, and allow us to pair instruments that complement each other, or to fine-tune their attributes in a mix so they complement each other better.
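The idea that timbre lives in the spectrum can be sketched with crude additive synthesis. In this Python illustration of mine, the harmonic recipes and the "clarinet-like"/"brassy" labels are loose assumptions for demonstration only; the point is that two tones with the same fundamental frequency (same pitch) but different harmonic weights produce different waveforms:

```python
import math

SAMPLE_RATE = 44100  # assumed sample rate for this illustration

def tone(freq_hz, harmonic_amps, duration_s):
    """Build a tone from (harmonic number, amplitude) pairs -- crude additive synthesis."""
    n = int(SAMPLE_RATE * duration_s)
    return [sum(a * math.sin(2 * math.pi * freq_hz * h * t / SAMPLE_RATE)
                for h, a in harmonic_amps)
            for t in range(n)]

# Same 220 Hz fundamental, two hypothetical harmonic recipes:
# odd harmonics only (roughly clarinet-like) vs. all harmonics (rougher, brassier).
odd_only = tone(220.0, [(1, 1.0), (3, 0.33), (5, 0.2)], 0.01)
all_harm = tone(220.0, [(1, 1.0), (2, 0.5), (3, 0.33), (4, 0.25)], 0.01)

# Same pitch and comparable loudness, but the waveforms (and thus the timbre) differ.
print(odd_only[:3] == all_harm[:3])  # False: different spectra, different shapes
```

Real instruments add further complexity (attack transients, harmonics that change over time), but even this static spectrum difference is enough to make two tones sound unlike each other.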

 

So there it is! I hope you enjoyed reading it and look forward to learning more about music production in the coming weeks!
