Chromatic scale tone frequencies
The previous post, Spectral Analysis and Harmony, gave an elementary introduction to harmony and digital signals. We are now going to study the range of tones between A3 and A5. Our central axis is the tone A (or A4), whose frequency is 440 Hz.
The next table shows tones and frequencies of the chromatic scale in the range between A3 and A5, together with the piano key number corresponding to each tone.
| Tone | Piano key | Helmholtz name | Frequency (Hz) |
|------|-----------|----------------|----------------|
| C4 (Middle C) | 40 | c′ (1-line octave) | 261.626 |
| A4 (A440) | 49 | a′ | 440.000 |
| C5 (Tenor C) | 52 | c′′ (2-line octave) | 523.251 |
The difference, or leap, between two tones is called an interval. One interesting feature of the chromatic scale is that it is built from constant intervals: each semitone step multiplies the frequency by the same ratio, \(2^{1/12}\). For instance, tone A3 is 220 Hz, A4 is 440 Hz and A5 is 880 Hz; each tone's frequency is double that of the analogous tone in the preceding octave.
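This constant-ratio idea can be sketched in a few lines of Python. The function name below is illustrative, not part of any library; it assumes equal temperament with A4 = 440 Hz as the reference:

```python
# In equal temperament, each semitone step multiplies the frequency by
# 2**(1/12), so 12 steps (one octave) exactly double it.
# f(n) = 440 * 2**(n/12), where n is the number of semitones from A4.
def tone_frequency(semitones_from_a4):
    """Frequency in Hz of the tone n semitones above (or below) A4."""
    return 440.0 * 2 ** (semitones_from_a4 / 12)

print(tone_frequency(-12))           # A3 -> 220.0
print(tone_frequency(0))             # A4 -> 440.0
print(tone_frequency(12))            # A5 -> 880.0
print(round(tone_frequency(-9), 3))  # C4, Middle C -> 261.626
```

The printed values match the table above, which is exactly the point: tones behave as numbers we can do arithmetic with.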
The important idea is that we can treat tones as numbers and do basic arithmetic with their frequencies. Who said emotions cannot be explained by science? Do not be intimidated if you know neither music theory nor optical physics; these texts will lead you by the hand on a trip at whose end you will know how to extract the waves, tones and emotions from digital music, even without knowing either of those fields.
Frequency analysis in a nutshell
In order to analyze the frequencies that compose a piece of music, we take a part of it and extract a subset of frequencies: like an equalizer, we filter the sound between two specific frequencies or tones. For instance, we could read the first ten seconds of an mp3 file and generate a table showing how many times tone A appears within that sequence. Going further, we could analyze which tones appear and how many times each one is played within those first 10 seconds.
As seen on the Emotions Within Digital Signals article, those tones can be used to define the chords and keys a piece of music is formed by.
In order to extract the frequency occurrences of a signal, we can use a frequency spectrum graph. This graph displays how strongly each frequency appears in a signal, i.e. its power or prevalence over the rest. In this case, the signal is the first 10 seconds of music. Let’s see an example:
From the graph on the right we can see that the most prominent frequencies, those with the highest \(|F|\), are one near 200 Hz, another between 300 Hz and 400 Hz, and a third between 400 Hz and 500 Hz. The x-axis shows the frequency spectrum (or range) being analyzed, and the y-axis the power of the signal: the higher the line at a given point on the x-axis, the more power the signal has at that frequency.
To get an insight into the most used tones, the frequencies with the most power can be extracted; in this case the dominant frequencies within the signal are 220 Hz, 246.942 Hz, 329.628 Hz and 440 Hz. Matching those frequencies against the ones in the table above, we can identify some of the main tones within the first ten seconds of the song.
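Mapping a measured frequency back to the nearest tone name can also be done arithmetically, by inverting the equal-temperament formula. The helper below is a hypothetical sketch (not a library function), again taking A4 = 440 Hz as the reference:

```python
import math

# Note names in semitone order starting from A.
NOTE_NAMES = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]

def nearest_tone(freq):
    """Name of the equal-tempered tone closest to freq (in Hz)."""
    # Invert f = 440 * 2**(n/12) to get semitones from A4, then round.
    semitones = round(12 * math.log2(freq / 440.0))
    name = NOTE_NAMES[semitones % 12]
    # Octave numbers increment at C, which is 3 semitones above A.
    octave = 4 + (semitones + 9) // 12
    return f"{name}{octave}"

for f in (220.0, 246.942, 329.628, 440.0):
    print(f, "->", nearest_tone(f))   # A3, B3, E4, A4
```

Applied to the dominant frequencies above, this recovers the tones A3, B3, E4 and A4, i.e. the tones A, B and E discussed next.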
From the data above it can be determined that the dominant key within the first seconds is composed of the tones A, B and E. Those tones correspond to the chord Asus2 (A suspended 2nd). A chord is the name given to a sound composed of multiple tones, i.e. multiple frequencies. The names of the different chords are not described in this article, since there are many of them.
In terms of musical harmony, Asus2, or suspended 2nd chords in general, create a sensation of waiting for something to be resolved: the listener holds on until the song resolves into something. We could say that the first ten seconds of this song convey an emotion of expectation.
For more information on music and emotions, search Google for “emotions chords harmony”. For a good introduction to the matter I would recommend the paper Music and Emotions.
This article and the previous one, Emotions Within Digital Signals, set the basis for successfully tackling the problem of extracting emotions from music sequences. I will explain how to perform that task in Python in the post Python for Digital Signal Processing.