Emotions Within Digital Signals

Music and artistic expression are conceived to provoke emotions in people. Music, like visual art, reaches us as waves traveling across the distance from a transmitter to a receiver; in music's case, sound waves moving through the air. Music is perhaps the most influential form of art, capable of producing deep emotional effects, evoking feelings and awakening memories in anyone exposed to it.

The sound we perceive is nothing more than the effect of the eardrum vibrating when it is hit by sound waves traveling through the air. Like a very fast pendulum, the eardrum oscillates, and our brain interprets that oscillation, ultimately experiencing it as emotion.

The fundamental unit in music is the tone. When you sing a song, you are reproducing a sequence of tones in a certain order to produce a melody. In music, secondary tones usually accompany the lead tone or principal melody. When multiple tones sound at the same time, we call the result a chord. Chords define the mood of the music, and they are largely responsible for the emotions listeners experience when hearing it.

As the basic musical unit, the tone deserves a closer look. Western music uses twelve standard tones (C, C#, D, D#, E, F, F#, G, G#, A, A#, B). That range of tones is called an octave, and that set of twelve tones is commonly known as the chromatic scale.

In Western music, every chord is always composed of two or more of those tones. For instance, the C major chord, the I (tonic) chord of the key of C major, is formed by the tones C, E and G played at the same time.
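
As a small sketch of that idea (not taken from the post itself), here is how a major chord can be derived from the chromatic scale: a major triad is built from a root tone plus intervals of four and seven semitones. The names CHROMATIC, MAJOR_TRIAD_INTERVALS and major_chord are purely illustrative.

```python
# Building a major chord from the chromatic scale.
CHROMATIC = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

# Root, major third (+4 semitones), perfect fifth (+7 semitones).
MAJOR_TRIAD_INTERVALS = [0, 4, 7]


def major_chord(root: str) -> list[str]:
    """Return the tones of the major chord built on the given root."""
    start = CHROMATIC.index(root)
    return [CHROMATIC[(start + step) % 12] for step in MAJOR_TRIAD_INTERVALS]


print(major_chord("C"))  # ['C', 'E', 'G']
```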

In the same way that we call the sound of multiple tones played at once a chord, we call the group of tones a piece of music moves through its key. A key is similar to a chord; the basic difference is that a key is a set of tones spread out across time within the same tonal space, while a chord is a set of tones sounding at the same instant. To summarize, chords are multiple simultaneous tones, and a key is the set of tones the music keeps drawing from over time.

For instance, a C major chord could consist of the tones C, E and G played at the same time for two seconds. The key of C major could instead show up as the C tone played at second 1, the E tone played at second 2, and finally the G tone played alone at second 3.
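
A toy representation of that distinction might look like this in Python; the (second, tones) pairs are an ad-hoc structure chosen only for illustration.

```python
# A chord: several tones sounding at the same instant.
c_major_chord = [(0, ["C", "E", "G"])]  # C, E and G together, starting at second 0

# Moving through a key: tones from the same set, spread out over time.
c_major_over_time = [
    (1, ["C"]),  # C at second 1
    (2, ["E"]),  # E at second 2
    (3, ["G"]),  # G at second 3
]
```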

Remember that we said music is just waves; in fact, tones are waves too, and each tone has a unique corresponding wave. If we examine the most prominent waves within each part of a musical piece, we can find out which notes define that music within each time interval. We can therefore extract the tones, the chords and the key of that music just by analyzing the frequency of the waves it is composed of.
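
As a rough sketch of the idea, the snippet below uses NumPy's FFT to find the strongest frequency in a chunk of audio. In a real program the samples would come from a decoded music file; here a pure 440 Hz sine wave is synthesized so the example is self-contained.

```python
import numpy as np

sample_rate = 44100                          # samples per second (CD quality)
t = np.arange(sample_rate) / sample_rate     # one second of time stamps
samples = np.sin(2 * np.pi * 440.0 * t)      # a pure A (440 Hz) tone

spectrum = np.abs(np.fft.rfft(samples))                      # magnitude per frequency bin
freqs = np.fft.rfftfreq(len(samples), d=1 / sample_rate)     # frequency of each bin in Hz

dominant = freqs[np.argmax(spectrum)]
print(f"Dominant frequency: {dominant:.1f} Hz")  # ~440.0 Hz
```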

Frequency is the number of cycles a wave completes per unit of time. It is measured in hertz: one hertz equals one cycle per second. In a given tuning, each tone corresponds to a fixed frequency. For instance, the tone A corresponds to a frequency of 440 Hz. Instruments are usually tuned using that A as a reference, meaning that all the instruments we hear will produce the same frequencies for the same tones.
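
A minimal sketch of that tuning convention, assuming twelve-tone equal temperament: every semitone step multiplies the frequency by 2^(1/12), so any tone's frequency can be derived from the reference A at 440 Hz. The function name and the semitone indexing are assumptions made for this example.

```python
A4 = 440.0  # reference pitch: A above middle C


def tone_frequency(semitones_from_a: int) -> float:
    """Frequency of the tone a given number of semitones above (or below) A4."""
    return A4 * 2 ** (semitones_from_a / 12)


print(tone_frequency(0))    # 440.0   -> A itself
print(tone_frequency(3))    # ~523.25 -> C, three semitones above A4
print(tone_frequency(-9))   # ~261.63 -> middle C, nine semitones below A4
```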

In the next post, Spectral Analysis and Harmony, we will see how we can take advantage of wave analysis (Digital Signal Processing) and music theory (Harmony) to programmatically identify feelings in music files.
