Before we learn how to handle a reverb, how to control an EQ, or whether our guitar's Flanger pedal should go before the Distortion pedal, let's stop and think for a moment. We often hear words that, by sheer repetition, sound familiar yet lack solid foundations: DSP, effects, Chorus, Delay, and so on. To understand the use of effects in any mixing or post-production situation, we need to review in depth what "processing a sound signal" really means. If we take a few minutes now to reflect on the basics, we will save a lot of time later when facing a complex mixing situation. I assure you.
Before we dive into the subject, we should note that there are many types of signal processing, because there are many types of signals. The term DSP (Digital Signal Processing) is not limited to the world of sound. We are surrounded by digital signal processing: video can be digitally manipulated and transformed, images can be digitally transformed, and any kind of processing involving digital data is also DSP.
Now, can signals only be processed digitally? Definitely not. In many cases, when dealing with the sound signal, we resort to countless audio signal processes, and not all of them are digital. Entering the world of audio, we should remember that only two magnitudes count for sound: sound pressure and time. Every other parameter of sound: frequency, timbre, envelopes, equalizers, compressors, reverberation, echo, absolutely everything, can be explained with these two concepts alone.
(If some of these terms are difficult for you, you can consult our basic acoustics articles to strengthen your understanding of the topic we are covering.)

We define signal processing (SP) as any method, whether mechanical, analog, electric, electronic, or digital, that allows us to transform an original sound signal (S0) into another resulting sound signal that keeps a greater or lesser degree of identity with the original.
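As a minimal sketch of this definition (my own illustration, assuming a signal stored as a plain list of samples), even the simplest conceivable process, a gain change, fits it: S1 keeps full identity with S0, only the amplitude changes.

```python
def apply_gain(s0, gain):
    """Transform an original signal S0 into a resulting signal S1
    by scaling each sample. S1 keeps full identity with S0: the
    shape in time is unchanged, only the sound pressure is scaled."""
    return [sample * gain for sample in s0]

s0 = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]  # one cycle of a toy wave
s1 = apply_gain(s0, 0.5)
print(s1)  # every sample halved: the signal is processed, yet clearly still "the same"
```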
When a pianist presses the sustain pedal of his grand piano, he is processing the signal. When we sing, as we shape the mouth and larynx, we modify the sound emitted by the vocal cords, reinforcing the pitch on the vowels and creating the phonemes with consonants (which are noises) that help convey meaning.
When a trumpet player attaches a mute to the instrument, or when a guitarist presses a string against a fret on the guitar neck to shorten it, they are also doing signal processing. In all these cases, we are talking about what I dare to call "Mechanical Signal Processing", or MSP.
If you thought that the Wah Wah effect was exclusive to electric guitar players’ pedalboards, we invite you to watch this short video with an MSP example:
When we turn up the volume on our television set, when we transform the human voice with a microphone into a signal that reaches the recording system, or when we amplify the bass at a show, we are performing an analog signal processing operation, or ASP (Analog Signal Processing).
And when we use a plug-in on the computer, load samples into a sampler, or edit parameters in a digital synthesizer, we are talking about digital signal processing, or DSP (Digital Signal Processing). Even the conversion that your sound card performs before you can record into your favorite program involves many processes of manipulating the signal digitally.
Regardless of the environment in which we process audio signals, these processes share several axes of analysis from a methodological point of view. For most musicians and sound technicians, many processes go unnoticed simply because we are so naturally familiar with them.
The identity of the signal after processing:
When we talk about signal processing, many people think of effects: the manipulations a sound engineer makes to finish a recorded product (EQ, compression, reverb sends, Chorus, etc.). But instrument builders, amplifier designers, sound designers, performers, and even composers and arrangers are all, undoubtedly, permanently transforming sound signals.
So, what is a mixing effect? If I turn a violin's tuning peg, I am certainly changing the tuning, yet that is not an effect; nor does the instrument stop sounding like a violin. So? Well, in order not to sidetrack the discussion (a necessary one, by the way), let's focus on what generally raises the most doubts: effects for the mix.
The difference between considering a signal process an effect (FX) or merely a necessary variation of some sound parameter resides in the relationship of identity between the two signals. That is, if the original signal S0, after processing, retains characteristics that still identify it despite having become a processed signal S1, we speak of EFFECTS. If instead, after processing S0, we obtain a signal Z1 that keeps little identity with the original, we are probably talking about SYNTHESIS.
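One way to make this idea of "identity" concrete (my own illustration, not a formal definition) is to measure the normalized correlation between S0 and the processed signal: an effect such as a gain change leaves the correlation near 1, while a synthesis-like process such as ring modulation leaves almost none.

```python
import math

def normalized_correlation(a, b):
    """How much of signal a survives, recognizably, in signal b (1.0 = identical shape)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

n = 1000
t = [i / n for i in range(n)]
s0 = [math.sin(2 * math.pi * 5 * ti) for ti in t]   # original signal S0

s1 = [0.7 * x for x in s0]                          # mild gain: an EFFECT
z1 = [x * math.sin(2 * math.pi * 50 * ti)           # ring modulation: SYNTHESIS-like
      for x, ti in zip(s0, t)]

print(normalized_correlation(s0, s1))  # close to 1.0: still clearly S0
print(normalized_correlation(s0, z1))  # close to 0.0: little identity left
```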
Since signal processing is one of the basic principles of synthesizers, this last statement should not shock us. Any guitarist, from the moment he decides how to connect his pedals, is shaping the sound in such a way that he is, without a doubt, a SOUND DESIGNER. Some plug-ins, rather than being considered FX, are true synthesizers of the signal, since the result obtained bears no discernible relation to the original. The following chart explains the difference:
Perceptibility in time:
Processes can be REAL TIME or NON REAL TIME. In the first case, as we vary a control we hear a variation in the signal without perceptible delay. In NON REAL TIME digital processes, a calculation time is required (minimal or eternal depending on the power of the system) and is perceptible as a delay before the signal varies.
It should also be noted that in the DSP environment there is no such thing as INSTANT calculation time. Every digital process takes, due to the microprocessor architecture, some computation time; it is simply (or should be) IMPERCEPTIBLE.
When it starts to become perceptible, that is, when we demand a result from the system as a variation occurs and feel that the response is not immediate, we are talking about the LATENCY of the system. And I am sure that as you start using more of your digital system's resources, you will have to deal with this latency issue. But that is a topic we will address in subsequent articles.
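A back-of-the-envelope way to reason about one component of latency (a sketch only; real round-trip latency also includes converter and driver overhead) is the time needed to fill one audio buffer:

```python
def buffer_latency_ms(buffer_frames, sample_rate_hz):
    """Time the system needs to fill one audio buffer before it can process it."""
    return buffer_frames / sample_rate_hz * 1000.0

# Typical settings: the smaller the buffer, the lower the latency,
# but the harder the CPU has to work to keep up with the audio stream.
for frames in (64, 256, 512, 1024):
    print(frames, "frames at 44100 Hz ->",
          round(buffer_latency_ms(frames, 44100), 2), "ms")
```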
Preservation of the original signal:
Processes can be DESTRUCTIVE or NON-DESTRUCTIVE. When we talk about non-destructive processes (and we are not referring to Sound Forge's UNDO), we mean that the resulting signals are calculated from the original signal without the original being definitively lost. For example, if we record a voice with the console equalizer engaged before the recording system, we are destroying part of the original spectrum, which we will never recover. On the other hand, if we record the voice directly into the recording system and apply the EQ on its way back into the mix, the original signal will always be there.
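The difference can be sketched in code (my own illustration, with a crude moving average standing in for an EQ): the destructive workflow stores only the filtered version, while the non-destructive one keeps S0 and filters on playback.

```python
def simple_high_cut(signal):
    """A crude 2-point moving average: attenuates highs (an EQ stand-in)."""
    return [(a + b) / 2 for a, b in zip(signal, [signal[0]] + signal[:-1])]

take = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0]   # the "original" vocal take S0

# Destructive: EQ applied BEFORE the recorder -- only the filtered version is stored.
destructive_recording = simple_high_cut(take)

# Non-destructive: the raw take is stored; EQ happens on playback, so S0 survives.
non_destructive_recording = list(take)
playback = simple_high_cut(non_destructive_recording)

assert non_destructive_recording == take   # original still intact
assert destructive_recording != take       # original spectrum is gone for good
```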
Nature of the processes:
As we have said before, but it is worth reiterating, processes can be mechanical, analog, digital, or a combination of these. Consider this case: a singer is being recorded in a studio that has a digital console but uses a natural resonance chamber (a small U-shaped cavern usually built in the basement of some very professional studios to obtain natural reverberation). And that is all.
We simply record and mix the take. But if we break down the processes the signal goes through, we will be surprised at how complex and varied the chain turns out to be. Just as an exercise, let's walk through it together, step by step.
A singer emits his voice into a microphone; the microphone feeds our digital console; we use a bus from that console to send an amplified signal to a loudspeaker located in the natural reverberation chamber; we pick up the signal with another microphone in the chamber and bring it back to the digital console, where we mix it with the original voice signal. We obtain a voice with reverb. But we have passed through ASP, DSP, ASP, MSP, ASP, DSP, DSP, DSP, then on to the amplification system (ASP), and the final monitoring is expressed by the loudspeakers as sound pressure variations in the room (MSP).
1. We have a singer emitting his voice to record the take.
2. The microphone picks up the vocal signal, converting sound pressure into electrical impulses, and is connected by cable to the mixing console.
3. In the digital console, the ADC converter brings the signal into a channel, converting the variations of electrical potential into D-WORDS (digital words): digital coding.
4. Since we want to use a natural reverb, the signal is sent out through an auxiliary bus.
5. The output of the bus is amplified, and that amplified electrical signal reaches a loudspeaker inside the resonance chamber.
6. The loudspeaker transforms the electrical signal into sound pressure changes.
7. The resonance chamber adds its reflections and coloration by reflecting the sound pressure signal.
8. A second microphone picks up the loudspeaker's output together with all the resonances added by the natural space.
9. It transforms sound pressure into variations of electric potential traveling along a wire.
10. The signal carrying the chamber reflections is fed to the ADC converter of another channel of the mixing console.
11. The technician decides to equalize the reverb signal in order to color it to his taste.
12. The two channels, the "dry" voice and the reflections signal, are mixed (returning the reverb on a channel instead of the auxiliary) and go to a subgroup, which is added to the rest of the mix from the other channels, and from there to the master.
13. The output of the master is converted from digital to analog (DAC) and passed to the amplification system.
14. The power amplifier sends current pulses through the cables to the sound engineer's near-field monitors.
15. The monitor speakers convert the sum of all the signals into sound pressure inside the control room.
16. The sound technician listens to the mix with his ears; the sound pressure is transferred to the eardrum.
17. The eardrum transfers the energy as mechanical motion to the chain of ossicles in the middle ear.
18. The ossicles transfer the energy again, this time into the cochlea, converting mechanical movement into hydraulic movement.
19. The movement of the liquid inside the cochlea moves the cilia of the inner ear, shaking them mechanically.
20. At the base of the cilia, nerve endings concentrated in the organ of Corti exchange potassium and sodium ions to generate differences in electrical potential.
21. The electricity generated travels to the brain, where the sensation of sound is decoded.
Multiple processes can be sequential (in a chain) or parallel. In sequential processes, S0 passes through a process and becomes S1, S1 becomes S2, and so on up to the final resulting signal SR. Here, the order in which the processes are applied is of vital importance: depending on the order within the chain, the final resulting signal will differ. Compressing and then equalizing is not the same as doing it the other way around, for example.
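A toy sketch of why chain order matters (my own illustration, using a hard limiter and a gain stage as crude stand-ins for a compressor and an EQ boost):

```python
def gain(signal, factor):
    """Scale every sample by a constant factor."""
    return [s * factor for s in signal]

def hard_limit(signal, ceiling=1.0):
    """Clamp every sample to +/- ceiling (a brutal stand-in for a limiter)."""
    return [max(-ceiling, min(ceiling, s)) for s in signal]

s0 = [0.25, 0.5, 0.75]

# Chain A: limit first, then gain -- the boosted peaks sail past the ceiling again.
chain_a = gain(hard_limit(s0), 2.0)
# Chain B: gain first, then limit -- the limiter now actually catches the peaks.
chain_b = hard_limit(gain(s0, 2.0))

print(chain_a)  # [0.5, 1.0, 1.5] -> the 1.5 peak escaped
print(chain_b)  # [0.5, 1.0, 1.0] -> peaks held at the ceiling
```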
In parallel processes, S0 is processed by several devices at the same time, and the signals obtained (S1, S2, S3, etc.) are mixed, their dynamics possibly adding up, into a final resulting signal SR.
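Parallel processing can be sketched the same way (again my own toy illustration): S0 feeds two branches at once, and the branch outputs are summed into SR. Here one branch is the attenuated dry signal and the other a one-sample-delayed copy, which together form a rudimentary comb filter.

```python
s0 = [1.0, 0.0, 0.0, 0.0]                       # an impulse as the original signal

branch_1 = [0.5 * s for s in s0]                # S1: attenuated dry path
branch_2 = [0.0] + [0.5 * s for s in s0[:-1]]   # S2: one-sample delay, attenuated

sr = [a + b for a, b in zip(branch_1, branch_2)]  # SR: the parallel sum
print(sr)  # [0.5, 0.5, 0.0, 0.0] -- the impulse plus its delayed copy
```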
Processors normally act on one or more of the sound's parameters. We find those that operate on time: phasing, chorus, reverb, delay, etc. Others act on the dynamics: volume, compressors, limiters, expanders, noise gates, etc. Others act on the spectrum: equalizers, Wah-Wah filters, polar filters, etc. And others process pitch: Pitch Shifter, Octaver, Auto-Tune, etc.
Some processes involve complex operations in which more than one of these parameters is processed; Melodyne is one example.
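As a sketch of the "time" family above, here is a minimal echo: delayed, progressively quieter copies of the signal are mixed back in (my own illustration; the parameter names are hypothetical, not from any particular plug-in).

```python
def echo(signal, delay_samples, feedback, repeats=3):
    """Mix delayed copies of the signal back in, each one quieter than the last."""
    out = list(signal) + [0.0] * (delay_samples * repeats)
    for r in range(1, repeats + 1):
        level = feedback ** r                 # each repeat decays by the feedback factor
        for i, s in enumerate(signal):
            out[i + r * delay_samples] += level * s
    return out

dry = [1.0, 0.0, 0.0, 0.0]                    # an impulse
wet = echo(dry, delay_samples=4, feedback=0.5)
print(wet)  # echoes at samples 4, 8, 12 with levels 0.5, 0.25, 0.125
```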
There is still a long way to go. In successive installments we will get to know the effects one by one. Needless to say, the following articles will require some extra notions of acoustics, so check our site's library for the concepts you feel weakest on, in order to take better advantage of the notes to come.
Here are two audiovisual examples of REVERBS that are mechanical and analog in origin:
- SPRING type reverbs and PLATE type reverbs
- How would a Natural reverb work for your studio?
Angel Diego Merlo
©2004 Audiomidilab.com. All rights reserved. The publisher may have included audiovisual content in external links that correspond to third parties. Invoking the right to quote for educational purposes.
Recommended reader level: Intermediate