Question:
How does the Shannon-Nyquist Sampling Theorem apply to microphones?
Alessandro D
2012-01-03 07:24:51 UTC
Hi,

Firstly, I understand the Shannon-Nyquist Sampling Theorem (or whatever you know it as) to be, essentially, this:
When you wish to take a recording of a sound, in order to get a realistic reproduction of the sound, the sample rate must be at least 2b, where 'b' is the highest frequency that you need to record. So, because the human hearing range is 20 Hz-20,000 Hz, the minimum sample rate for, say, music should be 40,000 Hz, with 44,100 Hz being the standard these days.
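
To put numbers on that, here is a tiny Python sketch of the arithmetic (the figures are just the ones quoted above, nothing specific to any particular recording system):

    # Nyquist criterion: the sample rate must be at least twice the highest
    # frequency you want to capture.
    highest_audible_hz = 20_000                   # top of the human hearing range
    nyquist_minimum_hz = 2 * highest_audible_hz   # = 40,000 Hz
    cd_sample_rate_hz = 44_100                    # the common "CD quality" rate

    print(f"minimum sample rate: {nyquist_minimum_hz} Hz")
    headroom_hz = cd_sample_rate_hz / 2 - highest_audible_hz
    print(f"44.1 kHz leaves {headroom_hz:.0f} Hz of margin above 20 kHz")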

So, my question is: how, if at all, does this affect the choice, use, design, etc. of a microphone that's being used for recording?

I have a basic understanding of physics and electronics with regard to sound and music, but find it difficult to understand overly technical descriptions, so please word your answers accordingly.

Also, if my understanding of the theorem is incorrect or incomplete please feel free to correct me.

Thanks in anticipation.
Three answers:
lunchtime_browser
2012-01-03 08:55:28 UTC
There is a slightly esoteric point to be made here, I suppose...

The thing about a microphone is that you are not in total control of the frequency range that can be presented to it. There could, in principle, be frequencies present above 20 kHz.

So what happens if there are frequencies present which are higher than half the sampling rate? You get a phenomenon called 'aliasing', in which bogus low-frequency components are produced (and therefore recorded), and these will appear as audible noise of some sort.
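
As a rough illustration of that folding effect, here is a small Python sketch (plain NumPy; the 25 kHz tone and 44.1 kHz rate are just example values). Once sampled, a tone above half the sample rate is indistinguishable from a lower, audible one:

    import numpy as np

    fs = 44_100              # sample rate in Hz
    f_real = 25_000          # ultrasonic tone, above fs/2 = 22,050 Hz
    f_alias = fs - f_real    # where it folds down to: 19,100 Hz (audible)

    n = np.arange(1024)      # sample indices
    ultrasonic = np.cos(2 * np.pi * f_real * n / fs)
    audible = np.cos(2 * np.pi * f_alias * n / fs)

    # The two sampled sequences are numerically identical: after sampling,
    # the 25 kHz tone cannot be told apart from a 19.1 kHz tone.
    print(np.allclose(ultrasonic, audible))   # True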

This suggests that you need a microphone with a flat frequency response up to 20 kHz, then a steep fall-off at higher frequencies.

In reality, any respectable A-to-D converter would include its own anti-aliasing (low-pass) filtering to handle this issue anyway.
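
If you wanted to mimic that behaviour in software, the usual recipe is a low-pass filter in front of the (re)sampler. This is only a sketch of the idea, assuming SciPy is available; the cutoff, filter order and rates are arbitrary choices for illustration, not what any particular converter actually uses:

    import numpy as np
    from scipy import signal

    fs_in = 176_400    # stand-in for the "analogue" input: 4 x 44.1 kHz, oversampled
    fs_out = 44_100    # target sample rate

    # Example wideband input: an audible 1 kHz tone plus an ultrasonic 30 kHz tone.
    t = np.arange(0, 0.1, 1 / fs_in)
    x = np.sin(2 * np.pi * 1_000 * t) + 0.5 * np.sin(2 * np.pi * 30_000 * t)

    # Low-pass below the new Nyquist frequency (22,050 Hz) before resampling,
    # so the 30 kHz component cannot fold down into the audible band.
    sos = signal.butter(8, 20_000, btype='low', fs=fs_in, output='sos')
    x_filtered = signal.sosfilt(sos, x)

    # Keep every 4th sample to get down to 44.1 kHz.
    y = x_filtered[::fs_in // fs_out]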

If I were being picky, your statement

"When you wish to take a recording of a sound, in order to get a realistic reproduction of the sound, the sample rate must be at least 2b, where 'b' is the highest frequency that you need to record."

might be better stated as:

"When you wish to take a recording of a sound, in order to get a realistic reproduction of the sound, the sample rate must be at least 2b, where 'b' is the highest frequency that MAY BE PRESENT."
Steve4Physics
2012-01-03 07:56:42 UTC
When you choose a microphone, the manufacturer usually tells you its frequency response - this is how sensitive the microphone is to each frequency in its range. This information is often given as a graph of output versus frequency.

For maximum fidelity ('accuracy of sound') the microphone's frequency response should be the same for all frequencies in the human hearing range of 20 Hz-20,000 Hz. This is called a 'flat response', as the graph would have a flat shape.
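
If you have the response curve as numbers (many datasheets publish the plot; the frequencies and dB figures below are made up purely for the example), checking how flat it is over the audible band is straightforward in Python:

    import numpy as np

    # Hypothetical microphone response: frequency in Hz and sensitivity in dB
    # relative to the level at 1 kHz. Real values would come from the datasheet.
    freqs_hz = np.array([20, 50, 100, 1_000, 5_000, 10_000, 15_000, 20_000])
    level_db = np.array([-6.0, -2.0, -0.5, 0.0, 0.3, 0.8, -1.5, -4.0])

    # Restrict to the audible band and measure the worst-case deviation from flat.
    audible = (freqs_hz >= 20) & (freqs_hz <= 20_000)
    deviation_db = level_db[audible].max() - level_db[audible].min()
    print(f"response varies by {deviation_db:.1f} dB across 20 Hz - 20 kHz")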

Some microphones have a much smaller frequency range - e.g. 50 Hz-10 kHz - so if you are sampling at 44.1 kHz you are 'wasting' some of the bandwidth the recording system can capture. Similarly, there would be no point in buying an expensive microphone which is sensitive up to 40 kHz, as the recording system couldn't record the high frequencies (and in any case, humans couldn't hear them).
anonymous
2016-12-01 08:41:23 UTC
A sampling scope does not necessarily carry out a linear PCM conversion on each and every sample to fulfil the Nyquist/Shannon requirement for an exact dithered quantizer. Indeed there may be no A/D conversion at all at the sampling instant - the sampler can simply charge a capacitor, whose held voltage is then converted at some lower rate. The key is that the sampling happens at a rate that is synchronised with the input signal (mathematically resembling an autocorrelation), so that the waveform can be reconstructed even though it is too fast for the actual A/D conversion rate, which is almost always much slower than the quoted "sampling rate" of the scope.
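
What this answer is describing is usually called equivalent-time sampling: if the waveform repeats, you can take one sample per repetition at a slowly sliding offset and still rebuild the fast waveform. A minimal Python sketch of the idea (all the numbers are invented for illustration):

    import numpy as np

    f_signal = 1_000_000.0        # a 1 MHz repetitive waveform under test
    period = 1.0 / f_signal

    def waveform(t):
        # The repetitive signal the scope is looking at.
        return np.sin(2 * np.pi * f_signal * t)

    # Take one sample per repetition, each time delayed a little further into
    # the period. The hold happens once per cycle here, but the A/D behind it
    # could run far slower and the reconstruction would still work.
    n_points = 200
    offsets = np.linspace(0, period, n_points, endpoint=False)
    sample_times = np.arange(n_points) * period + offsets   # one per cycle
    samples = waveform(sample_times)

    # Because the signal repeats exactly, plotting samples against offsets
    # reconstructs one full period as if it had been sampled at
    # n_points / period = 200 MHz.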

This content was originally posted on Y! Answers, a Q&A website that shut down in 2021.