24-bit/96/192 kHz vs 16-bit/44.1/48 kHz for music playback - A scientific look.

  • Thread starter: mrk
if it brings the high-fidelity, well captured, professionally mixed and mastered records to the masses then I'm all for it!

Yep, I can't disagree with that. But this can be done without using hi-res files for distribution. The only reason there are terrible-quality CDs out there at the moment is the people who made them, not the technology. I've heard Flea from Red Hot Chili Peppers is backing this Pono thing, yet their Californication album is widely renowned for its terrible quality. That isn't the fault of the format.

If I have an instrument that I wish to capture with a microphone accurately then 44,100 samples a second of this source (with a 16-bit bit depth integer) does NOT accurately capture all of the details, nuances and harmonic content inherent within the original wave disturbance of the air molecules.

I'm not sure I agree entirely with your reasoning, but I don't think anyone would suggest recording at 16-bit is a good idea. The article from the first post covers that...

When does 24 bit matter?

Professionals use 24-bit samples in recording and production [14] for headroom, noise floor, and convenience reasons.

16 bits is enough to span the real hearing range with room to spare, though it does not span the entire possible signal range of audio equipment. The primary reason to use 24 bits when recording is to prevent mistakes: rather than carefully centering a 16-bit recording (risking clipping if you guess too high and added noise if you guess too low), 24 bits lets an operator set an approximate level and not worry too much about it. Missing the optimal gain setting by a few bits has no consequences, and effects that dynamically compress the recorded range have a deep floor to work with.
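To put the headroom argument in numbers: each bit of linear PCM buys roughly 6.02 dB of dynamic range, so a 24-bit take levelled a few bits too low still has more range than a perfectly centered 16-bit one. A quick back-of-envelope sketch in Python (the 6.02 dB/bit figure is the standard rule of thumb, not anything from this thread):

```python
import math

def dynamic_range_db(bits):
    """Theoretical dynamic range of n-bit linear PCM (~6.02 dB per bit)."""
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))      # 96.3 dB
print(round(dynamic_range_db(24), 1))      # 144.5 dB
# Miss the ideal level by 4 bits (~24 dB) on a 24-bit recorder and you
# still have more range left than a perfectly levelled 16-bit take:
print(round(dynamic_range_db(24 - 4), 1))  # 120.4 dB
```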

An engineer also requires more than 16 bits during mixing and mastering. Modern workflows may involve literally thousands of effects and operations. The quantization noise and noise floor of a 16-bit sample may be undetectable during playback, but multiplying that noise a few thousand times eventually becomes noticeable. 24 bits keeps the accumulated noise at a very low level. Once the music is ready to distribute, there's no reason to keep more than 16 bits.
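The noise-accumulation point can be roughed out the same way. This is a crude model, assuming each of N operations contributes independent quantization noise (so total noise power grows roughly linearly with N); the -(6.02·bits + 1.76) dB floor is the textbook figure for quantizing a full-scale sine, not a measurement from any real session:

```python
import math

def quant_noise_floor_db(bits):
    """Textbook quantization noise floor for a full-scale sine."""
    return -(6.02 * bits + 1.76)

def noise_after_ops(bits, ops):
    """Noise floor after `ops` operations, assuming each adds independent
    quantization noise, so total noise power grows ~linearly with ops."""
    return quant_noise_floor_db(bits) + 10 * math.log10(ops)

print(round(noise_after_ops(16, 1000)))  # -68 dB: creeping toward audibility
print(round(noise_after_ops(24, 1000)))  # -116 dB: still far below hearing
```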

I need to therefore point out that simply converting any master into a higher-resolution format will NOT magically add in the details we are concerned with here (assuming your original was already in a digital format).

I'm pretty sure no one is saying this. :confused:
 
Thanks for your reply Marc,

Hope you don't mistake my passion for frustration with this post.

The high-res file for distribution we should all welcome with open arms. If the original source was recorded at these higher sample rates and bit depths, and the signal chain from mix to master is intact, then higher-res distribution files will absolutely sound better. Hands down.

What you need to remember is that I know a few pros who like to sum their mixes out of digital land into analogue land and then back into the DAW. And that's just at the MIXING stage!

Equally, a lot of mastering engineers prefer to use outboard analogue gear for mastering, again requiring the conversion of binary code into analogue current and back again in the DAW of choice.

Why do you think people like me spend thousands of pounds on Apogee, Lynx and ADA converters? Just to sit there and look pretty in my rack?! (they do in fact look mighty pretty!) lol.

And I'm going to have to disagree about Californication - I had to reproduce this track with a student very recently and it was a tough job getting the energy, punch and midrange of the original. Admittedly, I heard a lot of distortion on the source but as mentioned earlier in the thread you can thank Brickwall Limiters (aka loudness wars) for this! Ooorah.

As for the capturing-at-source thing: I highly recommend you guys do some empirical testing of your own using an acoustic instrument (or even a DI'd guitar) and switching between sample rates whilst capturing the same source. Remember, it only gets worse further down the signal chain, so why not capture the source in 'HD' if we know the audio is gonna be converted a lot?!

And yes, I am an audio snob but I sit here unashamed of the fact! haha :D

Peace and love peeps!
 
I also note that recently Neil Young has been actively involved in this debate by pioneering the Pono Player. Whilst I have some reservations about this 'new' technology, if it does indeed bring the original high-fidelity, well-captured, professionally mixed and mastered records to the masses without the need for dithering etc. then I'm all for it!

The videos of Neil promoting Pono are a bit disturbing. Renowned musician after renowned musician proclaims it the best they've ever heard, which is extremely unlikely, and the apparent fact that they just auditioned it in a car makes it even more unlikely.

Yet all these guys showed up. Because Neil is such a good friend that they want to lend their names to his effort? Or because it's CD déjà vu: people get to re-purchase their catalogs in another, higher-cost format?

One thing is for sure with Pono: Any improvement will likely be lost on 99% of the potential listeners, the iPod (etc.) crowd, who will be listening with their marginal headphones and in their cars. Now, you may counter that this is fine, since it gives audiophiles a better source material, sponsored by the unwashed—OK.

If I have an instrument that I wish to capture with a microphone accurately then 44,100 samples a second of this source (with a 16-bit bit depth integer) does NOT accurately capture all of the details, nuances and harmonic content inherent within the original wave disturbance of the air molecules.

Of course, nothing does. Not analog in the first place, nor any part of the analog path that we'll be using for digital playback, loudspeakers especially. But digital is inherently an approximation. With 96 kHz, you're giving yourself one measly octave more frequency headroom. If "nuance" were the issue, you'd probably want something more like 5 MHz.
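The "one measly octave" is easy to verify yourself: the Nyquist limit is half the sample rate, and octaves are just log2 ratios. A small sketch comparing the common rates against a nominal 20 kHz hearing limit (the 20 kHz reference is the usual textbook figure):

```python
import math

def octaves_above(nyquist_hz, ref_hz=20000):
    """Octaves of frequency headroom between ref_hz and the Nyquist limit."""
    return math.log2(nyquist_hz / ref_hz)

for rate in (44100, 96000, 192000):
    print(rate, round(octaves_above(rate / 2), 2))
# 44100 -> 0.14 octaves above 20 kHz; 96000 -> 1.26; 192000 -> 2.26
# i.e. moving from 44.1k to 96k buys roughly one extra octave.
```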

If this mix then gets mastered in the SAME audio format and distributed as a lossless file with the same parameters (a 96/24 FLAC, for example), then it will sound better.

I would think that the "lossless" part is the most important. (How much depends on the lossless format, of course.)

Admittedly, it will take a while for your ears to become trained but I absolutely guarantee you there is a marked difference and one that you should all be able to hear.

The trouble with binary comparisons is that you have a 50% chance of picking the right one. People I respect have asserted that they can hear whether a 24-bit audio track is dithered or not. Allow me to understate it, out of that respect, and say that this is unlikely. Poking about on the web, I read a thread where one of these mastering engineers said he could hear the difference when a certain switch was enabled. It came out later that the switch was not an enable/disable for dither, as he had thought, and didn't affect the audio at all; he admitted that perhaps he was letting his imagination get the better of him. Elsewhere he was also honest enough to say that he could hear 24-bit dither or not "on a good day". Well, if something's 50-50, on a good day you'll be lucky enough to guess right several times in a row.
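The 50-50 point is worth quantifying: the binomial distribution tells you how often a pure guesser scores well on an ABX-style test. A small illustration (the trial counts here are hypothetical, just to show the scale):

```python
from math import comb

def p_at_least(k, n, p=0.5):
    """Chance of scoring k or better out of n A/B trials by pure guessing."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

print(round(p_at_least(4, 4), 4))   # 0.0625 -- 1 in 16 guessers goes 4/4
print(round(p_at_least(8, 10), 4))  # 0.0547 -- even 8/10 happens by luck ~5% of the time
```

This is why serious listening tests demand many trials and a pre-registered success threshold, not a lucky streak on "a good day".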

That's why I'll be skeptical right up till the point where double-blind tests show that the higher sample rate is preferred consistently. I haven't seen that yet, so if you have, please point me to the study.

Anyway, I'm just commenting on your comments, not implying that you can't hear the difference, and not really aiming the comments at you. One problem is that there are differences between playing back at 44.1k and 96k in the converter alone, so when a difference can be heard, it's hard to be certain it's the source material.
 
On mine (Win 8.1)

Playback Devices > Select source > Properties > Advanced > Select from the dropdown

The dropdown menu should have all permutations of 16/24bit and supported sample frequencies.
 
On mine (Win 8.1)

Playback Devices > Select source > Properties > Advanced > Select from the dropdown

The dropdown menu should have all permutations of 16/24bit and supported sample frequencies.

I'll try, but I'm sure that is exactly where I have been looking and it's not there.

Not that it makes any real difference; I thought I may as well enable it if I have access to it.
 
Well, as far as I know that's the only place you can select it; I've looked in Device Manager and can't find anything in there.
 