24-bit/96/192kHz vs 16-bit/44.1/48kHz for music playback - A scientific look.

  • Thread starter: mrk
Can we sum this up then?

Say for example, I have 11.5GB of FLAC music on my mobile (which I do).

What format should I compress said music into to retain perfect audio to the human ear yet reduce the file as much as possible?
 
Keep it as FLAC if it's at 16-bit and 44.1kHz.

You could convert them to MP3 using the LAME encoder and the quality would be indistinguishable from the lossless format in most instances, though that defeats the objective of having lossless music in your archive.
 
indistinguishable from the lossless format in most instances

and that's under ideal listening conditions when you can give it your 100% attention. all that goes out the window if you're walking down the street or on a bus/train.

i always used to ridicule people who carried lossless on a portable device because it is a massive waste of space but as storage gets larger and cheaper, i guess it's not so mad. if you have room and don't have to constantly add/remove files because you can't fit everything on at once, i suppose it's acceptable to do it.
 
Yup, my 64GB card in the phone has about 7GB free as most of it is used by music, some of which is in FLAC where I've been able to rip or get it in that format. Syncing with the PC is just easier this way, no need to dedicate time to converting each new album to MP3 just for the phone, and I would imagine most people have in-ear buds that isolate outside noise quite well :D
 
Probably because it's all subjective to the end user. I once tested some basic cheap bell-wire speaker cables against some expensive QED brand cables connected to the same separates hi-fi system, and I couldn't really tell a difference, except that one set of speaker cable looked better than the other, so was it aesthetics over sound quality? Metaphysics and the laws of perception are a curious thing :D. By the looks and sounds of things you might as well stick with 16/44.1 for playback/listening, but another question is "cheap DACs vs expensive DACs whilst listening in 16/44.1?", ha :D.

The difference being that cables are proven not to colour sound, and 24-bit audio playback doesn't improve quality, but people still desperately clutch to the idea that their own audio equipment runs on magic and fairy dust and isn't like anyone else's, so fancy cables and 24-bit audio DO make their systems sound better.
 
^ Generally true, but don't get it mixed up with genuine cable issues though. Look at the X1 thread: the stock cable does degrade the sound because it has unusually high resistance (1.5 ohms), whereas a normal cable is typically around 0.5 ohms.

It's not common but it does happen.
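For a sense of scale, the level drop from series cable resistance can be estimated with a simple voltage-divider model. A minimal sketch; the 32-ohm headphone load is an illustrative assumption, not a figure from the X1 thread:

```python
import math

def cable_loss_db(cable_ohms: float, load_ohms: float) -> float:
    """Insertion loss from cable resistance in series with the load,
    using a simple voltage-divider model (ignores reactance)."""
    return 20 * math.log10(load_ohms / (load_ohms + cable_ohms))

# Assumed 32-ohm headphone load, purely for illustration:
print(round(cable_loss_db(1.5, 32.0), 2))  # 1.5-ohm stock cable
print(round(cable_loss_db(0.5, 32.0), 2))  # 0.5-ohm typical cable
```

Even the unusually high 1.5-ohm cable only drops the level by a fraction of a dB into this flat load; the more audible effect of high cable resistance is its interaction with a driver's impedance swings across frequency, which this simple model ignores.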
 
Of course, cables can degrade sound quality due to them being insufficient for the load placed upon them, but I'm talking about the sort of fluff where people claim different cables make the treble/mids/bass more pronounced compared to other cables.

I've heard on these very forums, someone claiming that a set of cables were "too toppy" in reference to top end. I cringed when I read it, then felt sad.
 
Oh indeed, the fact is marketed cables are priced insanely high for no reason whatsoever. A digital cable is exactly that. Tesco sell a 1 metre strand of optical "premium" cable which is fine, it's well built and terminated with metal ends and they charge a tenner. The exact same cable, but longer, on eBay is £7 cheaper.

In an all analogue system I can appreciate some cables do deliver a different sound to others but the equipment being connected has to be supremely good to be sensitive enough to tell the difference and we're probably talking amps/speakers that cost thousands if not tens of thousands of £.

Everything else is so minuscule it falls into the doesn't matter category so buy something built well that will last for the price as opposed to "much better bass from this cable!!!" type stuff :p
 
What gets me the most are gold plated TOSLink cables, I die inside a little bit when I see one of those.

About analogue stuff though, even that isn't coloured by cables. If you look inside any of these high-end speakers and amps, it's copper all the way through anyway. That's what's always got me about the claims people make about cables: if you just take the cover off you'll see metres upon metres of bog-standard PVC-sheathed copper cabling and copper traces on PCBs and so on. Even if these magic cables DID do anything, the magic would only be present inside the cable and would die as soon as it entered the speaker. :p
 
Yeah, I stopped bothering with them when they carried on their silly claims with HDMI cables. I found it unbelievable that they were seriously genuinely claiming that the expensive HDMI cables they were reviewing had different picture properties.

I couldn't decide if they were living in pretend land, or they were just being shills, or a mixture of both.

Special sound transparency copper sounds intredasting, maybe you're on to something? :D
 
Now higher sample rates do have a use... By using a higher sample rate you are moving the aliasing products and the anti-aliasing filter further above the audible range, so pitching audio down never becomes an issue.

True to a degree, but note that 96 kHz and even 192 kHz isn't enough oversampling for many kinds of processing. The bottom line is that if it's a high fidelity plug-in or process that needs frequency headroom, it will have its own oversampling built in, and the overall sample rate need not be more than 44.1/48 kHz.
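The folding behaviour itself is easy to demonstrate. A quick sketch in plain Python (no audio libraries) of where an out-of-band tone lands after sampling, assuming an ideal sampler with no anti-aliasing filter:

```python
def alias_frequency(f_hz: float, fs_hz: float) -> float:
    """Apparent frequency of a tone after sampling at fs_hz:
    frequencies fold (mirror) around multiples of fs_hz / 2."""
    f = f_hz % fs_hz
    return fs_hz - f if f > fs_hz / 2 else f

# A 30 kHz component sampled at 44.1 kHz folds down into the audible band:
print(alias_frequency(30_000.0, 44_100.0))  # 14100.0 Hz, clearly audible
# At 96 kHz the same component stays put, far above hearing:
print(alias_frequency(30_000.0, 96_000.0))  # 30000.0 Hz, inaudible
```

This is why ultrasonic content has to be filtered or kept above fs/2: once it folds down, it lands in-band where no amount of playback hardware can remove it.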

wickfut said:
16 bit has a dynamic range of 120db. That means that it can go from silent to deafening (literally) in 65536 steps. Which is once again the full level a human can hear.

16-bit would be 96 dB (20-bit is 120 dB—6.02 dB per bit).
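The arithmetic behind those figures, as a quick sketch:

```python
import math

def dynamic_range_db(bits: int) -> float:
    """Theoretical dynamic range of an n-bit quantiser:
    20 * log10(2**n), i.e. roughly 6.02 dB per bit."""
    return 20 * math.log10(2 ** bits)

print(round(dynamic_range_db(16), 1))  # 96.3 dB
print(round(dynamic_range_db(20), 1))  # 120.4 dB
print(round(dynamic_range_db(24), 1))  # 144.5 dB
```

The commonly quoted 96/120/144 dB figures are just these values rounded down; techniques like noise shaping can stretch the perceived range somewhat further, but the per-bit relationship is fixed.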

wickfut said:
When recording audio at 16 bit, to get the full dynamic range people generally set the loudest part of the music to 0db. But if any transient sound accidentally goes over that level it goes into clipping, which is digital distortion.

When recording, people usually use something like -12 to -18 dB for the loud parts, to allow room for transients. Now, they may compress the heck out of the mix during mastering, and set the peaks just under 0 dB ("loudness wars"), and I'm sure that's what you mean. Sure for something like that, 16 bits for the final product is fine. But if you want dynamics...

OK, for a typical passage of the music, you may be 12-18 dB down. For classical, you can have extended quiet passages, for which you need to allow a lot of headroom for some other point in the music. Sixteen bits is only "all you'll ever want and need" under ideal circumstances. And, of course, during the recording phase 16-bit is unnecessarily restrictive. There, it's a no-brainer—a 50% increase in data for a huge increase in dynamic range and flexibility. For distribution, it's less important to have 24 bits in most cases, but again 24 bits does allow some flexibility, and again only costs another 50% (really, 20 bits would be plenty at only a 25% increase, but 24 bits is the practical choice for recording/mixing for obvious computer-related reasons, so…). Sure, for most pop songs you won't hear the difference. But it is a substantial win for the record/mix phase.

Higher sample rates, by comparison, are a terrible deal. Double the data rate for…theoretically nothing, although in practice it's implementation dependent and may be a slight improvement, or it may be worse.
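To put numbers on the cost side of both trade-offs, a small sketch of uncompressed PCM data rates (stereo assumed):

```python
def pcm_kbps(fs_hz: int, bits: int, channels: int = 2) -> float:
    """Uncompressed PCM bitrate in kilobits per second."""
    return fs_hz * bits * channels / 1000

cd      = pcm_kbps(44_100, 16)  # CD baseline: 1411.2 kbps
hi_bit  = pcm_kbps(44_100, 24)  # +50% data for ~48 dB more range
hi_rate = pcm_kbps(96_000, 16)  # more than double for ultrasonic content
print(cd, hi_bit, hi_rate)
```

Going 16 to 24 bits buys measurable dynamic range for a 50% data increase, while going 44.1 to 96 kHz more than doubles the data purely for content above hearing, which is the "terrible deal" the post above describes.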

wickfut said:
You only need 16/44.1 for audio playback. Any more than that and you're going beyond the level of what your ears can handle.

So, while I can agree with you that 44.1 is sufficient for playback, I'd only agree on 16 bit "for the majority of cases" or similar disclaimer. ;)
 
Wowzers, on searching whether to set my Xtreme Music card to a higher frequency as times have moved on, this has answered my question and filled my head with some good sound knowledge.
44.1kHz, bit-matched it is then :0)
 
I compared the Michael Jackson Thriller CD ripped to FLAC against the HD Tracks 24-bit/176kHz download, and the 24-bit version is better. I also blind tested a friend across the album and he picked the 24-bit track every time.
 
Nick, that's two different sources, and doesn't take into account volume-matching or, indeed, any unmentioned post-processing that HD Tracks may have done in order to make their service appear 'more awesomer'*.



* Not that I'm saying HD Tracks deliberately mislead with some sneaky post-mastering; it might be that they just normalise levels and the FLAC rip didn't. Comparing waveforms would highlight any processing.
 
They must have been taken from the same master though, the CD I used was a Japanese release that most regard as the best sounding.

The level was pretty much the same, I can assure you it was more than just a difference in volume we could hear.
 
Not necessarily taken from the same master. The fact that there's a Japanese version which is regarded as the best sounding demonstrates that there are multiple 'masters' out there, due to reissues and new formats, each one processed by different mastering facilities using different techniques at different times and stages of technology.

See/hear this YouTube clip, which plays three different mastering treatments of Thriller, each audibly different. Nothing to do with bit depths or sample rates.
 
forget the bought CD and create your own "CD quality" 16bit/44100Hz copy from the HDtracks download. then compare that against the original and see if you can tell the difference. i won't believe you if you say you can. :p
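Before doing that listening test, it's worth getting a feel for just how small the 16-bit quantisation error is. A pure-Python sketch comparing the residual error of 16-bit and 24-bit quantisation of a test tone (the 997 Hz tone and sample count are arbitrary illustrative choices):

```python
import math

def quantize(x: float, bits: int) -> float:
    """Round a [-1, 1] sample to the nearest n-bit level, back as float."""
    scale = 2.0 ** (bits - 1)
    return round(x * scale) / scale

def rms_quantization_error(bits: int, n: int = 10_000) -> float:
    total = 0.0
    for i in range(n):
        x = math.sin(2 * math.pi * 997 * i / 44_100)  # 997 Hz test tone
        total += (x - quantize(x, bits)) ** 2
    return math.sqrt(total / n)

# Each extra bit halves the error, so 16 -> 24 bits shrinks it ~2**8 = 256x:
ratio = rms_quantization_error(16) / rms_quantization_error(24)
print(round(ratio))
```

The error the 16-bit version adds sits roughly 96 dB below full scale, which is why a properly made 16/44.1 bounce of a 24-bit master is so hard to tell apart from the original.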
 
I'm sorry to come wading in here, but whilst this might be true for some of you guys, people like myself (albeit a small percentage of the 'market', if you want to term it like that) CAN and DO hear the difference between

44.1kHz / 16-bit

Vs.

96kHz / 24-bit

It is NOT simply a marketing ploy by the industry as has been claimed on here.

I also note that recently Neil Young has been actively involved in this debate by pioneering the Pono Player. Whilst I have some reservations about this 'new' technology; if it does indeed bring the original high-fidelity, well captured, professionally mixed and mastered records to the masses without the need for dithering etc then I'm all for it!

What you need to try and understand peeps is this:

If I have an instrument that I wish to capture accurately with a microphone, then 44,100 samples a second of this source (at a 16-bit integer bit depth) does NOT accurately capture all of the details, nuances and harmonic content inherent within the original wave disturbance of the air molecules.

If this mix then gets mastered in the SAME audio format and distributed as a lossless file with the same parameters (a 96/24 FLAC, for example) then this will sound better.

I need to therefore point out that simply converting any master into a higher format will NOT magically add in the details that we are concerned with here (assuming your original was already in a digital format). Unfortunately, to make things confusing for you, it is different if the original was analogue based, but don't worry too much about that for the moment.

If you guys want to test this properly for yourselves, grab a copy of Audacity (or any other DAW for that matter), buy a half-decent audio interface (like an RME or equivalent), beg/borrow or steal a budget large-diaphragm condenser microphone and actually do some recording of your own to test these different formats. Be sure to make it as fair as possible: exactly the same microphone, player, room and mic placement etc etc.

N.B. you will also need to set the project to the desired bit depth and sample rate for this. Bounce both down once recorded WITHOUT dithering and listen on some half-decent speakers. Your gaming cards with Logitech Z-5500s are NO GOOD for this.
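An aside on the "WITHOUT dithering" instruction: skipping dither makes sense for an apples-to-apples format comparison, but when a 24-bit mix is reduced to 16-bit for release, TPDF dither is normally added so the quantisation error becomes benign noise rather than signal-correlated distortion. A minimal sketch of that step; the scaling convention used here is one common choice, not the only one:

```python
import random

def to_16bit(x: float, dither: bool = True) -> int:
    """Convert a [-1, 1] float sample to a 16-bit integer, optionally
    adding TPDF dither (two uniform randoms summed, +/-1 LSB peak)."""
    scale = 2 ** 15
    d = (random.random() + random.random() - 1.0) if dither else 0.0
    v = int(round(x * (scale - 1) + d))
    return max(-scale, min(scale - 1, v))  # clamp to the int16 range

print(to_16bit(0.5, dither=False))  # 16384, deterministic
print(to_16bit(0.5))                # 16383 or 16384, randomised by dither
```

The dithered value jitters by one least-significant bit from sample to sample, which is exactly the point: the rounding error averages out as a flat noise floor instead of tracking the signal.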

Admittedly, it will take a while for your ears to become trained but I absolutely guarantee you there is a marked difference and one that you should all be able to hear.

Saying this, some of my mates think I have crazy ears so go figure.
 