CD quality?

EAC is better if the disc is damaged; if it isn't, it's pretty much moot. Besides, that really makes no difference here.
squiffy said:
Copy lame_enc.dll to the Audiograbber directory.

Using the 'external encoder' route is better because it lets him set the encoder up correctly. If you just copy the DLL you don't get the option to pass command line arguments, as far as I can see :)
 
Bloody hell, you can tell I'm not a big sound quality man. Can't tell the difference between 128kbps and 320kbps on my speakers :o .
 
Did the above; I've now got it as an external encoder, Predefined Arguments are set to User Defined, and in the field I've got %l--alt-preset 128%l%h--alt-preset standard%h %s

but it doesn't encode!!

If I put the Predefined setting onto Lame 128 etc. it will encode, but if I then put the argument in the field it won't encode!!

1st and 3rd don't work, 2nd one does.

(screenshot: lame1.jpg)
 
Two things that really matter here are the style of the music being converted and the encoder used. Some material sounds OK at 128kbps, but other material will sound terrible.
 
dmpoole said:
Did the above; I've now got it as an external encoder, Predefined Arguments are set to User Defined, and in the field I've got %l--alt-preset 128%l%h--alt-preset standard%h %s

but it doesn't encode!!


Ahh, sorry, that's entirely my fault. Those are the parameters I used for the 3.96 beta, which don't work.
Code:
-V 0 --vbr-new --add-id3v2 --pad-id3v2 --ta "%a" --tt "%t" --tl "%g" --ty "%y" --tn "%n" %s %d

That should work fine now. I've tried it with both EAC and Audiograbber and it works here, although they do produce different file sizes. Anyway, it should produce VBR LAME MP3s with a bitrate of ~230kbps :)
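For anyone wondering what those % placeholders do: the ripper substitutes track metadata and file names into the template before calling lame.exe. A rough Python sketch of that expansion (the tag values and file names here are invented, and the %a/%t/%g meanings follow the EAC-style conventions — treat this as an illustration, not the programs' actual code):

```python
# Sketch of how a ripper expands the placeholder template into a real
# LAME command line. Tag values and file names below are made up.
template = ('-V 0 --vbr-new --add-id3v2 --pad-id3v2 '
            '--ta "%a" --tt "%t" --tl "%g" --ty "%y" --tn "%n" %s %d')

tags = {
    '%a': 'Kansas',       # artist
    '%t': 'Song Title',   # track title
    '%g': 'Album Title',  # album (EAC-style convention, hence --tl "%g")
    '%y': '1976',         # year
    '%n': '3',            # track number
    '%s': 'track03.wav',  # source wav the ripper just extracted
    '%d': 'track03.mp3',  # destination mp3
}

cmdline = template
for placeholder, value in tags.items():
    cmdline = cmdline.replace(placeholder, value)

print('lame ' + cmdline)
```

The upshot is that lame ends up being called with full ID3 tags plus the VBR quality switches, which is exactly what copying the bare DLL into the Audiograbber directory doesn't let you control.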
 
Excellent, thank you :) That's a much better result than the 192 VBR test, but the 'polyphase lowpass filter, transition band: 19383 Hz - 19916 Hz' is evidently at work there. I wonder why that's on by default anyway, and I wonder if it can be changed...


Edit: I've done a bit of reading. Apparently if you add -k to the argument field it will disable all filters and allow full-bandwidth encoding, at the expense of efficiency and possible artifacts.
 
In those graphs, what are we looking at? If we are looking at a full spectrum analysis, then how many points are we using? Currently the sound sample you are using seems to cover every single frequency from 20Hz to 19kHz. Or is it a test tone that's supposed to do that? If so, then compressing such a sample would not get rid of any unwanted noise, as the interpolation of such a sample would be pretty obvious. With music it would be hugely different, as it's not a continuous sweep. :confused:
 
This is the test wav and it is a portion of a song by prog rock band KANSAS.
I chose this because it's one of the best recorded albums I've heard. Test Wav

It takes in the full bandwidth and I thought it would be interesting to see how different compression rates would affect it.
 
dmpoole said:
This is the test wav and it is a portion of a song by prog rock band KANSAS.
I chose this because it's one of the best recorded albums I've heard. Test Wav

It takes in the full bandwidth and I thought it would be interesting to see how different compression rates would affect it.

I appreciate that :) What I don't understand is the graph. It seems to show a plot of a set of frequencies from 20Hz to 19kHz, which is fair enough as that's pretty much the audible spectrum.

What is odd about the graph is that it shows all the frequencies of the spectrum being hit, like a full spectrum sweep. A piece of music isn't like that.

If it was a true spectral analysis I would expect to see a set of frequencies in discrete form with variable amplitudes. I'm just not understanding what the graphs represent and how they prove anything, but then it could just be my lack of understanding of what you are showing us, hence the questions. :)
 
Bear said:
If it was a true spectral analysis I would expect to see a set of frequencies in discrete form with variable amplitudes.

If, for example, you had a bass drum, snare and cowbell, you could possibly see three peaks within the full spectrum. Add floor tom, mid tom, hi tom, hi-hat, crash etc. and you should now see the frequencies of each instrument joining into each other over the full spectrum.
If all the instruments in a rock band or orchestra are playing then you should see the full frequency spectrum, and if the mix is correct you should see the same level at all frequencies. The only time you'd see a peak at a certain frequency would be when one instrument is playing on its own.

In the old days of mixing bands I used to carry two hand-held devices: one was a decibel meter and the other was a spectrum analyser.
I would get every instrument hitting the 110dB mark, e.g. I'd get the drummer to play his bass drum and the meter would read 110, get him to play the snare and it reads 110, get the guitarist to play his guitar and it reads 110dB. When all the instruments are then played together the mix should be spot on, and I still use this method every week with my own band and we never do a soundcheck.
I would then have the other bands (not mine) do a soundcheck, and using the spectrum analyser I would look at the stereo 48-band display and make sure it was equal all across the frequencies; then I could be 99% sure that the mix would be correct. Obviously you can't allow for a drummer who decides he's getting knackered, a vocalist who suddenly stands further away from his mic halfway through the set, or a guitarist whose lead sound isn't set up properly.
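As an aside, the 'everything at 110dB' trick has a tidy bit of arithmetic behind it: uncorrelated sources add by intensity, so n equal sources sit 10·log10(n) dB above a single one. A quick sketch (the figures are hypothetical, not from the posts above):

```python
import math

def combined_spl(levels_db):
    """Combined level of uncorrelated sources: add intensities, convert back to dB."""
    total_intensity = sum(10 ** (level / 10) for level in levels_db)
    return 10 * math.log10(total_intensity)

# Four instruments each levelled to 110dB with the handheld meter
print(round(combined_spl([110, 110, 110, 110]), 1))  # 116.0 -> +6dB over one instrument
```

So levelling every instrument to the same reading gives a predictable overall level, which is presumably why the method holds up without a soundcheck.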
 
I understand what you are saying, but there will always be gaps in the spectrum, as not every single Hz will be hit. If every single frequency were hit at the same level then it wouldn't be music, it would be white noise. Obviously, as you know, it's the changes that make the music.

If you take analogue music and perform an FFT on it, you get lots of discrete points of varying magnitude, and how many points you use determines how accurate the reproduction is. The more points used, the less interpolation is needed (filling in the gaps between the points). With compression, you reduce the number of points used and cleverly interpolate, hoping what you simulate between the points is as faithful as possible. The fewer points you use, the more likely you are to miss some transient in between. If you then convert back to an analogue signal, some of that will be lost.

Hence I'm puzzled by your graphs, as all they show (to my understanding, which could be wrong :p ) is white noise, and each reduction in bit rate just seems to narrow the bandwidth. Apologies if I seem to go on; it's just that I like to understand things, and it seems to me you are saying that bit reduction doesn't affect the music that much, with your graphs as proof. I just don't understand the graphs, as they don't seem to show anything I can take anything from. :confused:
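The 'discrete points of varying magnitude' idea is easy to demonstrate with a naive DFT in pure Python: a lone tone produces a single spectral line, while a mix of tones starts filling the spectrum, which is roughly why a whole band looks like a near-continuous band on an analyser. (A toy illustration under those assumptions; nothing to do with the actual analyser that produced the graphs.)

```python
import math

def dft_magnitudes(samples):
    """Magnitude of each frequency bin of a real signal (naive DFT, O(n^2))."""
    n = len(samples)
    mags = []
    for k in range(n // 2):
        re = sum(samples[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(samples[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im) / n)
    return mags

n = 64
# A single tone: exactly one discrete spectral line
tone = [math.sin(2 * math.pi * 5 * t / n) for t in range(n)]
# A 'mix' of five tones: five lines, starting to fill the spectrum
mix = [sum(math.sin(2 * math.pi * f * t / n) for f in (3, 5, 9, 14, 21))
       for t in range(n)]

peaks = [k for k, m in enumerate(dft_magnitudes(tone)) if m > 0.1]
print(peaks)  # [5] -- the lone tone occupies a single bin
```

Add more instruments (more fundamentals, plus harmonics and noise) and those discrete lines merge into the broad band the graphs show.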
 
Bear said:

Careful, you went over my head a bit :D

In my opinion (and not an expert opinion), the graphs are showing the drop in the treble frequencies the more you compress, and I think we can all hear that anyway.
Somebody on another forum (AVForums) would probably be more up your street and you'd be able to understand him. When I put the results on there, he said something about the graphs showing the treble drop but not showing the information that is now missing from the other frequencies.
I'll see if I can find the article, but he eventually got banned because of his constant agenda against DAB digital radio.
 
Bear said:
I understand what you are saying, but there will always be gaps in the spectrum, as not every single Hz will be hit. If every single frequency were hit at the same level then it wouldn't be music, it would be white noise.
While that might be true for one sample, you seem to neglect the fact that this is a spectrum analysis of a continuous piece of music. If it had obvious peaks in the spectrum, then that too 'wouldn't be music'. It'd be one long, boring chord.
 
csmager said:
While that might be true for one sample, you seem to neglect the fact that this is a spectrum analysis of a continuous piece of music. If it had obvious peaks in the spectrum, then that too 'wouldn't be music'. It'd be one long, boring chord.

What the graph represents isn't music; it is a capture at one moment in time, or one sample as you say. Otherwise there would be a time axis, not just dB and Hz axes.
 
dmpoole said:
Found the post by this guy who knows his stuff, but the only problem is he talked down to everybody and ended up getting his ass banned -
http://www.avforums.com/forums/showpost.php?p=3214129&postcount=51


Thanks, I'll have a read :)

Edit: Just read the post you linked to, and some of it was the same as what I said earlier, although I'm not sure I agree with him saying that reducing the audio bandwidth does not reduce audio quality. Perhaps all he was saying was that audio bandwidth isn't a measure of audio quality.

Having read some of the thread, I think the guy says the same thing: the graphs don't show anything except that compression reduces the bandwidth; they don't show any differences in sound quality.
 
fini said:
Can I ask what hi-fi you have?

fini

My Hi-Fi. :p I can tell the difference between MP3 and OGG, even from an iriver and Grado headphones. I got a hi-fi mate (into Naim, records and valve gear) to do blind ABX testing between MP3 and OGG VBR of identical file size, and he preferred the OGG encodes.

Now that hard drive space has gone up and cost per GB has come down, I can afford to store my music in lossless. Couldn't do that when 80GB drives were the norm.
 