• Competitor rules

    Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.

HD 5870 IS the fastest single GPU in the world (technically)

If they can insult me then I can too: you're all pricks! I told you, I KNOW it doesn't mean it's faster than the GTX 480. I was talking about speed and the core, hence the engine bit, not all the other things. All this was just a funny little thought that, if someone blimming understood it, you'd get. You're children over-reacting a little, I must say. Where on earth did I mention drifting etc.? I said round a track = true performance, as not many roads are a few miles straight. I'm not even going to bother, gee...

Because it's clear you're confused about what you're talking about.

The very fact that you say it's "not faster but has a higher clockspeed" shows that.

So now you're either contradicting yourself, or you're trying to save face by changing what you meant.

The point was that clockspeed by itself tells you nothing about "speed" when comparing different chips, which kinda makes your thread pointless, because it appears you either didn't know that, or you did and didn't really think your post through.
 
Read the GTX 480 reviews: so much is now running on the shader clock, or around it, that it's really more important than the core clock now.

There isn't a core clock. EVERYTHING is based off the only actual, real clock, the shader clock; the "uncore" parts run at a 1/2 divider. The main clock is 1.2/1.4 GHz on the GTX 470/480.
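The launch clocks line up with that 1/2 divider almost exactly - a quick check using the published shader clocks (the divider itself is the claim above):

[code]
# Advertised "core" clock vs shader clock on the launch GF100 cards.
shader_clock_mhz = {"GTX 470": 1215, "GTX 480": 1401}

for card, shader in shader_clock_mhz.items():
    core = shader / 2  # the listed core clocks are 607 and 700 MHz
    print(f"{card}: shader {shader} MHz -> core {core:.1f} MHz")
[/code]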
 
Because it's clear you're confused about what you're talking about. [...]

And you spent the time writing this...you got sucked into the madness as well as so many others. :D
 
After reading all the replies, I still don't understand what the OP is trying to state, or the point of this thread...
 
OK, think of the GPU almost as a car: engine, suspension etc. The core of the GPU is obviously going to be the engine, and as for all the rest of the bits, well, you get the point. At present the HD 5870 is 850 MHz core at stock, while the GTX 480 is 700 MHz.
Think of the Bugatti Veyron on Top Gear: not nearly the fastest round the track, but it still claimed the title of fastest production car in the world (it had it at the time). The GTX would have all-round better parts, but the HD would still have the engine and the speed, so HD 5870 = Veyron, technically the fastest.

Massive problem here: a GPU isn't a car and doesn't work like that, heh. If anything a GPU is currently a collection of engines, where Nvidia has V8s and ATI has a ton more V4s. In the end the car analogy doesn't really work.

Think back to the Pentium 4: one of those at 3.2 GHz was crushed by a 2.0 GHz Athlon 64 in everything.
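To put numbers on that - the instructions-per-clock figures here are made up purely to illustrate the shape of it, not measured:

[code]
# Why a 2.0 GHz Athlon 64 beat a 3.2 GHz Pentium 4: work per clock matters.
# The IPC values below are illustrative, not real measurements.
cpus = {
    "Pentium 4 @ 3.2 GHz": (3.2, 1.0),  # (clock in GHz, assumed IPC)
    "Athlon 64 @ 2.0 GHz": (2.0, 1.8),
}
for name, (ghz, ipc) in cpus.items():
    print(f"{name}: relative performance {ghz * ipc:.2f}")
[/code]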
 
Massive problem here: a GPU isn't a car and doesn't work like that, heh. If anything a GPU is currently a collection of engines, where Nvidia has V8s and ATI has a ton more V4s. [...]

Actually the Nvidia would be a bunch of V4s and the AMD a bunch of V8s, except, errm, the engine timing is off, so most of the time you're only using half the cylinders on the bigger AMD engines.

In reality, for Nvidia a shader is a shader is a shader: it has 480 of them now, they aren't that powerful, and they're easy to keep working at maximum efficiency. AMD has 320 shaders, but they are uber-powerful and each one can do up to 5 instructions per clock (320 x 5 = 1600 ;)). The problem is that in the worst-case scenario it's issuing fewer shader instructions than Nvidia, at half the clock speed (or that was the idea with the 1700 MHz shader clocks Nvidia originally targeted).

Really it's VERY easy to get two instructions per clock, which puts AMD a little under Nvidia's performance: 640 instructions per clock at half the clock speed is equivalent to 320 at what would be double the clock speed, versus Nvidia's 480. A majority of the time AMD can do 3 instructions, so you're up to 960. Do the maths and you realise that's almost identical to Nvidia's 480 at double the clock speed, shock horror.

But it becomes very difficult to get a lot of 4- or 5-instruction clocks. If they could get 100% efficiency they'd essentially get 4+1 instructions per clock, 4 "normal" instructions and one special one; the 4 normals would be 1280 instructions (or 640 at the doubled clock), plus a bunch of extra special accelerated instructions on top.
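If you want to check those sums, here's the whole sweep in one go - a sketch using the post's own assumption of Nvidia's originally-targeted shader clock; the shader counts are the real ones:

[code]
# Instruction-throughput sums from the post above, normalised to Nvidia.
# 5870: 320 VLIW shader blocks at 850 MHz, up to 5 instructions/clock each.
# Nvidia: 480 shaders at the originally-targeted 1700 MHz shader clock
# (the GTX 480 actually shipped at 1401 MHz, which shifts things AMD's way).
AMD_SHADERS, NV_SHADERS = 320, 480
AMD_CLOCK_MHZ, NV_CLOCK_MHZ = 850, 1700

nv_rate = NV_SHADERS * NV_CLOCK_MHZ  # 1 instruction/clock/shader

for ipc in (1, 2, 3, 4, 5):
    amd_rate = AMD_SHADERS * ipc * AMD_CLOCK_MHZ
    print(f"AMD at {ipc} instr/clock: {amd_rate / nv_rate:.2f}x Nvidia's rate")
# 3 instr/clock comes out dead level; the full 5 is the "50% faster minimum".
[/code]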

That's why AMD has such MASSIVE theoretical power compared to Nvidia, but also why Nvidia's is massively easier to leverage. If AMD could maintain maximum output it would be a clear 50% faster, minimum, than Nvidia's architecture.

Nvidia = faster clocks; AMD = a quite massively faster architecture (50% faster at half the size? it's ingenious architecturally); software = massively easier to run on Nvidia hardware, and the reason AMD is prevented from realising its full power, at least in gaming.

Do the calculations and, ignoring minor efficiency issues, you realise AMD averages just under 3 instructions per clock on its shaders to get performance a little lower than Nvidia's. In reality the limited bus, and less die space for core logic, mean it's probably at a little over 3 instructions per clock, but other limits effectively bring that back down. So it's only at some 60% of its maximum performance.
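That 60% is just the achieved rate over the peak:

[code]
# The "60% of maximum" figure: achieved instructions/clock over the VLIW peak.
achieved_ipc = 3.0  # the rough average argued above
peak_ipc = 5.0
print(f"Utilisation: {achieved_ipc / peak_ipc:.0%}")  # -> 60%
[/code]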

I'm fairly sure Nvidia HAVE to move in the same direction in terms of GPU design: smaller cores with more efficiency that's harder to code for. When both GPU makers are making chips for PCs and consoles that require more difficult coding, with no alternative, they'll find clever ways to increase utilisation, which will likely bring us some fantastic performance boosts a generation or two down the line.
 
Technically speaking, clock for clock, ATI cards are far, far, far worse than Nvidia's in terms of performance :)
 
I know the OP is wrong (the 5870 ISN'T the fastest single GPU in the world), but I also think everyone else misunderstood him.

I think he means that the numbers aren't everything. For example, even though the Veyron has 1000 hp, that doesn't necessarily make it the fastest car round a track. However, it may have the highest top speed, so it could "technically" be the fastest car, even if it isn't the fastest in real-world use (i.e. round a track).
 
I know the OP is wrong (the 5870 ISN'T the fastest single GPU in the world), but I also think everyone else misunderstood him.

Technically speaking, it actually is.

AMD could release a 5890 tomorrow that is basically a 5870 with a new BIOS that sets the clocks to 1000/1300 with the appropriate voltages.

Would you then say it IS the fastest GPU, despite being identical to the 5870 in terms of hardware?

You couldn't overclock a GTX 480 enough to beat a 5870 at 1000/1300.
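Rough theoretical numbers for that hypothetical 5890, assuming performance scales linearly with core clock (it wouldn't quite, since memory and other limits get in the way):

[code]
# Hypothetical "5890": a 5870 with the core clock raised from 850 to 1000 MHz.
# Linear-with-clock scaling is an assumption, not a measurement.
SHADERS = 1600               # 5870 stream processors
FLOPS_PER_SP_PER_CLOCK = 2   # one multiply-add per stream processor per clock

for name, clock_mhz in (("HD 5870 (850 MHz)", 850), ("'HD 5890' (1000 MHz)", 1000)):
    tflops = SHADERS * FLOPS_PER_SP_PER_CLOCK * clock_mhz * 1e6 / 1e12
    print(f"{name}: {tflops:.2f} TFLOPS theoretical peak")
[/code]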
 
Thanks OP - that's truly profound.

Whilst we're on the subject of meaningless frequency comparisons, I wondered if anyone else found the following noteworthy...

Radio 1, at 98.8 MHz, is faster than Radio 2 at 88.8 MHz, yet Classic FM blows them both out of the water at 100.4 MHz. Discuss.

Red is the slowest colour at only 430 THz - somewhat at odds with Ferrari's success in motorsport. Discuss.


Well it makes as much sense as the OP :)
 
Actually the Nvidia would be a bunch of V4s and the AMD a bunch of V8s, except most of the time you're only using half the cylinders on the bigger AMD engines. [...]

Hmm, interesting stuff - I never knew the ATI cards used fewer but extremely efficient shaders. So I guess the 5850's quoted 1440 shaders, and the other shader counts that get thrown around, are a bit like AMD's old Athlon XP performance ratings? In that they're not entirely accurate, but they reflect the potential of the hardware so people don't dismiss it for actually having fewer shaders.
 
I know the OP is wrong (the 5870 ISN'T the fastest single GPU in the world), but I also think everyone else misunderstood him. I think he means that the numbers aren't everything. [...]

No - we all get that. In fact that's so blindingly obvious that everyone here knows it already, and it really didn't need to be posted :). What people are taking him to task on is the flaw in his analogy where he presents MHz core speed as some sort of standard performance indicator, analogous to an engine's BHP. Thing is, it isn't at all. Whilst you can meaningfully compare, for example, Ford and Honda engines by their BHP figures, GPU core speeds are not comparable between GPUs of totally different architectures, as they work so differently. In other words the HD 5870's high 850 MHz clock speed is not really analogous to the Veyron's 1000 bhp monster engine. Some sort of instructions-per-second figure would be a meaningful way to compare two GPUs, and more analogous to a car engine's BHP, as both are an actual rate of doing useful work.
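For illustration, here's roughly what such a figure looks like for the two cards in question - published unit counts and clocks, peak single-precision throughput, so treat it as a ceiling rather than a benchmark:

[code]
# Peak single-precision throughput: the nearest GPU equivalent of a BHP figure.
# 2 FLOPs per unit per clock assumes one multiply-add per clock.
cards = {
    "HD 5870": (1600, 850),   # stream processors, core clock (MHz)
    "GTX 480": (480, 1401),   # CUDA cores, shader clock (MHz)
}
for name, (units, mhz) in cards.items():
    gflops = units * 2 * mhz / 1000
    print(f"{name}: {gflops:.0f} GFLOPS peak")
# The 5870 "wins" on paper by ~2x yet loses most game benchmarks, which shows
# why you also have to ask how much of that peak each chip actually sustains.
[/code]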

Liam
 
picard_facepalm.jpg
 
Hmm, interesting stuff - I never knew the ATI cards used fewer but extremely efficient shaders. So I guess the quoted shader counts are a bit like AMD's old Athlon XP performance ratings? [...]

Well, 'theoretically' speaking, each of the quoted shaders on the 5800-series cards can do the same amount of work as one 'CUDA core' on a Fermi card - each can execute one instruction worth up to two floating-point operations. In reality it's not like that, because AMD's architecture effectively groups those shaders into blocks of 5 that have to be doing the same thing to the same data (save for that fifth 'fat' unit, which can be doing more complex things like trigonometric functions). Really, though, they're not comparable; it's just a different way of doing things. They employ different architectural paradigms - if you want to read up on them, the Radeon cards use a superscalar very-long-instruction-word (VLIW) architecture and the GeForce cards use a scalar architecture.

If it makes sense, AMD's shaders act kind of like a big group of the SIMD units you find on CPUs: those SIMD units all have to be doing exactly the same thing to the same 'group' of data (we can call this vector data). Nvidia's cards are a bit more flexible, in that their units are more like their own 'cores' - but still not really, in that most of the time they all have to be doing the same thing, though they can be doing it to different blocks of data. With the Fermi architecture it's even more complicated, in that higher-level blocks of units can be working on completely different things at the same time.
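A toy model of that packing constraint, if it helps (pure illustration, nothing like real hardware scheduling):

[code]
import math

# Toy model: each Radeon VLIW5 block has 5 issue slots per clock, but only
# fills as many as the compiler found independent instructions for.
def vliw5_cycles(n_ops: int, usable_ilp: int) -> int:
    """Cycles for one 5-wide block to issue n_ops at a given usable ILP."""
    return math.ceil(n_ops / min(usable_ilp, 5))

for ilp in (5, 3, 1):
    cycles = vliw5_cycles(1000, ilp)
    print(f"ILP {ilp}: 1000 ops in {cycles} cycles "
          f"({min(ilp, 5) / 5:.0%} of slots used)")
# A scalar design ("CUDA cores") is effectively always at ILP 1 per core,
# which is why it's so much easier to keep fully busy.
[/code]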
 

Whilst that one is far better worded - more universally applicable - this one has a far superior Picard facepalm.

facepalm.jpg


The creased brow and the glimpse of the frown of despair on his mouth greatly increase the desired effect. If only I could be bothered to devote the necessary 5 minutes to creating the perfect hybrid Picard facepalm poster...

:)
 