Fermi has 480 shaders, 675MHz+ clock

I love these threads - the same people spouting the same crap. Fermi will be quicker, but hotter and more expensive.
The current 5000 series of cards is not a great leap forward over the last generation, and to be honest I'm not expecting Fermi to be either.
 
So, okay, I've been thinking about dual-issue instructions under GF100. In G80 and GT200, they had an ALU for interpolation purposes on every one of their shader cores, from what I can gather - this is what allows them (or at least GT200) to dual-issue a MADD+MUL potentially every cycle, allowing for three FLOP per shader per cycle. However, in GF100, the interpolation instruction has been moved into a 'special function unit', of which there are 4 per 32 shader cores (there are four SFUs per SM, and thus 60 in a GF100 with 480 shaders), which as far as I can tell still run at the full shader clock speed. Each SFU can execute one instruction (potentially worth up to 2 FLOP, if it can issue a MADD or FMA - I don't know) per cycle.

Therefore, if I'm right, presumably you should be able to work out the shader throughput like this:

480*1.4*2 + 60*1.4*2 = 1344 + 168 = 1512 GFLOPS

I'm not sure if I've got that entirely right, though. Anyone want to comment?

Edit: Actually, in the white paper it says you can dual-issue to either 16 of the shader cores in the SM or 4 of the SFUs. So there's no inherent performance gain to be had, just a flexibility gain in being able to execute a different kind of instruction, at the cost of losing some potential throughput? :confused:
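
Just to sanity-check both readings, here's a rough back-of-the-envelope calculation (a sketch only - the 1.4 GHz shader clock and the 2 FLOP/cycle figures for the cores and the SFUs are the assumptions from above, not confirmed specs):

Code:
# Rough GF100 shader throughput estimate, based on the figures discussed above.
shader_clock_ghz = 1.4        # assumed shader clock
cores, sfus = 480, 60         # 15 SMs x 32 cores, 4 SFUs per SM
flop_per_core = 2             # MADD/FMA counted as 2 FLOP per cycle
flop_per_sfu = 2              # assuming an SFU op can also be worth 2 FLOP

# Reading 1: cores and SFUs kept busy at the same time.
print((cores * flop_per_core + sfus * flop_per_sfu) * shader_clock_ghz)  # 1512.0 GFLOPS

# Reading 2 (per the edit above): each issue goes to EITHER 16 cores OR 4 SFUs,
# so the SFUs add flexibility rather than extra peak throughput.
print(cores * flop_per_core * shader_clock_ghz)                          # 1344.0 GFLOPS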
 
 
The 5870 is on par with - and when overclocked beats - Nvidia's current highest-performing part, the dual-chip GTX 295. Not a great leap? Muhahaha. :o

? So the new generation almost beats the old generation's top card; I wouldn't call that a leap, more of a stride :). I'm enjoying my 5850, but to be honest I'm more impressed with my 5770 - for the cost it's excellent.
 
So, given this information, what are you guys thinking about Fermi?
Good or not so good?
It's all over my head really - is it going to walk all over my xfire setup? I wanna know.
 
I love these threads - the same people spouting the same crap. Fermi will be quicker, but hotter and more expensive.
The current 5000 series of cards is not a great leap forward over the last generation, and to be honest I'm not expecting Fermi to be either.
Common sense plays no part in this forum. Be gone with ya! ;)
 
The irony is they are now getting the same from ATI - spending £100s on cards that are overkill for DX9 games, while very few games use DX10, and by the time DX11 hits properly the 5 series simply won't be up to the job. If games really did use tessellation properly, they'd just sit in a corner and cry :D

Rroff, you're sinking lower by the post...
 
? So the new generation almost beats the old generation's top card; I wouldn't call that a leap, more of a stride :). I'm enjoying my 5850, but to be honest I'm more impressed with my 5770 - for the cost it's excellent.

For the record, it's been a decade or so since a new generation has beaten two of the old generation's cards running together.

The 8800GTX, widely held up as a "massive leap forward", is quite soundly beaten by 2x 1950XTs - the 8800GTX was faster in maybe 2 games out of 10. It wasn't far behind in general, and when it was ahead, it wasn't far ahead. The only difference from the distant past (but not too distant ;) ) is that dual-GPU setups are now a lot more common. Before dual-GPU setups we got a 60-80% improvement per generation. When you whack in a second GPU, which in many games will give you a 60-80% bump anyway, the new gen basically won't beat it.
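
To put rough numbers on that point (made-up, purely illustrative figures):

Code:
# Illustrative only: why a new single card rarely "beats" a dual-GPU setup
# of the previous generation.
old_card = 100                 # arbitrary baseline performance
dual_old = old_card * 1.7      # second old GPU adds ~60-80% in many games -> ~170
new_card = old_card * 1.7      # a typical 60-80% generational jump        -> ~170
print(dual_old, new_card)      # roughly a tie, as argued above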

This generation SEEMS like it's not that much faster, but have you really looked at benchmarks of the latest games?

In Metro 2033, one of the toughest games to run yet, the 5770 is FASTER than the 4890 in average and maximum framerates, and the 5870 is twice the speed of a 4890.

What we need, and this isn't entirely new, is newer, tougher software to really push newer hardware. As we get that, the gap between last gen and this gen is showing: a doubling of performance (which hasn't happened in all games) is starting to appear in the tougher titles, and that's a fantastic generational jump in performance.
 
http://www.brightsideofnews.com/news/2010/3/3/sapphire-ready-to-launch-radeon-hd-5990-4gb.aspx pretty overkill if you ask me, but still one hell of a card.

[Sapphire HD 5970 board photos]



QUOTE:
In our opinion, Sapphire's "HD5990" has everything an enthusiast might want - for the non-scalable titles, Sapphire's HD5970 OC will perform as an HD 5870 until AMD gets the drivers right, rather than suffering HD 5850 performance. With this board, you get what you don't get with a regular one: the custom-tailored eight-heatpipe heatsink by Arctic Cooling will keep the board cooler than the standard ATI heatsink, yet it supports higher clocks.

The clocks on this "HD5990" are 850 MHz [realistically, 853 MHz] on the GPUs and 1200 MHz QDR for the 4GB of GDDR5 memory. The grand total bandwidth of the board is 307.2 GB/s - just like the Ares we described earlier. However, unlike the ASUS HD 5970 Ares, Sapphire didn't physically enlarge the product, so the PCB is of standard height and should have no clearance issues even in narrow cases. If you can fit an HD5970, you can fit this board.

The only noticeable change from the standard stock HD5970 is that Sapphire built their own PCB and placed two 8-pin power connectors on it. That's right, this puppy can eat 375W of juice - Dan told us that there is even overclocking headroom, as these parts actually consume around the same amount of power as two separate 5870s. We don't think that 15-20 Watts of extra power will give you any major GPU clock jumps, as you're pushing the thermals as it is. However, you should be able to significantly overclock the video memory, just like AMD told us in the HD5970 pre-launch briefing.

As we mentioned, this board comes with 4GB of GDDR5 memory, so there is a situation with 32-bit operating systems. In the real world, this board should only be used with a 64-bit operating system, but unlike nVidia's products, you should not have major issues on a 32-bit one. Then again, if you decided to pay a price premium over the HD5970 and then install a 32-bit operating system, something is definitely not right with you.

Officially, this product will be named just like every other overclocked Radeon HD 5970, but in reality we're talking about the "HD5990", AMD's double whammy in response to nVidia's GeForce GTX 480. Given that we managed to learn partner allocations for the GTX 470 and GTX 480, it is not surprising to see AMD lifting the lid on the overclocked parts. According to our information, partners plan to compete against nVidia at a 1:1 ratio between overclocked 5970 boards and the GTX 480, which is a pretty interesting plan. Gotta love the competition, right?
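
For what it's worth, the 307.2 GB/s figure in that quote checks out if you assume the standard 256-bit GDDR5 bus per Cypress GPU and two GPUs on the board (a quick sketch, not from the article):

Code:
# Aggregate memory bandwidth of the overclocked HD 5970 ("HD5990") in the quote.
mem_clock_mhz = 1200           # quoted memory clock
transfers_per_clock = 4        # GDDR5 is quad-pumped ("QDR")
bus_bits_per_gpu = 256         # assumed: standard Cypress memory bus width
gpus = 2

gbps_per_pin = mem_clock_mhz * transfers_per_clock / 1000   # 4.8 Gbps per pin
print(gbps_per_pin * bus_bits_per_gpu * gpus / 8)           # 307.2 GB/s, as quoted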





All in all, LOL perhaps?? And before I get the fanboy rant about me being an ATI lover: I'm happily sat with 2x GTX 295s and a GTX 285 for PhysX :)


Andy
 
Memo to Rroff... DX11 has already hit.

Dirt 2
Pripyat
Metro 2033
AVP
BC2
Battleforge

With many more staggered across the year. Publishers don't just bung all the games on shelves at once, you know. We already have several AAA titles in our hands... I'm enjoying the hell out of them on my Eyefinity setup. In fact, I'm having trouble keeping up, there are that many already.

:cool: :cool: :cool: :cool: :cool: :cool:
 