Fermi possible pricing

lol!

Personally I don't have the highest hopes for the GTX 480, but IMO the design is strong and going forward I think we will see some great cards come off it. These are really being rushed because AMD caught nVidia with its pants down (plus TSMC screwing them over).
 
Hmmm, with no benchmarks leaked by nVidia I'm betting my money on the card sucking badly.

DX11 tacked on?? wtf?? It's either compliant with the standard or not lol

It can be compliant with the standard and still be a hacked-up, not very optimal implementation.
 
Probably the opposite, since the Fermi architecture is much better at balancing workloads through a unified architecture for tessellation and shading. As games get more advanced the Fermi architecture will pull further ahead in performance.
By then the next ATi chip should be out, and hopefully ATI will have a new architecture rather than DX11 tacked on.

What nonsense; AMD has been largely DX11-capable since the 2900XT. You do realise most of what was missing from DX10, after it was ripped out, was merely put back in DX10.1, and the difference between DX10.1 and DX11 is fairly minimal. The fact is things like tessellation aren't tacked on: they've had it since the Xbox 360 chip, and since the 2900XT on the desktop.

"Tacked on"? DX11, in terms of shader type, performance and the 98% of the workload done on the GPU, is no different to DX10; you're talking a few extra registers, a couple of minor hardware features and very little else. Even a full new DX11 architecture will still have DX11 "tacked on".

As for "much better workload balancing", you have precisely nothing to base that on, at all. As for that synthetic workload, I could be wrong, but I'm guessing the Unigine demo is the only one AMD will have optimised for, and considering games that use tessellation, like DiRT 2, are set to get a 25% performance bump in the Catalyst 10.3 drivers, and I've seen people say their Unigine performance has gone up a lot on the 10.3 betas, I wouldn't assume Nvidia will have a massive performance advantage there anyway.

On top of that, the other benchmarks they used are Nvidia ones, so I'd guess they went out and bought a 5870 (because it's actually available) and compared it in tests Nvidia have been optimising for for a year (because they've had nothing else to do while waiting on respins to go through), so it's not even close to representative of final performance.

Likewise, demos designed by AMD/Nvidia themselves are always coded perfectly for their specific hardware, so they always represent the single best possible circumstance; games never, and I mean never, replicate these perfect conditions.

In reality, load a lot of other work onto the rasterising engine, where the tessellator appears to be sharing a lot of its resources with other things, and tessellation performance is very likely to go down.
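
(For a feel of why that matters: tessellation multiplies the triangle count that the shared setup/raster front end has to swallow. The sketch below is purely back-of-the-envelope; the patch count and setup rate are made-up placeholders, not figures for any real card.)

```python
# Purely illustrative: how fast tessellation inflates the triangle count that
# the shared setup/raster front end has to process. For a triangle patch with
# integer tessellation factor N, the output is on the order of N^2 small
# triangles (the exact number depends on the partitioning mode).
# The patch count and setup rate below are hypothetical placeholders.

def tessellated_triangles(patches, tess_factor):
    return patches * tess_factor ** 2

PATCHES_PER_FRAME = 100_000       # hypothetical
SETUP_RATE = 700e6                # hypothetical triangles/second for the front end
TARGET_FPS = 60
budget = SETUP_RATE / TARGET_FPS  # triangles the front end can pass per frame

for factor in (1, 4, 8, 16, 32, 64):
    tris = tessellated_triangles(PATCHES_PER_FRAME, factor)
    status = "fits" if tris <= budget else "exceeds"
    print(f"tess factor {factor:>2}: {tris/1e6:7.1f}M tris/frame "
          f"({status} a {budget/1e6:.1f}M tri/frame budget)")
```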


Rroff, again you're saying they were being rushed; they weren't, nor is the GTX 480 design strong, it's a pretty awful design. It's the old NetBurst approach: "let's get the clocks as high as we can and as many brute-force shaders on board as we can, be damned that we can't produce it".

It's not a difficult, elegant or efficient design. Literally anyone can take a very basic shader and stamp out loads of them without caring about the space they take up. If Nvidia went and made a card with GTX 480 levels of performance on 60% of the die space, with 33% fewer transistors, that would be a design feat... and manufacturable too.

The GTX 480 has NOWHERE to go moving forwards. 40nm sucks, and if 28nm is late from TSMC (it is, and it will be even later than they say now), do you think that with sub-10% yields (apparently on both GTX 470 and GTX 480 parts, so salvage parts included) they'd be able to bump that up to a 768/1024-shader part? Not even close to a chance. Even on 28nm, the same design doubled in power/specs, which isn't unusual as a design goal, would be basically impossible to produce.

A 40nm refresh of the same design would, to account for the crap 40nm process, have to grow some 15% in die size to add the extra vias and some slack in critical areas to account for variable transistor size. Even with a 15% die size increase yields would improve pretty significantly, but they'd still be nowhere near what a 5870 can yield, so cost-wise they have zero chance of competing until they design a small core.
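
(For anyone who wants a feel for why die size matters so much for cost, here's a textbook-style sketch using the simple Poisson yield model. The defect density is a made-up placeholder rather than a real TSMC 40nm figure, and the model ignores salvage parts and the via/redundancy effect mentioned above, so treat the output as illustrative only.)

```python
import math

# Back-of-the-envelope only: dies per wafer and yield versus die area, using
# the simple Poisson yield model  yield ~= exp(-die_area * defect_density).
# The defect density is a hypothetical placeholder, not real TSMC 40nm data,
# and salvage/redundancy (the "extra vias" point) is deliberately ignored.

WAFER_AREA_MM2 = math.pi * 150 ** 2   # 300 mm wafer
DEFECTS_PER_MM2 = 0.004               # hypothetical defect density

def good_dies_per_wafer(die_area_mm2):
    candidates = (WAFER_AREA_MM2 / die_area_mm2) * 0.9   # roughly 10% lost at the wafer edge
    die_yield = math.exp(-die_area_mm2 * DEFECTS_PER_MM2)
    return candidates * die_yield, die_yield

for name, area in [("~330 mm^2 die (5870-class)", 330),
                   ("~530 mm^2 die (GTX 480-class)", 530)]:
    good, y = good_dies_per_wafer(area)
    print(f"{name}: yield ~{y:.0%}, ~{good:.0f} good dies per wafer")
```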
 
It's time for the latest version of the bible again, I see :D
 
Hmmm, with no benchmarks leaked by nVidia I'm betting my money on the card sucking badly.

DX11 tacked on?? wtf?? It's either compliant with the standard or not lol

According to Turing, you could make a GPU from cogs and gears that is DX11 compliant, so what.
 
But Roff, you were saying that stalling is going to be a unique problem for ATi; I can see this being a problem for nVidia also.

Their processing may not have to wait long because of this "load balancing", but as I mentioned before, PhysX is going to affect this greatly, and most likely exacerbate it. Is Nvidia trying to bite off more than it can chew?
 
According to Turing, you could make a GPU from cogs and gears that is DX11 compliant, so what.

inb4woodscrews.

I don't really see these two cards from nVidia being any good, but maybe some later 400-series cards will be better, as there would be less pressure to get them released.
 
We'll see what happens: whether there are a handful of parts at a premium which will be bought up by the most extreme fanboys (some of whom reside here) and used as part of their Nvidia CEO worshipping ritual by rubbing the PCB on their genitals, or whether there is steady availability and the card is a belter, rubbishing claims from "the other side".

Regardless, as a consumer, I want price competition again :cool: However, the signs so far don't exactly look too promising....
 
But Roff, you were saying that stalling is going to be a unique problem for ATi; I can see this being a problem for nVidia also.

Their processing may not have to wait long because of this "load balancing", but as I mentioned before, PhysX is going to affect this greatly, and most likely exacerbate it. Is Nvidia trying to bite off more than it can chew?

I'm not really saying it's uniquely an ATI problem, but I'm trying to show the contrast.

PhysX is another story - you can't really compare that against ATI as there's no way to run it on ATI GPUs currently.
 