
TessMark

Soldato
Joined
6 Oct 2007
Posts
22,260
Location
North West
I haven't been on this forum for a while, but Raven, weren't you anti-Nvidia (well, anti-480 at least)? Can I ask what changed your mind, or did you just get one?

Well, having done some research on a stock 480 compared to my 1000 MHz core 5870, I was surprised how fast the 480 was. I knew that if I overclocked the 480 I would gain even more performance, and I'm very happy with it.
 
Soldato
Joined
20 May 2007
Posts
10,597
Location
Location: Location:
[benchmark screenshot attachment]
 
Soldato
Joined
6 Oct 2007
Posts
22,260
Location
North West
Great scaling there with 5870 xfire, nearly double a highly clocked 5850 result, maybe quad xfire could match a 480 in this bench. :confused:

Are the ATI cards maxing out GPU usage in this test?

Edit: yeah, 99% going by TB's Afterburner monitor.


Thought I would run the test on insane tessellation; it's very intensive. Feel free to compare.

[benchmark screenshot attachment]
 
Associate
Joined
28 Jun 2005
Posts
2,173
Location
Behind you
Most benchmarks (and games) will max out a GPU if Vsync is disabled and the CPU can feed the GPU enough data. Even SF4 gives my 5970 high usage with no AA or vsync, as it runs stupidly fast at 360fps.
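The point above can be sketched with a toy model (my numbers and function names, not anything from a real tool): if a frame cap like vsync limits a GPU that could otherwise render far faster, the GPU sits idle most of the time, which is why usage only hits ~99% with the cap off.

```python
# Toy model: approximate GPU utilisation under a frame cap (e.g. vsync).
# Assumes constant render time per frame and that the GPU idles between
# capped frames -- a deliberate simplification for illustration.
def gpu_utilisation(uncapped_fps, cap_fps=None):
    """Fraction of time the GPU spends rendering."""
    if cap_fps is None or uncapped_fps <= cap_fps:
        return 1.0  # no cap, or the GPU can't reach the cap: fully busy
    return cap_fps / uncapped_fps

# The SF4-style case from the post: ~360 fps uncapped vs a 60 Hz vsync cap
print(gpu_utilisation(360, 60))    # ~0.17 -> GPU mostly idle
print(gpu_utilisation(360, None))  # 1.0 -> vsync off, GPU maxed out
```

With the cap on, the card spends roughly five-sixths of each refresh interval waiting, so monitoring tools report low usage even though the game is trivial for it.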
 
Associate
Joined
30 Mar 2010
Posts
581
Something I just don't get.
I currently have an ATI card, but before that I had Nvidias.
I'm impressed with the 5970, but I haven't been impressed with the Nvidia options this time round, and I don't agree with some of Nvidia's decisions as a company. Still, I don't spend time slagging them off; I personally chose ATI, and if people prefer Nvidia, fair play to them.
I just don't get why some people choose one over the other and then feel the need to **** off the card they chose not to get, or swapped from?
 
Soldato
Joined
22 May 2010
Posts
11,685
Location
Minibotpc
Tessellation explained and why Nvidia's GTX 400 series cards perform so well in these kinds of benchmarks.

Quote taken from another forum, from a member who explained it pretty well, in my opinion.


"Tessellation on the 400 series, as it's rumoured to work, reminds me of PhysX.

PhysX uses the same resources used to render the graphics.

When running the Vantage CPU PhysX test and using the same card that's rendering to do PhysX as well, you get a good score, comparable to that of using a dedicated PhysX GPU of the same make/model. This is because the card has to do very little rendering during that benchmark, which means most of the (shared) resources are free for the PhysX side of things.

It seems the same applies to tessellation: it uses the same resources as those used for rendering (shaders). During the Heaven benchmark the rendering side of things is light, leaving most of the resources free for tessellation, so you get very good results.

In games, though, rendering will be heavy and a lot of resources will be used up by the rendering side of things, meaning there won't be as much left over for tessellation. It's exactly the same as when you use the same card to render as well as run PhysX: because most of its resources are used up rendering, the PhysX calculations take a back seat, so the performance is not as good as the Vantage benchmark would suggest. It's much better to use one card for rendering and another for dedicated PhysX; even though Vantage scores are pretty similar with and without a dedicated PhysX card, in actual games with PhysX the dedicated-card route is the way to go and gives much better performance.

I wonder how long it will be before Nvidia starts promoting the "dedicated tessellation card". So you will need three GTX 480s for Tri-SLI (rendering), a GTX 470 for dedicated tessellation, and possibly your old 200 series card for PhysX.

ATI, on the other hand, have a hardware tessellator, which means it's dedicated to tessellation and doesn't share its resources with rendering. I guess you could liken this to a "dedicated tessellation card". It's not ideal, because the other two stages in the tessellation pipeline are still done in software using shared resources, but it may work out better for actual games than Nvidia's solution in the long run."


Both approaches have their advantages and disadvantages, as he has explained, but at least we now know why ATI 5000 series cards perform so poorly in these benchmarks and why Nvidia do so well. As he explained, though, during gameplay the Nvidia cards will have fewer resources to play with, and hence will not perform as shown in the benchmarks above.
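The quoted argument boils down to a simple resource budget, which can be sketched as a toy model (unit counts and function names are purely illustrative, not real hardware figures, and the next post disputes whether this is how the 400 series actually works):

```python
# Toy model of the quoted argument: if tessellation and rendering share one
# pool of shader resources, heavy rendering starves tessellation; a dedicated
# tessellator is unaffected by rendering load. Numbers are made up.
def tess_units_shared(total_units, render_load):
    """Units left for tessellation after rendering takes its share first."""
    return max(0, total_units - render_load)

def tess_units_dedicated(tess_units, render_load):
    """A dedicated hardware tessellator ignores the rendering load."""
    return tess_units

# Light rendering (a benchmark like Heaven) vs heavy rendering (a game)
for render_load in (4, 28):
    print(render_load,
          tess_units_shared(32, render_load),      # shared pool of 32
          tess_units_dedicated(8, render_load))    # fixed block of 8
```

Under light load the shared pool wins (28 units free vs a fixed 8); under heavy load it collapses to 4 while the dedicated block still has its 8, which is the benchmark-vs-game gap the quote predicts.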
 
Caporegime
Joined
22 Nov 2005
Posts
45,167
Sounds like a bunch of crap some guy guessed up, which probably isn't the case at all.

ATI's hardware tessellation: will it actually be supported if Nvidia don't have it?
http://en.wikipedia.org/wiki/TruForm

I bet ATI have to do tessellation on the shaders.

EDIT
Yup, it's BS someone on a forum guessed up. Here's how Nvidia really do it:
How GeForce GTX 400 GPUs handle Tessellation

Traditional GPU designs use a single geometry engine to perform tessellation. This approach is analogous to early GPU designs which used a single pixel pipeline to perform pixel shading. Seeing how pixel pipelines grew from a single unit to many parallel units and how it gained dominance in 3D realism, we designed our tessellation architecture to be parallel from the very beginning.

GeForce GTX 400 GPUs are built with up to fifteen tessellation units, each with dedicated hardware for vertex fetch, tessellation, and coordinate transformations. They operate with four parallel raster engines which transform newly tessellated triangles into a fine stream of pixels for shading. The result is a breakthrough in tessellation performance—over 1.6 billion triangles per second in sustained performance. Compared to the fastest competing product, the GeForce GTX 480 is up to 7.8x faster as measured by the independent website Bjorn3D.
taken from http://www.nvidia.com/object/tessellation.html

nvidia call it "PolyMorph Engine" on their cards
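To put the quoted 1.6 billion triangles per second into perspective, a quick back-of-envelope calculation (using only the figure from NVIDIA's page above; the frame rates are arbitrary targets of my choosing):

```python
# Back-of-envelope: how many tessellated triangles per frame does the
# quoted sustained rate allow at common frame-rate targets?
SUSTAINED_TRIS_PER_SEC = 1.6e9  # figure from NVIDIA's tessellation page

for fps in (30, 60, 120):
    per_frame = SUSTAINED_TRIS_PER_SEC / fps
    print(f"{fps} fps -> {per_frame / 1e6:.1f} million triangles per frame")
```

At 60 fps that is roughly 27 million triangles per frame of tessellation budget, far beyond what games of this era actually pushed, which feeds into the "ahead of its time" point made below.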
 
Soldato
Joined
24 Jun 2004
Posts
10,977
Location
Manchester
EDIT
Yup, it's BS someone on a forum guessed up. Here's how Nvidia really do it:

taken from http://www.nvidia.com/object/tessellation.html

nvidia call it "PolyMorph Engine" on their cards

Yes, this is what the design of the GF100 was based around. Although the architecture of the GF100 shares some similarities with GT200, it was fundamentally restructured to allow a rapid flow of geometry data through the pipeline. In true marketing style, nvidia named this methodology "the polymorph engine".

It was also (purportedly) the main reason for the massive delays in production, with this new paradigm for data-flow needing several tweaks to get a stable product.

In a way, I guess you can say that they have succeeded in their goals. Certainly the GF100 has extremely impressive tessellation performance: a true "black and white" difference from the ATI cards (which is rare these days). On the other hand, it delayed them several months, which has cost them dearly in terms of market share, and has contributed to greatly increased power consumption.

On balance you could say that it would have been worthwhile if there were enough games using tessellation in a meaningful way. But the fact is there aren't. So really, it's just an example of nvidia "biting off more than they needed to chew". Or perhaps producing a product which is ahead of its time. I guess it depends on which way you choose to look at it...
 
Soldato
Joined
24 Jun 2004
Posts
10,977
Location
Manchester
Doubt we'll get many tessellation-heavy games until a console does it.

Unfortunately this seems to be a fact of life these days :(

I wish Sony and Microsoft would hurry up and produce these next gen consoles sharpish! Although when they do, we might have to kiss goodbye to expecting our games to run at a solid 60fps, at super-high resolutions with all the opulent bells and whistles switched on :p


... and I think that whatever goes into the new x-box will need to be a little more power-efficient than Fermi!
 