
Fermi NDA ends today

Also, while it's possible that extra shader workload pulls down tessellation performance, it might not bring it down enough to hurt overall performance; as with other efficiency gains, the penalty on the shaders might not be that high overall. We don't know yet (toy model below)...

Personally, I don't think we'll see anything using tessellation to a degree that produces negative results during the lifetime of this generation of GPUs.
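To make that concrete, here's a toy model in Python (all numbers and names are invented for illustration, not real Fermi figures): shader-based tessellation adds to the shader array's workload, but it only costs frame time if the shaders are actually the bottleneck.

    # Toy model, hypothetical numbers: the frame is bound by the slowest
    # stage, so a lightly loaded shader array can absorb tessellation work.
    def frame_time_ms(shade_work, tess_work, shader_rate, other_ms):
        shader_ms = (shade_work + tess_work) / shader_rate
        return max(shader_ms, other_ms)

    base = frame_time_ms(100.0, 0.0, shader_rate=10.0, other_ms=12.0)
    tess = frame_time_ms(100.0, 10.0, shader_rate=10.0, other_ms=12.0)
    print(base, tess)  # 12.0 12.0 -> 10% extra shader load, zero net cost

If the shaders were already the bottleneck, the extra load would show up directly; that's exactly the open question.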
 

I agree. By the time one architecture significantly shows its advantages over the other, the performance gap may be around 60-70%, but they'll both be competing with cards that, by then, retail for £60. By the time these features are used to an extent that causes a large performance gap, we'll be arguing about better cards. That's not to say we won't see some games in the near future that make access to those features worthwhile, even if their use is more restricted than what we'll see later on, but I like to call that progress.
 

Depends on how much dedicated logic there is for GPGPU scenarios. It's a good idea to be flexible in how much resource is dedicated to tessellation. Adding PhysX to the mix, I'm not so sure this card is going to be able to pull what you might call impressive numbers against an ATi counterpart.

Also, ATi must have looked into using shaders as a means of computing tessellation; I wonder what kind of conclusion they came to?

Could the redundant areas of the GPU be used for tessellation if they're not being used for GPGPU?
 
The problem is, whatever cards we end up seeing, a 60% advantage in one benchmark may or may not turn into a huge hit in tessellation performance in a real game; we've yet to see. The trouble comes if the actual release cards aren't anywhere near as powerful as the "architecture" we've been shown (not cards, the architecture). A 448SP part with significantly lower clocks won't have anywhere near a 60% advantage to begin with: it loses over 10% of its raw tessellation power with two fewer clusters, and the remaining clusters run at lower clocks too. If it's to be believed that we're looking at 25% clock drops and 448SPs for any "available" parts, that could be a good 40% hit in raw tessellation power BEFORE any real-world performance loss on top of that (rough sum below).
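A quick back-of-envelope check of that sum (assuming, hypothetically, that the full part is 512 SPs in 16 clusters of 32, and that raw tessellation throughput scales with clusters × clock):

    # Hypothetical figures from the post above: 448 SPs = 14 of 16
    # clusters, with an assumed 25% clock drop on top.
    clusters_left = 14 / 16   # two disabled clusters: -12.5%
    clock_scale = 0.75        # assumed 25% lower clocks
    print(f"raw tessellation power left: {clusters_left * clock_scale:.1%}")
    # -> 65.6%

Under those assumptions, roughly a third of the raw throughput is gone, in the same ballpark as the 40% figure once any further losses are added.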

As everyone's said a million times, without card specs we have NO idea about anything. There's nothing to suggest we'll see a 512SP card at whatever clocks were used in those benchies at all.

I think the problem with non-fixed-function tessellation is that devs can't easily know how much they can add. AMD's implementation pretty much lets devs know, to an exact degree, how much tessellation every single card in the series can handle, so they can optimise games knowing exactly how much they can add without harming performance elsewhere. With a variable output ability, and a changing amount of power from one card to another, it will be far harder to scale tessellation.

A game on AMD cards might do all characters to X depth, and buildings too, but leave the ground flat for this generation; it will work on most hardware and won't give you changing framerates depending on what area of the game you're in.

This is the problem: it will be very hard, power-wise, for any card to just tessellate every last thing like in the Unigine demo, while a fixed level to work to should make it fairly easy to implement smoothly (see the sketch below).
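As a sketch of why a fixed, known limit helps (the budget, numbers and names here are all invented for illustration): with a guaranteed tessellation rate, a developer can budget triangle output up front and decide per object class what gets tessellated, exactly like the characters/buildings/ground split above.

    # Invented numbers: a fixed per-frame tessellated-triangle budget
    # lets the developer decide up front what stays flat on every card.
    TESS_BUDGET = 2_000_000  # hypothetical guaranteed budget

    scene = {"characters": 400_000, "buildings": 300_000, "ground": 1_500_000}

    def plan(scene, budget):
        used, out = 0, {}
        for name, tris in scene.items():  # assumed ordered by priority
            if used + tris <= budget:
                out[name], used = "tessellate", used + tris
            else:
                out[name] = "flat"  # would blow the budget: leave it flat
        return out

    print(plan(scene, TESS_BUDGET))
    # {'characters': 'tessellate', 'buildings': 'tessellate', 'ground': 'flat'}

With a variable-rate design there's no single budget to plan against, which is the scaling headache described above.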

But as I've said before, it will be great if tessellation becomes a massively used thing; it's definitely not going to sit completely unused as in AMD's case since the 2900XT. Next gen they'll know they want tessellation, all game devs want it (which seems to be the case), so dedicating an extra X amount of transistors isn't a huge risk at all, while this gen it was.

If both companies move up to 28nm next, rather than 32nm, there's going to be a HUGE increase in the number of transistors they can fit in while still ending up with tiny cores compared to this generation. On that process we should be back to good yields, tiny cores and low prices, which simply aren't possible on 40nm like they were at 55nm. Even with a vastly enlarged tessellator unit and a huge bump in raw shader power next gen, they'll be small cores if they skip 32nm.
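For a rough sense of the headroom (idealised scaling only; real processes never deliver the full theoretical gain): transistor density goes roughly with the inverse square of the feature size.

    # Idealised density scaling from 40nm to 28nm; real gains are lower.
    density_gain = (40 / 28) ** 2
    print(f"~{density_gain:.1f}x transistors in the same die area")  # ~2.0x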
 
Every time I read one of Drunken's posts, it just makes complete sense. Respect to you, man.

Really??? I was beginning to 'feel' he may be wrong, because those plucky managers & engineers @ Nv never messed up before.
Hell, there's no way on this earth DM can be talking sense.

"WHY"? Because, that's why!
 

It's funny, the only people who rant about him being wrong are the ones who we'd call the biggest nVidia fanboys.

They're looking more and more deluded as time goes on.

Not
Very
Innovative
Deluded
Intimidating
Asshats

Seems very appropriate and

Not
Very
Intelligent
Deluded
Idiotic
Asshats

Seems very appropriate for the top nVidia fanboys around here.
 
I thought we were finally beyond this...

Did something go on the other day? I noticed the thread was closed but I hadn't actually seen why?

I'm just responding to the likes of "Duran" and "Deuse" who go "nah, anything negative about nVidia is just nonsense".

I don't like nVidia, obviously, but I am looking forward to Fermi actually coming out, for many reasons.

Still, I do think nVidia are a joke and their fanboys need to calm down; some of them have even admitted they just play devil's advocate to get a rise, or because it's "unfair" on nVidia.

They deserve the stick they get. I don't want them to collapse, of course, that'd be bad for everyone. Still, poking fun at them is just that, fun.
 
I don't go around picking fights... I try to make serious points, but I will also stick up for myself.

There's a few people on this forum who just make childish post after childish post, dragging whole threads down and derailing them, and yet somehow nothing happens...
 

Oh come on Rroff, you moan so much about me needing to lighten up; what you don't get is that I'm doing it very light-heartedly.

You most probably assume I dislike you. I don't; it's just fun and banter to me.

Lighten up and loosen up.

To be fair, the ones I'm talking about are the ones who get threads closed, and I've noticed the mods are getting tired of them now. "Another good thread ruined by these idiots" is a comment I read the other day from a mod who'd closed a thread; again, it was the resident nVidiots who derailed it and wanted to rant about why nVidia are great.

As much as I call you a fanboy, Rroff, I do and will appreciate that realistically you aren't; you at least try to reason and give examples of why you think what you do.

The nVidiots just go "because it is, and anything you say that goes against what I'm trying to say is simply not true and you're lying".
 
So that's your excuse for wrecking threads with petty personal insults?

Whatever, I'm done; this thread's been derailed... have "fun" ruining another thread in the name of "fun".
 

It was funny, for a time, in that one thread. There's no reason to drag it out, though. Sure, there have been a couple of tongue-in-cheek comments in this thread, but that came across as really obnoxious in a conversation that, for once, was largely about graphics cards rather than the people who own them. It was a pleasant change, for, like, almost a whole page!
 

Why do you continuously fight Nvidia's corner tooth and nail, even during the whole debacle that was the last 4 months? :confused: ;)
 

Ah, so that's how you respond to that? Have a moan when you can't come back with something that makes sense?

No worries. Still, you should lighten up; I've already given examples of the people who really wreck threads, which I knew you'd ignore.

Oh well. :rolleyes:
 