First DX11 GPUs from NVIDIA - 1st half 2010

Interesting. Is Windows 7 meant to bring us DX11 at launch, or in a later service pack?

It'll be interesting to see how developers embrace it; they're still playing catch-up on DX9 and DX10.
 
Anyone else seeing a connection with that old article on The Inquirer about GT300 being too GPGPU-oriented, to the point of it not even having a hardware tessellation unit (when that's one of the main features of DX11)?
 
I wonder if they will bring out a new card before ATI release their 6800 series - and yes that isn't a typo - I really did mean 6800 series!

I dunno, the way it looks, I think ATi may be well on their way to getting a refresh of the 5800 series out of the door before Nvidia get their next high end card out. Nvidia do have a couple of mid-range DX10.1 40nm cards in the pipeline until then, though.
 
I think the main concern is not how long it will be before Nvidia releases its new range, but how it will compare performance-wise.
It seems very likely that ATI will have a fair bit of time to dominate between Q4 and Q1/Q2 2010, but if Nvidia don't clearly beat them on performance when they do release... it could get bad for the green team.
 
Anyone else seeing a connection with that old article on The Inquirer about GT300 being too GPGPU-oriented, to the point of it not even having a hardware tessellation unit (when that's one of the main features of DX11)?

Tessellation is not one of the main features of DX11 - it's just been hyped up as one... if the GPGPU capabilities of the GT300 are even half what they're supposed to be, it could quite capably handle tessellation that way.
 
Tessellation is not one of the main features of DX11 - it's just been hyped up as one... if the GPGPU capabilities of the GT300 are even half what they're supposed to be, it could quite capably handle tessellation that way.

Err no ;)
 
Let's put it another way... (not my quote)

So, tessellation is the big new feature of Direct3D 11—or at least the one that’s easiest to sell for non-specialists

GPGPU processing power of the order the GT300 is supposed to have could quite easily handle any tessellation workload likely to be seen in games during the useful life of the card. Sure, it might not stack up as well against a proper tessellation unit like ATI's, but by the time that's needed I'm sure nVidia would have a faster solution out.
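To give a rough feel for what "handling tessellation on the shaders" means as a workload, here's a quick CPU-side sketch of the kind of geometry amplification involved. This is purely my own illustration - nothing GT300-specific, and the input mesh and subdivision levels are made up:

```cpp
// Rough illustration only: one level of midpoint subdivision, the kind of
// geometry amplification a "software" (compute-based) tessellator would do.
#include <array>
#include <cstdio>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 midpoint(const Vec3& a, const Vec3& b) {
    return { (a.x + b.x) * 0.5f, (a.y + b.y) * 0.5f, (a.z + b.z) * 0.5f };
}

struct Tri { Vec3 a, b, c; };

// Split one triangle into four: the mesh stored in memory stays coarse,
// the extra detail is generated on the fly.
static std::array<Tri, 4> subdivide(const Tri& t) {
    Vec3 ab = midpoint(t.a, t.b), bc = midpoint(t.b, t.c), ca = midpoint(t.c, t.a);
    return {{ {t.a, ab, ca}, {ab, t.b, bc}, {ca, bc, t.c}, {ab, bc, ca} }};
}

int main() {
    std::vector<Tri> mesh = { { {0,0,0}, {1,0,0}, {0,1,0} } };  // coarse input
    for (int level = 0; level < 3; ++level) {                   // 3 levels -> 64x triangles
        std::vector<Tri> finer;
        finer.reserve(mesh.size() * 4);
        for (const Tri& t : mesh)
            for (const Tri& s : subdivide(t)) finer.push_back(s);
        mesh.swap(finer);
    }
    std::printf("amplified to %zu triangles from 1\n", mesh.size());
}
```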

ATI has had tessellation in one form or another for over 7 years... and, well, it makes PhysX seem like a raging success in contrast - no small feat.
 
Sorry, what you clearly MEANT to say is that you THINK the GT300 has the GPGPU capacity to handle the overhead of doing tessellation in software rather than hardware, based on what you THINK the specs are. Of course, you're taking the overall specs of the card's supposed GPGPU capability when it has NO graphics load, and even then just estimating that it will be able to do it alongside the graphics load with power to burn. Every single step you've taken there is a massive leap built on unfounded rumours and utter guesses.

The rumour is that it will possibly be doing quite a bit more than just tessellation in software under DX11.

But as per usual you jump into any ATI thread and put the worst possible face on ATI and the best possible face on Nvidia. AFAIK the first tessellation unit was in their console-based chip. Their next tessellation units have been in the 2900 and up, so 7 years is a MASSIVE stretch considering the only real support was in their FIRST DX10 card - which ONLY lacked tessellation support because Nvidia screwed up hugely and had the DX10 spec neutered.

http://developer.amd.com/gpu_assets/Real-Time_Tessellation_on_GPU.pdf


Tessellation - in both images you see there, the key is that both use the SAME amount of power and effort and the same amount of geometry memory, yet with tessellation you can produce a higher level of detail. As in, given the exact same hardware, the tessellation unit can increase detail without any performance hit. It's not world-changing, but it's not minuscule either; the ONLY reason we don't have it working now is Nvidia's butchering of DX10, there is NO other reason.
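To put some rough numbers on the "same amount of geometry memory" point - these figures are my own assumptions (vertex size, mesh size, amplification factor), not from the AMD paper:

```cpp
// Back-of-envelope sketch: a coarse mesh + tessellation keeps geometry memory
// at the coarse mesh's size, while a pre-tessellated mesh of equal detail would not.
#include <cstdio>

int main() {
    const long long bytes_per_vertex = 32;     // assumed: position + normal + UV
    const long long coarse_tris      = 10000;  // hypothetical coarse character mesh
    const long long amplification    = 64;     // e.g. three subdivision levels

    // Rough upper bound of ~3 vertices per triangle (ignores vertex sharing).
    long long coarse_bytes = coarse_tris * 3 * bytes_per_vertex;
    long long dense_bytes  = coarse_tris * amplification * 3 * bytes_per_vertex;

    std::printf("coarse mesh stored in memory : %lld KB\n", coarse_bytes / 1024);
    std::printf("pre-tessellated equivalent   : %lld KB\n", dense_bytes / 1024);
    std::printf("with a tessellator only the first figure is stored; the extra\n"
                "%lldx detail is generated on the GPU each frame.\n", amplification);
}
```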

It's not particularly clear what's meant in the OP's link by "redesign" - it could range from a respin for dodgy or leaky early silicon to drastic changes. Though drastic changes, unless decided on a long time ago (which would be weird to switch to at this late stage, i.e. a drastic design change would have been made at least 6 months ago), are very unlikely I think. So a simple(ish) respin is the most likely, and it's been suggested half a dozen times in the past couple of months already that Nvidia's first silicon was no good and would need a respin or two, which would take it easily into 2010 for actual release - which seems fairly accurate by all accounts.
 
Sorry, what you clearly MEANT to say is that you THINK the GT300 has the GPGPU capacity to handle the overhead of doing tessellation in software rather than hardware, based on what you THINK the specs are. Of course, you're taking the overall specs of the card's supposed GPGPU capability when it has NO graphics load, and even then just estimating that it will be able to do it alongside the graphics load with power to burn. Every single step you've taken there is a massive leap built on unfounded rumours and utter guesses.

I've done enough programming of geometry manipulation, mesh sub-division and polygon decimation to have a rough idea of what I'm talking about - I admit I'm not an expert - but I'm quite capable... I'm also not completely uninformed about the capabilities of the GT300.

But as per usual you jump into any ATI thread and put the worst possible face on ATI and the best possible face on Nvidia. AFAIK the first tessellation unit was in their console-based chip. Their next tessellation units have been in the 2900 and up, so 7 years is a MASSIVE stretch considering the only real support was in their FIRST DX10 card - which ONLY lacked tessellation support because Nvidia screwed up hugely and had the DX10 spec neutered.

I'm a bit tired of uninformed people hyping up the tessellation unit and holding it up as the big saviour of ATI - when it's no such thing. ATI have had tessellation-type capabilities (TruForm et al) in their GPUs since 2002 - I suggest you do a little research.

Tessellation - in both images you see there, the key is that both use the SAME amount of power and effort and the same amount of geometry memory, yet with tessellation you can produce a higher level of detail. As in, given the exact same hardware, the tessellation unit can increase detail without any performance hit. It's not world-changing, but it's not minuscule either; the ONLY reason we don't have it working now is Nvidia's butchering of DX10, there is NO other reason.

I'm quite aware of what tessellation is capable of and its advantages - in its own right it's no bad thing... however, the ONLY reason we don't have it working now has nothing to do with nVidia or DX10 - and everything to do with the fact that it's not how game developers currently work, and not the path that most video game developers are taking. We may see some incidental use in the short term, but we are unlikely to see any usage of it that would tax the GT300 throughout the useful lifespan of that GPU. It may become more popular in the long term as games move towards more cinematic experiences and technology changes... but by that time the GT300 will be long forgotten. It simply isn't the way game developers work.

To repeat a quote from JC - a view shared by most of the "greats" in video game development:

"No, tessellation has been one of those things up there with procedural content generation where it’s been five generations that we’ve been having people tell us it’s going to be the next big thing and it never does turn out to be the case. I can go into long expositions about why that type of data amplification is not nearly as good as general data compression that gives you the data that you really want. But I don’t think that’s the world beater"

You can continue to fly in the face of current trends - but until you start making games yourself, using tessellation, that's all you'll be doing - barking up the wrong tree.
 
procedural content generation where it’s been five generations that we’ve been having people tell us it’s going to be the next big thing and it never does turn out to be the case.

I'd go as far as to say that's unfair to procedural content generation in its current form, with middleware such as SpeedTree relying on procedural generation. You can also see that newer, more user-content-creation-based systems - ones often heralded as more innovative than a lot of contemporary games - are using procedurally generated content to help players customise characters, environments and so on. If you saw Sony's E3 demo, ModNation Racers in particular demonstrated this, the most obvious example being the 'village tool', which spawned, well, villages around the track as part of the environment. Of course there are other, better examples, but I wanted something up to date; I could've cited 'Collect Blue Spheres', which would be almost as valid if we were in 1994.
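As a tiny illustration of what "generating content on the fly" means in practice - this is just my own toy example, nothing to do with how ModNation Racers or SpeedTree actually work:

```cpp
// Objects are placed from a seed and a rule, not stored per-instance.
#include <cstdio>
#include <random>

int main() {
    const unsigned level_seed = 1234;          // the only thing that needs saving
    std::mt19937 rng(level_seed);
    std::uniform_real_distribution<float> along_track(0.0f, 1000.0f);
    std::uniform_real_distribution<float> offset(-20.0f, 20.0f);

    // "Village tool" style: scatter 5 huts; re-running with the same seed
    // rebuilds exactly the same village, so nothing but the seed is stored.
    for (int i = 0; i < 5; ++i)
        std::printf("hut %d at (%.1f, %.1f)\n", i, along_track(rng), offset(rng));
}
```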

Edit:

Also I feel that point is kind of irrelevant, mostly owing to the fact that procedural content generation is a vague description of a method of generating content on the fly, as opposed to hardware tessellation which is a specific 3D rendering technique.

To clarify, I don't think this is a huge, groundbreaking development. As you say tessellation has been an option for many years, but many have chosen not to adopt it. I propose that one of the key benefits that tessellation provides is that it's fairly convenient in that it allows you to relatively easily remove detail going into the distance with a very minimal performance impact (referring to the Adaptive tessellation mode here) - whilst this can be done in software, it's a bit slower and probably slightly limits your draw distance. However, nobody is going to implement a convenience if it is not, er, convenient. The only reason developers will bother to adopt the technique will be because it's implemented in a standard API as opposed to AMD's proprietary extensions. Also, possibly, because Microsoft's tutorials and code samples are usually a lot easier to read than AMD's (which often seem to be needlessly verbose).
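As a rough sketch of that distance-based falloff - the thresholds and factors here are entirely made up, and a real implementation would compute this per patch in the hull shader rather than on the CPU:

```cpp
// Minimal sketch of the adaptive tessellation idea: lots of detail up close,
// almost none in the distance, so far-away geometry costs next to nothing.
#include <algorithm>
#include <cstdio>

// Map camera distance to a tessellation factor (1 = no subdivision).
float tess_factor(float distance_to_camera) {
    const float near_dist = 5.0f, far_dist = 200.0f;   // assumed tuning values
    const float max_factor = 16.0f, min_factor = 1.0f;
    float t = (distance_to_camera - near_dist) / (far_dist - near_dist);
    t = std::clamp(t, 0.0f, 1.0f);
    return max_factor + t * (min_factor - max_factor);  // linear falloff
}

int main() {
    for (float d : {2.0f, 25.0f, 100.0f, 400.0f})
        std::printf("distance %6.1f -> tess factor %5.1f\n", d, tess_factor(d));
}
```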
 
Well, Nvidia probably won't release DX11 cards until Q1 2010 as they don't have enough time; however, AMD/ATI stated at their Computex conference that a DX11 GPU from ATI will arrive this year.

Also, there is a leaked picture of the RV840 from Computex running DX11 demos.
 
Rroff, you post that Carmack quote in every ATI thread. :rolleyes:

I wonder: if I emailed him today, would he still think the same?

It's the most readily available quote - glad someone noticed ;)

There are others but it would involve pasting large parts of blog posts or chat logs.

I'd go as far as to say that's unfair to procedural content generation in its current form, with middleware such as SpeedTree relying on procedural generation. You can also see that newer, more user-content-creation-based systems - ones often heralded as more innovative than a lot of contemporary games - are using procedurally generated content to help players customise characters, environments and so on. If you saw Sony's E3 demo, ModNation Racers in particular demonstrated this, the most obvious example being the 'village tool', which spawned, well, villages around the track as part of the environment. Of course there are other, better examples, but I wanted something up to date; I could've cited 'Collect Blue Spheres', which would be almost as valid if we were in 1994.

Edit:

Also I feel that point is kind of irrelevant, mostly owing to the fact that procedural content generation is a vague description of a method of generating content on the fly, as opposed to hardware tessellation which is a specific 3D rendering technique.

To clarify, I don't think this is a huge, groundbreaking development. As you say tessellation has been an option for many years, but many have chosen not to adopt it. I propose that one of the key benefits that tessellation provides is that it's fairly convenient in that it allows you to relatively easily remove detail going into the distance with a very minimal performance impact (referring to the Adaptive tessellation mode here) - whilst this can be done in software, it's a bit slower and probably slightly limits your draw distance. However, nobody is going to implement a convenience if it is not, er, convenient. The only reason developers will bother to adopt the technique will be because it's implemented in a standard API as opposed to AMD's proprietary extensions. Also, possibly, because Microsoft's tutorials and code samples are usually a lot easier to read than AMD's (which often seem to be needlessly verbose).

You make a fair point.

SimCity 3000(?) had that real-time terrain carving years ago.

SimCity 3000 was a basic isometric implementation, and its world manipulation can't really be equated to tessellation today.
 
I'm still waiting for a single game to be announced that will make one of these cards worth buying.
 