
7970 vs GTX 580 in nVidia’s Endless City tessellation demo

Are you joking?
Pixar's rendering technique is VERY efficient.
The issue is that without tessellation, it is impossible to reach anywhere near that kind of geometric detail. You just don't have enough memory to store the geometry and not enough bandwidth to process it.
It is also far more efficient than that other 'infinite geometry' solution: raytracing. Which is why Pixar's RenderMan has been the industry leader for many years.

Clearly this is exactly where we are going. Tessellation is introduced because games are already reaching the limits of video memory size and bandwidth. Tessellation takes geometry detail and efficiency a big step forward even on today's hardware.
Future GPUs will likely have optimized rasterizers which can handle micropolygons more efficiently. Eventually, we will no longer need larger polygons at all anymore, just like Pixar.
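To put very rough numbers on the memory argument, here is a Python sketch with assumed byte sizes and a simplified square-law subdivision model; the constants are purely illustrative, not measurements:

```python
# Sketch (assumed numbers): memory cost of storing a dense mesh explicitly
# versus storing a coarse control mesh and tessellating on the GPU.

BYTES_PER_VERTEX = 32  # assumed: position + normal + UV, interleaved

def mesh_bytes(vertex_count: int) -> int:
    """Approximate VRAM needed to store a mesh's vertex data."""
    return vertex_count * BYTES_PER_VERTEX

def tessellated_vertices(control_vertices: int, tess_factor: int) -> int:
    """Vertices generated on-chip when each patch edge is subdivided
    tess_factor times; the count grows roughly with the factor squared."""
    return control_vertices * tess_factor ** 2

# A 10k-vertex control mesh at factor 16 gives the detail of a
# 2,560,000-vertex mesh, while only the 10k control vertices sit in VRAM.
control = 10_000
factor = 16
dense = tessellated_vertices(control, factor)

print(f"stored:   {mesh_bytes(control) / 1e6:.2f} MB")
print(f"rendered: {dense:,} vertices")
print(f"explicit: {mesh_bytes(dense) / 1e6:.2f} MB if stored directly")
```

The exact savings depend on vertex layout and patch topology, but that square-law growth is what makes on-chip tessellation so attractive compared to storing and streaming the full geometry.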

Or can you build a stronger case?

You're seriously proposing that in 10 or even 20 years' time there will be a single GPU powerful enough to render a Pixar film in real-time, when rendering farms are not even close to doing it now?

Be serious now. Tessellation has a future, but let's be at least a little realistic about it. You're coming to me with an argument of nothing but "what ifs".
 
Nvidia put massive effort into designing a scalable tessellating unit that is indeed very strong.

However, you will find certain issues with games over-tessellating objects (such as flat surfaces), or objects off-screen being tessellated.

This is not needed and is likely put there to overstress the AMD unit for no good reason. To be honest, it harms Nvidia too, since the GPU still has to do the extra work.
 
From your own rambling above: don't generalise people's opinions across a forum. I'm sure some people would agree as well, but that's hardly the point.

I'm not, because when someone doesn't make any sense, most of the time they are told so, which is not the case here.

Bad grammar and typos are one thing, but the point being made is what matters most.

My points make sense even if they are not as well written as they could be.
 
You're seriously proposing that in 10 or even 20 years' time there will be a single GPU powerful enough to render a Pixar film in real-time, when rendering farms are not even close to doing it now?

Oh dear... where to begin...
First of all, we are talking about a rendering algorithm. This in itself does not say anything about how long it takes to render a Pixar film.
As a matter of fact, the rendering time of Pixar films has remained relatively constant over the years, despite them getting bigger render farms with faster processors.
How is that? Well obviously because they've used this extra processing power to improve the detail and realism in their movies. Toy Story is not Brave, if you know what I mean.

I am merely saying that GPUs will converge to similar rendering methods as RenderMan. Obviously that will not mean that they will render at the same resolution as Pixar does, or with the same level of antialiasing, the same level of detail in geometry and animation, shaders or anything. Clearly there will always be a gap between a single GPU and a renderfarm.

On the other hand... Toy Story was 17 years ago. Are we that far away from doing anything like that in realtime? Well...
Years ago, nVidia already showed us Timbury:
Already a very nice step in the direction of Pixar-style animation.
And a few months ago, AMD presented Leo:

Well... I dunno, Toy Story in realtime may not actually be that far away? I don't think we need to wait 20 years, perhaps not even 10, until that is possible.

Be serious now. Tessellation has a future, but let's be at least a little realistic about it. You're coming to me with an argument of nothing but "what ifs".

No, you're the one saying Pixar is inefficient, when they aren't.
You've yet to give a good alternative to tessellation/REYES rendering.
 
Nvidia put massive effort into designing a scalable tessellating unit that is indeed very strong.

However, you will find certain issues with games over-tessellating objects (such as flat surfaces), or objects off-screen being tessellated.

This is not needed and is likely put there to overstress the AMD unit for no good reason. To be honest, it harms Nvidia too, since the GPU still has to do the extra work.

At the end of the day, if you have two competing DirectX 11 cards, and one can render the scenes with acceptable performance, and the other can not, one of them is not doing it right.

And I think you have to look a bit beyond tessellation of flat surfaces. The problem today is that no game with tessellation has been designed for tessellation. So the geometry is not fine-tuned for tessellation. It was added as an afterthought.
But what if it were? The videocard would still need to render those polygons.
So the whole argument doesn't make sense. You need that performance, and AMD can't deliver it. Trying to find excuses for why you could render fewer polygons (which is ridiculous in itself; when do we ever want to settle for LESS detail?) just misses the point. Sure, you could render fewer polygons. You could even turn off tessellation! None of that changes the fact that you can't use tessellation for what it was designed to do.
 
At the end of the day, if you have two competing DirectX 11 cards, and one can render the scenes with acceptable performance, and the other can not, one of them is not doing it right.

And I think you have to look a bit beyond tessellation of flat surfaces. The problem today is that no game with tessellation has been designed for tessellation. So the geometry is not fine-tuned for tessellation. It was added as an afterthought.
But what if it were? The videocard would still need to render those polygons.
So the whole argument doesn't make sense. You need that performance, and AMD can't deliver it. Trying to find excuses for why you could render fewer polygons (which is ridiculous in itself; when do we ever want to settle for LESS detail?) just misses the point. Sure, you could render fewer polygons. You could even turn off tessellation! None of that changes the fact that you can't use tessellation for what it was designed to do.

When the reduction is not visually noticeable, within reason, and the performance gained is.
 
When the reduction is not visually noticeable, within reason, and the performance gained is.

*IF* there is performance gained... Which depends on how efficient the implementation of tessellation is.
There are tons of other examples in graphics where you may intuitively *think* that it would be faster to draw less, but in reality it takes longer to decide what to draw or not, rather than to just draw everything without checking.

For example, in the old days, you'd perform backface culling and triangle clipping on each individual triangle before sending it to the video card.
These days, it's highly inefficient to do it that way. You just send the whole batch to the GPU and let the GPU sort it out.
Or what about occlusion queries? In the old days you'd do all sorts of computational geometry on the CPU to figure out if something was occluded or not. These days you just render the bounding box on the GPU and count the number of pixels that passed the z-test. Because it's faster to render hundreds of thousands of pixels on the GPU than to test a handful of vertices on the CPU.

So is it faster to render fewer polygons? Probably not.
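A toy cost model makes the point; all the constants below are assumptions for illustration, not benchmark results:

```python
# Illustrative cost model (assumed numbers): deciding per triangle on the
# CPU whether to draw can cost more than just drawing everything.

CPU_TEST_NS = 50       # assumed cost to cull-test one triangle on the CPU
GPU_TRI_NS = 1         # assumed amortized GPU cost per rasterized triangle
DRAW_CALL_NS = 10_000  # assumed fixed cost of submitting one draw call

def cull_then_draw(tris: int, visible_fraction: float) -> float:
    """Test every triangle on the CPU, then draw only the visible ones."""
    return tris * CPU_TEST_NS + DRAW_CALL_NS + tris * visible_fraction * GPU_TRI_NS

def draw_everything(tris: int) -> float:
    """One batch: let the GPU's fixed-function culling sort it out."""
    return DRAW_CALL_NS + tris * GPU_TRI_NS

tris = 1_000_000
# Even if culling would discard half the triangles, the per-triangle CPU
# work dominates and drawing everything wins by a wide margin:
print(f"{cull_then_draw(tris, 0.5) / 1e6:.2f} ms with CPU culling")
print(f"{draw_everything(tris) / 1e6:.2f} ms drawing everything")
```

With these assumed costs, CPU-side culling only pays off when the per-item test is far cheaper than the work it saves, which on modern hardware it usually isn't.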
 
*IF* there is performance gained... Which depends on how efficient the implementation of tessellation is.
There are tons of other examples in graphics where you may intuitively *think* that it would be faster to draw less, but in reality it takes longer to decide what to draw or not, rather than to just draw everything without checking.

For example, in the old days, you'd perform backface culling and triangle clipping on each individual triangle before sending it to the video card.
These days, it's highly inefficient to do it that way. You just send the whole batch to the GPU and let the GPU sort it out.
Or what about occlusion queries? In the old days you'd do all sorts of computational geometry on the CPU to figure out if something was occluded or not. These days you just render the bounding box on the GPU and count the number of pixels that passed the z-test. Because it's faster to render hundreds of thousands of pixels on the GPU than to test a handful of vertices on the CPU.

So is it faster to render fewer polygons? Probably not.

Well, like most things, sometimes it is and sometimes it is not.
Whatever is faster on the GPU should be done on the GPU, as long as it has the resources free to do it; otherwise it's the CPU's job.

In games there are varying degrees of settings for AA & AF and tessellation levels, and drivers also have override options for AA & AF. I see no reason why tessellation should not be a driver override option for people who want it, regardless of AMD's possible motives. It really is no different to AA/AF levels: many can't use maxed-out AA/AF levels either, and what levels are playable will vary with the brand and model of card, the game, and the resolution. Yes, there could be possible side effects with driver-override tessellation, but I have not seen any comments from users of the option suffering from any up till now.


Tessellation should not be tacked on, like I have said many times in the past; it is no better than tacked-on PhysX. Both have had examples of ground-up implementations which have impressed me, though sadly one is a benchmarking program and the other was CellFactor.

While both techs have impressive possibilities, what's put on the table is what counts and is judged on those merits, and I could go without until they are used as they should be. DX11 itself is sometimes used just as badly.
 
Well, like most things, sometimes it is and sometimes it is not.
Whatever is faster on the GPU should be done on the GPU, as long as it has the resources free to do it; otherwise it's the CPU's job.

I'm not sure why people are scared of polygons.
Ever since the first T&L cards, GPUs have been capable of handling millions of polys. A few polys more or less really isn't the issue; they are often 'free', because the fixed overhead of setting up each draw call dominates. A call to draw 1 polygon is no faster than a call to draw 1000 polygons.
3DMark2001 already included a high-polygon test which rendered 1M polygons, and a high-end GeForce2 would churn through it with few problems. Just because games never pushed the hardware anywhere near its capabilities doesn't mean you should be afraid of it.
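Here is a back-of-the-envelope sketch of the draw-call point in Python; the overhead and per-triangle costs are made-up round numbers, but the shape of the result is what matters:

```python
# Sketch of the "polygons are often free" point: with a fixed per-call
# overhead, one call drawing 1000 triangles costs nearly the same as one
# call drawing a single triangle. Constants are assumptions for illustration.

CALL_OVERHEAD_US = 20.0  # assumed driver/validation cost per draw call
TRI_COST_US = 0.001      # assumed GPU cost per triangle

def draw_time_us(calls: int, tris_per_call: int) -> float:
    """Total submission + rasterization time under the toy model."""
    return calls * CALL_OVERHEAD_US + calls * tris_per_call * TRI_COST_US

print(draw_time_us(1, 1))       # one call, one triangle: call overhead dominates
print(draw_time_us(1, 1000))    # one call, 1000 triangles: barely any slower
print(draw_time_us(1000, 1))    # 1000 calls of 1 triangle: ~1000x the overhead
```

The extra triangles in a batch are nearly free; it is the number of calls, not the polygon count, that you pay for first.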

In games there are varying degrees of settings for AA & AF and tessellation levels, and drivers also have override options for AA & AF. I see no reason why tessellation should not be a driver override option for people who want it, regardless of AMD's possible motives. It really is no different to AA/AF levels: many can't use maxed-out AA/AF levels either, and what levels are playable will vary with the brand and model of card, the game, and the resolution. Yes, there could be possible side effects with driver-override tessellation, but I have not seen any comments from users of the option suffering from any up till now.

Which is where I as a developer would disagree. I design the application, I make the choices of what level of detail is applied where, when and how. There is no need for a driver to override any of my choices, because I give the user plenty of choice already. The only driver features that are left are hacks and cheats, and those should be banned. I don't want AMD, nVidia or anyone else tweaking my applications behind my back. They need to be run as they were designed, not as some random driver hacker thinks they should look.

Also, on my blog I linked to some images that clearly showed rendering issues when enabling AMD's tessellation 'optimization' in the driver: http://forums.anandtech.com/showpost.php?p=31992121&postcount=89
They limit tessellation, in a not-so-clever way, I might add... Resulting in things like brick walls turning into weird pyramid shaped bricks. If I tessellate a brick wall as a developer, I want the user to see bricks! Not weird pointy thingies!

While both techs have impressive possibilities, what's put on the table is what counts and is judged on those merits and i could go without until they are used as they should, DX11 itself is sometimes used just as badly.

Well, as a developer I obviously have a different view on things. I need proper hardware support so I can use DX11/tessellation/etc as it was meant. Endless City is pretty much the only example that uses tessellation the way it should be. And that is exactly the scenario where Radeons still fall apart.
I mean, sure, the 5000 series was the first DX11 hardware. So AMD didn't get it right the first time, fair enough... Then the 6000 series came... and AMD still didn't get it right... hrm...
But then they come out with the 7000 series, an entirely new architecture (Graphics Core Next, yay!), and they STILL have the same lousy tessellator they've been peddling for years. Not acceptable, people! nVidia's tessellator from 2 generations ago is still much faster than AMD's latest at the tessellation ranges where it matters (there's a reason why DX11 was designed with a range of tessellation factors from 1..64; AMD can only handle 1..10 properly, then drops off exponentially, making everything higher excruciatingly slow and useless... I can't sell it to the public when their super-duper 7970 card performs worse than a cheapo GTX 560 Ti... but that's what happens).
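To see why a cap around factor 10 hurts so much, consider how the generated workload scales. This is a rough Python sketch; the real DX11 partitioning formulas are more involved, but the square-law trend holds:

```python
# Rough sketch: the triangle count a patch produces grows roughly with the
# square of the tessellation factor (order-of-magnitude approximation; the
# exact DX11 partitioning rules are more complex).

def approx_triangles(patches: int, factor: int) -> int:
    """Approximate triangles generated by tessellating `patches` patches."""
    return patches * factor ** 2

for f in (1, 10, 64):
    tris = approx_triangles(1000, f)
    print(f"factor {f:2d}: ~{tris:,} triangles from 1000 patches")
```

Under this approximation, factor 64 produces about 40x the triangles of factor 10, so a tessellator that only keeps up through factor ~10 covers just a sliver of the workload range the API was designed for.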
 
I'm not sure why people are scared of polygons.
Ever since the first T&L cards, GPUs have been capable of handling millions of polys. A few polys more or less really isn't the issue; they are often 'free', because the fixed overhead of setting up each draw call dominates. A call to draw 1 polygon is no faster than a call to draw 1000 polygons.
3DMark2001 already included a high-polygon test which rendered 1M polygons, and a high-end GeForce2 would churn through it with few problems. Just because games never pushed the hardware anywhere near its capabilities doesn't mean you should be afraid of it.



Which is where I as a developer would disagree. I design the application, I make the choices of what level of detail is applied where, when and how. There is no need for a driver to override any of my choices, because I give the user plenty of choice already. The only driver features that are left are hacks and cheats, and those should be banned. I don't want AMD, nVidia or anyone else tweaking my applications behind my back. They need to be run as they were designed, not as some random driver hacker thinks they should look.

Also, on my blog I linked to some images that clearly showed rendering issues when enabling AMD's tessellation 'optimization' in the driver. They limit tessellation, in a not-so-clever way, I might add... Resulting in things like brick walls turning into weird pyramid shaped bricks. If I tessellate a brick wall as a developer, I want the user to see bricks! Not weird pointy thingies!



Well, as a developer I obviously have a different view on things. I need proper hardware support so I can use DX11/tessellation/etc as it was meant. Endless City is pretty much the only example that uses tessellation the way it should be. And that is exactly the scenario where Radeons still fall apart.
I mean, sure, the 5000 series was the first DX11 hardware. So AMD didn't get it right the first time, fair enough... Then the 6000 series came... and AMD still didn't get it right... hrm...
But then they come out with the 7000 series, an entirely new architecture (Graphics Core Next, yay!), and they STILL have the same lousy tessellator they've been peddling for years. Not acceptable, people! nVidia's tessellator from 2 generations ago is still much faster than AMD's latest at the tessellation ranges where it matters (there's a reason why DX11 was designed with a range of tessellation factors from 1..64; AMD can only handle 1..10 properly, then drops off exponentially, making everything higher excruciatingly slow and useless... I can't sell it to the public when their super-duper 7970 card performs worse than a cheapo GTX 560 Ti... but that's what happens).

It's not about being afraid of anything; it's about the user's choice of what they like and, if performance is an issue, what they are willing to sacrifice for it.

A lot of things I turn down or off because I simply don't like them.
I tend to turn off motion blur, DOF, and bloom; I haven't found a game where I like them on yet. Some shadow settings and techniques like HBAO I don't like; SSAO is sometimes OK. Some post-processing also gets killed off when possible if it's not to my liking, so really there are few games that I truly max out even though I have the power to do so.

In games where I can run 8xAA or even 16xAA through the driver and not get any dips below 60 fps, I will instead run 2xAA or even 4xAA if I can't notice the difference in quality.
Adjustments stop at the point where I can't notice the difference, no matter how much overkill in power I have.
 
Also, on my blog I linked to some images that clearly showed rendering issues when enabling AMD's tessellation 'optimization' in the driver: http://forums.anandtech.com/showpost...1&postcount=89
They limit tessellation, in a not-so-clever way, I might add... Resulting in things like brick walls turning into weird pyramid shaped bricks. If I tessellate a brick wall as a developer, I want the user to see bricks! Not weird pointy thingies!

Crysis 2 is the only game I have tested it on, to see what would happen, and I don't remember seeing that; though that's the only part of the game I have played, just that bit at the beginning in the building.

Before or after the patch? What driver version? A card-generation-specific bug? Etc...

I will test it again when I can eventually be bothered to fire that game up and at least play what I have paid for.
 
Did you even read his post?

Yes, I did.
And my point is that it does not matter to me if AMD could handle tessellation better; I would still be turning it down, no matter how much extra power I had, if it was not obviously noticeable.
 
It's not about being afraid of anything; it's about the user's choice of what they like and, if performance is an issue, what they are willing to sacrifice for it.

Problem with giving users choice is that they can make the wrong choices.
Back in the days of the GeForce FX, no one in their right mind should have bought one. But they did, so developers had to cater for the broken ps2.0 implementation, and explain to angry end-users why their GeForce FX cards didn't perform anywhere near as well or looked anywhere near as good as the Radeon 9000 series.

Today's 'GeForce FX' is the 7970. Nobody should buy one, period. The GTX680 is a better card all around, even the GTX670 outperforms it in many cases, while also using less power, and not having problems with standard DX11 functionality like tessellation.
But, because end-users have choice, they make the wrong choices. Which means we can't just develop for decent DX11 cards. We can't even add optional features that only decent DX11 cards can run. No, the people who bought the wrong hardware expect us to continue the fairytale.
And when we don't, then AMD will give them a driver 'optimization' so they can still run a game at the highest graphics settings, even though the driver waters everything down so it looks pathetic anyway. Who are you trying to fool, really?
 
Both Nvidia and AMD improve when excessive tessellation is removed; it just so happens AMD improve more.

I don't want AMD to improve. I want every developer to tessellate the heck out of everything.
Perhaps then the engineers at AMD will finally wake up and realize that their next generation of GPUs had better be really good at tessellation, else it's game over.
 
Both Nvidia and AMD improve when excessive tessellation is removed; it just so happens AMD improve more.

The same argument applies to AA at times, and again I only run it at a level where I notice the improvements, as it's irrelevant to me whether NV could handle more AA in a particular game when it's not even making my setup sweat at the level I like.
 
Problem with giving users choice is that they can make the wrong choices.
Back in the days of the GeForce FX, no one in their right mind should have bought one. But they did, so developers had to cater for the broken ps2.0 implementation, and explain to angry end-users why their GeForce FX cards didn't perform anywhere near as well or looked anywhere near as good as the Radeon 9000 series.

Today's 'GeForce FX' is the 7970. Nobody should buy one, period. The GTX680 is a better card all around, even the GTX670 outperforms it in many cases, while also using less power, and not having problems with standard DX11 functionality like tessellation.
But, because end-users have choice, they make the wrong choices. Which means we can't just develop for decent DX11 cards. We can't even add optional features that only decent DX11 cards can run. No, the people who bought the wrong hardware expect us to continue the fairytale.
And when we don't, then AMD will give them a driver 'optimization' so they can still run a game at the highest graphics settings, even though the driver waters everything down so it looks pathetic anyway. Who are you trying to fool, really?

I will say that I have to disagree, and I'm going to leave it at that.
 
You disagree because it cuts deep into your AMD loving.

Actually, no! I was aware of his views and bias before he even started posting here, which have earned him a ban or two on other forums, and that post was his typical anti-AMD form.
When I see a valid point I will reply to it. The only valid point he made in that post is that AMD have weaker tessellation, which we already know; everything else is pure biased opinion, as if the reason there is no good use of tessellation now is because of AMD.
It's like saying that there are no good PC games because there are weak low-end GPUs available and a game won't be able to run at high settings on them.

And no, I don't have love for AMD or any manufacturer.
 