
New Nvidia card: Codename "GF100"

I don't think you're understanding how this unit works. There are NO "specialised" units of any kind. It's all floating-point arithmetic. Tessellation will be performed in exactly the same way as rendering, physics, or any other GPGPU computation. This is the main strength (and possibly also the main weakness) of the new architecture.

It's not a "software" solution any more than rendering or physics is software.

I don't think you understand. To make it compatible with DX11 tessellation, they are going to have to run it in software.

"Performance = X without tessellation. On Cypress, performance with tessellation on = X * a number greater than one. On G300, it is * a number less than one"
http://www.semiaccurate.com/forums/showthread.php?p=7186

Hence a big hit. TWIMTBP yeah right!
 
I don't think you understand. To make it compatible with DX11 tessellation, they are going to have to run it in software.

"Performance = X without tessellation. On Cypress, performance with tessellation on = X * a number greater than one. On G300, it is * a number less than one"
http://www.semiaccurate.com/forums/showthread.php?p=7186

Hence a big hit. TWIMTBP yeah right!

:confused:

That doesn't make any sense at all. Why would tessellation be run in software? It's all done as calls to the GPU's various floating-point units, which are hardware, just the same as rendering (i.e. application of shaders).

Did you read the literature on the new chip? Why would it be less powerful on this card than on Cypress? If the total floating-point performance is higher then, subject to architectural efficiency, it will be able to perform more tessellation operations, just the same as rendering operations or any other kind of arithmetic.
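That throughput claim is just arithmetic, and can be put in a toy model. The Cypress peak FLOPS figure below is the published HD 5870 number; the Fermi figure and the per-patch cost are pure assumptions for illustration (no real Fermi numbers existed at the time):

```python
# Toy throughput model: if tessellation is ordinary FP work, a chip with
# more total FLOPS can tessellate more, subject to efficiency.
cypress_gflops = 2720.0   # HD 5870 published peak single-precision GFLOPS
fermi_gflops = 3000.0     # assumed placeholder, NOT a released spec
flops_per_patch = 1.0e4   # assumed FP cost to tessellate one patch

def patches_per_second(gflops):
    """Upper bound if the whole chip did nothing but tessellate."""
    return gflops * 1e9 / flops_per_patch

print(patches_per_second(fermi_gflops) > patches_per_second(cypress_gflops))
```

The comparison prints `True` under these assumed numbers; the point is only that raw FP throughput is the budget tessellation draws from, same as shading.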

I suggest you read a little more.
 
I know everyone is really chuffed with the whole C++ support, but writing a compiler is no easy feat. AMD doesn't make its own compilers, does it? Only Intel does for the x86 architecture.
 
:confused:

That doesn't make any sense at all. Why would tessellation be run in software? It's all done as calls to the GPU's various floating-point units, which are hardware, just the same as rendering (i.e. application of shaders).

Did you read the literature on the new chip? Why would it be less powerful on this card than on Cypress? If the total floating-point performance is higher then, subject to architectural efficiency, it will be able to perform more tessellation operations, just the same as rendering operations or any other kind of arithmetic.

I suggest you read a little more.

You're clearly quite confused.
This card can run a PS3/Xbox game if the software is able to tell it what to do. You need software to tell GT300 to do DX11 tessellation. It won't do it by itself. This is just one big programmable brick with a lot of power.
You need software to tell the hardware how and what to do, same as any hardware, whereas the 5870 doesn't need that software but is hardware-based. Hence a spanking once we get tessellation in the games.

Can't do any more than actually draw you a diagram... :confused:
 
:confused:

That doesn't make any sense at all. Why would tessellation be run in software? It's all done as calls to the GPU's various floating-point units, which are hardware, just the same as rendering (i.e. application of shaders).

Did you read the literature on the new chip? Why would it be less powerful on this card than on Cypress? If the total floating-point performance is higher then, subject to architectural efficiency, it will be able to perform more tessellation operations, just the same as rendering operations or any other kind of arithmetic.

I suggest you read a little more.

Actually, I don't believe the tessellation unit is included in Cypress's FLOPS rating; it's a dedicated piece of silicon, so I don't think any comparison can be made.
 
There still has to be fixed-function hardware in there somewhere: discrete hardware can normally do a much more power-efficient job than a software-based version. For instance, texture fetching and filtering is far faster and easier in hardware than spending the time coding it, especially anisotropic filtering!
Doesn't anyone remember how slow AA was in the ATI 3xxx series, since it was done in the shaders?
 
You're clearly quite confused.
This card can run a PS3/Xbox game if the software is able to tell it what to do. You need software to tell GT300 to do DX11 tessellation. It won't do it by itself. This is just one big programmable brick with a lot of power.
You need software to tell the hardware how and what to do, same as any hardware, whereas the 5870 doesn't need that software but is hardware-based. Hence a spanking once we get tessellation in the games.

Can't do any more than actually draw you a diagram... :confused:

Your assumption that a dedicated piece of silicon running microcode to do tessellation is going to be faster than CUDA cores performing the same task is erroneous. In fact, if there is a lot of tessellation in the game, the architecture of Fermi would allow the GPU to assign more CUDA cores to the problem.

It is a powerful design which could result in stunning performance.
 
You're clearly quite confused.
This card can run a PS3/Xbox game if the software is able to tell it what to do. You need software to tell GT300 to do DX11 tessellation. It won't do it by itself. This is just one big programmable brick with a lot of power.
You need software to tell the hardware how and what to do, same as any hardware, whereas the 5870 doesn't need that software but is hardware-based. Hence a spanking once we get tessellation in the games.

Can't do any more than actually draw you a diagram... :confused:

When you want to render a shader, you send an instruction to the GPU, which executes it. When you want to tessellate a mesh, you send a different set of instructions to the GPU. In both cases the computations are performed within the GPU, in hardware. There is no more of a software component in one case than in the other.
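To make that concrete, here is a minimal sketch of tessellation as ordinary floating-point arithmetic, exactly the kind of work a programmable shader core executes. The function names are illustrative only, not any real API; one triangle is split into four via its edge midpoints:

```python
# Tessellation sketch: subdivide one triangle into four smaller ones.
# Nothing here is anything but plain FP math, the same kind of
# arithmetic a shader core does when rendering.

def midpoint(a, b):
    """Component-wise midpoint of two (x, y, z) vertices."""
    return tuple((x + y) / 2.0 for x, y in zip(a, b))

def tessellate(tri):
    """Split a triangle (three (x, y, z) vertices) into four."""
    a, b, c = tri
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

tris = tessellate(((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)))
print(len(tris))  # 4
```

Applied recursively, this is the same multiply-and-add workload as any other GPGPU kernel; whether it runs on a dedicated unit or on general cores is an implementation detail, not a hardware-versus-software distinction.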

ATI may have a tessellator which sits separate from the rest of the shader pipes, at the back end, but so does Fermi (check the 'special function unit'; a description is given in the AnandTech article). In the case of Cypress this performs only tessellation, but in the case of Fermi it can be configured to perform a variety of interpolation arithmetic.

Whether or not the effect of the tessellator is included in the total FPU performance number of either piece of hardware is irrelevant (especially since we don't have floating-point performance numbers for Fermi yet). In both cases the extra unit will remain inactive if tessellation is not used, and become active when it is used. There is nothing to suggest this would have an impact on performance for Fermi.

Of course, it might turn out that the tessellation unit on Cypress is more powerful than the special function units on Fermi, but then again it might turn out to be the other way around. Since Fermi is currently vapourware, there is no way to tell.


By the way, I won't rise to your flame-baiting. I know what I'm talking about; I use GPUs for coding almost every day. You can try to tell me otherwise, but it changes nothing.
 
There still has to be fixed-function hardware in there somewhere: discrete hardware can normally do a much more power-efficient job than a software-based version. For instance, texture fetching and filtering is far faster and easier in hardware than spending the time coding it, especially anisotropic filtering!
Doesn't anyone remember how slow AA was in the ATI 3xxx series, since it was done in the shaders?

This is a fair point, but it applies to *everything* about the Fermi architecture. Having a more general and programmable architecture will tend to increase flexibility at the cost of reduced performance.

Consider, though, that when G80 was announced as having programmable pipes, people were worried that it would not be able to compete with the old fixed-function GPUs at traditional pixel and vertex shading (for exactly these reasons). We all know how that turned out.
 
Your assumption that a dedicated piece of silicon running microcode to do tessellation is going to be faster than CUDA cores performing the same task is erroneous. In fact, if there is a lot of tessellation in the game, the architecture of Fermi would allow the GPU to assign more CUDA cores to the problem.

It is a powerful design which could result in stunning performance.

That'd be great if tessellation were the only thing it was doing, or if the GPU had unlimited resources. Unfortunately, if the scene were large and complex, doing a lot of tessellation on the shaders may just bog down the GPU and hence hurt performance.
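A toy model of that contention, with every number invented for illustration: give the GPU a fixed pool of programmable cores, and any cores spent tessellating come straight out of the rendering budget.

```python
# Toy contention model: a fixed pool of programmable cores shared
# between rendering and tessellation. All numbers are made up.
TOTAL_CORES = 512                 # assumed core count, not a spec

def fps(render_cores, work_per_frame=1.0e6, ops_per_core=2.0e5):
    """Frame rate if `render_cores` cores handle the rendering work."""
    return render_cores * ops_per_core / work_per_frame

print(fps(TOTAL_CORES))           # all cores rendering: 102.4
print(fps(TOTAL_CORES - 128))     # 128 cores busy tessellating: 76.8
```

Dedicated fixed-function tessellation hardware avoids exactly this trade-off, which is the crux of the disagreement in this thread.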
 
That'd be great if tessellation were the only thing it was doing, or if the GPU had unlimited resources. Unfortunately, if the scene were large and complex, doing a lot of tessellation on the shaders may just bog down the GPU and hence hurt performance.

If the special function units are used for anything else in-game, then yes - it could take away from the performance. However, they are not likely to be used in rendering. Hardware physics calculations are the only other things which are likely to need interpolation or transcendental math calls.
 
I'm just amazed that next year we will be able to buy a card for our PCs that basically turns them into supercomputers, for peanuts.

I'm looking forward to seeing some impressive CUDA-powered software next year: faster-than-real-time good-quality high-definition video encoding, near-real-time ray tracing, hundreds of tracks of software synthesizers and real-time audio effects for digital audio workstation software, etc. The possibilities are endless.

And it will play video games brilliantly. Just amazing really...
 
And it will just play video games at a satisfactory framerate (Crysis excluded). Just amazing really...

Fixed :p

Although I am also impressed by all the other things it can do as well. Guess that's the way the future is going.

Although, as just a gamer, I would have preferred a stonking gaming card which was 2 to 3 times quicker than the current fastest card, but that's just my viewpoint.
 
Although, as just a gamer, I would have preferred a stonking gaming card which was 2 to 3 times quicker than the current fastest card, but that's just my viewpoint.

The same viewpoint many on here will be taking too, no doubt. As it happens, it seems Nvidia sees a bigger picture than just gamers, regardless of whether gamers think they're the most important piece of the puzzle or not. (Not saying you think you're the most important, but opinions of that nature will no doubt surface if Fermi doesn't smack ATI's efforts ten ways to Sunday come release.)
 
The same viewpoint many on here will be taking too, no doubt. As it happens, it seems Nvidia sees a bigger picture than just gamers, regardless of whether gamers think they're the most important piece of the puzzle or not. (Not saying you think you're the most important, but opinions of that nature will no doubt surface if Fermi doesn't smack ATI's efforts ten ways to Sunday come release.)

Totally agree. Nvidia is looking long-term at all markets, not just gamers.
 
So, given that Nvidia have designed Fermi as a general-purpose microprocessor first and a 3D accelerator second, Nvidia obviously feel that the future of the PC as a gaming machine is in question? I wonder what timeframe they're projecting for the death of PC gaming?
 
Give it five years and people will start saying 'what if we had a card designed and built solely for accelerating 3D Graphics'
 