
Forecast on the next gen Nvidia?

I'm currently running a 30" display at 2560x1600 on an X1950 XTX CrossFire setup, which struggles in some games.

Having waited to see what the R600 would bring to the table, it looks like ATI are going to be out of the top-end market for some time (though stranger things have happened). I'm thinking I'll hold on for the next-generation Nvidia card if it's not too far off.

So... when is the next gen Nvidia card expected to arrive and what can we expect to see over the current 8800GTX?
 
As far as I know, it's expected around October time.

As for the specs, I haven't seen anything really. But given the pattern of the past three generations, and the fact that it's most likely going to be a 65nm part, I would expect close to double the theoretical performance of the 8800GTX.

Over the past three generations, the new top-end card has generally slightly outperformed the previous top-end card in SLI, in most circumstances.
 
Just how the hell do they get double the power every year?

Do they sit around in a room and say "hang on!! I know something we didn't do on the last one!!" and hey presto, double the performance??

The 8800GTX is the first card I've owned since I started building (4 years ish) where I've seriously questioned whether they will pull off a significant difference in one generation.
 
Hmmm, tbh I think it will take a long time to double the performance of the 8800GTX... it's such a massive leap forward in technology. I wouldn't expect its performance to be doubled for at least 3 years...
 
FrannoBaz said:
Hmmm, tbh I think it will take a long time to double the performance of the 8800GTX... it's such a massive leap forward in technology. I wouldn't expect its performance to be doubled for at least 3 years...

Technology is progressing so fast now, it's surely going to take less than 3 years for a card to offer double the performance of an 8800GTX or more.
 
FrannoBaz said:
Hmmm, tbh I think it will take a long time to double the performance of the 8800GTX... it's such a massive leap forward in technology. I wouldn't expect its performance to be doubled for at least 3 years...

What if they decide to go mad and make a GTX-style core with something stupid like 384 stream processors and 72 ROPs, clocked at 750MHz? That would be something, wouldn't it ;)
 
FrannoBaz said:
Hmmm, tbh I think it will take a long time to double the performance of the 8800GTX... it's such a massive leap forward in technology. I wouldn't expect its performance to be doubled for at least 3 years...

It's Nvidia's first card using a unified shader architecture, and there is nothing to suggest that they can't drastically improve the efficiency of that design with the next iteration. In addition, switching from an 80nm to a 65nm process allows them to fit just over 1.5 times the number of transistors into the same space. That alone would allow for a 50% increase in the number of shader pipelines. Add faster memory and wider memory interfaces to the mix and you can easily see how the overall performance of the card could double.

I think we will continue to see exponential increases in graphics card performance over the next few years. Graphics cards are not constrained in the same way as CPUs, because their performance comes from parallelism rather than serial speed.

One thing that MUST continue to increase to allow this, however, is power consumption. The laws of thermodynamics tell us that decreasing entropy locally (i.e. in this case, ordering electrons to output data) MUST bring with it a global increase in heat output (and, equivalently, a higher power input requirement). Hence, the price for continued performance improvement at the high end will always be increased power requirements, and I feel we will eventually see the high end limited more by cooling than by process-level design limits (again, unlike CPUs).

Those of you looking for lower power requirements and cooler-running cards will increasingly have to look to the mid-range.
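
As a rough sanity check of the "could double" arithmetic above (the per-pipeline gain in this sketch is an illustrative assumption, not a known spec):

```python
# Rough sanity check of the doubling argument. The 1.5x comes from the
# 80nm -> 65nm shrink; the per-pipeline gain is an assumed figure for
# illustration, not a leaked spec.

density_gain = (80 / 65) ** 2   # ~1.51x transistors in the same area
shader_gain = 1.5               # spend the extra transistors on shader pipes
per_pipe_gain = 1.33            # assumed uplift from clocks, faster memory
                                # and a wider memory interface

print(f"Density gain from the shrink: {density_gain:.2f}x")
print(f"Rough theoretical speedup:    {shader_gain * per_pipe_gain:.2f}x")
# -> ~2.0x: 50% more pipelines plus ~33% more throughput per pipeline
#    is enough to double theoretical performance.
```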
 
Duff-Man said:
It's Nvidia's first card using a unified shader architecture, and there is nothing to suggest that they can't drastically improve the efficiency of that design with the next iteration. In addition, switching from an 80nm to a 65nm process allows them to fit just over 1.5 times the number of transistors into the same space. That alone would allow for a 50% increase in the number of shader pipelines. Add faster memory and wider memory interfaces to the mix and you can easily see how the overall performance of the card could double.

I think we will continue to see exponential increases in graphics card performance over the next few years. Graphics cards are not constrained in the same way as CPUs, because their performance comes from parallelism rather than serial speed.

One thing that MUST continue to increase to allow this, however, is power consumption. The laws of thermodynamics tell us that decreasing entropy locally (i.e. in this case, ordering electrons to output data) MUST bring with it a global increase in heat output (and, equivalently, a higher power input requirement). Hence, the price for continued performance improvement at the high end will always be increased power requirements, and I feel we will eventually see the high end limited more by cooling than by process-level design limits (again, unlike CPUs).

Those of you looking for lower power requirements and cooler-running cards will increasingly have to look to the mid-range.

I think someone works for Nvidia ;)
 
If the exact same circuits were used on the 65nm process as on the 80nm process then yes, heat output and power consumption would be reduced. However, as said above, they will most likely redesign the chip and fit 1.5 times as many circuits (or whatever they feel like) into the area currently used.

This will produce more heat and draw more power than the 80nm part, as more of the area will actually be taken up by circuitry, since the circuits can be packed closer together.

That's possibly edging towards assumption, but it would logically explain why we are still getting hotter and hotter chips even as process sizes drop.

Basically, the smaller 65nm process lets them cram more on, which means heat and power consumption will be higher than before.
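
To put rough numbers on that argument (the per-transistor power saving below is an assumed figure, not a measured one):

```python
# Back-of-the-envelope version of the argument above. The 0.7x
# per-transistor power figure is an assumption for illustration; real
# savings depend on voltage and clock choices.

density_gain = (80 / 65) ** 2   # ~1.51x transistors in the same area
per_transistor_power = 0.7      # assumed relative power per transistor after shrink

total_power = density_gain * per_transistor_power
print(f"Relative total power: {total_power:.2f}x")
# -> ~1.06x: even though each transistor runs cooler, cramming 1.5x as
#    many into the same area can still push total power (and heat)
#    above the 80nm part.
```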
 
Gashman said:
What if they decide to go mad and make a GTX-style core with something stupid like 384 stream processors and 72 ROPs, clocked at 750MHz? That would be something, wouldn't it ;)


Sounds like the 2900 XT specs to me, and we all know how that card performed. ;)
 
Cyber-Mav said:
Sounds like the 2900 XT specs to me, and we all know how that card performed. ;)
Except for the ROPs; the 2900 XT just has too few of them compared to the GTX.
 
Darg said:
If the exact same circuits were used on the 65nm process as on the 80nm process then yes, heat output and power consumption would be reduced. However, as said above, they will most likely redesign the chip and fit 1.5 times as many circuits (or whatever they feel like) into the area currently used.

This will produce more heat and draw more power than the 80nm part, as more of the area will actually be taken up by circuitry, since the circuits can be packed closer together.

That's possibly edging towards assumption, but it would logically explain why we are still getting hotter and hotter chips even as process sizes drop.

Basically, the smaller 65nm process lets them cram more on, which means heat and power consumption will be higher than before.

Pretty much spot on :)

And remember, by the laws of thermodynamics, four small transistors will always produce more heat (and so require more power) than one larger transistor of the same design taking up the same area. Hence, when trying to increase computational power while simultaneously reducing heat output, we're always fighting a losing battle against the basic laws of physics.

Also, the 1.5x simply comes from (80/65)^2 = 1.515 - i.e. the same area of silicon could fit around 1.5 times the number of transistors. Of course, there is nothing to say that Nvidia won't also increase the overall die size to include even more transistors - as they have in previous generations.
 
Cyber-Mav said:
Sounds like the 2900 XT specs to me, and we all know how that card performed. ;)

The biggest difference between the GTX and the R600 is that the shader pipes on the R600 run at the GPU clock speed (~740MHz), whereas those on the GTX run at a separate shader clock (~1350MHz).
 
Duff-Man said:
The biggest difference between the GTX and the R600 is that the shader pipes on the R600 run at the GPU clock speed (~740MHz), whereas those on the GTX run at a separate shader clock (~1350MHz).

In theory that should work to the R600's advantage, since it has nearly triple the number of shaders running at roughly half the speed. Overall performance should be about a third faster than the GTX.
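
Taking the stream processor counts from the cards' published specs and the clocks mentioned above, that "third faster" claim is easy to check on paper (a naive comparison that ignores per-clock architectural differences between the two designs):

```python
# Naive paper comparison using the published stream processor counts
# and the clocks discussed in this thread. It ignores how much useful
# work each stream processor does per clock, so treat it as a ceiling.

r600_throughput = 320 * 740    # 320 stream processors at ~740MHz
g80_throughput = 128 * 1350    # 128 stream processors at ~1350MHz

print(f"R600 / GTX ratio: {r600_throughput / g80_throughput:.2f}")
# -> ~1.37, i.e. roughly a third faster on paper, as claimed above.
```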
 
Cyber-Mav said:
In theory that should work to the R600's advantage, since it has nearly triple the number of shaders running at roughly half the speed. Overall performance should be about a third faster than the GTX.

Yep - theoretical floating-point performance is higher on the R600. However, translating that into real-world gaming performance isn't so easy.

In general, doubling the clock speed of a microprocessor doubles the amount of data it can process. However, doubling the number of processing units doing the same work incurs a parallelisation overhead which stops you from getting quite the same doubling as you would from clock speed. This overhead depends strongly on the type of data being processed, as well as on the internal efficiency of data distribution.
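
One way to picture that overhead is Amdahl's law; the 5% serial fraction below is purely an assumed example, not a measured figure for any GPU workload:

```python
# Amdahl's law illustration of the parallelisation overhead described
# above. The 5% serial fraction is an assumed example value.

def amdahl_speedup(n_units, serial_fraction):
    """Speedup from n parallel units when a fraction of the work is serial."""
    return 1 / (serial_fraction + (1 - serial_fraction) / n_units)

serial = 0.05
print("2x clock speed:    2.00x")                               # clocks speed up everything
print(f"2x parallel units: {amdahl_speedup(2, serial):.2f}x")   # -> ~1.90x, not 2.00x
```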

Given this, I think the R600 has greater potential to improve as drivers iron out those internal efficiencies.

I don't know what else to say. Given the specs, I was hoping/expecting the R600 to be much more powerful than it appears to be. I'm hoping it at least provides a sound base for ATI's next generation.
 
I want a DX10 card with 8800GTX power in a small space :mad:

Cards are getting too big these days. Each new release makes me dread how big the darned thing is going to be. I want a computer with a graphics card in it, not a graphics washing machine with a computer built around it.
 