So what does this mean then? That the G80 is still the better chip?

Sourced from www.geforce9.com ;)

PCGH: Why did you re-integrate NVIO-functionality into the main die and how much die space (or transistors) did that roughly cost?
NVIDIA: Going to 65nm allows us to integrate more transistors into the chip and reduce routing complexity.
PCGH: Does the G92/GF8800 GT chip support double-precision to be used in the Tesla-line of cards? If so, what about the number of cycles compared to single-precision?
NVIDIA: The GeForce 8800 GT does not support double precision DX10.1.
PCGH: Did you improve on the number of registers per thread? If I remember correctly from your CUDA documents, there were about 10.6 for every thread.
NVIDIA: Registers per thread are the same.
PCGH: How many quad-TMUs does the GF8800 GT use? Are the TMUs still coupled to the 7 TPCs, or are they freely scalable now?
NVIDIA: Up to 8 texture address ops and texture filtering ops can be done per clock, per TPC. 7 × 8 = 56 units for the chip. The texture units are coupled to TPCs.
PCGH: Did any of the filtering characteristics change compared to G80? Does G92 filter some formats now single-cycle which G80 couldn't?
NVIDIA: G92 can do up to 2x more bilinear filtering ops per clock, per TPC.
PCGH: What is the minimal thread size? Did it change compared to G80?
NVIDIA: It's still the same.
PCGH: Are the TPC still organized as 16x SIMDs or did you improve on granularity?
NVIDIA: They are the same.
PCGH: Any important changes to the ROPs? Still single-cycle 4x MSAA, 4 pixels each, or alternatively 8 Z-values per clock?
NVIDIA: The ROP compression system is improved for high resolutions with AA. Compression now covers scenes up to 2560 x 1600 with 4xAA (though it may be shader bound at this point).
PCGH: Did you improve the geometry shader amplification performance, esp. under heavy load?
NVIDIA: No, it's the same as G80.
PCGH: Did you improve on triangle setup? How many tri/s could you set up before, and how many now?
NVIDIA: Up to one triangle per clock, like G80.
PCGH: Is the GF8800 GT's VP identical to the one used in G84/G86?
NVIDIA: Yes
PCGH: Is the mentioned TDP of 100 watts the absolute maximum on stock clocks? Can you give a rough estimate as to what percentage of the TDP one would typically experience in a 3DMark-run?
NVIDIA: 110W is the max board power.
PCGH: Will the G80 chip continue to be used only in the highest-end cards, e.g. GTX and Ultra?
NVIDIA: Yes, absolutely. GTX is more than 20% faster than the 8800 GT at 25x16 4xAA.

PCGH: What about AGP? Is there a new bridge chip coming to support GF8800 GT chips?
NVIDIA: Sorry, no comments on unannounced products.
PCGH: SLI: Did you improve on inter-card bandwidth? What kind of protocol do you use for SLI communication? Something like PCI Express?
NVIDIA: We use a combination of PCI Express and the SLI bridge. This remains the same on GeForce 8800 GT.
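Out of curiosity, here's what those figures work out to in raw throughput terms. A minimal sketch in Python; the 600MHz core clock is my assumption from the stock 8800 GT launch specs, while the unit counts come straight from the answers above.

```python
# Back-of-envelope throughput from the interview's figures.
CORE_CLOCK = 600e6        # Hz, assumed stock 8800 GT core clock (not in the interview)
TPCS = 7                  # from the interview
TEX_OPS_PER_TPC = 8       # "up to 8 texture ... ops per clock, per TPC"

tex_units = TPCS * TEX_OPS_PER_TPC            # 7 x 8 = 56, as quoted
print(f"Texture units:  {tex_units}")
print(f"Peak fillrate:  {tex_units * CORE_CLOCK / 1e9:.1f} GTexels/s")  # 33.6
print(f"Triangle setup: {CORE_CLOCK / 1e6:.0f} Mtris/s")  # "one triangle per clock"
```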
 
That's a bizarre statement really.

Yes, the GTX is roughly 20% faster than the GT in benchmarks; however, migrating it to G92 would surely provide an even larger benefit?
 
The question was "Will the G80 chip continue to be used only in the highest-end cards"

As opposed to using the G80 in lower end cards.

I guess this means the 8800gtx is gonna stick around, maybe drop in price.

It doesn't rule out a new G92-esque top-end card... an 8850 or so.
 
however, migrating it to G92 would surely provide an even larger benefit?
Perhaps, but the very high temps that people are seeing with the GT makes me think that the GT is pushing the G92 architecture pretty close to the limits anyway.

Personally, I'm waiting to see what the RV670 is like before making a commitment either way. After all, the G92 is effectively a die-shrink of the G80 - just look at how many of the answers in that interview are "the same as G80". If the RV670 (as a die-shrink + some extra bits) gives the equivalent improvement over the R600, then perhaps this hysteria over the GT is misplaced.

Just my tuppence worth. I reserve the right to be proven wrong :)
 
Reading that interview makes me laugh.


"PCGH: Does the G92/GF8800 GT chip support double-precision to be used in the Tesla-line of cards? If so, what about the number of cycles compared to single-precision?
NVIDIA: The GeForce 8800 GT does not support double precision DX10.1.
PCGH: Did you improve on the number of registers per thread? If I remember correctly from your CUDA documents, there were about 10.6 for every thread.
NVIDIA: Registers per thread are the same.
PCGH: What is the minimal thread size? Did it change compared to G80?
NVIDIA: It's still the same.
PCGH: Are the TPC still organized as 16x SIMDs or did you improve on granularity?
NVIDIA: They are the same.
PCGH: Did you improve the geometry shader amplification performance, esp. under heavy load?
NVIDIA: No, it's the same as G80.
PCGH: Did you improve on triangle-setup? How many tri/s can you setup before and now?
NVIDIA: Up to one triangle per clock, like G80.
PCGH: Is the GF8800 GT's VP identical to the one used in G84/G86?
NVIDIA: Yes
PCGH: Will the G80 chip continue to be used only in the highest-end cards, e.g. GTX and Ultra?
NVIDIA: Yes,

PCGH: SLI: Did you improve on inter card bandwidth?
NVIDIA: We use a combination of PCI Express and the SLI bridge. This remains the same on GeForce 8800 GT."


How about asking, "Have you actually improved anything?".
 
Perhaps, but the very high temps that people are seeing with the GT makes me think that the GT is pushing the G92 architecture pretty close to the limits anyway.
I reserve the right to be proven wrong :)
I would have said that's down to the dinky little single-slot cooler; the TDP is way down. The performance/watt ratio is miles better than the G80's. How about all the 700MHz overclocks on the stock GT cooler? Doesn't sound borderline tbh.
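To put a rough number on "miles better" - a sketch only: the 110W is NVIDIA's figure from the interview, the ~145W GTX board power is the commonly quoted figure and an assumption here, and I'm taking the "20% faster at 25x16" claim at face value:

```python
# Crude performance-per-watt comparison, GT vs GTX.
GT_POWER = 110.0    # W, from the interview
GTX_POWER = 145.0   # W, commonly quoted GTX board power (assumption)

gtx_perf = 1.0
gt_perf = gtx_perf / 1.2          # GTX "more than 20% faster" at 25x16 4xAA

ratio = (gt_perf / GT_POWER) / (gtx_perf / GTX_POWER)
print(f"GT perf/W vs GTX at 25x16 4xAA: {ratio:.2f}x")   # ~1.10x
# At lower resolutions, where the GT runs much closer to the GTX, the gap
# widens towards ~1.3x (145/110 if performance were equal).
```

So even in the GT's worst-case setting it comes out ahead, and at the resolutions most people actually run, the margin is far bigger.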

PCGH: Will the G80 chip continue to be used only in the highest-end cards, e.g. GTX and Ultra?
NVIDIA: Yes, absolutely. GTX is more than 20% faster than the 8800 GT at 25x16 4xAA.
You'll notice that's where the GT's memory/frame buffer gives up, nothing to do with the core. Limit the G80 to 256-bit and see what happens then; of course it's quicker at that res + AA.
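A quick sketch of the arithmetic behind that, assuming the commonly quoted stock memory clocks (900MHz GDDR3, 1800MT/s effective, on both cards):

```python
# Memory bandwidth: 8800 GT (256-bit) vs 8800 GTX (384-bit), plus the
# colour+Z footprint at 2560x1600 4xAA. Clocks are assumed stock figures.

def bandwidth_gb_s(bus_bits: int, effective_mt_s: int) -> float:
    """Peak bandwidth in GB/s: bytes per transfer x transfers per second."""
    return bus_bits / 8 * effective_mt_s / 1000

print(f"8800 GT:  {bandwidth_gb_s(256, 1800):.1f} GB/s")   # 57.6
print(f"8800 GTX: {bandwidth_gb_s(384, 1800):.1f} GB/s")   # 86.4, i.e. +50%

# Rough framebuffer cost at 25x16 with 4xAA: 4 samples per pixel,
# 4 bytes colour + 4 bytes Z/stencil per sample.
pixels = 2560 * 1600
fb_mb = pixels * 4 * (4 + 4) / 2**20
print(f"Colour+Z at 25x16 4xAA: ~{fb_mb:.0f} MB")          # ~125 MB
```

That's a quarter of the GT's 512MB gone before textures, while the GTX has 768MB and 50% more bandwidth to feed it, so no surprise the gap opens up at that setting.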
 
I would have said that's down to the dinky little single-slot cooler; the TDP is way down. The performance/watt ratio is miles better than the G80's. How about all the 700MHz overclocks on the stock GT cooler? Doesn't sound borderline tbh.


You'll notice that's where the GT's memory/frame buffer gives up, nothing to do with the core. Limit the G80 to 256-bit and see what happens then; of course it's quicker at that res + AA.

Most GTs are hitting 700MHz? From most reviews I can see, the max overclocks are averaging much lower.

Also, it seems the GT becomes shader bound past a certain point, which really ain't good.
 
Most GTs are hitting 700MHz? From most reviews I can see, the max overclocks are averaging much lower.

Also, it seems the GT becomes shader bound past a certain point, which really ain't good.

Would this be the case with a superior cooler strapped on top?
I really can't wait for the first aftermarket coolers, maybe something along the lines of a GTS cooler.
 
Where did I say reviews?

You didn't, but I presumed you meant those since, AFAIK, no one here has hit that 700 yet.

I quote: "all the 700MHz overclocks".

700 is a push for these cards, not the walk in the park everyone thought it might be.

A third-party cooler may change the situation if they are thermally limited.
 
700/1700/2000 seems fairly common. I don't think it will be all that unusual. One guy I was reading tried four from stock, and they all made over 700.

http://www.xtremesystems.org/forums/showthread.php?t=163620&page=14
http://www.xtremesystems.org/forums/showthread.php?t=163960
http://www.xtremesystems.org/forums/showthread.php?t=163774

I was more referring to the card/core's architectural headroom and overclocking potential, 'even' with the stock cooler. But they do run hot, and poor case ventilation combined with the card dumping its heat load right into the case won't help at all. What I think you'll see is that most 750+ results are outside the case, i.e. on the desktop. That leads to a pretty obvious conclusion: the cooler's probably not much good.

Asgard made 700 on the stock cooler btw.

3DMark06 running at 700/2000. First stab, just dialled it in :D
E6600 running at 3.6GHz

Card temp 89°C, fan totally silent!!


3DMark Score 13,278 3DMarks


SM 2.0 Score 6158 Marks
SM 3.0 Score 5811 Marks
CPU Score 3243 Marks
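For perspective, here's what those overclocks work out to against stock. A sketch: the 600/1500/900 stock figures are assumed from the launch specs, and memory is usually quoted at its effective rate, so "2000" means 1000MHz actual.

```python
# Overclock headroom vs assumed stock 8800 GT clocks (MHz).
stock = {"core": 600, "shader": 1500, "memory": 900}
oc    = {"core": 700, "shader": 1700, "memory": 1000}  # the "700/1700/2000"

for domain, base in stock.items():
    gain = oc[domain] / base - 1
    print(f"{domain:>6}: {base} -> {oc[domain]} MHz (+{gain:.0%})")
# core: +17%, shader: +13%, memory: +11% - solid gains, and the core
# headroom on the stock cooler is the number being argued about here.
```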
 