*** Official Nvidia 9xxx Series Thread ***

Doesn't make it new IMO. The process is new, i.e. going down from 90nm to 65nm, that part is new, yes, but that new 65nm process still carries the same old 2006 8800 tech that the 90nm process did.
 
That's your opinion. We're not engineers, we don't know how complicated or easy this procedure is, but I doubt it's along the lines of 'hit this button to achieve die shrink'. Moving to a new process node in the GPU and CPU industry is usually a pretty big deal and a lot of noise is made about it.
 
What loadsa is saying is that the actual core is really just the exact same thing as the current 8800, it is just smaller.

Yes, by making it smaller the G92 draws less power and runs cooler, but performance wise it is nigh on exactly the same, clock for clock, as the older 90nm GPUs.

It's like the new 45nm Intel Core 2s: still a Core 2, just smaller. The architecture is exactly the same and clock for clock it performs the same as the 65nm Core 2s.

So you can say on one hand the G92 is new tech with regards to power usage and heat output efficiency.

But performance wise, clock for clock, they are exactly the same as the older GPUs. Their performance gain comes solely from the fact that they clock higher (as a side effect of the die shrink and the resulting heat/power efficiency gains).

The rest of a G92 card is actually inferior to the older part (256-bit bus, less memory, etc.).

As the 9800s are once again based on the 8800 architecture with some slight tweaks here and there, they can't really be classed as new tech, just old tech refreshed (again).

And with regards to the 9800GX2, that isn't even refreshed again, it is just two G92s slapped together.
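To put some rough numbers on that, here's a quick back-of-the-envelope sketch. The die area scaling is the ideal optical case and the clocks are only ballpark assumptions (575MHz-ish for the G80 GTX core, 650MHz-ish for the G92 GTS 512 core), so treat the output as illustrative rather than exact:

```python
# Back-of-the-envelope: what a 90nm -> 65nm shrink changes, and what it doesn't.
# All figures are illustrative assumptions, not official specs.

old_node_nm, new_node_nm = 90, 65

# Ideal optical shrink: die area scales with the square of the feature size.
area_scale = (new_node_nm / old_node_nm) ** 2
print(f"Ideal die area after the shrink: {area_scale:.0%} of the 90nm die")  # ~52%

# Same architecture means the same work per clock, so any gain has to come
# from the higher clocks the smaller, cooler die allows.
g80_core_mhz = 575    # assumed ballpark 8800 GTX core clock
g92_core_mhz = 650    # assumed ballpark 8800 GTS 512 (G92) core clock
print(f"Clock-for-clock identical, so expected scaling ~ {g92_core_mhz / g80_core_mhz:.2f}x")
```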
 
Why do people keep thinking that either nVidia or ATI have to have the fastest GPU in order to survive? All this 'ATI are finished' talk makes no sense when the bulk of the company's profit comes from the embedded and low-to-mid range.

Intel make way more money in the GPU market than either ATI or nVidia and they don't even have a discrete solution.

This is a performance discussion forum, so by all means bash ATI for not being able to catch up to nVidia for the past year or so, but please don't spell doom for the company just because it can't corner a tiny % of the market.
 
That's it exactly. :)

What I also don't get is how the lower power consumption etc. is such a big deal. If it was, then why did everyone still buy these cards back in 2006 when they were released? Why did they not just say "I'll hang on and buy this card in 2008 (or whenever it gets released again) when it uses less power and gives out less heat"?
 
TBH the power and heat efficiency is just a side effect of a die shrink, which is only done to cut production costs.
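To see why a shrink cuts production costs, here's a crude dies-per-wafer estimate. It ignores yield, scribe lines and edge loss, and the ~480mm² starting die size is just a round-number assumption for a G80-class chip:

```python
import math

# Crude dies-per-wafer comparison for a 300mm wafer.
# Ignores edge loss, scribe lines and yield; die sizes are assumptions.
wafer_area_mm2 = math.pi * (300 / 2) ** 2

g80_die_mm2 = 480                            # assumed ~480 mm^2 G80-class die at 90nm
g92_die_mm2 = g80_die_mm2 * (65 / 90) ** 2   # ideal optical shrink to 65nm

for name, die_mm2 in (("90nm G80-class", g80_die_mm2), ("65nm shrink", g92_die_mm2)):
    print(f"{name}: ~{die_mm2:.0f} mm^2 -> ~{wafer_area_mm2 / die_mm2:.0f} candidate dies per wafer")
```

Roughly twice as many candidate dies from the same wafer, which is the whole commercial point of the exercise.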
 
Loadsa and Hex, you show me what an 8800GTX or Ultra can do at even just 700MHz on the core without a voltmod or extra cooling and I'll support what you are saying.

Oh yeah, and in fact what about shader clocks, or even memory clocks? I can't see the GTX or Ultra competing with the G92 in those regards, but that wouldn't make sense now, would it, if it was all the same old tech :rolleyes:. Just check the 3DMark and Crysis threads, they support what I'm saying. The G92 with a bigger memory bus would be pretty darn decent, and that's what's holding the new G92 back from beating the GTX/Ultra.
 
Did you even read what I wrote?
 
Fair dos, I skimmed past most of your post after reading that you were backing up what loadsa says, which just isn't true.

You say clock for clock they are the same, but that isn't true, because even at the highest clocks on the GT/GTS the GTX will still beat it with AA/AF for HD gaming. I know it's down to the memory interface, but that doesn't mean clock for clock they are the same.
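The memory-interface point is easy to put numbers on. A quick sketch using the commonly quoted reference bus widths and memory clocks (partner cards vary, so take these as approximate):

```python
# Peak memory bandwidth in GB/s: (bus width / 8) bytes per transfer,
# GDDR3 transfers twice per memory clock. Reference-ish clocks assumed.
def bandwidth_gb_s(bus_bits, mem_clock_mhz):
    return bus_bits / 8 * mem_clock_mhz * 2 / 1000

cards = {
    "8800 GTX (G80, 384-bit @ 900MHz)":     (384, 900),
    "8800 Ultra (G80, 384-bit @ 1080MHz)":  (384, 1080),
    "8800 GT (G92, 256-bit @ 900MHz)":      (256, 900),
    "8800 GTS 512 (G92, 256-bit @ 970MHz)": (256, 970),
}
for name, (bus, mhz) in cards.items():
    print(f"{name}: ~{bandwidth_gb_s(bus, mhz):.1f} GB/s")
```

With roughly 60 GB/s against the GTX's ~86 GB/s, it's no surprise the G80 cards pull ahead once AA/AF and high resolutions start leaning on bandwidth.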
 
Yeah, the GTX and Ultra are still the top cards, so really what we're getting now is that same old 2006 tech, but made even worse by having the bus cut down to 256-bit. :p
 
Loadsa, you show me what an 8800GTX or Ultra can do at even just 700MHz on the core without a voltmod or extra cooling and I'll support what you are saying.

Oh yeah, and in fact what about shader clocks, or even memory clocks? I can't see the GTX or Ultra competing with the G92 in those regards, but that wouldn't make sense now, would it, if it was all the same old tech. Just check the 3DMark and Crysis threads, they support what I'm saying. The G92 with a bigger memory bus would be pretty darn decent, and that's what's holding the new G92 back from beating the GTX/Ultra.

It's not old tech m8. It's just your ramblings that are old :p.
 
All the critics dissing loadsa are wrong. Making a card faster in all those regards (clock speed etc., J.D) won't help if it's completely bottlenecked by the bus and RAM speed, which, to be fair, is the only clock that can't be pushed a lot higher on the new cards.

I've gone from a GT to a GTS, new ones, and tried a lot of overclocking, and it's obvious that the GTS is an extremely, extremely bottlenecked card.
That's not some stupid opinion because I'm jealous of anything, I have the card sitting right here in my own machine, and I'm telling you, it's not as good as it's made out to be. A GTX should NOT be replaced for the time being.
 
Great post.

The 8800GTS with a bigger memory interface would have beaten the GTX by a fair bit, I reckon. Imagine the core and shader speeds of the top overclocked GTS cards you can see on the Crysis benchmark thread applied to the GTX, boy would it fly.
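To put a number on that what-if (no such card exists, so the 384-bit G92 figure is purely hypothetical and the clocks are reference-ish assumptions):

```python
# Hypothetical: a G92 8800 GTS 512 with a GTX-style 384-bit bus. No such card
# exists; clocks are assumed reference figures.
def bandwidth_gb_s(bus_bits, mem_clock_mhz):
    return bus_bits / 8 * mem_clock_mhz * 2 / 1000   # GDDR3, double data rate

print(f"Real 8800 GTS 512 (256-bit @ 970MHz):  ~{bandwidth_gb_s(256, 970):.1f} GB/s")
print(f"Hypothetical 384-bit GTS 512 @ 970MHz: ~{bandwidth_gb_s(384, 970):.1f} GB/s")
print(f"8800 GTX (384-bit @ 900MHz):           ~{bandwidth_gb_s(384, 900):.1f} GB/s")
```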
 
Then it would be the same 2006 tech, just faster. :p

It is old. The G92s and the upcoming 9 series are what? Yup, the same 8800 tech. And when were those cards released? Yup, 2006, wasn't it. :p

Loadsa, stop avoiding the question and just answer it, for crying out loud.

Find me a GTX or Ultra that can do 700+MHz on the core without a volt mod.

Find me a GTX that can even get near 1800MHz on the shaders, never mind the 2000+MHz that I have seen on the GTS cards.

If you can do this then I'll support what you are saying, and please stop avoiding the question as I've had to write this three times now. If you can't, then will you agree that it isn't old tech?

I've already said that if you can find an Ultra or GTX that can do as mentioned above, I will support what you say. But if you can't, then will you be man enough to support what I say, as I have asked the question that puts down your theory? ;)

If the memory interface was the same then the GTS and even the GT would be beating the GTX with their core and shader speeds.
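One way to see why both sides have a point is to compare raw shader throughput against memory bandwidth. A rough sketch with the commonly quoted stream processor counts and clocks (assumed reference figures, so treat the ratios as indicative only):

```python
# Rough compute-vs-bandwidth comparison. Stream processor counts, shader clocks
# and memory clocks are the commonly quoted reference figures, not exact specs.
cards = {
    #  name                 SPs  shader MHz  bus bits  mem MHz
    "8800 GTX (G80)":      (128, 1350,       384,      900),
    "8800 GT (G92)":       (112, 1500,       256,      900),
    "8800 GTS 512 (G92)":  (128, 1625,       256,      970),
}

for name, (sps, shader_mhz, bus_bits, mem_mhz) in cards.items():
    shader_score = sps * shader_mhz / 1000            # arbitrary "GFLOP-ish" units
    bandwidth = bus_bits / 8 * mem_mhz * 2 / 1000     # GB/s, GDDR3 double data rate
    print(f"{name}: shader ~{shader_score:.0f}, bandwidth ~{bandwidth:.1f} GB/s, "
          f"ratio ~{shader_score / bandwidth:.2f}")
```

The G92 parts end up with noticeably more shader grunt per GB/s of bandwidth, which is exactly what 'bottlenecked by the 256-bit bus' looks like on paper.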
 
Can we sort this out with some handbags? :D

Frankly, I don't give a **** if it is old technology or not. If I can equal or improve upon my 8800GTX's performance (and I'll wait for proper benchies from decent drivers) with the added benefit of HDMI and optical audio, I'll more than likely buy the card. Unless, of course, it's stupidly priced, then I won't!
 
Lol, I know it's a bit extreme, but you try asking a question three times and getting ignored ;).

I agree with your way of thinking about replacing the GTX.
 
I've said it before and I will say it again: not having functional SLI on Intel chipset boards is hurting nVidia big style.

I'd have bought a second Ultra a long time ago if it worked. I don't want an nForce 780i; my X38 is just fine and more reliable IMHO.

So I would consider a 9800GX2, but honestly it seems like it might be another 3870 X2, and in six months' time we will have what we all really want.
 
If nVidia had their G100/G200 ready, wouldn't it be better to launch that against the 3870 X2 instead of spending time and money on the 9800GX2? At some point they need to change over to the new tech, and they'd have three or four months' lead on ATI at least, if ATI launch the R700 in Q2.
The 3870 X2 is here because the R700 isn't ready, so could it be the same story for the G100/G200?
 