Why no GTX 300 series?

The fact that Nvidia moved straight from the GTX 200 to the GTX 400 series seems strange to me.

I'm aware that there are some GT 300s, but I believe they are OEM-only parts and just rebranded 200 series cards.

Anybody know why this is?
 
In a nutshell, nVidia originally intended a 200 series refresh based on 40nm DX10.1 cores to replace the current 200 series and go up against ATI's Evergreen, but the chips didn't make the grade for high-end cards, and once ATI released its DX11 cards they were essentially obsolete. nVidia salvaged enough from the project to release a new low/mid range of cards, so it bumped the Fermi range to the 400 series to make space for them.
 
Because a jump of a mere 100 (GTX 380 for example) is not nearly enough to quantify the awesomeness of Fermi.

But seriously, I stopped trying to make sense of Nvidia naming schemes a LONG time ago.
 
I always assumed that nVidia would start populating the 300 series with re-brands and re-hashes at some point.

Maybe after the GTX 400 mid-range?
 
It already is: the 300 series is a mixture of renamed G92b cards (no kidding) and 40nm DX10.1 cores (basically renamed GT 240s, etc.).
 
They have already been released to OEMs and the US market; I'm not sure if/when they will be available at retail in the UK.
 
Link. 300 series cards are at the bottom left. If you look at this Wikipedia page, it confirms what Rroff says about taking G92b and GT200b parts, tweaking them slightly and selling them on to Dell, HP, etc.

I think I read somewhere that this was done at the request of the PC makers, as they wanted to advertise "new" cards to (uninformed) customers, since Nvidia wasn't moving very fast to get Fermi (and, more importantly, its budget/mid-range variants) out to the market.
 
The worst thing about it is where the G92b sits in the range. It's not as if the ridiculously old core is the bottom card with the newer, higher-spec parts above it: the GT 310, 320 and 340 are GT200b shrinks, while the GT 330 is the G92b-based card.

Everything except that single card and the higher-end mobile parts (also G92b) is DX10.1, while the G92b parts are obviously still DX10. So they only class the GT 300 series as DX10, even though everything 40nm was branded DX10.1 under the 200 series names.

Utter madness.
 
We don't know why there are no high-end 300 cards, but an educated guess would be something along the lines of Nvidia wanting its new DirectX 11 cards to have their own brand identity, i.e. the 400 series. If Fermi had launched as the 'GTX 380 and GTX 370' it could potentially have confused customers, and they don't want a repeat of the Ti 4200 situation.
 
From what I can make out, a good number of people at nVidia thought the new GF100 cards would be 3x0 at retail until literally days before they were announced as the 400 series. (So I do kinda believe it was done at the behest of the OEMs.)

As to why there are no high-end DX10.1 parts, that's down to the inability to get a design of that complexity working on the process, coupled with ATI releasing DX11 parts and forcing them to switch their effort into getting a DX11 card working. People talk about Fermi being late... but the design itself is insanely "on time" as design implementations go, especially given the difficulties of getting the design onto the 40nm process... which is another reason I'm not rushing to be an early adopter - in any other time/situation they wouldn't be bringing these out for another 4-5 months minimum.
 
I completely agree. All things being equal, Nvidia should have worked on Fermi for some time yet: refined the design, waited for TSMC to sort out its 40nm yield problems, and generally produced a GPU that is very fast, power-efficient and cool. However, for market reasons (AMD having had a DX11 part since Q3 2009), Fermi was pushed out of the door as fast as possible, with lowered clocks, disabled cores and high power usage.

Hopefully some GTX 485 and 475 chips will come out in a few months' time with some refinements. At the minute I would recommend that most people avoid them.
 
They're still late though, aren't they? Six months after the competition is late.

By the time any sort of 485 and 475 comes out, ATi will already have their Southern Islands chips out.

nVidia are just currently playing catch up.
 
Again, completely agree.

They are late to the market (i.e. consumers expect them to have GPUs equivalent to the HD 5***), but at the same time, the tech they did (eventually) bring to market is not what they wanted to release - just something to stop the competition entirely cannibalising the market.
 
It is what they wanted to release. The design just did not mesh well with the manufacturing requirements.

Nvidia taped out A1 and other test chips long before launch, and the design architecture did not change one jot.
 