NVIDIA Announces New GeForce 400M GPUs, Geared for Next-Gen Optimus, 3DVision Laptops

The direct competition for the GT420 from ATI is the 5550: 650 vs 700 MHz core, both using GDDR3 RAM at 900 MHz, but the ATI part is 39 W under load, with many, many IHVs going passive-cooled on them.


Saying that - what's the chance OCUK can get one in? I'm very willing to throw some benches and games at it ;)
 
I was commenting on this:

very very few people are going to be buying a laptop based around a "470" mobile chip with any regard for battery life.

Not sure how hard this is to understand. People were saying "nice range"; I said no, it won't sell, because the "performance" segment of laptops is about 1/1,000,000th the size of the performance segment in desktops. High-power parts fail there. Very few people will buy the 470M AT ALL: they'll hope to sell 10 million mobile GPUs, of which they'd be happy if 50k were 470M. 99% of people buying laptops DO NOT BUY high-end, high-power stuff, and those that do don't care about power - so what's your point? I haven't said they do. The point, for the umpteenth time, is that sales are ruled by volume, the LOW-end parts make ALL the profit, and their low-end parts are pretty abysmal.

As for direct competition, Harlequin: a 5570 is essentially 1/4 of a 5870, while a GT420 is pretty much 1/10th of a GTX480, so they aren't even close to direct competition. The 5570 should be pushing three times as fast as the GT420 and it uses less power, which is a truly awful situation to be in. The 80-shader part won't be significantly slower. Basically a GT420 will be horrible for gaming, as 80-shader parts are; a 400-shader part will probably be faster than the 96-shader GT420, and the 400-shader 5570 uses less juice than the 48-shader version.
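
To put rough numbers on that, here's a quick back-of-envelope sketch in Python. The shader counts and clocks are my own assumptions from public spec listings for the desktop parts, not figures anyone quoted in this thread, and theoretical GFLOPS is obviously not the same thing as game performance:

# Back-of-envelope comparison; the specs below are assumptions from public
# listings (desktop HD 5870/5570, GTX 480, GT 420 OEM), not thread figures.
parts = {
    # name: (shader units, shader clock in GHz)
    "HD 5870": (1600, 0.850),
    "HD 5570": (400, 0.650),
    "GTX 480": (480, 1.401),
    "GT 420": (48, 1.400),
}

def gflops(shaders, shader_clock_ghz, flops_per_clock=2):
    # Theoretical single-precision throughput: units x clock x 2 FLOPs (MAD).
    return shaders * shader_clock_ghz * flops_per_clock

print("HD 5570 / HD 5870 shaders:", parts["HD 5570"][0] / parts["HD 5870"][0])  # 0.25
print("GT 420  / GTX 480 shaders:", parts["GT 420"][0] / parts["GTX 480"][0])   # 0.1

gt420 = gflops(*parts["GT 420"])
hd5570 = gflops(*parts["HD 5570"])
print(f"GT 420 ~{gt420:.0f} GFLOPS, HD 5570 ~{hd5570:.0f} GFLOPS "
      f"(~{hd5570 / gt420:.1f}x on paper)")

On those assumed numbers the 5570 lands at roughly 3.9x the GT420 on paper, which is in the same ballpark as the "three times as fast" figure above.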

I'm also expecting - not sure if the minimum 6xxx-series card will have 160 shaders - but AMD really need a filler card between the 80- and 400-shader mark. A GT420, performance-wise, will probably be on par with a 160-shader AMD part; they just don't make one.

With Sandy Bridge equalling the 80-shader part now, I'm not sure if it's better to have an extra SKU at the 160- or even 240-shader mark (which should destroy a GT420), or to make the baseline 160 shaders. It's very difficult, because performance itself isn't even remotely required at the low end; the idea is to make something cheap enough to get a display working and that's it. So moving from 80 to 160 shaders decreases profit. But performance does matter when comparing two low-end cards: people like value for money and won't turn down twice the performance at the same cost. So between the lowest-end AMD part being twice as fast as Sandy Bridge, or merely matched by it, it looks FAR better if it's twice as fast.

Of course the 6-series could change the number of shaders in a cluster, in which case the minimum might be different to 80 anyway. I think it's time to see them bump the minimum performance up, for the good of everyone.


Anyway, it's an even bigger failure than I thought earlier, as the little tidbit they left out is that they are retaining the GT310/315 parts. I wondered why, then it hit me.

They can't get a 5-10W part out because they changed the granularity of the core from 16-shader clusters to basically a 3x shader cluster, with altered core logic for each cluster, mostly adjusting the ROP/TMU ratio per cluster.

The reason the lowest mobile GPU has 48 shaders is that GF104 doesn't scale down below 48 shaders, which is either a laughable oversight or they actually planned to rely on last gen's low end - which, again, is a laughably bad situation to be in.

So I was wrong: the GT310/315 will outsell the entire 4xx mobile range by a factor of 10,000 to 1, because they are the low-end parts. The problem is, Dell and co love new cards; they like selling "new" computers with brand new, latest-gen parts. Relying on last gen's low end for the bulk of sales is just bad business, even more so because the GT310/315 have been losing market share hand over fist as the likes of Dell move towards AMD, and more so as Apple move away from them in the mobile sector, which seemingly happens in the next year.

The naming gets even more odd, because they are now going to have a line-up of GT310/315 and GT415 up to GT470. Normally you'd expect Nvidia to rebrand those bottom two as GT410/415 and make the new generation GT420 and up. Though I guess because the GT310 was already rebranded from the GT210, and the GT3xx range is a joke with DX10.1 and DX10 parts mixed up, a GT4xx range with varying DX support wouldn't be justifiable.


Seriously though, designing a new generation but forgetting to include the ability to make a truly low-end part is very, very odd.
 
Actually, on reflection, I would say the 5550 (320SP) rather than the 400SP 5570. The limitation of the 5550 is the same as the GT420 - a pixel rate of 5.6 GPix/s - although with 320SP it's likely slightly ahead of the 48-CUDA GT420, notwithstanding that this won't be a gaming card :D
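
For anyone wondering where the 5.6 GPix/s comes from, pixel rate is just ROPs x core clock. Working backwards from that figure at the 700 MHz quoted earlier implies 8 ROPs - the ROP count is my inference here, not something stated in the thread:

def pixel_fill_rate_gpix(rops, core_clock_mhz):
    # Theoretical pixel fill rate in GPix/s: ROPs x core clock.
    return rops * core_clock_mhz / 1000.0

# 8 ROPs is an assumption chosen to be consistent with the 5.6 GPix/s above.
print(pixel_fill_rate_gpix(rops=8, core_clock_mhz=700))  # 5.6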

Of course it does point to at least three more cards - 96 CUDA, 128, and maybe 160 CUDA - depending on how much mileage NV want.


If any of the GTS450 numbers are true then you could scale that down to the GT420 - faster in some things and slower in others - and awful above 1280x1024 ;)
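
If anyone fancies trying that scaling exercise, a naive version looks something like this in Python. The core counts and shader clocks are my assumptions from spec listings, the 60 fps input is purely hypothetical, and real results won't scale linearly once memory bandwidth and resolution bite - which is exactly why it'd be awful above 1280x1024:

def shader_throughput(cuda_cores, shader_clock_mhz):
    # Relative shader throughput in arbitrary units (cores x hot clock).
    return cuda_cores * shader_clock_mhz

# Assumed specs: GTS 450 = 192 cores @ 1566 MHz, GT 420 = 48 cores @ 1400 MHz.
scale = shader_throughput(48, 1400) / shader_throughput(192, 1566)  # ~0.22

gts450_fps = 60.0  # hypothetical GTS 450 result in some game at 1280x1024
print(f"Naive GT 420 estimate: ~{gts450_fps * scale:.0f} fps "
      f"({scale:.0%} of the GTS 450)")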
 
I get the point of what you're saying; I'm countering one very small part of it, not disagreeing with the whole.

Seriously though, designing a new generation but forgetting to include the ability to make a truly low-end part is very, very odd.

It's not so odd when you realise that Fermi was not intended to go up against the ATI 5-series - the 200-series 40nm refresh was, and that does have plenty of application to low-end cards. Fermi wasn't designed for that generation, and hence it's not easy to make a true low-end part for it on a process it's already difficult to shoehorn it onto... it doesn't look so odd when you see it from the right perspective. I don't think people realise just how badly the design sits on the process either - i.e. the PolyMorph engines and several other parts of the core/shaders are only running at half clock due to problems with power usage and leakage.
 
Fermi was never meant to be, full stop. It's a truly awful architecture that was humped out of the door because Nvidia got complacent.
 
Actually, the architecture is quite good - what the problem was, and still is, is TSMC and their failed 40nm process.

ATI realised early on that TSMC had messed up, so to compensate for the fabs failing, they did a base-layer redesign and worked upwards - all within the poor 40nm process. This pushed back the 5xxx series but meant it would be usable from day one. Nvidia didn't do a base-layer redesign; they just redid the metal layer - twice, if the rumour mill is true - and here we have Fermi A3 as it is today. Actually the GF104 core is quite good - using less power and having better-than-expected performance (well, noise and heat, and of course tessellation).


It's just that it's late - and with ATI going to GF for 28nm (they also binned 32nm early when TSMC ran into trouble as well), NV have to play catch-up.
 
Fermi was never meant to be, full stop. It's a truly awful architecture that was humped out of the door because Nvidia got complacent.


There's some truth to that... I wouldn't call it truly awful. It was never intended for performance gaming usage - at least not this generation - and it did get kicked into play because nVidia got complacent. Not sure of the exact details, but from what I've heard the design was originally on the back burner as having potential for future GPGPU/high-performance computing usage.

The architecture, as above, is actually fairly decent as a design, despite what certain people say; it just doesn't work well on the process it's been rolled out on.
 
There's some truth to that... I wouldn't call it truly awful. It was never intended for performance gaming usage - at least not this generation - and it did get kicked into play because nVidia got complacent. Not sure of the exact details, but from what I've heard the design was originally on the back burner as having potential for future GPGPU/high-performance computing usage.

The architecture, as above, is actually fairly decent as a design, despite what certain people say; it just doesn't work well on the process it's been rolled out on.

You design and build to what's available, Rroff. Fermi might work on 22nm in two years. Until then it's dung, and by then it will be old dung.
 
[Image: Kings of Leon]


"your laptop's on fireeeeeeeeeeeeeeeee!"
 