
More bad news for Nvidia, if true.

There are a lot of 'ifs' in the story, though.

If they cut the shaders in half, maybe that will allow a much higher clock speed?

That's not how it works: if you cut the shaders in half, you cut the die size close to in half. It's NOT the power output or requirement that stops the 480GTX clocking higher; it's the inability of the individual transistors to switch accurately at higher speeds. 300W over a 23mm x 23mm die (~530mm^2) gives roughly the same power, and the same heat per mm^2, as a 150W chip on a die of roughly 16mm x 16mm (~265mm^2).
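To make that power density point concrete, here's a minimal sketch; the die dimensions are rough illustrative approximations from the post above, not measured values:

```python
# Rough power density comparison: halving a die's area while halving its
# power leaves watts-per-mm^2 (and so heat per mm^2) essentially unchanged.
# Die sizes are illustrative approximations, not measured values.

full_die_side_mm = 23.0                 # ~23mm x 23mm full chip
full_die_area = full_die_side_mm ** 2   # ~529 mm^2
full_die_watts = 300.0

half_die_area = full_die_area / 2       # shaders cut in half -> ~half the area
half_die_watts = full_die_watts / 2     # ...and ~half the power

print(f"Full die: {full_die_watts / full_die_area:.2f} W/mm^2")
print(f"Half die: {half_die_watts / half_die_area:.2f} W/mm^2")
# Both print ~0.57 W/mm^2: same heat per unit area, so no extra clock headroom.
```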

The temps will be marginally better, but not by much, as you end up with cheaper cooling as well as less overall power to shift.

Cutting the chip in half won't raise its ability to clock higher, or at least not noticeably.

As I've been saying for a while, Nvidia is going to be completely uncompetitive for the whole generation. A high-end core that is 60% bigger than the competition, with less than half the yields, will also end up as a mid-end core that's 60% bigger than the competition, with likely significantly worse yields (though probably not quite as bad comparatively).

Mid-end is basically always half the high-end; you don't, under any circumstances, bring out a whole new architecture for the mid-end. They might cut off some of the uncore, and some DP stuff, but the DP stuff is a marginal die-space saving, and remember AMD makes the same savings; the cache, if it's gone or reduced, might save 5% that AMD can't match. 55% bigger, with lower yields: it won't be cheaper, and it won't have any more of a performance improvement than the 480GTX has over the 5870 (i.e. little, and not worth the price difference).

They can make them, they can sell them, but long term they can't make a good profit on a low-yield massive core. They'll likely be staying with TSMC, and TSMC's 28nm is likely to be a good 20% larger than the same chip made on GloFo's 28nm due to the design path they've gone down. Nvidia HAVE to go small-core to be competitive long term and to still be in the market beyond a couple of generations from now. You can survive a couple of unprofitable products, but you can't sell products that make a loss forever, and Fermi will simply not yield them good profits. It's a production nightmare due to its size, and when the competition is getting 90% of the performance from 55-60% of the die size, you're screwed. When the TSMC/GloFo split happens, that could mean a core under half the size with far, far higher yields.
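As a rough illustration of why die size hurts yields so disproportionately, here's a minimal sketch using a simple Poisson defect model; the defect density is an assumed placeholder, not a real foundry figure, and the small-die area is the commonly cited ~334mm^2 for the 5870's core:

```python
import math

# Simple Poisson yield model: yield = exp(-area * defect_density).
# The defect density below is an assumption for illustration only.

defects_per_mm2 = 0.002              # assumed density for an immature process

small_die_mm2 = 334                  # ~5870-class die (commonly cited figure)
big_die_mm2 = small_die_mm2 * 1.6    # a core ~60% bigger

yield_small = math.exp(-small_die_mm2 * defects_per_mm2)
yield_big = math.exp(-big_die_mm2 * defects_per_mm2)

print(f"Small die yield: {yield_small:.0%}")   # ~51%
print(f"Big die yield:   {yield_big:.0%}")     # ~34%
# Fewer candidate dies per wafer AND a lower fraction of them working:
# cost per good die climbs much faster than the 60% area difference.
```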
 
He's a journalist; he doesn't know anything unless his contacts tell him. A classic has to be: the 480 is rated at 300W; oh wait, we were wrong, it's 295W, the engineers changed it at the last minute, lol; and it ends up being 250W. If you're going to fail like that, at least fail with a little more dignity than 'We'll guess at 300W, and when we're wrong, we will say they changed it at the last minute, so we're covered'.

Now add that reasoning to every article he writes and there you have it. I'll give it a rest when he gives it a rest.

Get it right: EVERY SINGLE WEBSITE has said the 480GTX is not 250W. Every single last one; even the most fanboy of fanboy websites shows, under the same load, the 480GTX pulling 110-130W more than a 180W 5870. Do the math: it's not a 250W GPU. Look at me, I'm going to go get a label, write 8W on it, and stick it on my P2 quad-core box... that doesn't actually mean it uses 8W though, does it?

When every single website in the world that reviewed one and tested power shows a 100W+ difference in power, it's clear without a doubt that the 480GTX uses much more than 250W.
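A minimal sketch of that arithmetic, assuming the ~180W 5870 baseline and the 110-130W gap the reviews are said to have measured:

```python
# Implied GTX 480 board power, assuming the 5870 draws ~180W under load
# and reviews measured a 110-130W gap between the two cards.

hd5870_watts = 180
gap_low, gap_high = 110, 130

implied_low = hd5870_watts + gap_low     # 290W
implied_high = hd5870_watts + gap_high   # 310W

print(f"Implied GTX 480 draw: {implied_low}-{implied_high}W (vs the 250W label)")
```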

If I'm wrong, or he's wrong, please link to a review that shows a power draw less than 100W different from a 5870's.
 

It wasn't the power draw figures; it's the idle power I'm talking about.

Let's say, hypothetically, there is a 100W+ difference in power draw like you say; well, he still fails, as he wasn't 'accurate', and that pun was intentional.
 
There are two possibilities:

1) the 5870s don't actually pull around 180W, and the 480 really does pull 250W
2) the 5870s pull a lot more than 180W, and the GTX really pulls closer to 300W


A lot of confidence is being placed on the figure for the 5870's draw without much in the way of proof, it would appear. I'm going to sit down and work it out later; I reckon I can get to a figure from a card I KNOW has been tested correctly - i.e. the card itself and not the PC's draw.
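For what it's worth, here's a minimal sketch of how you might back a card's draw out of whole-system wall readings; the PSU efficiency and every figure below are placeholder assumptions, not real test data:

```python
# Estimate a GPU's load power from wall-socket readings.
# All numbers below are placeholders; plug in real measurements.

psu_efficiency = 0.85          # assumed PSU efficiency at this load
wall_idle_watts = 200.0        # whole system at the wall, GPU idle
wall_load_watts = 480.0        # whole system at the wall, GPU loaded
gpu_idle_watts = 30.0          # assumed idle draw of the card itself

# Convert the wall (AC) delta to DC power actually delivered to components,
# then attribute the idle->load delta to the GPU.
dc_delta = (wall_load_watts - wall_idle_watts) * psu_efficiency
gpu_load_watts = gpu_idle_watts + dc_delta

# NOTE: this attributes the whole idle->load delta to the GPU; if CPU/system
# draw also rises during the test, it overstates the card's share - which is
# exactly why measuring the card itself is more trustworthy than the PC's draw.
print(f"Estimated GPU load draw: {gpu_load_watts:.0f}W")
```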
 

Add to that list the times he's been wrong about, or misunderstood, the details on tessellation and how the different architectures would perform with it.

That said though, he's generally in the right ballpark, if not 100% correct; he just doesn't always tell the whole story :D
 
Wasn't it after the 480 "launch" that everyone was like, damn, SA is actually quite close to the truth?

Three weeks later everyone has forgotten and is slating him again. Fanboys, from both sides, crack me up.

Hope the competition keeps up so we get the best bang for buck, green or red :)
 
I haven't forgotten any of it. I haven't forgotten that he presented the information that he was given, and that newer information was different in key areas to what he'd said previously. I also haven't forgotten that he was the first to point these discrepancies out in his own articles (which he is good at).

I mean, good god, things change. Info changes, and it's his job to report it as it comes, and by and large he's got it right. Again I will say: far closer to the truth than anybody else has been.

There are a lot of people who are slating Charlie simply because he's the most prominent figure in the 'anti-Nvidia' camp when, actually, he's just reporting the news. The problem is people don't like hearing what they don't want to hear.

There are many people on here who absolutely slated Charlie in the beginning, yet as it gets closer to launch it's suddenly 'oh, he's wrong, but he's mostly about right'. Give me a break.
 

I don't think even Nvidia knew the specs until a week before "launch", to be fair. There's no way on god's green earth that card only draws 250W, either.
 

The guy got Fermi pretty much spot on from what I've seen of his news articles. I've never understood why he gets such a slating either.
 
I've already been over it once in another thread: he misunderstood the implementation of the PolyMorph engine and its load-balancing aspect, and how it would stack up against the dedicated unit on the 5800 series - which was then peddled by other people as fact.
 

Oh, you mean this: http://forums.overclockers.co.uk/showpost.php?p=16092141&postcount=104

Rendering 1 million polygons via any method carries the same rendering time penalty, no matter whether they are the output of tessellation or an original high-detail mesh loaded at run time. Tessellation does not give you faster performance at the same quality as such; it just makes it easier to implement high detail on the fly with seamless dynamic LOD adjustment. That results in better performance (unless someone writes some complex visibility algorithms for the alternative) and avoids the pop-in/out effects that alternative methods suffer from.
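As a sketch of what 'dynamic LOD adjustment' means in practice, here's a toy distance-based tessellation factor of the kind software computes per patch; the function name and constants are made up for illustration, not taken from any real engine:

```python
# Toy distance-based tessellation LOD: nearby patches get subdivided heavily,
# distant ones barely at all, so polygon count tracks what's actually visible.
# Names and constants are illustrative, not from any real engine.

def tess_factor(distance, near=5.0, far=100.0, max_factor=64, min_factor=1):
    """Drop the subdivision factor linearly from max at `near` to min at `far`."""
    t = (distance - near) / (far - near)
    t = min(max(t, 0.0), 1.0)            # clamp to [0, 1]
    return round(max_factor + t * (min_factor - max_factor))

for d in (5, 20, 50, 100):
    print(f"distance {d:>3}m -> tessellation factor {tess_factor(d)}")
# A 1M-polygon mesh loaded outright costs the same to render wherever it sits;
# tessellating on the fly means far-away patches never generate those polygons.
```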

A question, then: if the PolyMorph engine dynamically adjusts the LOD, does that mean benchmarks such as Unigine are actually a load of tosh?

You could dynamically balance any set amount of tessellation and keep it running at probably any frame rate you want, correct?

So, given that there's no really discernible difference between stupidly-high and high tessellation in said benchmark, does that mean the GTX 480 is actually just doing what it should be doing - i.e. adjusting the LOD?
 
You're misunderstanding the dynamic LOD aspect: the engine adjusts LOD based on things like distance and what's being fed into it by the software (Heaven). The hardware does the same thing (it would have to); for the same input data you'd get the same output data regardless of the brand of hardware. The software controls the LOD used.


I was actually wrong (kinda - it depends on the method, hardware/software, and some other aspects in play) on one point: the tessellation products are based on data that's already been transformed from world space, so it saves some time compared to computing millions of polygons raw. Although this point is arguable as to the actual performance gains, due to other factors.
 