drunkenmaster said “As per usual Pottsey tends to ignore logic and have a glorified take on things he really really likes.”
Ok, I might sometimes take a slightly glorified approach to things, but I never ignore logic. Can you point out one place in this thread where I ignored logic? In fact, any thread. Just because you don't agree with me or don't understand something, it does not mean I am ignoring logic. That's the one thing I never do, and if I am wrong I admit it.
drunkenmaster said “You're now trying to compare future generations of mobile products, to current desktop stuff, which will double in power roughly speaking, every generation.”
That's my point. By 2012, mobiles will have hit the power of today's mid/high-end desktop cards. Desktops are only doubling every generation, while mobiles are more than doubling every generation. Mobile chips are now right behind the better mid-range desktop cards and look set to catch up, if not overtake them.
Mobiles are advancing at a rapid rate, far faster than desktops, and with a far more efficient design.
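To make that scaling argument concrete, here is a rough Python sketch using purely illustrative numbers (the starting gap and the exact growth rates are assumptions for the sake of the example, not measured figures):

# Illustrative scaling sketch: assume desktops double per generation while
# mobiles quadruple, starting from a mobile part 16x slower than a
# mid-range desktop part (all numbers are assumptions, not benchmarks).
desktop_perf = 16.0
mobile_perf = 1.0
for gen in range(1, 5):
    desktop_perf *= 2.0   # desktop "doubles every generation"
    mobile_perf *= 4.0    # mobile "more than doubles" -- 4x assumed here
    print(f"gen {gen}: gap = {desktop_perf / mobile_perf:.0f}x")
# Output: 8x, 4x, 2x, 1x -- the gap halves every generation for as long as
# mobile keeps growing twice as fast as desktop.

Whatever the exact numbers turn out to be, the point is that a faster growth rate closes the gap generation by generation.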
drunkenmaster said “Before making such bold claims maybe you should stablish the fact that the current competition is close, because if the current gpu is 20times slower, getting 20 times faster won't even keep up with a Tegra that doubled its gpu power”
But the current competition isn't close? How do you work that one out? The next-gen Tegra 2 chips are barely faster than PowerVR's last generation, if you can even argue they are faster, as the game benchmarks so far show them as slower. It's only a few synthetic benchmarks that show them as a little faster. Take a closer look at Nvidia's marketing hype: when NV say they are faster, they are comparing this year's NV generation against PowerVR's old chips.
NV being 20% faster than PowerVR's old chip isn't very impressive when PowerVR's new chip is 4x to 8x faster, depending on the device.
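To spell out the arithmetic, taking the old PowerVR chip as the baseline and using the 20% and 4x to 8x figures claimed above as given:

# Relative performance sketch using the figures from this post as given.
old_powervr = 1.0                     # baseline
tegra2 = old_powervr * 1.2            # "20% faster" than the old PowerVR chip
new_powervr_low = old_powervr * 4.0   # low end of the claimed 4x-8x range
new_powervr_high = old_powervr * 8.0  # high end of the claimed range
print(f"new PowerVR vs Tegra 2: {new_powervr_low / tegra2:.1f}x "
      f"to {new_powervr_high / tegra2:.1f}x faster")
# -> roughly 3.3x to 6.7x, which is why a 20% lead over the old chip
#    doesn't look impressive against the new one.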
drunkenmaster said “Likewise, you're basing all of this on the likely figures of a full 16 core gpu, what power will a 16 core version use, will it fit into mobiles, or will a 16 core version only be a tablet gpu.”
No one ever said whether the Series 6 chip is 1 core or 16; they only gave performance numbers. On power, all they have said is significantly enhanced performance per square millimetre (mm²) and per milliwatt, with no final details. Assuming this is all true, we should be talking well under 100 milliwatts. Very impressive technology.
drunkenmaster said “But its not in the ballpark of desktop gpu's, pretending so is silly, really really silly.”
How are 2660 million polygons per second and a fill rate of 80 GPixel/s not in the ballpark of desktops? Even this year's chips, at 133 million polygons per second and a fill rate of four billion pixels per second (4 GPixel/s), are in the ballpark of desktop GPUs. Granted, not the high-end GPUs, but they have caught up with desktops instead of being years behind.
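For what it's worth, here is the generational jump implied by those figures (both sets of numbers are the ones quoted in this thread, not independently verified):

# Generational jump implied by the quoted figures (unverified).
current = {"polygons_per_s": 133e6, "fill_rate_pixels_per_s": 4e9}
series6 = {"polygons_per_s": 2660e6, "fill_rate_pixels_per_s": 80e9}
for metric in current:
    print(f"{metric}: {series6[metric] / current[metric]:.0f}x increase")
# Both metrics work out to roughly a 20x jump over this year's chips.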
drunkenmaster said “Tegra is a SOC, Powervr is not, Tegra could use a powervr gpu inside its SOC”
That would be great for me, as I would make even more money, but I don't see it happening. As for saying Nvidia could bring out a new GPU: what's the chance of them breaking the roadmap and bringing out something much better than what's on it? Have they ever even done that before?
drunkenmaster said “Die size, power usage, actual speed, actual features, none of these things are known.”
Pretty sure I posted all of that for the Series 5 MP chip, although very little is confirmed for the Series 6 chip. What do you want to know about the Series 5? I will post it if I can.
drunkenmaster said “Will it only be 210gflops in 16 core version which will use 2-3W, that would be almost certainly worse performance/flop than the gpu in bobcat. Is it 210gflops per 2 core pair, and only uses 0.1W and fitting 8 into a mobile is possible, I don't know that, you don't know that, no one seems to know that. “
Well, they said it has improved power consumption, so the worst case should be 100 mW for one core and 1600 mW for 16 cores. Most likely it will be much better.
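As a quick sketch of that worst-case estimate, assuming a flat 100 mW per core and purely linear scaling with core count (both are assumptions, not confirmed figures):

# Worst-case power estimate: 100 mW per core, linear scaling (assumed).
power_per_core_mw = 100
for cores in (1, 2, 4, 8, 16):
    total_mw = cores * power_per_core_mw
    print(f"{cores:2d} cores: {total_mw} mW ({total_mw / 1000:.1f} W)")
# 16 cores -> 1600 mW (1.6 W) worst case; the real figure would be lower
# if per-core power consumption improves as claimed.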