Where's Pottsey?

What difference does your edit make, at all?

The first line, pre-edit, basically suggests the K1 is winning onscreen. Your edit adds the fact that you realised it refers to whether a screen is attached or not, nothing more or less.

It doesn't suggest you realise the onscreen tests run at vastly different resolutions. Again, your first line implied a K1 win, and your edit doesn't even particularly indicate that the offscreen test is a resolution equaliser to some degree. Your edit doesn't affect or change your first line, nor is it relevant to Pottsey's posts or mine.

K1 got stomped, badly, a couple of months after it became available... which almost everyone on earth except Nvidia knew would happen. T3 was "fast" because no one else was making a new-gen 40nm device; it beat the new-gen 28nm devices to market by a few months but was utterly destroyed by the first 28nm devices on power and performance. Nvidia think they can beat physics, again, while also believing that ploughing resources into a product that will last 3-4 months is better than being early with a 28nm product that can last 18-24 months.

It was predictable with T3: everyone said putting resources and time into a 40nm chip when 28nm/A15 was months away was a mistake, and it turned out to be a mistake. An ounce of sense could see the problem coming, and hindsight proved it was a bad decision after the fact. Nvidia then went and did exactly the same thing again, everyone else predicted the same problem, and the outcome is identical... how anyone at Nvidia thought it was a good idea I don't know.
 
Last edited:
[benchmark images]

http://anandtech.com/show/8666/the-apple-ipad-air-2-review/4 I think it's fair to say Anandtech does the best SoC testing in the business.

K1 seems to put up a pretty incredible showing against Apple's brand-new 20nm part. Imagine how much of an actual curb stomp (not a made-up "because I hate Nvidia" one) it will be when Nvidia move to 20nm.

If you look at another article on Anandtech, it seems Apple actually had to combine two separate GPUs with their own glue logic to get that performance.
 
Last edited:
What difference does your edit make, at all?

Well, apart from making your reply redundant... lol


The first line, pre-edit, basically suggests the K1 is winning onscreen.

Indeed, as that's what the benchmark appeared to show. Then, when I realised the mistake, I added the edit to point out that on/offscreen doesn't mean what it sounds like, and thus the Apple part is actually doing much better when no screen is connected (it's only hindered in the onscreen tests because it has a higher-resolution screen).
 
Last edited:
That isn't what your edit suggested at all. You added a line stating you realised what offscreen meant, not that it had anything to do with resolution differences or that the two tablets have different onscreen resolutions. There is zero indication of this, not least because you didn't actually edit the line that still implied you believed the K1 won in games/onscreen.

Stating that you realised what the offscreen test means doesn't change the rest of your post. You may have meant all of that, but your edit says only, with no other implication at all, that you understood what offscreen meant.

If you wanted to change the intention of your post you would have edited out the bit where you were suggesting the K1 was faster; you didn't.

That is ignoring that onscreen/offscreen results don't necessarily scale. A company that is predominantly a GPU maker, building a "gaming" tablet whose primary use may be HDMI out to a big-screen TV, might design an SoC with better bandwidth and/or one tuned around 1080p performance. Another tablet may not run at 1080p and may not be tuned to perform well at 1080p at all. Thus the performance difference at effectively 1080p vs 1600p onscreen could be significantly different from the offscreen performance when both are rendering at 1080p.
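To make that concrete, here's a minimal sketch of normalising onscreen results by pixels pushed; the resolutions and frame rates are hypothetical placeholders, not figures from the review:

    # Hypothetical example: normalising onscreen results by pixels pushed per second.
    # The resolutions and frame rates below are invented for illustration only.
    def pixel_throughput(fps, width, height):
        """Pixels rendered per second at a given onscreen frame rate."""
        return fps * width * height

    # Tablet A renders its onscreen test at 1920x1200, tablet B at 2048x1536.
    a = pixel_throughput(40.0, 1920, 1200)   # ~92 Mpix/s
    b = pixel_throughput(38.0, 2048, 1536)   # ~120 Mpix/s
    print(f"B pushes {b / a:.2f}x the pixel throughput of A")   # ~1.30x

    # A posts the higher onscreen fps, yet B is doing roughly 30% more work per
    # second, which is exactly why the offscreen (same-resolution) runs exist.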
 
I think it's fair to say Anandtech does the best SoC testing in the business.

K1 seems to put up a pretty incredible showing against Apple's brand-new 20nm part. Imagine how much of an actual curb stomp (not a made-up "because I hate Nvidia" one) it will be when Nvidia move to 20nm.

If you look at another article on Anandtech, it seems Apple actually had to combine two separate GPUs with their own glue logic to get that performance.

In practice, we see that the A8X's GPU is extremely close to NVIDIA's single Kepler SMX across the board, with the sole exception of 3DMark, which seems to be due to the dependencies present in the physics test that dramatically lower the overall score. NVIDIA's Tegra K1 seemed incredible for its GPU performance earlier this year, so it's just as surprising to see Apple successfully rival NVIDIA in this area.

A gaming company focusing on GPU performance for a gaming tablet... gets beaten in gaming by a first-gen, not particularly radical, 20nm SoC. You managed not to mention that the summary of the GPU performance is exactly that, and that 3DMark is basically the outlier: Nvidia have been optimising for that benchmark for donkey's years while Apple don't care about it, and it's also the physics test which shows the real difference.

However

As one can see, the iPad Air 2 is one of the best performers on this test, considering its frame rate and runtime. While NVIDIA's GK20A GPU in Tegra K1 can get close to the GX6650 for short periods of time, over a long workload it's pretty clear that the GX6650 on 20nm has better sustained performance and significantly superior efficiency as it doesn't throttle until the 200th iteration of the test. It's important to note that the iPad Air 2 is running at a higher native resolution here, so relative to SHIELD Tablet a scaling factor needs to be estimated in order to get an idea for performance at the same resolution.

This is the key point from the battery/longer-run test. K1 performance drops significantly, NOT due to battery life but because it's basically running WAY beyond what it can sustain long term due to thermal limits. The iPad Air 2's performance is effectively sustained performance; it is designed to run that fast without the device burning you. The Nvidia tablet is starting off at a speed it simply can't sustain, using way more power, and as the temperature of the device increases the performance drops significantly.

We're comparing apples to oranges here. It's like saying that for the first minute of some FPS game the two tablets' performance is similar, but from the fifth minute until the third hour of playing the Air 2 gives the same performance while the K1 drops 30-40%. Two-minute tests suit a company making overpowered chips that are consistently designed to look good in benchmarks but drastically degrade in the real world; they're a good way to skew results. In the real world the iPad Air 2 is designed to run at sensible clock speeds, not to give you 5 minutes of double clocks before it has to back off.
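To put that averaging effect in rough numbers, here's a minimal sketch; the frame rates, throttle point and session length are invented for illustration, not taken from the review:

    # Toy model of peak vs sustained performance over a longer session.
    # All numbers here are hypothetical, purely to show the averaging effect.
    def session_average(peak_fps, sustained_fps, throttle_after_min, session_min):
        """Average fps over a session where the chip throttles after a few minutes."""
        hot_minutes = max(session_min - throttle_after_min, 0)
        cool_minutes = session_min - hot_minutes
        return (cool_minutes * peak_fps + hot_minutes * sustained_fps) / session_min

    # A chip that benches at 60fps but throttles to 40fps after 5 minutes:
    print(session_average(60, 40, throttle_after_min=5, session_min=30))   # ~43.3
    # A chip that runs a steady 50fps for the whole session:
    print(session_average(50, 50, throttle_after_min=5, session_min=30))   # 50.0
    # The chip that wins the short benchmark is the slower one over a 30-minute game.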

It's getting a spanking any way you look at it. Even at unsustainable power usage it's getting beaten in everything but 3DMark... and it might be losing in 3DMark too; I'm not sure whether that runs at native resolution, but if it does, it's losing even at the higher short-term clocks (as the iPad is pushing 50% more pixels). It's losing everything else at the higher unsustainable clocks, and the performance difference increases the longer it's in use.

Anandtech also really isn't the best benchmarker here, because it shows these results and then at the bottom of the GPU results effectively states "oh, by the way, when it comes to sustained runs/power usage, that might change the results entirely". Nowhere in the review does it actually do a benchmark comparison of the K1 vs the GX6650 after a long run.

That page of results should show the 20th run or so of the tests, not peak (and basically not at all real-world) usage with a disclaimer at the bottom basically saying "you might not get this performance at all", followed later in the review by "oh yeah, looks like the K1 basically can't compete". It's giving two entirely different pictures to readers, especially considering 90% of readers will go straight to the benchmarks and ignore things like power/battery. Anandtech also mention the importance of scaling the numbers for resolution to get a "real picture" of performance... but fail to do this in the benchmarks at all.

Anandtech managed to highlight every reason the numbers aren't very representative, yet still show the unrepresentative numbers, painting the K1 in a much better light than it deserves, while failing to actually show the numbers they suggest reflect real-world usage, which stay almost identical for the iPad but drop drastically for the K1.

That is benchmarking almost at its worst: knowing the numbers drop, stating why they will drop, stating why the numbers need scaling, and then failing to do any of it is pretty pathetic.
 
Last edited:
There is zero indication of this, not least because you didn't actually edit the line that still implied you believed the K1 won in games/onscreen.

I didn't edit the line because I added a correction; that's how you're supposed to do it, as opposed to ninja-editing the original post. It's not my fault if you failed to understand it. It's perfectly logical: if the Apple part only lost because of the screen attached, that means it has a higher-resolution screen and it's faster GPU vs GPU. /facepalm
 
Last edited:
Usual DM wall of text with very little substance behind it

I see the reality of K1's performance not matching the narrative you had built up in your head has got to you.

Now, let's take a look at those offscreen results that seem oh so important to you.

[offscreen benchmark charts]

To be fair, that's a 3-1 win to the A8X, but in each case a very narrow one. I'd say certainly not even a 10% advantage for the Apple part.

Now, Anandtech did a further bit of digging on the GPU in Apple's latest and greatest 20nm SoC. It turns out that the GPU is anything but "not particularly radical". It seems Apple have gone way beyond that and crafted their own custom GPU design by glueing two high-end GPUs together: http://anandtech.com/show/8716/apple-a8xs-gpu-gxa6850-even-better-than-i-thought . So Apple have shot for the moon this time on the GPU, yet after all that it's pretty much a wash between it and the 28nm K1. So... once again, imagine how much further ahead the K1 will be once it too drops to 20nm; it won't be pretty for the A8X.
 
Well done for not reading. Read the bit where the K1 is running at much higher clocks, benches higher for a few minutes and then drops in performance, while the iPad Air 2 is running at clock speeds that are actually usable long term. If you play a game for 30 minutes, the performance the K1 gets in those benchmarks is simply something you WILL NOT SEE. On the iPad Air 2 you'd see the same or very close to it.

You could jack up the clocks on the Air 2's GPU by 50-100% and get impressive benchmarks, but long term it would have the same drop Nvidia does.
It's an apples-to-oranges comparison: Apple using realistic clocks, Nvidia using silly clocks that aren't sustainable. Again, Anandtech say this in their article, but fail to actually take it into account in the main benchmarking, giving an entirely unrealistic expectation.

Shooting for the moon by sticking two existing lower-end GPUs together is not radical; it's something that has been done for years, and "sticking together" is a particularly basic way for Anandtech to put it. Architecturally it's last gen, in fact architecturally (per-shader, per-cluster performance) several years old. The same die space spent on a Series 7 design would be significantly faster. It's an old GPU architecture that absolutely is NOT radical. Scaling up an old architecture isn't even close to radical; it's extremely conservative. GPU architectures are pretty much scalable by design, with almost every GPU being effectively a collection of smaller blocks: a 2-cluster block, or for 4 clusters two 2-cluster blocks, or for 6 clusters three 2-cluster blocks.

Apple radically, amazingly, made an 8-cluster GPU which is... four 2-cluster blocks... OMG. I don't think anyone even used a 6-cluster GPU at 28nm because that would have been pretty big; I believe Apple went with the 4-cluster version, then, with a new process where you can to a degree double everything, they doubled the GPU, which is pretty predictable. The conservative part is that they went with a pretty old architecture and design. Series 7 will be significantly faster and more efficient per mm² used.

But as above, after 30 minutes of usage the iPad Air 2 will be wiping the floor with the K1.
 
Last edited:
Ah, righty then. Anandtech's benchmarking is flawed because it does not fit your anti-Nvidia narrative. Never mind the fact they have been the first to pull up many Android OEMs on GPU benchmark shenanigans. OK Chuckles, don't you ever get tired of carrying that can?

The facts are, the K1 matches the performance of Apple's state-of-the-art custom 20nm GPU while on a 28nm process. What do all the boys and girls at home think will happen when the K1 moves to 20nm? Answers on a postcard, and the winner pulled from the hat gets nothing but the satisfaction of another impotent DM rant.

Oh, and some more pesky facts. Series 6 is the current PVR state of the art, with products just hitting the market very recently. Series 7 was only announced a few days ago, and we won't see it in any shipping products for at least a year, so cool your jets on that one, big guy.
http://www.anandtech.com/show/8706/imagination-announces-powervr-series7-gpus-series7xt-series7xe
 
http://anandtech.com/show/8666/the-apple-ipad-air-2-review/5

If you look at this, over time the K1 slows down; the iPad Air 2 basically doesn't.

The iPad Air 2 is now beating the K1 while pushing a 50% higher resolution. That makes it more than 50% faster. Even accounting for 20nm, the K1 uses a ridiculous amount more power. If Apple took the clocks up and unleashed more performance, killing battery life as Nvidia is willing to do for benchmarks, they would be 50%+ ahead across the entire benchmark suite.

Nvidia's willingness to run unsustainable clocks to fool people into buying what they think is a fast GPU will only leave customers unhappy.

Again: 50% higher resolution, faster in continuous use, and much better battery life.

EDIT: I didn't call Anandtech's benchmarks flawed; Anandtech pointed it out. I quoted it above when you wouldn't read it, so I'll quote it again, from Anandtech's review.

Of course, GPU performance is a function of both peak performance and sustained performance, so we'll have to look at the GFXBench battery rundown test to get the full picture here.

They are posting peak numbers, but the most crucial thing to point out here is that Apple are not using high clocks which then throttle down, BUT THEY COULD. If they did, their peak numbers would rocket another 50%. Nvidia is lying to you: Nvidia is showing you the performance for the first 2-5 minutes, but nothing like the performance you get after that. Play a game for 10 minutes and you'll see performance nothing like what Nvidia advertise. Play the same game on the iPad Air 2 and you'll still see very, very close to that performance an hour or two later.

If Apple played the same game, their peak would be 50% higher; their continuous performance numbers ARE 50% ahead once the Nvidia tablet has throttled down to what are likely similar clocks/power usage to what Apple offers.

It's like the first set of benchmarks being a GTX 980 at 1500MHz against a 290X at 800MHz, while the continuous-use benchmarks have both clocks set at 800MHz and the performance is entirely different.

One company is willing to mislead their customers completely about effective performance, the other isn't.

Anandtech twice in the review point out that Nvidia CAN'T sustain that performance and that continuous and peak are entirely different. Continuous is the only one that matters, unless immersive gaming on an Nvidia tablet includes closing the application every 3 minutes to let it cool down. While Anandtech point out that the iPad Air 2 is trashing the Nvidia tablet in continuous performance, AND they point out that continuous-use performance is crucial AND very different from peak performance, they still mostly show peak performance benchmarks, which makes for a very poor review.
 
Last edited:
Once again, 20nm only just managing to beat 28nm. What do the boys and girls beating their heads against their desks as this dance is played out yet again think will happen with 20nm K1?

Edit: aaaand now we have reached the wild overreaction stage. Yay!
 
Last edited:
Once again, 20nm only just managing to beat 28nm. What do the boys and girls beating their heads against their desks as this dance is played out yet again think will happen with 20nm K1?
There is a lot more to it than that. The Tegra K1 is running at almost half the performance per watt. Just going down to 20nm alone wouldn't be anywhere near enough for it to catch up.

Or to put it another way, two PowerVR cores running at very low speeds and very low power are beating 192 Tegra cores running at very high power and high speeds.
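As a rough way of framing that perf-per-watt gap, here's a minimal sketch; the fps and wattage figures are placeholders I've made up, not measured numbers:

    # Placeholder figures only, to illustrate how a perf-per-watt gap is framed.
    def perf_per_watt(fps, watts):
        return fps / watts

    k1_eff  = perf_per_watt(46, 8.0)   # hypothetical sustained fps and power draw
    a8x_eff = perf_per_watt(49, 4.5)   # hypothetical sustained fps and power draw
    print(f"A8X perf-per-watt advantage: {a8x_eff / k1_eff:.1f}x")   # ~1.9x

    # On numbers like these, Tegra would need roughly a 2x (100%) perf-per-watt
    # improvement just to draw level, which a single process shrink alone is
    # unlikely to deliver.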
 
Last edited:
Yes it would. The only thing the A8X has right now is the basic advantage a smaller process brings of less power draw and heat generation.

Let's revisit this when we get the first 20nm K1 parts; I imagine it will be uncomfortable reading for some.
 
Yes it would. The only thing the A8X has right now is the basic advantage a smaller process brings of less power draw and heat generation.

Let's revisit this when we get the first 20nm K1 parts; I imagine it will be uncomfortable reading for some.
I do not agree. The A8X has a lot of massive advantages, like being a tile-based deferred rendering architecture. The smaller process alone does not account for the double power-draw efficiency over Tegra. But I am very happy to come back to this, and I believe it will be uncomfortable reading for the Tegra fans. I have seen enough of the next-generation PowerVR Series 7 to feel no threat from Tegra. When power draw and heat are limiting factors, as in mobile, a tile-based deferred rendering architecture has a massive advantage over the method Nvidia use. That is why Nvidia abandoned the IoT and phone markets and are under 1% market share in the tablet market.

Tegra pretty much needs a 100% improvement in performance per watt to catch up to Series 6. By the time the next-gen Tegra comes out it will need a 200% improvement in performance per watt, as it will be up against PowerVR Series 7.
 
It's worth pointing out that 20nm is one of the smallest process updates in terms of power/performance; it's a "bad" process. The reason everyone is switching to FinFET (and "fake" 16nm) so quickly is that that's where the real power drop comes from. 20nm has an advantage, but it's half that of, say, the drop from 40nm to 28nm.

It doesn't even remotely come close to accounting for the performance/power difference.


But once again, for the people who are refusing to listen: Anandtech show quite clearly that in continuous use (the only mode in which anyone will actually use the tablet) it has higher performance while pushing 50% more pixels. It's only 3fps ahead in that test, 46 vs 49fps, but again it's pushing 50% more pixels. That isn't just beating it, that is 50% faster, and that isn't something Nvidia will merely claw back just by going to 20nm.
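Taking the figures quoted above at face value (the ~3fps gap and the claimed 50% pixel difference), a quick sketch of the normalisation:

    # Figures from the post above: ~49fps on the iPad vs ~46fps on the K1 tablet,
    # with the iPad claimed to be pushing 1.5x the pixels. That 1.5x factor is the
    # claim being made here, not an independently measured value.
    ipad_relative = 49 * 1.5   # relative pixel throughput
    k1_relative   = 46 * 1.0
    print(ipad_relative / k1_relative)   # ~1.60

    # Normalised for pixels pushed, the 3fps gap works out to roughly 60% more
    # throughput, which is where the "more than 50% faster" framing comes from.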


http://anandtech.com/show/8666/the-apple-ipad-air-2-review/5

If a GTX 980 was offering 3fps more but at 1600p vs a 290X at 1080p, you can be sure as hell you'd be pointing that out. But here you're just ignoring it, and ignoring the massive throttling, because it's inconvenient. I haven't seen specified clocks yet, but on the A7 the PowerVR GPU was rated at 650MHz yet run at around 350MHz in the iPhone because those were the sustainable clocks. Apple could enable 650MHz clocks that work for 3 minutes before throttling, at which point it would smash the K1, being 50% faster in all the benchmarks; that is all Nvidia has done. It's not a faster GPU, it's just unrealistically clocked to make benchmarks look better, and it CAN'T sustain those clocks in real-world usage at all.

It's right there again: higher performance while pushing a 50% higher resolution, with almost double the battery life. Nvidia will gain battery life and some performance, but 50% from the process alone is not at all likely. 20nm offers, give or take, a 25% power drop at the same performance; most processes historically offer a 45-50% drop. It's half as good a node as usual, and Nvidia can't hope to gain 50% performance from moving to 20nm alone.

Nvidia CHOSE to release a 28nm chip a couple of months before Apple released a 20nm chip. Rather than be a couple of months behind it, on 20nm with a competitive product, they went the T3 route: extremely late at 40nm and made to look woeful by the first 28nm chips months later. It backfired then, and it will backfire now.

50% behind in a few months, ouch, same as happened last time.

By the time Nvidia have a 20nm version out, Apple will have the A9 out, maybe at 16 or 14nm, and it will most likely have a Series 7 GPU, which will bring another large bump in performance.
 
Last edited:
Yawn.

I see that in your haste to trash anything and everything Nvidia, you missed that the last benchmarks I posted were all offscreen tests; now, I would hope you know what that means. As you seem to have some fundamental comprehension issues, I'll spell it out: they are all rendering at exactly the same resolution, so no 50% penalty at all.

It must have pained you that the A8X didn't smash the K1 as you so fervently hoped and wished. It's also amusing that apparently a shrink won't bring any real improvements for the K1, yet the next shrink for Apple will, in your own words, "bring another large bump in performance". That's pretty transparent, even for you.

Now, as I like to use pesky facts and evidence, even when talking to hilariously blinkered fanboys, let's see just how the K1 operates under heavy load, shall we?
[sustained GPU load benchmark charts]

Oh no! What is this? Could it be that the K1 can operate for almost 2 hours under a very intensive GPU load before any thermal throttling occurs? How can this be!?
 
Last edited: