Tegra X1 (Maxwell for mobile devices / car)

Gotta love Nvidia's marketing headlines: the 1 TFLOP number is for a rather specific use case, and not really comparable with most of the other figures that get quoted.

Most GPU 'FLOPS' figures are quoted for FP32, where the X1 manages 512 GFLOPS. However, under certain circumstances it can pack two FP16 ops into a single vector operation in the FP32 pipeline (it has no dedicated FP16 pipelines, so in the worst case an FP16 op just occupies an FP32 slot), and only under those conditions can it claim the magic teraflop...

It also assumes a 1GHz GPU clock, which seems high for a phone/tablet device, even on 20nm...
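As a back-of-envelope sketch of where those numbers come from (assuming the 256 Maxwell CUDA cores from Nvidia's announcement and the usual one-FMA-per-core-per-clock counting; the snippet is just illustrative arithmetic):

```python
# Back-of-envelope maths for the Tegra X1's headline FLOPS numbers.
# Assumptions: 256 Maxwell CUDA cores (the announced configuration),
# the quoted 1 GHz GPU clock, and one FMA (= 2 FLOPs) per core per clock.
CORES = 256
CLOCK_HZ = 1.0e9       # 1 GHz -- optimistic for a passively cooled device
FLOPS_PER_FMA = 2      # a fused multiply-add counts as two operations

fp32 = CORES * FLOPS_PER_FMA * CLOCK_HZ   # plain FP32 throughput
fp16 = fp32 * 2                           # FP16x2 packing doubles it

print(f"FP32: {fp32 / 1e9:.0f} GFLOPS")          # -> FP32: 512 GFLOPS
print(f"FP16 packed: {fp16 / 1e12:.2f} TFLOPS")  # -> FP16 packed: 1.02 TFLOPS
```

The 'magic teraflop' only exists when every operation is an FP16 FMA that gets packed in pairs, at a clock the chip may never sustain inside a sealed device.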

It is an improvement on the K1 (obviously), but by the time K1 devices were out in the wild the performance wasn't exactly a standout leader: high-end and decent, but not staggeringly better than the competition of the day. I can see the same happening here...
 
They did the same with the K1; these numbers will be for the "full fat" X1 with no power/thermal restrictions. That said, it should still be very powerful in tablets (it will probably never make it into a phone).
 
It looks a little underpowered and under-specced for when it's due out; many of its key specs are lower than today's chips'. I don't understand some of the choices: they cancelled 16nm and chose 20nm when most are moving to 16nm that generation. So yet another generation where Nvidia will be a process node behind. Why?
 
Can you give me a comparison? (CPU clock frequency doesn't really count in ARM land.)

PowerVR GXA6850
Clusters: 8
FP32 ALUs: 256
FP32 FLOPs/clock: 512
FP16 FLOPs/clock: 1024
Pixels/clock (ROPs): 16
Texels/clock: 16

That's what we have now, and it will be replaced by a new generation by the time the X1 arrives. I don't see how the X1 is going to compete against Series 7 or the others it will actually be up against. Being only 20nm is really going to hurt the X1.

The Series 7 PowerVR GT7900 will have 16 shading clusters (512 FP32 ALUs, or 1024 when running FP16), be over 60% faster clock-for-clock and cluster-for-cluster against Series 6, and looks to be on a 16nm process.
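As a sketch of how those quoted figures stack up per clock (ALUs assumed to do one FMA, i.e. two FLOPs, per clock, matching the table above; the X1's 256-core count is from Nvidia's announcement):

```python
# Per-clock throughput from the ALU counts quoted in this thread.
# Assumption: each FP32 ALU does one FMA (2 FLOPs) per clock, and the
# FP16 rate is double via each chip's half-precision path.
chips = {
    "PowerVR GXA6850": 256,   # FP32 ALUs (table above)
    "PowerVR GT7900":  512,   # quoted Series 7 figure
    "Tegra X1":        256,   # Maxwell CUDA cores from the announcement
}
for name, alus in chips.items():
    fp32 = alus * 2   # FLOPs per clock at FP32
    print(f"{name}: {fp32} FP32 FLOPs/clock, {fp32 * 2} FP16 FLOPs/clock")
```

On paper that puts the X1 level with today's GXA6850 and at half the GT7900, before clock speeds enter into it.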
 
Have we got any real-world benchmarks from shipping devices for the GXA6850 or GT7900? (No leaks, and nothing run on an OS the TK1 can't run.)
 
I like how NV are spinning this to the moon.

They likely couldn't get it below 10W, so of course it's not gonna be in a phone. But they see fit to compare it to the A8 anyway.
 
The GXA6850 is out now in products, but there are no fair benchmark comparisons, as the K1 is unable to run the Metal API, which gives the GXA6850 a very large speed boost in real-world apps. There are no benchmarks for the GT7900, only specs. The GT7 line is what the X1 will be up against. Just to be clear, the GT7 line is not shipping now; it's due to ship at around the same time as the X1.

What I was trying to get across is that the X1's specs are around the GXA6850's level, which is half of what the X1 is actually going to have to compete against.
 
From the article in OP

In one particularly involved test, the NVIDIA team measured the voltage usage rates of the X1 versus the iPad Air 2... after tearing apart eight of them and downclocking the X1 so both were at an equivalent level of performance. In the end, average power consumption for the Air 2 was 2.6 watts, versus 1.5 watts for the X1, a pretty significant power savings

Nvidia do always spin these things and tend to neglect the power draw, but a 2.6W -> 1.5W saving at the same performance level isn't bad at all.

Don't get me wrong, I am probably more cautious of the Tegra products than most, but they have always been at the bleeding edge of ARM graphics on Linux and Android.
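To put that saving in perf-per-watt terms, a quick bit of arithmetic on the article's figures (just the 2.6W and 1.5W numbers at matched performance; nothing here beyond those):

```python
# Perf-per-watt from the article's matched-performance test.
air2_w = 2.6   # iPad Air 2 average power (article figure)
x1_w = 1.5     # downclocked Tegra X1 at the same performance level

saving = 1 - x1_w / air2_w   # fraction of power saved
efficiency = air2_w / x1_w   # relative perf-per-watt at equal performance

print(f"Power saving: {saving:.0%}")               # -> Power saving: 42%
print(f"Perf/watt advantage: {efficiency:.2f}x")   # -> 1.73x
```

The caveats raised later in the thread still apply: it's an Nvidia-controlled test platform with better cooling than any shipping tablet.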
 
What that tells me (no fair GXA6850 comparisons, and no GT7900 benchmarks at all) is that we still can't compare the TK1 to the GXA6850, and we can't really compare products that aren't on the market yet.

For a chip-to-chip comparison we need to use the same API (and OS if possible), otherwise it's like comparing an R9 290 running Mantle on Windows to a 970 running OpenGL on Linux :/
 
People love to rag on AMD for marketing, but they're complete hypocrites for giving NV a pass. Nvidia never give a straight answer on Tegra power consumption because they can't bloody hit the envelope. They desperately want to be in phones, but their chips don't cut the mustard.

And look at this, for god's sake:

[Image PV4kHpA.png: Nvidia slide showing Unreal Engine scaling from a 10W mobile chip to a 100W games console]
 
Anyone read the preview on AnandTech? For all those who were ragging on the Nvidia Tegra K1 for its power requirements (and rightfully so): in one test the demo kit they were sent used 1.5 watts, compared to 2.5 for the A8X.
 
Agreed with the point on marketing; in fact I feel that Nvidia are at times worse than AMD for marketing lies / misinformation.

That image is a very good example: all it is really saying is that Unreal Engine can scale from a 10W "mobile" device up to a 100W games console, but what it implies is that the 10W device performs the same as the 100W one.
 
Yeah, running Manhattan at 1080p with both the A8X and the TX1 aiming for 33fps, the TX1 ran at 1.5W power consumption, a watt less than the A8X.
 
Pottsey is a die-hard PowerVR fan; he is always talking about theoretical products from their marketing and comparing imaginary chips to real-world product tests.

I think he has shares in PowerVR, IIRC.
 
He's trying to compare apples with apples: if Nvidia and Apple both release a new SoC sometime mid to late 2015, then comparing the GPUs in them to each other makes a hell of a lot more sense than Nvidia comparing a GPU in an unreleased product to a product that has been out for some time.

Series 7 is a HUGE efficiency increase over Series 6, the same way Maxwell is over Kepler... I don't see Nvidia fans claiming that isn't true. When the (let's presume) Apple A9X is out in a product next year, will we be comparing the new Tegra to that, or to this year's product?

Apple went a bit weird with the A8X: it's a slightly updated Cyclone and a last-gen GPU. Rather than go Series 7, they basically stuck two old Series 6 GPUs together, so while the GPU may be similar in die size, a single Series 7 part with the same number of shaders would perform way higher than the A8X does now.

Nvidia is comparing an unreleased product that won't be available for some time (if we presume, based on when the K1 was announced, that we'll be lucky to see the new Tegra before the middle of the year, it will land around 10 months after the iPad but only 1-2 months before the next one). Nvidia consistently compares against previous-gen products, and ends up releasing something that compares favourably to what came out the year before but poorly to things released within a couple of months of it. Same thing every year with Nvidia/Tegra, literally the same thing every single year.

Also note that power efficiency improves considerably at lower temperatures: again, a big heatsink on a "test platform", not inside a tablet, being compared to a chip inside a working tablet with significantly worse cooling... and an Nvidia-controlled platform where we have no idea if the numbers are accurate.
 