
** The Official Nvidia GeForce 'Pascal' Thread - for general gossip and discussions **

Associate
Joined
18 Oct 2013
Posts
1,475
Location
far side of the moon
Big Pascal won't be here until at least the 4th quarter. Nvidia has never been good at node shrinks or moving to new memory; the last two times they failed badly. The replacements for their mobile parts are 3rd quarter at the earliest, but are said to most likely be 4th quarter.

Those chips make more money than standard desktop parts and will be near the top of the stack. Nvidia also needs to get a new professional chip out, as they have Intel and AMD breathing down their neck; they need to get something out there.

Along with the industry coming back saying Nvidia is 1 to 2 quarters behind AMD on their shrinks, this is just damage control.
 

bru

Soldato
Joined
21 Oct 2002
Posts
7,359
Location
kent
Nvidia would want to maximise profit.


So that means another card like the 970 then, as I bet they made more profit from all the 970s than they did from all the 980 Tis.

Of course we'll never actually know those sorts of figures, so it's only conjecture, but with the number of 970s sold I'm fairly sure it would work out that way.
 
Soldato
Joined
7 Feb 2015
Posts
2,864
Location
South West
Big Pascal won't be here until at least the 4th quarter. Nvidia has never been good at node shrinks or moving to new memory; the last two times they failed badly. The replacements for their mobile parts are 3rd quarter at the earliest, but are said to most likely be 4th quarter.

Those chips make more money than standard desktop parts and will be near the top of the stack. Nvidia also needs to get a new professional chip out, as they have Intel and AMD breathing down their neck; they need to get something out there.

Along with the industry coming back saying Nvidia is 1 to 2 quarters behind AMD on their shrinks, this is just damage control.

And AMD has also released a card that uses the newer memory and construction process, so they have pipe-cleaned for their next major card. Nvidia are still mostly inexperienced with it, so big Pascal may be as few and far between as the Fury cards were when it drops.
 
Caporegime
Joined
18 Oct 2002
Posts
32,624
I am a frequent user here and I haven't seen him say it. Jen-Hsun did way, way back, but of course those with a little knowledge would know that won't mean 10x the frame rate in Crysis, for example.

Nvidia said Pascal has 5-10x the performance of Maxwell in certain compute tasks; we now know from the Drive PX2 tests that this is true.

No one ever claimed that relates directly to games.
 
Caporegime
Joined
18 Oct 2002
Posts
33,188
Nvidia said Pascal has 5-10x the performance of Maxwell in certain compute tasks; we now know from the Drive PX2 tests that this is true.

No one ever claimed that relates directly to games.

PX2 hasn't confirmed that anywhere at all, not even close.

The slides they showed for PX2 compare a SINGLE Titan X against an entire PX2 unit. That is two GPUs and FOUR SoCs.

The biggest performance increase listed between a single Maxwell GPU and all of that (2 discrete GPUs, two more GPUs in the Tegras, one SoC that at first glance looks like it provides the fabric, and an image-processing SoC) is in..... image processing.

The rest of it is a 1 TFLOP increase from the GPU to the entire system, and a tripling of the 'deep learning' flops, which with the four separate SoCs can't remotely be attributed to the Pascal GPU alone. The biggest improvement Pascal makes over Maxwell is lower precision, meaning 8 TFLOPs single precision is equivalent to 16 TFLOPs half precision IF it scales perfectly; it may only provide a certain ratio, as with double precision. Yet they claim 3 times the TFLOPs in deep-learning performance... quite obviously a lot of that gain is from something other than half-precision flops, and from putting work not suited to the GPU onto the other SoCs.
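
Quick back-of-envelope on that half-precision point, in Python (purely illustrative numbers; the 8 TFLOPs figure is just the round number used above, not an official spec):

    # If FP16 packs two values per FP32 lane and scales perfectly,
    # peak half-precision throughput simply doubles.
    fp32_tflops = 8.0
    fp16_tflops_ideal = fp32_tflops * 2            # 16 TFLOPs

    # If the claimed 'deep learning' figure is 3x, the gain beyond
    # perfect FP16 packing has to come from somewhere else (the
    # other SoCs, more shaders, clocks, etc.).
    claimed_gain = 3.0
    fp16_gain = fp16_tflops_ideal / fp32_tflops    # 2.0
    leftover = claimed_gain / fp16_gain            # 1.5x unexplained
    print(fp16_tflops_ideal, leftover)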

Making claims about Pascal performance by comparing a Maxwell to 6 separate chips (2 Pascals, 2 Tegras and 2 other SoCs) is incredibly disingenuous. There is no way you can take PX2 numbers and infer how much Pascal has improved performance, and they absolutely didn't confirm claims of 5 times the compute performance.

Half of the gains Nvidia have claimed in compute performance come from a different interface in bandwidth-limited systems with 8 GPUs, and even then Nvidia felt the need to add a disclaimer that the figures were '*very rough estimates'.

Actually, I would say the best way to phrase what Nvidia claimed is up to 5 times the performance in certain compute tasks (all mixed precision or half precision only, NOT single-to-single or double-to-double precision improvements), and 2x on top of that from preventing bandwidth problems in specific systems. So Nvidia is only claiming up to 5x compute improvement, then 2x those improvements from NVLink, which is just an interface. It's not even about giving the card more bandwidth; it's about preventing the bandwidth to that card from dropping. I.e. one card in a system gets 16x PCIe, but 8 cards only share 2x lanes each (theoretically). It's not about giving one card 64 PCIe lanes' worth of bandwidth, but about preventing it dropping to 2x lanes each in 8-card systems and keeping it at, say, 4x lanes each, hence the 2x increase from NVLink. I wouldn't call that part of the architecture.

Today, if you could build a system with double the available PCIe lanes to the CPU, it would improve GPGPU compute performance the same way, and that really can't be considered a compute architecture/performance increase.
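
To put the lane-sharing argument in numbers (same theoretical figures as above, nothing measured):

    # 16 lanes shared evenly across however many GPUs are installed.
    total_lanes = 16

    def lanes_per_gpu(n_gpus, lanes=total_lanes):
        return lanes / n_gpus

    single = lanes_per_gpu(1)     # 16x for one card
    eight = lanes_per_gpu(8)      # 2x each for eight cards

    # An interconnect that keeps each of the 8 cards at the
    # equivalent of 4x instead of 2x is a 2x per-card bandwidth
    # improvement, without any single card exceeding 16x.
    print(single, eight, 4 / eight)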
 
Last edited:
Soldato
Joined
7 Feb 2015
Posts
2,864
Location
South West
I believe some of that performance improvement also comes from the direct GPU-to-GPU links provided by NVLink, which also help improve latency.

But I think the majority of the performance improvement comes from fixing the gimped double-precision performance on Maxwell, so it is rather disingenuous to say 10x performance over Maxwell in certain systems and to think of it as a massive jump, when previous compute cards had better double-precision performance.

I am sure that current PCIe traffic routes to the CPU and then back to the next GPU in the chain; I do not believe there are direct links between PCIe slots.
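
Rough numbers on the double-precision point, using the commonly quoted FP64:FP32 ratios (1:3 for Kepler GK110, 1:32 for Maxwell GM200); the FP32 figure is illustrative:

    fp32_tflops = 6.0                     # illustrative big-chip figure

    kepler_dp = fp32_tflops / 3           # ~2.0 TFLOPs FP64
    maxwell_dp = fp32_tflops / 32         # ~0.19 TFLOPs FP64

    # Merely restoring a Kepler-like ratio on a new chip gives
    # roughly a 10x jump in DP before any architectural improvement.
    print(kepler_dp / maxwell_dp)         # ~10.7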
 
Associate
Joined
19 Oct 2014
Posts
99
Location
Essex
With the die shrink, very likely. In layman's terms they can fit more transistors per mm², so as long as they have a decent-sized die I would expect some decent gains.
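
The idealised scaling math, for what it's worth (assuming a full node shrink halves the area per transistor, which real shrinks only approximate):

    area_scale = 0.5                      # idealised full-node shrink

    def transistors_after_shrink(before, scale=area_scale):
        # Same die area, half the area per transistor -> double the count.
        return before / scale

    # A hypothetical 8bn-transistor chip becomes ~16bn in the same area.
    print(transistors_after_shrink(8e9))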

Sorry, I realise I came off as a bit of a douche above there; I didn't mean to. I was just being a bit sceptical of the gains, but I'm hoping the die shrink will feed through to real-life performance.
 
Caporegime
Joined
18 Oct 2002
Posts
33,188
They might be referencing APU performance more than CPU. HBM on an APU in 2017 is going to allow AMD iGPUs to stretch their legs. If you took even a current APU and put it through a full node die shrink you could realistically fit double the shaders, but the current shader count is already bandwidth-limited. Going from system memory at 25GB/s to a single stack of HBM at 256GB/s, you'd go from heavily bandwidth-limited today to being able to easily support double the GPU power in an APU.

From what I recall, current APUs can put out about 1 TFLOP; 2.5 should be easy on a Zen-based APU: four much higher-performance cores, hyperthreading, and way more shaders.

CPU alone, can't really see it.
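
Sketching that in bytes-per-FLOP, using the rough figures from this post (25GB/s system memory, 256GB/s for one HBM stack, ~1 TFLOP for a current APU's iGPU):

    sys_mem_gbs = 25.0
    hbm_gbs = 256.0
    igpu_tflops = 1.0

    # GB/s divided by GFLOP/s gives bytes available per FLOP.
    now = sys_mem_gbs / (igpu_tflops * 1000)       # 0.025 bytes/FLOP
    hbm = hbm_gbs / (igpu_tflops * 1000)           # 0.256 bytes/FLOP

    # Even after doubling the shaders (2 TFLOPs), HBM still offers
    # ~5x more bandwidth per FLOP than today's setup.
    hbm_2x = hbm_gbs / (2 * igpu_tflops * 1000)    # 0.128 bytes/FLOP
    print(now, hbm, hbm_2x / now)                  # ratio ~5.1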
 
Soldato
Joined
7 Feb 2015
Posts
2,864
Location
South West
Also (I know this isn't GPUs) but WHAT is happening with CPUs in 2017 that we don't know about?!

According to that Nvidia slide, CPU DP is going to increase to ~250% next year?


EDIT: Referencing this article from earlier http://wccftech.com/nvidia-flagship-pascal-gpu-2h-2016/

Those slides are as reliable as soggy paper. They omitted the Maxwell dip, and the K80 has a third of its performance knocked off.

In other words, it's just PR guff. Nvidia has no idea what is happening in the x86 business, unless they are considering HSA APUs with HBM. But that would still be down to GPU DP rather than the floating-point units in x86 processors.
 
Caporegime
Joined
20 May 2007
Posts
39,931
Location
Surrey
They might be referencing APU performance more than CPU. HBM on an APU in 2017 is going to allow AMD iGPUs to stretch their legs. If you took even a current APU and put it through a full node die shrink you could realistically fit double the shaders, but the current shader count is already bandwidth-limited. Going from system memory at 25GB/s to a single stack of HBM at 256GB/s, you'd go from heavily bandwidth-limited today to being able to easily support double the GPU power in an APU.

From what I recall, current APUs can put out about 1 TFLOP; 2.5 should be easy on a Zen-based APU: four much higher-performance cores, hyperthreading, and way more shaders.

CPU alone, can't really see it.

Yes, I think you are correct. The graph is titled "GPU motivation", so it is probably referencing the graphics on an APU, not actual CPU power.
 
Caporegime
Joined
18 Oct 2002
Posts
33,188
As a question for those in here who believe Nvidia are close to or will be first to FinFET/the new generation: why do you think Nvidia are right now launching improved versions of their mobile GPUs in high-volume segments?

If Nvidia were going to be first, or at least in pretty much the same ballpark as AMD, with small and midrange parts on FinFET, there is absolutely no reason to increase current-gen performance and make the next gen look 5% worse in doing so. To me the sole reason is that you know you're going to get hit, hard, and likely for some time, and you want to reduce the performance gap to the thing you're going to lose to by as much as possible.
 
Last edited:
Associate
Joined
28 Jan 2010
Posts
1,547
Location
Brighton
They might be referencing APU performance more than CPU. HBM on an APU in 2017 is going to allow AMD iGPUs to stretch their legs. If you took even a current APU and put it through a full node die shrink you could realistically fit double the shaders, but the current shader count is already bandwidth-limited. Going from system memory at 25GB/s to a single stack of HBM at 256GB/s, you'd go from heavily bandwidth-limited today to being able to easily support double the GPU power in an APU.

From what I recall, current APUs can put out about 1 TFLOP; 2.5 should be easy on a Zen-based APU: four much higher-performance cores, hyperthreading, and way more shaders.

CPU alone, can't really see it.

This would make a lot of sense indeed.

It will be interesting to see what a Zen APU with HBM will be capable of. I'd still be in the high-performance camp myself, but maybe it'll manage genuine everything-you-need performance for a normal person (including the ability to stretch into things like 4K decode, PS4-like gaming, etc.).
 
Soldato
Joined
7 Feb 2015
Posts
2,864
Location
South West
This would make a lot of sense indeed.

It will be interesting to see what a Zen APU with HBM will be capable of. I'd still be in the high-performance camp myself, but maybe it'll manage genuine everything-you-need performance for a normal person (including the ability to stretch into things like 4K decode, PS4-like gaming, etc.).

The Zen APUs are said to outclass the current consoles in terms of performance. The CPU can easily do that, and I wouldn't be surprised on the GPU side of things either, along with a single 4GB stack of HBM2 on the package.

If Nintendo and AMD have things right, then it could be a Zen APU in the NX.
 
Soldato
Joined
29 May 2006
Posts
5,354
The Zen APUs are said to outclass the current consoles in terms of performance. The CPU can easily do that, and I wouldn't be surprised on the GPU side of things either, along with a single 4GB stack of HBM2 on the package.

If Nintendo and AMD have things right, then it could be a Zen APU in the NX.
This is going off topic, but if Nintendo have things right they wouldn't be using AMD for graphics but IMG. Using PowerVR graphics would give Nintendo standout features over the other consoles, far more than what AMD could do. PowerVR have a number of large advantages, one of which is real-time ray tracing, which is not something AMD or Nvidia can provide. Something like a high-end AMD CPU linked with a high-end PowerVR GPU would make a unique, standout console and give Nintendo advanced features to compete against the other consoles.

Outclassing current consoles isn't really that impressive these days.
 
Last edited:
Associate
Joined
18 Oct 2013
Posts
1,475
Location
far side of the moon
This is going off topic, but if Nintendo have things right they wouldn't be using AMD for graphics but IMG. Using PowerVR graphics would give Nintendo standout features over the other consoles, far more than what AMD could do. PowerVR have a number of large advantages, one of which is real-time ray tracing, which is not something AMD or Nvidia can provide. Something like a high-end AMD CPU linked with a high-end PowerVR GPU would make a unique, standout console and give Nintendo advanced features to compete against the other consoles.

Outclassing current consoles isn't really that impressive these days.
OT
Nintendo isn't going to use PowerVR after using AMD for the last three consoles; Nintendo has come back and said they are very happy with what AMD has given them and will continue to use them.

PowerVR is solid in mobile, but they have shown they couldn't keep up in high-end graphics; they fell behind and have stayed behind.
OT

DT brought up a good point: if the new mobile chips coming in the 3rd/4th quarter are refreshes of what they already have, it means they are further behind with Pascal than I originally thought and than the industry has said, as I thought these chips were small Pascal.
 
Soldato
Joined
29 May 2006
Posts
5,354
OT
Nintendo isn't going to use PowerVR after using AMD for the last three consoles; Nintendo has come back and said they are very happy with what AMD has given them and will continue to use them.

PowerVR is solid in mobile, but they have shown they couldn't keep up in high-end graphics; they fell behind and have stayed behind.

I don't agree, for a few reasons. First, Nintendo said they are very interested in the new PowerVR graphics and approached IMG about a next-gen console. Second, PowerVR never fell behind in the high end and can do some next-gen advanced graphics at well over triple the speed of high-end AMD or NV cards. AMD cannot run the same advanced graphics in real time that PowerVR can. Sorry for the poor spelling, I'm trying to type from an old phone. We know NV lost out on the new consoles, but I don't see why people think that means AMD automatically wins and gets the deal.
 
Soldato
Joined
7 Feb 2015
Posts
2,864
Location
South West
This is going off topic, but if Nintendo have things right they wouldn't be using AMD for graphics but IMG. Using PowerVR graphics would give Nintendo standout features over the other consoles, far more than what AMD could do. PowerVR have a number of large advantages, one of which is real-time ray tracing, which is not something AMD or Nvidia can provide. Something like a high-end AMD CPU linked with a high-end PowerVR GPU would make a unique, standout console and give Nintendo advanced features to compete against the other consoles.

Outclassing current consoles isn't really that impressive these days.

Imagination might have been focusing on hardware-based ray tracing for a while, but being good at ray tracing does not make up for the other GPU components, which are lacking compared to AMD and Nvidia. There is a reason why they only show simply textured scenes and low-poly scenes in their ray-tracing demos.

The lighting may be very impressive, but the rest not so much.

And another reason for using AMD is that they have proven SoCs in this field, with high GPU performance.
 
Last edited:
Soldato
Joined
29 May 2006
Posts
5,354
Imagination might have been focusing on hardware-based ray tracing for a while, but being good at ray tracing does not make up for the other GPU components, which are lacking compared to AMD and Nvidia. There is a reason why they only show simply textured scenes and low-poly scenes in their ray-tracing demos.

The lighting may be very impressive, but the rest not so much.

And another reason for using AMD is that they have proven SoCs in this field, with high GPU performance.
What is lacking? They support all the same features as AMD and NV and beyond, they have a long history of working on consoles, and some of the scenes are not simple. The ones that are simple are there to focus on and demo a single feature. Comparing a high-end PowerVR spec against AMD, PowerVR can do everything AMD can, but AMD cannot do all the high-end features PowerVR can. Both have worked with Nintendo on consoles, so I don't see why they have to go with AMD. PowerVR is the better-specced hardware, with better graphics and standout features when it comes to consoles. Sorry, I hate typing on phones; I will edit my post when I get home to make it more readable.
 