
** The Official Nvidia GeForce 'Pascal' Thread - for general gossip and discussions **

Associate
Joined
11 Mar 2016
Posts
361
Well, the big announcements will come 17 hours from now!
Let's all hope for a May release date and an instant 50 clams off all the current gen cards!
 
Soldato
Joined
19 Dec 2010
Posts
12,069
Agreed. I think there's a large element of wishful thinking from people who splashed out on a 980 Ti that it's going to hold up and still be the king, but I see no reason to believe that'll be the case. The 970 came along and gave the 780 Ti a good kicking, despite being priced so much lower launch vs launch, and that was just another 28nm card. And one that was launched less than a year after the 780 Ti at that. The gap this time will almost certainly be longer. For me it'd have to go down as one of the biggest tech disappointments in recent memory if we don't see that sort of leap again with a die shrink thrown into the mix too.

The 970 and 780 Ti were about equal in performance though? And the 970 was so cheap for exactly that reason: it was still on 28nm.

Just curious, what sort of performance are you expecting from the X70 card?
 
Associate
Joined
24 Nov 2010
Posts
2,314
A quick showing of the new cards (possibly mock-ups), with availability in 6-9 weeks (end of May / beginning of June), and all the AIBs showing off their boards at Computex (May 31st - 4th June).

That's my guess anyway.

They most likely will announce that, but it'll be misleading. I don't see them having anything before the end of August at the earliest (and then only in tiny volumes).

The 150W 'mobile' 28nm part is all you need to know about the likelihood of any 16nm parts in May / June (or July, and quite possibly August / September too).
 
Caporegime
Joined
18 Oct 2002
Posts
33,188

Very strange numbers there. First up, it's just a Drive PX2, which they talked about at CES: 2x discrete Pascal GPUs with 4GB memory each by the looks of it, plus 2x Denver SoCs, each of which has a Pascal IGP.

The weird part is how it lists memory bandwidth: 50+GB/s for the SoCs, which are listed as 128-bit LPDDR4. That basically means around 2GHz clock speeds, giving a max theoretical bandwidth of 2 x 128/8 = 32GB/s (realistic bandwidth being lower), so it looks like they are adding the bandwidth of both SoCs together (like they and AMD do on a dual GPU card). But that means they've got two discrete Pascal GPUs listed as having 4GB of memory each and 80GB/s of bandwidth between them, or 40GB/s per card, which is exceptionally low.

40GB/s for each GPU, or even 80GB/s, suggests very low end GPUs: a GT 730, a 50W card with sub-1TF of performance, has 30GB/s of bandwidth using DDR3, and the newer GTX 950 uses 105GB/s for 1.6TF of performance. Even if you take the full 80GB/s per card, it would indicate something not much faster than a GTX 950, and even then only if it's vastly more bandwidth efficient. But a 950 shrunk down to 16nm would only be around 40-45W in power draw.

How is the Drive PX2 a 250W device if it's likely using 2x 50W GPUs? And if it's using dramatically more powerful GPUs in the 100W range each, then how on earth do they only have 80GB/s of bandwidth? 100W cards on 28nm have much more bandwidth than that, and 16nm 100W cards should require significantly more.

Drive PX2 just gets more and more puzzling; nothing about it seems to make sense at all. A 250W device with 2x roughly 100W GPUs and two excessively power hungry mobile SoCs at 25W each makes sense. But a 100W GPU on 16nm you'd expect to have similar performance and bandwidth to a 200W 28nm GPU... and when you consider that a GTX 980 is a 165W card and uses 224GB/s, that just doesn't square with them listing bandwidth at 80GB/s.
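As a rough back-of-the-envelope check on the bandwidth-vs-performance argument above, here's a minimal sketch using the ballpark figures quoted in the post (GTX 950 at 105GB/s and 1.6TF) rather than spec-sheet numbers; the "GTX 950-like efficiency" assumption is mine, purely for illustration.

```python
# Back-of-the-envelope check: if bandwidth roughly scales with compute,
# what sort of performance would 40 or 80 GB/s support?
# Figures are the rough ones quoted above, not official spec-sheet values.

gtx950_bw_gbs = 105.0   # GB/s, as quoted above
gtx950_tflops = 1.6     # TF, as quoted above

gbs_per_tflop = gtx950_bw_gbs / gtx950_tflops   # ~66 GB/s per TF on Maxwell

for bw_gbs in (40.0, 80.0):   # per-GPU / combined bandwidth listed for Drive PX2
    est_tflops = bw_gbs / gbs_per_tflop
    print(f"{bw_gbs:.0f} GB/s at GTX 950-like efficiency -> ~{est_tflops:.1f} TF")
# -> roughly 0.6 TF and 1.2 TF, i.e. nowhere near a high-end part
```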
 
Associate
Joined
26 Mar 2016
Posts
150
The weird part is how it lists memory bandwidth: 50+GB/s for the SoCs, which are listed as 128-bit LPDDR4. That basically means around 2GHz clock speeds, giving a max theoretical bandwidth of 2 x 128/8 = 32GB/s (realistic bandwidth being lower), so it looks like they are adding the bandwidth of both SoCs together (like they and AMD do on a dual GPU card). But that means they've got two discrete Pascal GPUs listed as having 4GB of memory each and 80GB/s of bandwidth between them, or 40GB/s per card, which is exceptionally low.

You should get your numbers right: it's 128-bit with LPDDR4 1600, which is 51.2GB/s per SoC, exactly the same as the iPad Pro. The GPUs are 128-bit with 2500MHz GDDR5 each.
You forgot to double the clock speed, as it's DDR RAM.
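For anyone following the arithmetic, here's a minimal sketch of the peak-bandwidth calculation being argued over, assuming the clock figures quoted above; the helper function is just illustrative.

```python
# Peak theoretical memory bandwidth: bus width / 8 * effective transfer rate.
# Clock figures are the ones quoted above; this is just the arithmetic.

def peak_bandwidth_gbs(bus_width_bits, clock_mhz, transfers_per_clock=2):
    """Peak bandwidth in GB/s; transfers_per_clock=2 models the DDR
    'double the clock speed' point made above."""
    transfer_rate_gtps = clock_mhz * transfers_per_clock / 1000.0  # GT/s
    return bus_width_bits / 8 * transfer_rate_gtps

print(peak_bandwidth_gbs(128, 1600))   # LPDDR4-1600 per SoC -> 51.2 GB/s
print(peak_bandwidth_gbs(128, 2500))   # 2500MHz GDDR5 per discrete GPU -> 80.0 GB/s
```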
 
Don
Joined
20 Feb 2006
Posts
5,266
Location
Leeds
I know that GTC is not necessarily a games conference, but all this push for Deep Learning metrics with Pascal from nVidia concerns me as a gamer.

Why would deep learning metric improvements concern you as a gamer? It's still the same thing that has been used previously, just with updated code shown on a new process.
 
Caporegime
Joined
18 Oct 2002
Posts
33,188
You should get your numbers right: it's 128-bit with LPDDR4 1600, which is 51.2GB/s per SoC, exactly the same as the iPad Pro. The GPUs are 128-bit with 2500MHz GDDR5 each.
You forgot to double the clock speed, as it's DDR RAM.

I didn't; I just got the wrong clock speed on LPDDR4.

The GPU memory is still a massive problem. 80GB/s is easy to achieve on a GPU; the issue is performance vs bandwidth.

Look back: the closest card in bandwidth on Maxwell currently is the 105GB/s GTX 950, a 1.6TF card. In almost every architecture, bandwidth scales with the number of shaders, because you have to be able to feed them data, with a given amount of bandwidth calculated to be required per shader.

A 250W TDP for the Drive PX2, 16nm GPUs, and GPUs with only 80GB/s make zero sense together. A shrunk 950 with significantly lower memory clocks would be 40W or so per GPU, so two of them would be sub-100W; how is it a 250W TDP device? 75W SoCs? That makes no sense either. You'd also expect the discrete GPUs to do the bulk of the 8TF it's rated for, so maybe 3-3.5TF from each of the discrete GPUs and the rest from the IGPs. Look at the specs of a current Maxwell card with 3-3.5TF: the 970, 145W, 256-bit memory, 224GB/s, 1664 shaders. Scaling that down to 16nm, again you're at 80W or so per card, which makes more sense for a 250W device with two of them in... but that card requires 224GB/s for that level of performance and shader count. Even with more efficient bandwidth usage, it seems insanely unlikely that a 3+TF card on 16nm would require anything less than 150GB/s of bandwidth, let alone half of that.
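A quick sketch of the shrink-a-970 reasoning above; the 145W / 224GB/s / 3-3.5TF figures are the ones in the post, and the ~45% power saving from the 28nm-to-16nm shrink is purely my assumption for illustration.

```python
# Sketch of the scaling argument: shrink a GTX 970-class GPU to 16nm and see
# what power and bandwidth it would plausibly need. Rough numbers from the
# post above; the power-scaling factor is an assumption, not a measurement.

gtx970_tdp_w  = 145.0
gtx970_bw_gbs = 224.0
gtx970_tflops = 3.25          # midpoint of the 3-3.5 TF range quoted above

power_scaling = 0.55          # assumed ~45% power saving from the die shrink

scaled_power_w = gtx970_tdp_w * power_scaling
gbs_per_tflop  = gtx970_bw_gbs / gtx970_tflops    # ~69 GB/s per TF on Maxwell

print(f"Shrunk 970-class GPU: ~{scaled_power_w:.0f}W each, ~{2 * scaled_power_w:.0f}W for two")
print(f"Maxwell needs ~{gbs_per_tflop:.0f} GB/s per TF, so a 3-3.5 TF 16nm part would "
      f"plausibly want 150+ GB/s even with efficiency gains, not 40-80 GB/s")
```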
 