
** The Official Nvidia GeForce 'Pascal' Thread - for general gossip and discussions **

Associate
Joined
13 Jan 2012
Posts
2,314
Location
United Kingdom
That must be a fake; the thing looks hideous in pictures. But perceptions can often change when you see the shiny metallic card in the flesh :D

Will be sticking with my 980Tis for now anyway, unless a 1080Ti can outperform my setup
 
Soldato
Joined
17 Jun 2004
Posts
7,616
Location
Eastbourne , East Sussex.
http://images.nvidia.com/content/pdf/tesla/whitepaper/pascal-architecture-whitepaper.pdf

GP100’s SM incorporates 64 single-precision (FP32) CUDA Cores. In contrast, the Maxwell and Kepler SMs had 128 and 192 FP32 CUDA Cores, respectively. The GP100 SM is partitioned into two processing blocks, each having 32 single-precision CUDA Cores, an instruction buffer, a warp scheduler, and two dispatch units. While a GP100 SM has half the total number of CUDA Cores of a Maxwell SM, it maintains the same register file size and supports similar occupancy of warps and thread blocks. GP100’s SM has the same number of registers as Maxwell GM200 and Kepler GK110 SMs, but the entire GP100 GPU has far more SMs, and thus many more registers overall. This means threads across the GPU have access to more registers, and GP100 supports more threads, warps, and thread blocks in flight compared to prior GPU generations.

Looks like GP100 will never be used as a graphics chip, as it doesn't have any ROPs. Nvidia's whitepaper doesn't mention any, and that's an official technical document that should detail all the parts rather than selective marketing. ROPs are a big transistor and space drain, so if Nvidia are focusing on a compute-only core it makes sense to leave them out, since ROPs are never used for compute tasks. We can only guess they are cooking up a GP101/200 with cut-back FP64 and added ROPs to match Vega? Who knows.
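If you want to see the register/SM point for yourself, here's a minimal sketch (my own example, not from the whitepaper) that prints the relevant properties via the CUDA runtime. Worth noting that cudaDeviceProp exposes SM counts, registers and memory but has no ROP field at all, which rather underlines how irrelevant ROPs are on the compute side:

Code:
#include <cstdio>
#include <cuda_runtime.h>

// Minimal sketch: print the per-SM resources the whitepaper compares.
// Note there is no ROP field anywhere in cudaDeviceProp - compute code
// never touches ROPs, which is the point being made above.
int main() {
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    printf("SMs:               %d\n", prop.multiProcessorCount);
    printf("Registers per SM:  %d\n", prop.regsPerMultiprocessor);
    printf("Registers on chip: %d\n",
           prop.multiProcessorCount * prop.regsPerMultiprocessor);
    // GP100 keeps GM200's 65,536 registers per SM but has 56-60 SMs
    // instead of 24, hence "many more registers overall".
    return 0;
}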
 
Caporegime
Joined
18 Oct 2002
Posts
33,188
http://images.nvidia.com/content/pdf/tesla/whitepaper/pascal-architecture-whitepaper.pdf



Looks like GP100 will never be used as a graphics chip, as it doesn't have any ROPs. Nvidia's whitepaper doesn't mention any, and that's an official technical document that should detail all the parts rather than selective marketing. ROPs are a big transistor and space drain, so if Nvidia are focusing on a compute-only core it makes sense to leave them out, since ROPs are never used for compute tasks. We can only guess they are cooking up a GP101/200 with cut-back FP64 and added ROPs to match Vega? Who knows.

http://www.nvidia.co.uk/content/PDF/kepler/NVIDIA-Kepler-GK110-Architecture-Whitepaper.pdf

Notice any ROPs in the GK110 whitepaper? They aren't listed in the specs when comparing it to previous architectures, just texture units.

http://www.anandtech.com/show/6446/nvidia-launches-tesla-k20-k20x-gk110-arrives-at-last/3

Like GF100 before it, GK110 has been built to fill multiple roles. For today’s launch we’re mostly talking about it from a compute perspective – and indeed most of the die is tied up in compute hardware – but it also has all of the graphics hardware we would expect in an NVIDIA GPU. Altogether it packs 15 SMXes and 6 ROP/L2/memory controller blocks, versus 8 SMXes and 4 ROP/L2/memory blocks on GK104.

First, do you see any major difference between the high-level block diagrams of Kepler and Pascal in terms of one having something that says 'ROPs' on it and the other not? AnandTech point out that GK110 has ROPs in the memory controller blocks, but Nvidia themselves don't focus on ROPs: they don't list ROPs on their Tesla cards and didn't list them in any information about previous Tesla cards.

Now compare the spec tables on these pages:

http://www.anandtech.com/show/6446/nvidia-launches-tesla-k20-k20x-gk110-arrives-at-last

http://www.anandtech.com/show/6760/nvidias-geforce-gtx-titan-part-1

The Tesla launch comparison table has no ROPs; the Titan, based on the exact same core, has ROPs included in its comparison.

ROPs are entirely irrelevant to Tesla, so Nvidia don't talk about them. I do think that at some stage Nvidia might well make a pure compute card and save the transistors, and this generation might be that generation. But people are jumping to a lot of conclusions about it based on things they are entirely guessing at, guesses that mostly fly in the face of all previous cards (which, again, is possible, just not likely).


As for telling people they are wrong because Nvidia marketed the GTX 680 as high end, so everyone calling it midrange is wrong... wow. A company marketing something as anything doesn't make it fact, and in this case it's demonstrably false. When the GTX 680 launched, GK110 had already taped out; it was a real chip, and Nvidia knew exactly how much faster it was. What makes a core low, mid or high end is die size and performance. Nvidia had for many generations made cores ranging from 50mm^2 to 550mm^2; the GTX 680 at ~300mm^2 fell exactly in the range that had for many generations made up Nvidia's midrange cards, and Titan fell squarely in their 500mm^2-or-larger high-end range. Nvidia could release Tesla and call it the world's fastest spaceship... that doesn't make it so. The GTX 680 was midrange at launch, midrange at GK110's launch, and is still the midrange Kepler card, based on there being a core significantly smaller and a core significantly larger in the Kepler family.

If we use Nvidia marketing speak, or your take on what makes a card midrange: if Nvidia pulled GM200 from the market today, leaving GM204 as the biggest core being sold, and went out and told everyone the GTX 980 is now the high-end card... would that be true? It doesn't matter when it launched, before or after the much bigger core; the bigger core exists, and thus the smaller core can never be the high-end card.
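To put some rough numbers on the die-size tiers (the die sizes are widely reported figures for these chips; the cut-offs are just this argument's rule of thumb, not any official Nvidia classification):

Code:
#include <cstdio>

// Sketch of the die-size tiering argued above. Sizes in mm^2 are
// widely reported figures; the tier cut-offs are a rule of thumb,
// not an official Nvidia classification.
struct Chip { const char* name; int die_mm2; };

static const char* tier(int mm2) {
    if (mm2 < 200) return "low end";
    if (mm2 < 450) return "midrange";
    return "high end";
}

int main() {
    const Chip chips[] = {
        {"GK107 (GTX 650)", 118},
        {"GK104 (GTX 680)", 294},
        {"GK110 (Titan)",   561},
        {"GM204 (GTX 980)", 398},
        {"GM200 (Titan X)", 601},
    };
    for (const Chip& c : chips)
        printf("%-17s %4d mm^2 -> %s\n", c.name, c.die_mm2, tier(c.die_mm2));
    return 0;
}

GK104 and GM204 land squarely in the midrange band and GK110/GM200 in the high-end band, which is exactly the point.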
 
Last edited:
Soldato
Joined
24 Aug 2013
Posts
4,549
Location
Lincolnshire
The 980Ti is, what, 15% faster than the 780Ti, if not more? Same memory, same 28nm, just a new architecture.

I'd be very disappointed if a 1080Ti is only 15% faster than a 980Ti, especially as it has a big shrink and new HBM2.
 
Man of Honour
Joined
13 Oct 2006
Posts
92,149
The 980ti is way more than 15% faster than a 780ti.

Indeed - my 780 with its out-of-the-box clocks is pretty much dead on the nose of a stock 780Ti, and 980Ti figures are often only a hair short of double in benchmarks (depending a bit on the benchmark), and most of the time around +60-70%.
 
Soldato
Joined
2 Oct 2012
Posts
3,246
The 980Ti is, what, 15% faster than the 780Ti, if not more? Same memory, same 28nm, just a new architecture.

I'd be very disappointed if a 1080Ti is only 15% faster than a 980Ti, especially as it has a big shrink and new HBM2.

I very much doubt that! Seeing as my 290X is faster than your standard 780Ti - well, it's always been faster than my friend's 780Ti when comparing what FPS he gets in games, although my 290X was overclocked to 1200 on the core and his 780Ti just has a mild overclock.

Point being, I upgraded to a 980Ti and it was at least 30% faster than my 290X, and that was before I overclocked it. It's more like 35-40% faster, game dependent.

I could only maintain around 40-45 FPS at 1440p in most games on near enough max settings with my 290X; now I can maintain a 60 FPS cap at 1440p all the time with my 980Ti.
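As a quick sanity check on all the 'X% faster' talk in this thread, percent uplift is just new/old minus 1. The FPS pairs below are illustrative round numbers, not measured results:

Code:
#include <cstdio>

// Quick sanity check on "percent faster" claims. The FPS pairs are
// illustrative round numbers, not measured benchmark results.
static double pct_faster(double new_fps, double old_fps) {
    return (new_fps / old_fps - 1.0) * 100.0;
}

int main() {
    // Roughly the "+60-70%" 780Ti -> 980Ti claim above.
    printf("45 -> 73 FPS = +%.0f%%\n", pct_faster(73, 45));
    // Roughly the "35-40%" 290X -> 980Ti claim; note that a 60 FPS
    // cap understates the real uplift.
    printf("44 -> 60 FPS = +%.0f%%\n", pct_faster(60, 44));
    return 0;
}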
 
Man of Honour
Joined
13 Oct 2006
Posts
92,149
I very much doubt that! Seeing as my 290X is faster than your standard 780Ti - well, it's always been faster than my friend's 780Ti when comparing what FPS he gets in games, although my 290X was overclocked to 1200 on the core and his 780Ti just has a mild overclock.

Point being, I upgraded to a 980Ti and it was at least 30% faster than my 290X, and that was before I overclocked it. It's more like 35-40% faster, game dependent.

I could only maintain around 40-45 FPS at 1440p in most games on near enough max settings with my 290X; now I can maintain a 60 FPS cap at 1440p all the time with my 980Ti.

A bit different to my experience - my 780 at max overclock (bear in mind that with the GHz edition it is more like comparing against an OC'd 780Ti than a 780) still edges out my friend's 290X Tri-X (I think he was running 1240 on the core or something) in most games, and I rarely have any framerate issues at 1440p with max settings - believe me, if I did I'd have had a 980Ti in there months ago :S Though to be fair, there are some games like The Witcher 3 that I've not played where it might be a different story.

EDIT: Though I've not tried any comparisons with DX12 :O

EDIT2: It is quite interesting though - when I got this card it wouldn't have been unfair to say that, OC'd v OC'd, it stomped on that 290X - now they are running pretty much neck and neck in a lot of stuff.
 
Last edited:
Associate
Joined
24 Sep 2015
Posts
233
I don't go for reference cards anyway, but I can't see why this wouldn't be real. It would be very attractive to people who like Acer's 'gamer' designs.

I'm far more concerned about the name of it; Nvidia's marketing dept need a good kicking if they call a 4K card the '1080'.
 
Soldato
Joined
9 Mar 2015
Posts
4,553
Location
Earth
Looks horrible. I love the reference design on the current cards, very nice looking, but that looks hideous. I think I will sadly buy aftermarket this time if it looks like that.
 

bru

Soldato
Joined
21 Oct 2002
Posts
7,359
Location
kent
How long after the reference cards are released is it usually until we see the 3rd party cooler cards?

Shall be pretty damn quick this time round, potentially even straight away. :)

Is that a hint, Gibbo?


I reckon it was. The lack of a winking smiley at the end of it probably means he knows when he is expecting stock, but nothing will have arrived yet, as it is still a bit too early.

But then again, if the stores knew when these were expected, somebody somewhere would have leaked it, as not everybody is as sensible as Gibbo. :)
 

bru

Soldato
Joined
21 Oct 2002
Posts
7,359
Location
kent
People are, for some reason, getting concerned that those who would even look at this card would think it only supports 1080p resolution.

That could work both ways: maybe looking like the perfect card for 1080p would in fact be a good thing, as it would possibly help gain sales from the uninformed. The informed, who would actually be looking for new cards, wouldn't base their buying decision on the name of the card anyway.
 