
** The Official Nvidia GeForce 'Pascal' Thread - for general gossip and discussions **

Hmm just trying to work out how much faster a 1080 will be compared to a 980ti.

Looking on HWbot they have a lot of 980ti's which are overclocked to 1800-1900mhz (the 1080 leaked benchmark was at 1860mhz).

So according to that the 980ti is 20% faster at the same clockspeed (about 12k GPU score vs 10k on the 1080)

So if a 1080 runs at 2GHz and a 980 Ti at 1.45GHz, the 1080 has roughly a 38% clock-speed advantage but is about 17% slower per clock (10k vs 12k), so overall it works out to around 15% faster (1.38 × 0.83 ≈ 1.15).

But the leak could be fake, or the 1080 drivers could get better; also my maths is not great and I was just bored, so I have no idea if that will be correct.
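The arithmetic above can be sanity-checked in a few lines of Python. The scores and clocks are the leaked/HWBot figures quoted in the post, not confirmed specs, so treat the result as back-of-envelope only:

```python
# Back-of-envelope check of the 1080 vs 980 Ti estimate above.
# All numbers are the leaked/HWBot figures quoted in the post, not confirmed specs.
score_980ti = 12_000   # GPU score at ~1.8-1.9GHz (HWBot overclocked results)
score_1080  = 10_000   # GPU score from the leaked benchmark at similar clocks
per_clock = score_1080 / score_980ti          # 1080 per-clock throughput vs 980 Ti (~0.83)

clock_1080  = 2.00     # GHz, assumed 1080 boost clock
clock_980ti = 1.45     # GHz, typical stock 980 Ti boost clock
clock_ratio = clock_1080 / clock_980ti        # ~1.38x clock advantage

overall = clock_ratio * per_clock
print(f"estimated 1080 vs 980 Ti: {overall:.2f}x (~{(overall - 1) * 100:.0f}% faster)")
# prints: estimated 1080 vs 980 Ti: 1.15x (~15% faster)
```

If the leaked score or the assumed clocks change, the same two ratios give the new estimate.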
 
It doesn't matter how they spin it, Pascal was a rushed compromise and didn't exist ~2 years ago. They cancelled a uarch (I forget the name) nearly 3 years ago and moved Volta forwards from its already slightly delayed ETA to fill the gap. Then they announced that Volta had been pushed back at least another 18 months, and that 'Pascal' would fill the gap.

Since then there has been much speculation about it being a halfway house between Maxwell (non-asynchronous and non-parallel) and Volta (much more GCN-like).

The reality is that Pascal is quite clearly FinFET Maxwell.

Huge clocks (which add to TDP and probably significantly reduce useful life) can't make up for an architecture that's already looking long in the tooth and is now expected to last well into 2018. Software tricks like Multi-Projection aren't going to cut it.

It's now abundantly clear why that Stardock developer implied that Pascal wouldn't be efficient enough for certain applications, and Polaris would... high clocks are NVIDIA's only weapon.

^^ Wow, this guy really has it in for Nvidia/Pascal. I think he must have dedicated most of his weekend to scaring everyone off! :p
 
I would like to know how he is going to get them to run, as the SLI bridges they use only support 2-way setups. :D

Apparently he asked an Nvidia rep specifically whether 3-way SLI was still possible, and was told yes.

My guess is the cards will work with both standard SLI bridges and the HB versions, but with the latter limited to 2-way SLI only.
 
Since then there has been much speculation about it being a halfway house between Maxwell (non-asynchronous and non-parallel) and Volta (much more GCN-like).

Maxwell (2nd gen) supports parallel, asynchronous queues fine. It may not be quite as tolerant as GCN when it comes to loading it up, but the hardware supports dispatching up to 31 mixed compute or memory-op command queues + 1 graphics queue simultaneously.

Pascal has significantly redesigned DMA engines, and things like worker scheduling and memory operations have moved away from software-assisted functionality to improve performance and the ability for concurrent threads to work with memory properly, etc., without having to resort to context switching.

EDIT: It will likely still need a certain amount of kindness from software developers to get the most out of it compared to GCN.
 
Maxwell (2nd gen) supports parallel, asynchronous queues fine. It may not be quite as tolerant as GCN when it comes to loading it up, but the hardware supports dispatching up to 31 mixed compute or memory-op command queues + 1 graphics queue simultaneously.

Pascal has significantly redesigned DMA engines, and things like worker scheduling and memory operations have moved away from software-assisted functionality to improve performance and the ability for concurrent threads to work with memory properly, etc., without having to resort to context switching.

EDIT: It will likely still need a certain amount of kindness from software developers to get the most out of it compared to GCN.

Absolute nonsense. This has been gone over so many times. Maxwell does not and cannot. It has to emulate the function in software, which is why it's switched off or extremely slow when used in a game on NVIDIA hardware.

Pascal changes the software emulation method to be less costly, but it DOES NOT support hardware async compute/shaders. The hope now moves to Volta.

It MAY be worth using now for NVIDIA (Pascal) in some games, but it's not going to be remotely close to a 4th-generation hardware iteration.
 
Absolute nonsense. This has been gone over so many times. Maxwell does not and cannot. It has to emulate the function in software, which is why it's switched off or extremely slow when used in a game on NVIDIA hardware.

Pascal changes the software emulation method to be less costly, but it DOES NOT support hardware async compute/shaders. The hope now moves to Volta.

It MAY be worth using now for NVIDIA (Pascal) in some games, but it's not going to be remotely close to a 4th-generation hardware iteration.

Even if what you said were true, you can't just change the software emulation to fix something like that; any change to make it "less costly" would be completely negligible performance-wise. The inefficiency comes from the fact that it has to use context switching to make up for the inability to access certain data from different queues/threads as needed in hardware. That context switching is the slowdown, rather than the frontend software emulation itself, and it can be somewhat mitigated by the developer not just chucking work at it indiscriminately.
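The cost argument above can be illustrated with a toy model. This is purely illustrative, not real GPU behaviour: the durations, the switch cost, and the number of hand-overs are all made-up numbers, chosen only to show why serialising queues through context switches hurts more than any frontend emulation overhead:

```python
# Toy model (illustrative only, NOT real GPU timings): compare a frame where a
# graphics queue and an async compute queue overlap in hardware against one
# where every hand-over between the two queues costs a context switch.
GRAPHICS_MS = 10.0   # hypothetical graphics workload per frame
COMPUTE_MS  = 4.0    # hypothetical async compute workload per frame
SWITCH_MS   = 0.5    # hypothetical cost of one context switch
HANDOVERS   = 8      # hypothetical number of times the queues trade the hardware

# Hardware-concurrent: compute overlaps graphics, so the frame time is bounded
# by whichever queue has more work.
concurrent = max(GRAPHICS_MS, COMPUTE_MS)

# Context-switched: the two workloads are serialised, and every hand-over
# between queues adds fixed overhead on top.
switched = GRAPHICS_MS + COMPUTE_MS + HANDOVERS * SWITCH_MS

print(f"concurrent: {concurrent:.1f} ms, context-switched: {switched:.1f} ms")
# prints: concurrent: 10.0 ms, context-switched: 18.0 ms
```

Note how the developer-side mitigation mentioned above maps onto the model: submitting work in fewer, larger chunks reduces `HANDOVERS`, which shrinks the penalty without touching the emulation itself.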
 
Apparently he asked an Nvidia rep specifically whether 3-way SLI was still possible, and was told yes.

My guess is the cards will work with both standard SLI bridges and the HB versions, but with the latter limited to 2-way SLI only.

There is a picture of the new HB 2-, 3- and 4-way SLI bridges on Nvidia's 1080 page.

[attached image: sli.png]
 
Maxwell (2nd gen) supports parallel, asynchronous queues fine. It may not be quite as tolerant as GCN when it comes to loading it up, but the hardware supports dispatching up to 31 mixed compute or memory-op command queues + 1 graphics queue simultaneously.

Pascal has significantly redesigned DMA engines, and things like worker scheduling and memory operations have moved away from software-assisted functionality to improve performance and the ability for concurrent threads to work with memory properly, etc., without having to resort to context switching.

EDIT: It will likely still need a certain amount of kindness from software developers to get the most out of it compared to GCN.

Absolute nonsense. This has been gone over so many times. Maxwell does not and cannot. It has to emulate the function in software, which is why it's switched off or extremely slow when used in a game on NVIDIA hardware.

Pascal changes the software emulation method to be less costly, but it DOES NOT support hardware async compute/shaders. The hope now moves to Volta.

It MAY be worth using now for NVIDIA (Pascal) in some games, but it's not going to be remotely close to a 4th-generation hardware iteration.

Could you two lovebirds please provide some source material for your claims? It would be lovely to read from a trusted source, and would make it a lot easier to figure out what is right and what is less so.
 
It doesn't matter how they spin it, Pascal was a rushed compromise and didn't exist ~2 years ago. They cancelled a uarch (I forget the name) nearly 3 years ago and moved Volta forwards from its already slightly delayed ETA to fill the gap. Then they announced that Volta had been pushed back at least another 18 months, and that 'Pascal' would fill the gap.

Since then there has been much speculation about it being a halfway house between Maxwell (non-asynchronous and non-parallel) and Volta (much more GCN-like).

The reality is that Pascal is quite clearly FinFET Maxwell.

Huge clocks (which add to TDP and probably significantly reduce useful life) can't make up for an architecture that's already looking long in the tooth and is now expected to last well into 2018. Software tricks like Multi-Projection aren't going to cut it.

It's now abundantly clear why that Stardock developer implied that Pascal wouldn't be efficient enough for certain applications, and Polaris would... high clocks are NVIDIA's only weapon.

Timing also: where IS Polaris? I've wanted something since the Ti came out, but going from a 980 to a 980 Ti would be dumb. Polaris is nowhere to be seen, and Pascal is hopefully going to give me 35-40% over my 980 thanks to the clocks.


In the meantime, Nvidia just got to announce async, HDR support and DP 1.4, which is another easy win because no one else is around to compete. By the time AMD come to the party, Nvidia will strike again.
 
Absolute nonsense. This has been gone over so many times. Maxwell does not and cannot. It has to emulate the function in software, which is why it's switched off or extremely slow when used in a game on NVIDIA hardware.

Pascal changes the software emulation method to be less costly, but it DOES NOT support hardware async compute/shaders. The hope now moves to Volta.

It MAY be worth using now for NVIDIA (Pascal) in some games, but it's not going to be remotely close to a 4th-generation hardware iteration.

Wow. The Pascal reveal really has AMD's embedded PR drones worked up.

All weekend doing your bit to try and put people off. :o
 
Yeah it's bloody terrible. I've been on a single card for months and months, the 2nd disabled; I just can't be bothered to pull it out (ooh err). :D

I've had SLI since the 680 (2012?), then 780s, then Titan Xs. Just before I sold my first Titan I gamed with one to see what it was like.

Felt slightly smoother and the fps didn't seem to go down much. Obviously I'll be getting a 1080, but will I get two...

Really don't know, I just don't know. Really fed up of the half-a**ed SLI support of the last year particularly.
 
What I think is going to be interesting is how well the 1xxx series sells. We all know the 970 and 980 Ti sold tens of thousands of units; they were massive sellers for Nvidia, which was reflected in their balance sheet. Can the same people who spent their money (in the Ti's case £500+ only a year ago) then invest in the 1070, or more likely the 1080 Ti, and give Nvidia the same sales figures?
 