• Competitor rules

    Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.

** The Official Nvidia GeForce 'Pascal' Thread - for general gossip and discussions **

In one of the videos I watched (this one, I think: https://www.youtube.com/watch?v=NzOD0WaUJVs), it is mentioned that this might be the biggest performance leap from Nvidia ever from one series to the next. Not sure if it is or not, but I just wanted to get other people's opinions.

I'm not sure it is the biggest jump ever, that probably goes to the 8800, but it certainly looks better than average at this stage, better than I expected. This may be because the process shrink is somewhat more than the typical one-node decrement: although it is based on 20nm planar, we have the addition of FinFETs, and it is in fact a further iteration, moving to FinFET+.
 
Looks like they do.

https://twitter.com/MrBenB/status/730657529533992960

Disclaimer: posted earlier in this thread by someone, but I couldn't find it.

"Maybe the most powerful gaming pc in UK right now!"

Hardly! Kaap's 4 Titan Xs would obliterate it for starters!

Also, only 2 of those cards are connected in SLI. I assume the middle one is a dedicated PhysX card (can anyone say overkill)!
 
Wouldn't those leaked 3DMark benchmarks count as breaking the NDA, which doesn't lift until the 17th? Anyway, yes, I hope they are true, but I am always skeptical of anything until it is 100% confirmed by multiple sources!
 
Hopefully that is the case. But the only proof I have seen is Nvidia's card at 2.1GHz at their event. Only 4 days now and no more speculating!

Yep, not long.

I think memory overclocks could be a bigger deal this generation. In previous generations memory was much less of a factor (despite the claims of HBM manufacturers); overclocking it didn't net you much. This generation there is a big jump in performance but not in bandwidth.
As usual, Nvidia will have worked hard to mitigate bandwidth requirements, e.g. increasing cache sizes, improved compression, etc., but we might be at a stage where memory clocks are quite important.

The GDDR5X clocks are quite conservative; Micron were claiming surprising success at higher speeds. Nvidia could have played things safe since it is brand-new memory tech and not all chips will be hitting the higher speeds. It's possible some cards could see big improvements with large memory overclocks.
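To put rough numbers on that, here's a quick back-of-the-envelope calc (the 10Gbps figure is the announced 1080 spec; 12Gbps is the rumoured chip rating mentioned later in the thread):

    # Theoretical memory bandwidth = per-pin data rate (Gbps) x bus width (bits) / 8
    def bandwidth_gb_s(data_rate_gbps, bus_width_bits):
        return data_rate_gbps * bus_width_bits / 8

    print(bandwidth_gb_s(10, 256))  # 1080 at stock GDDR5X: 320 GB/s
    print(bandwidth_gb_s(12, 256))  # if the chips really do 12Gbps: 384 GB/s, a 20% uplift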
 
Wouldn't those leaked 3DMark benchmarks count as breaking the NDA, which doesn't lift until the 17th? Anyway, yes, I hope they are true, but I am always skeptical of anything until it is 100% confirmed by multiple sources!

Yes, but if someone under NDA leaked the results to VideoCardz then it is hard to trace. Plus, at this stage Nvidia probably doesn't care too much; it generates positive press for them.
 
I am hoping that SLI performance has improved significantly and, if they can support 3-4 cards (which looks hopeful), then I will be sticking 4 in my project rig!

Cheers
Ben
 
If that chart is right then, even in ideal circumstances with 100% CrossFire scaling, one 1080 is still going to outperform my two cards, never mind in all the games where I am running on a single card.

Looks like I'll be upgrading...

Now I wonder, if I am quick, how much I'll get for my two 290Xs?

EDIT: Looks like with waterblocks I'll get about £450 for the pair. Shouldn't need to add too much to grab a Founders 1080 then...
 
If those benches are correct, it is just over 100% faster than a 970, so twice as fast!
Yeah, I'm not sure where his math was with that one.

That would potentially put the 1070 at well over a 50% improvement on the 970.

Still, I'd rather see gaming benchmarks. These synthetic ones just don't do much for me.
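To spell the maths out with made-up round numbers (normalising the 970 to 100; the 1080 score is purely illustrative):

    # "Just over 100% faster" means just over double the score.
    score_970  = 100.0   # baseline, normalised (hypothetical)
    score_1080 = 205.0   # "just over 100% faster" (assumed figure)
    print(score_1080 / score_970)           # ~2.05x, i.e. roughly twice as fast

    # If a 1070 lands at, say, 75% of a 1080 (pure assumption), that's still
    # ~1.54x a 970 -- "well over a 50% improvement".
    print(0.75 * score_1080 / score_970)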
 
Yep, not long.

I think memory overclocks could be a bigger deal this generation. In previous generations memory was much less of a factor (despite the claims of HBM manufacturers); overclocking it didn't net you much. This generation there is a big jump in performance but not in bandwidth.
As usual, Nvidia will have worked hard to mitigate bandwidth requirements, e.g. increasing cache sizes, improved compression, etc., but we might be at a stage where memory clocks are quite important.

The GDDR5X clocks are quite conservative; Micron were claiming surprising success at higher speeds. Nvidia could have played things safe since it is brand-new memory tech and not all chips will be hitting the higher speeds. It's possible some cards could see big improvements with large memory overclocks.

I read that the chips are rated at 12Gbps and not the 10Gbps the 1080 runs at by default; not sure if that is correct or not. Can anyone confirm this? By far the main problem with the 1070 (and why I don't think it will be any better than, or even as good as, a Titan X) is the normal GDDR5 and 256-bit bus.
 
Nvidia are only officially supporting 2-way SLI with Pascal cards, so even if people manage to get 3-way working, they shouldn't expect it to perform well, if at all.

TBH, it would have been better from the start if both teams had just focused on dual-card setups and optimised for those.


Well, Jayztwocents said that Pascal being limited to only two-way SLI is rubbish, and Nvidia sent him 3 cards.

At the 6 min mark.
https://www.youtube.com/watch?v=hNvfX0QZbYU

Disclaimer: this has been posted before, in this thread.

"Maybe the most powerful gaming pc in UK right now!"

Hardly! Kaap's 4 Titan Xs would obliterate it for starters!

Also, only 2 of those cards are connected in SLI. I assume the middle one is a dedicated PhysX card (can anyone say overkill)!

The link I quoted was just to indicate that the old SLI bridges do indeed still work, which was Kaap's question in the first place, hence why I quoted him when answering.

Yes I know that only two of the cards are connected, but that is irrelevant to why I linked to that image. :)
 
What's the crack with microstutter in SLI setups these days? Still an issue? If so, does G-Sync resolve it?
Definitely still an issue. G-Sync might be able to do variable refresh rates, but it does not do a damn thing about inconsistent frame timing. Very much a 'per application' basis, though.
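Here's a toy illustration of why G-Sync doesn't fix it (all frame times made up): average FPS can look identical while the pacing is awful.

    # Two runs with identical average FPS; the second is what SLI microstutter
    # feels like: alternating short/long frames even though the average looks fine.
    smooth  = [16.7] * 6                          # ms per frame, steady ~60fps
    stutter = [8.0, 25.4, 8.0, 25.4, 8.0, 25.4]   # same ~60fps average, bad pacing

    for name, frames in [("smooth", smooth), ("stutter", stutter)]:
        avg_fps = 1000 / (sum(frames) / len(frames))
        print(f"{name}: {avg_fps:.0f} fps average, worst frame {max(frames):.1f} ms")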

I read that the chips are rated at 12Gbps and not the 10Gbps the 1080 runs at by default; not sure if that is correct or not. Can anyone confirm this? By far the main problem with the 1070 (and why I don't think it will be any better than, or even as good as, a Titan X) is the normal GDDR5 and 256-bit bus.
Potentially.

But then people said the same thing when GM204 was announced with a 256-bit bus.
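For context, running the GM204 precedent through the same bandwidth formula as earlier (publicly quoted specs, my arithmetic):

    # GTX 980 (GM204): 7Gbps GDDR5 on a 256-bit bus
    print(7 * 256 / 8)   # 224 GB/s
    # GTX 780 Ti: 7Gbps GDDR5 on a 384-bit bus
    print(7 * 384 / 8)   # 336 GB/s -- 50% more raw bandwidth, yet the 980 still won out

The 980 got away with the narrower bus largely thanks to better colour compression, which is presumably the bet Nvidia is making again with the 1070.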
 
Yep, it will always be an issue until they take it seriously; obviously they don't, otherwise it would have been fixed quite a while back, surely?
Multi-GPU is complicated, more so than ever with rendering paths getting more and more advanced. That makes it difficult for developers and difficult for Nvidia/AMD to implement through drivers.

And now with DX12/Vulkan, it's gonna be even harder because devs have to do most of the work themselves, so that's not exactly gonna help increase support, either.
 
Yep, it will always be an issue until they take it seriously; obviously they don't, otherwise it would have been fixed quite a while back, surely?

The only way to mostly eliminate microstutter is through explicit mGPU using a low-abstraction API. They will never be able to get rid of the microstutter with SLI hacked on top of DX11, since the driver and game engine don't know what each other are doing, which is where the stutter comes from.
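A toy model of where that stutter comes from (all timings invented): with implicit AFR the driver starts frames whenever the game submits them, so presents arrive in bursts.

    # CPU submits frames in bursts of two (one per GPU) every 20 ms (hypothetical)
    submits   = [0, 2, 20, 22, 40, 42]             # ms
    render_ms = 18                                  # each GPU takes 18 ms per frame
    presents  = [t + render_ms for t in submits]
    gaps = [b - a for a, b in zip(presents, presents[1:])]
    print(gaps)  # [2, 18, 2, 18, 2] -> alternating 2 ms / 18 ms gaps = microstutter

An engine with explicit control of both GPUs (DX12/Vulkan mGPU) knows when each frame will land and can pace the presents out to a steady ~10 ms itself.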
 