
Geforce Pascal Review thread

Em... different vendors won't have a unified PCB design, so how is EK going to design a block that fits (and works well) on all of them?

In terms of block compatibility with non-reference cards: thinking back to people asking about watercooling their non-reference cards, nobody could say for sure whether a waterblock would fit cards with a custom PCB. Somehow I get the feeling Nvidia will dictate that AIB partners are not allowed to use the reference PCB design for their custom cards, so the only options are probably custom "hydro" cards with a pre-fitted block, or the more expensive reference card.

EK will have blocks ready for release according to their facebook page.

They also reckon the block will be compatible across editions:

Mike Carpio Will this be for the Founders Edition cards only?

EK Water Blocks

EK Water Blocks No, all reference design circuit board cards are compatible. Founders and non-founders.

https://www.facebook.com/EKWaterBlocks/photos/a.204208322966540.61821.182927101761329/1028441380543226/?type=3&theater
 
Not sure which card to go for, so asking for a little assistance in my decision making. Currently I am running 2x 670s in SLI at 2560x1440 (Dell G-Sync).

Been holding out for an upgrade for a while now, nearly pulled the trigger on a 980 Ti.

So for the res I run at, will a 1070 be enough? Or should I opt for the 1080 for the extra horsepower? Ultimately I would love to play games at high/max details.

Thanks in advance
 
T

We really need Vega from AMD to at least compete with the 1080 ti, or nvidia will continue to name their own prices.

Another way: don't buy anything from Nvidia until they fix their prices.
The user base should be educated on this.
 
Not sure which card to go for, so asking for a little assistance in my decision making. Currently I am running 2x 670s in SLI at 2560x1440 (Dell G-Sync).

Been holding out for an upgrade for a while now, nearly pulled the trigger on a 980 Ti.

So for the res I run at, will a 1070 be enough? Or should I opt for the 1080 for the extra horsepower? Ultimately I would love to play games at high/max details.

Thanks in advance

I'd go 1080, huge upgrade over 670s. It's looking like the real sweet spot (1080 + 1440p gsync) if still expensive.
 
Thanks guys. Did we not know the date for the lifting of the 1070 NDA beforehand? I guess they don't want the spotlight taken off the 1080 too much.

EDIT - do reviewers actually have 1070s in hand?
 
I was reading the Arstechnica review and saw this statement:

http://arstechnica.com/gadgets/2016/05/nvidia-gtx-1080-review/2/

While Nvidia has led GPU performance for some time—bar AMD's impressive turn with the release of the 290X back in 2013—in recent months it's suffered a few setbacks when it comes to DirectX 12 and performance under Stardock's Ashes of the Singularity. The problem for Nvidia has been asynchronous shaders, or rather, the lack of them in its hardware. AMD took a gamble early on when designing its GCN range of GPUs (the 7000-series and up) with hardware-based asynchronous shaders. These allow its GPUs to take the multithreaded workloads of DX12 and execute them in parallel and asynchronously, greatly improving performance over serial processing.

Pascal still doesn't have hardware-based asynchronous shaders. In DX12 games like Ashes of the Singularity that take advantage of them, Nvidia doesn't enjoy the same kind of performance boost as AMD. In early tests it even dropped in performance, although recent driver updates have seen Nvidia cards at least achieve parity between DX11 and DX12.

Instead of asynchronous shaders, Pascal uses a technique called pre-emption. Effectively, this enables the GPU to prioritise one set of more complex tasks over another (for example, preferencing compute tasks like physics over graphics). The trouble is, long-running compute jobs can end up monopolising the GPU. This was a particular issue for Maxwell, where the GPU could only pre-empt tasks at the end of each command; the extra time spent waiting for the command to end increases latency.

Pascal implements pixel-level pre-emption, allowing the GPU to pause smaller tasks at any point and save their status to memory while bigger tasks complete. It's an interesting solution, but it still doesn't replace the performance of hardware-based asynchronous shaders. Fortunately for Nvidia, even with the increasing number of DX12 games being released, few of them take full advantage of asynchronous shaders. Fewer still have shown any real improvement in performance over DX11.

That will change over time (spoiler: it does a little here too), but there's more work required on the developer side to support the low-level hardware features of DX12. Right now, most simply aren't bothering. That's not to mention that despite its lack of async, Nvidia has one very big advantage over the competition: clock speed.
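The difference the article describes — Maxwell only switching tasks at command boundaries versus Pascal pausing work mid-flight and saving its state — can be sketched as a toy scheduler. This is purely illustrative Python (real GPU scheduling happens in hardware; the task names and step counts are made up), but it shows why finer-grained pre-emption cuts the wait for urgent work:

```python
# Toy model of GPU pre-emption (illustrative only, not how real hardware works).
# Each "task" is a generator that yields once per unit of work, so a scheduler
# can pause it at any point and later resume from its saved state -- roughly
# analogous to Pascal's pixel-level pre-emption. Maxwell-style scheduling can
# only switch at the end of a whole command (task).

def task(name, units):
    """A workload split into `units` steps; yields a label after each step."""
    for step in range(units):
        yield f"{name}:{step}"

def run_maxwell_style(long_task, urgent_task):
    """Switch only at command boundaries: urgent work waits for the whole task."""
    trace = list(long_task)          # long task must finish completely first
    trace += list(urgent_task)       # only then does the urgent job run
    return trace

def run_pascal_style(long_task, urgent_task, preempt_after):
    """Pause the long task mid-flight, run the urgent job, then resume."""
    trace = []
    for _ in range(preempt_after):   # partial progress on the long task...
        trace.append(next(long_task))
    trace += list(urgent_task)       # ...urgent job jumps the queue...
    trace += list(long_task)         # ...then resume from the saved state
    return trace

maxwell = run_maxwell_style(task("graphics", 4), task("physics", 2))
pascal = run_pascal_style(task("graphics", 4), task("physics", 2), preempt_after=1)
print(maxwell)  # physics waits for all 4 graphics steps
print(pascal)   # physics runs after just 1 graphics step
```

Note this still serialises the work (nothing runs at the same time) — which is the article's point: pre-emption reduces latency for priority tasks, but it isn't the same as AMD's hardware async shaders actually executing the two workloads in parallel.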
 
Nvidia do it differently to AMD, and PCPer explain it better than I possibly could - worth a read for those interested.

http://www.pcper.com/reviews/Graphi...ition-Review-GP104-Brings-Pascal-Gamers/Async

For those not interested, just keep making silly statements!

Emm, why are you having a go at me, mate?? I just quoted what Ars said - it is basically the same as what the PCPer article also said.

Pascal is basically better at balancing the loads - it can switch between tasks very quickly, meaning there is no penalty running async, and if it runs those individual tasks quicker than AMD then it won't be penalised.
 
That wasn't aimed at you and I didn't even quote you. Look at the post prior to mine for an example of my closing sentence ;)
 
Cool,but look at my edit - that is my understanding of how Nvidia has tackled it.

Yeah, my understanding is that it's unclear who does it better, and we need a dedicated async benchmark to see the difference. As long as they both can cope, all is gravy in my book, but we still need some DX12 game engines to really see the full benefit of DX12.
 
Nvidia do it differently to AMD, and PCPer explain it better than I possibly could - worth a read for those interested.

http://www.pcper.com/reviews/Graphi...ition-Review-GP104-Brings-Pascal-Gamers/Async

For those not interested, just keep making silly statements!

Looks worse than I was led to believe - out of the load balancing you've effectively got the functional equivalent of two of AMD's ACE units, but ticking over at a higher speed, versus what is likely to be four ACEs on desktop Polaris parts running slower.

The thing that will likely save Nvidia here is that it's probably not possible to efficiently process that data 100% in parallel (synthetically loaded up, they'd trample on Nvidia's implementation).
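The "fewer-but-faster vs wider-but-slower" trade-off above is just a width-times-clock product. A back-of-envelope sketch — the unit counts come from the post, and the clock figures are illustrative assumptions (roughly a 1080's boost clock against a guessed Polaris clock), not measurements:

```python
# Back-of-envelope: idealised async throughput ~ (scheduling units) x (clock).
# Unit counts are from the speculation above; clocks are assumed, not measured.

def effective_rate(units, clock_mhz):
    """Idealised best case: every unit busy, perfect parallel scaling."""
    return units * clock_mhz

nvidia = effective_rate(units=2, clock_mhz=1733)  # ~2 ACE-equivalents, high clock
amd = effective_rate(units=4, clock_mhz=1266)     # ~4 ACEs, lower clock

print(nvidia, amd)  # 3466 vs 5064: wider-but-slower wins *if* the work can
                    # really be split four ways -- which, as noted above, it
                    # often can't outside synthetic loads
```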
 
Looks worse than I was led to believe - out of the load balancing you've effectively got the functional equivalent of two of AMD's ACE units, but ticking over at a higher speed, versus what is likely to be four ACEs on desktop Polaris parts running slower.

The thing that will likely save Nvidia here is that it's probably not possible to efficiently process that data 100% in parallel (synthetically loaded up, they'd trample on Nvidia's implementation).

I don't think it was ever going to be a perfect solution with Pascal in truth but it looks to be more than capable to me, at least on paper.
 
Looks worse than I was led to believe - out of the load balancing you've effectively got the functional equivalent of two of AMD's ACE units, but ticking over at a higher speed, versus what is likely to be four ACEs on desktop Polaris parts running slower.

The thing that will likely save Nvidia here is that it's probably not possible to efficiently process that data 100% in parallel (synthetically loaded up, they'd trample on Nvidia's implementation).

This is really interesting that you say that. Somebody a while back said Pascal is more like GCN 1.0 in its async abilities, and they have been unusually accurate with some of their predictions. They are correct again, WTF??

But at the same time GCN 1.0 does not have a performance penalty, so if Nvidia is faster in other ways it should be OK.
 
This is really interesting that you say that. Somebody a while back said Pascal is more like GCN 1.0 in its async abilities, and they have been unusually accurate with some of their predictions. They are correct again, WTF??

But at the same time GCN 1.0 does not have a performance penalty, so if Nvidia is faster in other ways it should be OK.

I did - it was in a link someone posted a while ago. I cannot remember which forum it was; someone else might remember. He was right about quite a lot.
 