
Geforce Pascal Review thread

I don't think it was ever going to be a perfect solution with Pascal in truth but it looks to be more than capable to me, at least on paper.

Well, it isn't just Maxwell with a die shrink :P but it can't truly execute lots of concurrent queues.
 
It's really interesting that you say that. Somebody a while back said Pascal is more like GCN 1.0 in async abilities, and they have been unusually accurate with some of their predictions. They are correct again, WTF??

But at the same time GCN 1.0 does not have a performance penalty, so if Nvidia is faster in other ways it should be OK.

Only in a very loose fashion - the implementation isn't anything like GCN 1.0, but there is some equivalency in what it effectively brings to the table.

The interesting thing is that with Polaris, AMD have actually gone a bit towards the nVidia approach. While they are still using the GCN architecture, the ACEs and scheduling hardware appear to have been rejigged to finally face the fact that developers don't tend to implement workloads in the idealistic, broadly parallel way, and to better balance performance between broad and narrower workloads.
 
I was reading the Arstechnica review and saw this statement:

http://arstechnica.com/gadgets/2016/05/nvidia-gtx-1080-review/2/

That is one of the most stupid and ignorant reviews I have read in a long time.


Of course Pascal, just like Maxwell and Kepler, supports DX12 multi-engines, and the compute queues are executed asynchronously in hardware. What isn't done in hardware is the task scheduling, but there is no requirement to do that in hardware per se.
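For anyone skimming this: "multi-engine" in D3D12 just means the API exposes separate queue types (direct/graphics, compute, copy), and work submitted to different queues is *allowed* to overlap - whether it actually runs concurrently is up to the driver and hardware, which is the whole argument here. A toy model in plain Python (illustrative only - these are not real D3D12 calls, and every name is made up):

```python
# Conceptual model of D3D12 "multi-engine" submission. Work placed on
# separate queues MAY overlap; the API only requires correct results,
# not any particular scheduling. Illustrative only, not real D3D12.
from collections import deque

class ToyGpu:
    def __init__(self):
        # Three queue types, mirroring D3D12's DIRECT / COMPUTE / COPY.
        self.queues = {"direct": deque(), "compute": deque(), "copy": deque()}
        self.timeline = []

    def submit(self, queue, task):
        self.queues[queue].append(task)

    def run_concurrent(self):
        # Hardware/driver is free to interleave queues (round-robin here).
        while any(self.queues.values()):
            for name, q in self.queues.items():
                if q:
                    self.timeline.append((name, q.popleft()))

    def run_serialized(self):
        # Also a legal implementation, just less efficient:
        # drain one queue completely before starting the next.
        for name, q in self.queues.items():
            while q:
                self.timeline.append((name, q.popleft()))

gpu = ToyGpu()
gpu.submit("direct", "shadow_map")
gpu.submit("compute", "light_culling")
gpu.submit("direct", "main_pass")
gpu.submit("compute", "ssao")
gpu.run_concurrent()
print(gpu.timeline)
```

Both `run_concurrent` and `run_serialized` satisfy the same API contract; "how good is the async compute support" is really a question of how close the hardware gets to the former.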


Preemption can actually be far superior to AMD's approach of using fences and barriers for concurrency, which has a high overhead. The problem with Maxwell is that poorly designed async shaders can cause high context-switch costs, something which has been reduced considerably in Pascal.
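For context on the fences mentioned above: in D3D12, cross-queue dependencies are expressed with fences - one queue signals a monotonically increasing value, another waits on it - and every wait is a potential serialisation point, which is where the overhead being referred to comes from. A simplified sketch (illustrative only, not the actual D3D12 API):

```python
# Simplified model of D3D12-style fence synchronisation between queues.
# Each queue entry is (task, signal_value_or_None, wait_value_or_None).
# A wait stalls that queue until the fence reaches the given value, so
# every wait is a point where the queues may be forced to serialise.

def execute(graphics_queue, compute_queue):
    fence = 0
    order = []
    queues = [list(graphics_queue), list(compute_queue)]
    while any(queues):
        progressed = False
        for q in queues:
            if not q:
                continue
            task, signal, wait = q[0]
            if wait is not None and fence < wait:
                continue  # stalled on the fence: cannot run yet
            q.pop(0)
            order.append(task)
            if signal is not None:
                fence = signal
            progressed = True
        if not progressed:
            raise RuntimeError("deadlock: every queue is waiting")
    return order

# Graphics renders a shadow map then signals fence=1; the compute
# queue's light culling must wait for that signal before it can run.
gfx = [("shadow_map", 1, None), ("main_pass", None, None)]
cmp_ = [("light_culling", None, 1), ("post_fx", None, None)]
print(execute(gfx, cmp_))
```

The more fence waits a frame contains, the more often the two queues line up behind each other instead of overlapping - that is the cost being weighed against preemption here.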




Too many people are of the mindset that there is only one correct way to support DX12 multi-engines, when in reality the API leaves it entirely up to the IHV how to support them. Nvidia have a different approach, one that is harder for developers but, when correctly tuned, is faster and more efficient. With Pascal there are big improvements in reducing context-switch times, right down to single-pixel-level preemption in graphics queues and single-CUDA-instruction preemption in compute queues.
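To make the granularity point concrete: worst-case preemption latency is roughly the largest unit of work the GPU must finish before it is allowed to switch contexts, so moving the preemption boundary from whole draw calls down to pixels (or single instructions on the compute side) shrinks that bound dramatically. A toy model with made-up numbers:

```python
# Toy model: worst-case preemption latency equals the cost of the
# largest block of work that must complete before a context switch
# is permitted. All numbers are illustrative, not GPU measurements.

def worst_case_preempt_latency_us(work_items_us, boundary_every):
    """Preemption is allowed only every `boundary_every` items, so the
    worst case is finishing one full group after the request arrives."""
    groups = [work_items_us[i:i + boundary_every]
              for i in range(0, len(work_items_us), boundary_every)]
    return max(sum(g) for g in groups)

# One long draw modelled as 8 equal 100us chunks of work.
draw = [100] * 8

coarse = worst_case_preempt_latency_us(draw, boundary_every=8)  # draw-call level
fine = worst_case_preempt_latency_us(draw, boundary_every=1)    # per-chunk level
print(coarse, fine)
```

With the coarse boundary a high-priority request can stall for the whole draw; with the fine boundary it only waits for the current small chunk - which is the improvement being claimed for Pascal over Maxwell.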
 
I don't think it was ever going to be a perfect solution with Pascal in truth but it looks to be more than capable to me, at least on paper.

And let's face it, with a massive market-share lead over AMD, it's the more supportable solution!!

Hopefully both will be implemented, mainly due to AMD being in consoles too :)
 
Not sure which card to go for, so I'm asking for a little assistance in my decision making. Currently I am running 2x 670s in SLI at a resolution of 2560x1440 (Dell G-Sync).

Been holding out for an upgrade for a while now; I nearly pulled the trigger on a 980 Ti.

So for the res I run at, will a 1070 be enough? Or should I opt for the 1080 for the extra horsepower? Ultimately I would love to play games at high/max details.

Thanks in advance
If the 1070 is around 980 Ti/Titan X performance as expected, then yeah, it should be good enough for 1440p/60fps with High-Ultra settings. You won't be able to max everything out, but as long as you don't mind that, I imagine it'll be a really good card for your needs.

The 1080 would be nicer, but I don't think the premium will be worth it unless you need the best right now.

And then there might be some good deals on 980 Tis floating around soon, which would be another option (though a 1070 is probably a better bet going forward).

But Ashes of the Singularity is CLEARLY an AMD biased game, that's why Nvidia cards performance suffer...or at least that's what I already see people preaching at this part of the forum when it comes to dx12 discussions :p
It obviously is, but more importantly - WHO PLAYS this damn game? Why are people so obsessed with how a card runs on this game that nobody actually cares about beyond its benchmark tool? It's just weird.

If people want DX12 performance benches, then use games that people are actually likely to play - Tomb Raider, Hitman, etc.
 
It obviously is, but more importantly - WHO PLAYS this damn game? Why are people so obsessed with how a card runs on this game that nobody actually cares about beyond its benchmark tool? It's just weird.

If people want DX12 performance benches, then use games that people are actually likely to play - Tomb Raider, Hitman, etc.

Simply because it's been built from the ground up to use DX12; most other titles are DX11 games with a retrofit added on to support DX12. It's the only title out there that gives us an idea of what to expect once DX12 gains more traction.
 
Simply because it's been built from the ground up to use DX12; most other titles are DX11 games with a retrofit added on to support DX12. It's the only title out there that gives us an idea of what to expect once DX12 gains more traction.

It seems to have been designed to synthetically load up AMD's async hardware in a way that I extremely doubt we will see in any other game, though only time will tell.
 
Only in a very loose fashion - the implementation isn't anything like GCN 1.0, but there is some equivalency in what it effectively brings to the table.

The interesting thing is that with Polaris, AMD have actually gone a bit towards the nVidia approach. While they are still using the GCN architecture, the ACEs and scheduling hardware appear to have been rejigged to finally face the fact that developers don't tend to implement workloads in the idealistic, broadly parallel way, and to better balance performance between broad and narrower workloads.

From the Polaris slides, we saw the command processor has been changed, so it will be interesting to see if the Polaris cards do better on lower-end CPUs compared to the previous AMD cards.
 
They said the block will be compatible with cards using the reference-design board/PCB, which is just stating the obvious. What I was referring to are cards with non-reference-design boards.

You aren't giving up on this issue. Check the EK website - waterblocks for a lot of the custom cards: G1, Asus Matrix etc. Want a Galax HOF? Then go Bitspower. You need to try harder. ;):o
 
Simply because it's been built from the ground up to use DX12; most other titles are DX11 games with a retrofit added on to support DX12. It's the only title out there that gives us an idea of what to expect once DX12 gains more traction.

No it wasn't; it was previously built on Mantle as a marketing showpiece for AMD's new low-level API, and before that it was most likely DX11, before AMD started paying the developers. With Mantle scrapped, it transitioned to DX12, with a strange obsession with maximising async shaders.

AotS is no more representative of DX12 than Tomb Raider or Hitman.
It looks ugly as well, and no one seems to play it.
 
I wish they did it the other way around and released the big Titan first, even though I find that card silly, because 6 or 12 months is not a lot of life for a 1k card when some mid-tier high-end card is going to surpass it come next gen. But I understand why they don't do it: if they did it this way they couldn't get people to double dip - first the mid tier, then the high tier, each generation. Business is business.
 
For those with a limited budget, yes, but there's only one top-end card each generation and it comes at a premium - still worth it if you have the £ or need it for the resolution you play at.

The definition of a bargain is perceived value; being able to afford it or not does not change whether something is a bargain.
 