** The Official Nvidia GeForce 'Pascal' Thread - for general gossip and discussions **

What's the point if you only keep that overclock for about 10 minutes? And you get less than 2% performance per 125 MHz, so you'd need a 620 MHz overclock for a 10% performance boost.
This is a big let-down.

On water he kept the OC for a lot longer. It was the stock cooler resulting in 10 min drops.

Overclocking-wise it is still the same as the Ti and the Titan X. Stock to maxed out only yields a few fps at most.
 
What I don't understand is WHY Nvidia are not taking async fully on board and utilising it properly? Makes no sense to me, surely it's a win-win?? It can't be a technical hurdle too high given their resources?! :confused:

Just for idea's sake, maybe Nvidia don't see it as such a high priority just yet? When necessary they'll probably be on top of it, if it indeed needs improving.
 
It's not a let-down; if we were talking about a 1080 Ti or Titan it would be, but we're not.

When those arrive, hopefully this will hit mid-range pricing and be the very fast mid-high card that it is. The full high-end part has slightly more than double the transistors. If it can achieve similar clocks it will make this card look silly at its current price point.
 
Any word on how long we'll have to wait for the likes of Asus with the Matrix cards, or EVGA and the Classified?

I know last time it took Asus four months from the launch of the 980 Ti to launch the Matrix. Maybe it'll be different this time?
 
It's not a let-down; if we were talking about a 1080 Ti or Titan it would be, but we're not.

The 1080 Ti is going to be a brute-force monster, but for how much £?

Do you think they will do the Titan thing all over again, knowing that people will know the Ti is just around the corner and wait for it so as not to be stung once again?

That's if we're going to see a Ti, but it's more than likely. And they will be sticking with GDDR5X, which will mature over time...

So HBM2 will be mid-to-end of next year then... who knows. What I do know: if I was spending £600+ on a graphics card (which I am not) I would hold off for the Ti, but that's just me.
 
Just for idea's sake, maybe Nvidia don't see it as such a high priority just yet? When necessary they'll probably be on top of it, if it indeed needs improving.

They do, because if it didn't matter to them they wouldn't have spent a year talking about how Maxwell totally has it and will have it enabled soon, and they wouldn't be talking up async improvements in Pascal. They are saying they have those improvements rather than saying async is useless, which is entirely the opposite of what they would be doing if they believed it was useless and didn't have good async hardware.

The issue is that the Maxwell architecture probably started being worked on around 2012, and Pascal probably had work started on it by 2013, maybe 2014. Architectures aren't an overnight thing; it's not that Maxwell comes out and then they start on Pascal the day after it's ready. They are done concurrently. Sometimes you see a feature brought forward from one architecture to another if it can be done and they see a way to integrate it; sometimes the opposite, where something isn't ready or is taking too much die space and so is pushed to the next architecture.

AMD were working on and added async hardware a very long time ago, and they made the big push to move gaming APIs forward. DX12 only came out last year; Nvidia most likely simply lacked the time to change Pascal enough to make async shaders work fully within the hardware.
 
The 1080 Ti is going to be a brute-force monster, but for how much £?

Do you think they will do the Titan thing all over again, knowing that people will know the Ti is just around the corner and wait for it so as not to be stung once again?

That's if we're going to see a Ti, but it's more than likely. And they will be sticking with GDDR5X, which will mature over time...

So HBM2 will be mid-to-end of next year then... who knows. What I do know: if I was spending £600+ on a graphics card (which I am not) I would hold off for the Ti, but that's just me.


Without a doubt we'll see the Titan first. I'm betting the week before Battlefield 1 (AMD this time), then a couple of months later we'll get the Ti at 30% less cost and only 3 fps less performance.
Of course I'll be jumping straight in, both feet first, for the Titan; I don't do waiting.
 
I can see the Titan being £1500. I like my GPUs, but not that much :p. I'm thinking another year or so until the 1080 Ti. 1080s will more than keep me happy until then.
 
Without a doubt we'll see the Titan first. I'm betting the week before Battlefield 1 (AMD this time), then a couple of months later we'll get the Ti at 30% less cost and only 3 fps less performance.
Of course I'll be jumping straight in, both feet first, for the Titan; I don't do waiting.

It'll be a cracking card.

I have the cash and would love to get the next Titan, but unfortunately my Yorkshireman tight-arse safety mechanism kicks in, which enables me not to part with my money. I have tried many times... I just seem to freeze. Strange, it is...

As I have said before (and I know I have wittered on about it in this thread), the best deals I have seen thus far are the second-hand 980 Ti G1s going for as little as £351, the MSI 6Gs at £329, and the Strix and all the other EVGA Classifieds going around that mark.
That, to me, has been the bargain in all of this.

Sadly these have all increased now that the dust has settled.
 
What I don't understand is WHY Nvidia are not taking async fully on board and utilising it properly? Makes no sense to me, surely it's a win-win?? It can't be a technical hurdle too high given their resources?! :confused:

Because it just isn't that important, however much AMD try to market it as such.

Async can allow better utilization of a GPU that is not being fully utilized. It is a solution to a problem that Nvidia GPUs just don't have to such a high degree; there are other bottlenecks. And even if there were a huge utilization problem with Nvidia GPUs, there are more solutions than simply async compute, for example removing the bottlenecks that prevent full utilization of the GPU (command processors, geometry throughput, tessellation, scheduling).

As a user you only care about performance, and there are many different ways to gain performance. Async is one possible way, if the GPU has compute utilization issues. It isn't a feature like tessellation or fragment shaders that has to be implemented in hardware in order to correctly render the scene at playable frame rates.


A simplistic view with completely made-up numbers, to try and illustrate why AMD and Nvidia view async differently:

  • AMD GPUs might get 70% of their theoretical performance. Using async carefully they can get another 15-20% boost under some scenarios, if the developers can implement a lot of async shaders. Enabling this kind of performance bump requires a significant cost in transistors, transistors that could be used for other things that make games faster (tessellation, ROPs, TMUs, compression, cache).
  • Conversely, Nvidia's Maxwell might work at 90% utilization. Adding async to the same level as AMD might add 5-7% performance, but again at a transistor cost. A simplified multi-engine system with a software scheduler but hardware-based dynamic load balancing and very fine-grained preemption might gain you 3-4% utilization for far fewer transistors; this is what we get with Pascal.
  • The end result being that Pascal might get 90-95% utilization, against older AMD hardware at 85-90%, with AMD only getting that boost if developers put in the effort (though the effort is much less for AMD GPUs than for Nvidia).
  • These numbers are made up, but look at the theoretical performance of the Fiji Fury X compared to the 980 Ti. The Fury should destroy the 980 Ti; it doesn't, because AMD have serious utilization issues and their real-world performance does not stack up to their theoretical performance. Maxwell, with far less compute capability through fewer shaders, is faster in the vast majority of games. Nvidia spent their transistor budget making the GPU reach closer to its theoretical limit and not get bottlenecked by geometry, ROPs or the command processor. AMD developed a GPU with massive compute resources that it can't properly use, and it is bottlenecked in other parts of the GPU.
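To put the made-up numbers above in one place: the argument is just "effective throughput = theoretical peak × utilization × (1 + async gain)". Here is a trivial sketch of that arithmetic; every figure is invented for illustration, mirroring the post, and none are real benchmarks of any actual card:

```python
# Illustrative only: all numbers are invented, following the made-up
# figures in the bullets above. Not real measurements of any GPU.

def effective_tflops(theoretical_tflops, utilization, async_boost=0.0):
    """Effective throughput = theoretical peak x utilization x (1 + async gain)."""
    return theoretical_tflops * utilization * (1.0 + async_boost)

# Hypothetical "big compute, poor utilization" GPU (the AMD-style case):
amd_base  = effective_tflops(8.0, 0.70)        # 5.6 effective TFLOPS
amd_async = effective_tflops(8.0, 0.70, 0.18)  # ~6.6 if devs do heavy async work

# Hypothetical "smaller but well-fed" GPU (the Maxwell-style case):
nv_base   = effective_tflops(6.0, 0.90)        # 5.4 effective TFLOPS
nv_async  = effective_tflops(6.0, 0.90, 0.04)  # ~5.6 with a small async/preemption gain

print(amd_base, amd_async, nv_base, nv_async)
```

The point of the sketch: the first GPU has 33% more theoretical compute yet lands roughly level with the well-utilized one until a developer extracts the async boost, which is why the two vendors weight async so differently.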




If you look at the leaks of the Polaris GPUs, they are actually moving in the direction of Maxwell. Polaris 10 looks to have far fewer compute units/shaders. They are spending the transistor budget on trying to utilize the shaders better and not get bottlenecked elsewhere. If you look at Nvidia's GPU releases, the number of compute units hasn't grown that fast; their design efforts have been on keeping the compute units well utilized. Async may be far less useful on Polaris than on Hawaii and Fiji...




In the future async compute will likely be more important: as GPUs add more and more compute units it gets harder and harder to keep them all fed and balanced. But that is probably at least two generations away from becoming critical.
 
Because it just isn't that important, however much AMD try to market it as such.

snip

In the future async compute will likely be more important: as GPUs add more and more compute units it gets harder and harder to keep them all fed and balanced. But that is probably at least two generations away from becoming critical.

Thanks for the info. It does look like AMD's gains come from the fact that their performance before was bad and is now nearer to Nvidia's, whereas Nvidia were not bad in the first place, so their gains look smaller.

Hitman and AOTS are supposed to use DX12 and async a lot, and the 1080's benchmarks in those are some of its biggest gains over the 980 Ti. The only game where DX12 is no good is ROTR, where it looks like the developers did not do a very good job with DX12.
 
From what I'm reading, the 1080 is pretty much the speed increase I expected. An incremental bump over the 980ti. There hasn't been a serious jump in speed for years now.

Problem is, for the first time in years people are actually needing a big jump in speed for VR and 4k gaming.
 
From what I'm reading, the 1080 is pretty much the speed increase I expected. An incremental bump over the 980ti. There hasn't been a serious jump in speed for years now.

Problem is, for the first time in years people are actually needing a big jump in speed for VR and 4k gaming.

The jump in VR is pretty big, though at 4K not so much. It seems to be a bit hampered at 4K, which is allowing the 980 Ti to close the gap somewhat at that resolution, as it isn't equipped with the fillrate-shovelling power to push out the pixels at 4K.
 
They're using SMAA at 4K; it's not really needed, and you could get a fair few more fps by disabling it. Unless they're going balls-to-the-wall with every setting maxed?
 