
** The Official Nvidia GeForce 'Pascal' Thread - for general gossip and discussions **

From recollection. Top Kek. I also like the long-winded, desperate arm-waving you do regarding settings; you may just be able to convince the really hard of thought. So keep making stuff up and throwing around that FUD. Each post you make just has you looking more ludicrous than the last.

Me, I'm just happy to post the numbers and let people make up their own minds. It's quite telling that you are not, and are desperate to cast doubt on the entire site. And those numbers show that when tested away from the AMD-sponsored, developer-assisted benchmark, NV cards have a much higher performance delta over AMD's. Spooky.

What's spooky is that you have referenced that site twice now, starting in another thread. Even after the flaw of testing only 20 seconds has been pointed out, you're still trying to act like it represents the game as a whole; it simply does not and cannot.

It's fine if you don't want to use a canned bench; I can understand how either side could mess with that. However, why use only 20 seconds of a map which is away from all the action? In fact, why did they not use the default start position in the town? It's the worst possible way to test.
 

You're trying to say canned benchmarks are the only reason AMD were winning in performance reviews.

[H] show AMD ahead using in-game numbers, and the very site you're using, also using in-game performance numbers, had a Fury (non-X) ahead of a 980 Ti overclocked by 20% or so.

Had that 980 Ti not been overclocked, the performance gap would have been similar to that shown by [H] and every other review site using the game's 'canned benchmark'. Look at the gap between the overclocked 980 Ti and the stock Titan X, which is getting destroyed by a Fury X and a plain Fury; a 980 Ti at stock would be very close to Titan X performance. So in the non-canned benchmarks, every site including the one you're using showed AMD dominating, agreeing with the numbers from the canned benchmark.

So where is your proof that AMD were only ahead due to a canned benchmark, if two of the sites not using it both still showed AMD significantly ahead?

The different settings are there in black and white. There is a reason there are higher-quality settings in the control panel: they enable higher quality at lower performance. They were honest enough to note that they made this change, but you're not honest enough to acknowledge that they did... clearly I made it up, even though the website explicitly stated it.
 
It's another data point, one as free from IHV interference as possible. Everyone is free to make up their own mind on the test and results. On one hand you have a test where many sites show a 390 beating a 980 Ti in DX11, and on the other one whose results fit the general trend of where those cards lie performance-wise.
 

Please show me these other non-canned tests. I've had a look at that [H] one and can see nothing to indicate what or how they tested this title, unlike PCGH, who go into specific detail. Given that, it's more likely the built-in test was used, wouldn't you agree? And please, you really need to wipe away your tears over the speed of the 980 Ti being used. As many, many people have pointed out, it's a perfectly valid representation of the speed the vast majority of Tis will be running at, just like the Fury clocks represent the average speed most of those will be at. :D Surely enthusiast-aimed tests should be giving real-world numbers? :) I'm happy to let individuals look at all of this and come to their own conclusions; you, for some reason, seem almost desperate to lead people to yours.

Now we have got to the bottom of that, let's see if we can clear up all your previous FUD storms in this thread. How are the 'Pascal can't release this early', 'no chance of releasing before AMD' and 'no chance of using GDDR5X' claims working out for you? And finally, that ludicrous FUD attempt on the HDMI support, for which you dug up a three-year-old article written in French about a Sony projector?
 
My opinion is that Pascal is just a Maxwell die shrink with minor tweaks. Paxwell. :)

It just seems that Pascal was introduced when they realised that HBM2 might not be ready in time, or that Volta was delayed and Pascal was made as a stopgap measure.
 
Most of the time the next architecture is a revision of the previous, so the fact that Pascal is very similar to Maxwell isn't really any surprise.

All companies do this, and then every now and again there will be a completely different approach, like a rebuild from the ground up.
 

Except Pascal was never on Nvidia's roadmap...
 

One of the most ignorant articles I have read in a long time; a clueless fool peddling complete garbage (not you, flopper, the idiot you linked to).

The only thing that matters is instructions per second; IPC by itself is irrelevant. It surprises me how clueless people are about microprocessor design when they think IPC is some be-all and end-all, when it is merely a design choice.

Nvidia have officially said Pascal was designed for high clock speed by going through intensive critical-path optimization. This can knowingly reduce IPC, but as long as instructions per second increase, so does performance.
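
To illustrate (with made-up stage delays, not Nvidia's actual figures): a pipeline's clock period is set by its slowest stage, so shortening the critical path raises the attainable frequency, and throughput can go up even if IPC goes down.

```python
# Toy numbers, purely illustrative: the clock period is limited by the
# slowest (critical-path) pipeline stage.
stage_delays_ns = [0.35, 0.42, 0.61, 0.38]      # before optimization
clock_ghz = 1 / max(stage_delays_ns)            # ~1.64 GHz

# Split the 0.61 ns critical stage into two shorter stages.
optimized_ns = [0.35, 0.42, 0.31, 0.30, 0.38]
clock_opt_ghz = 1 / max(optimized_ns)           # ~2.38 GHz

# Even if the deeper pipeline drops IPC from 1.00 to 0.85,
# instructions per second (IPC * clock) still go up.
print(f"before: {1.00 * clock_ghz:.2f} GIPS")   # ~1.64
print(f"after:  {0.85 * clock_opt_ghz:.2f} GIPS")  # ~2.02
```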
 

Please educate me then, as I thought the article had its points, which were basically that at the same clock speed, Maxwell = Pascal.

All Pascal seems to bring (to games, anyway) is a higher maximum clock speed.

Yes, it's going to be twice as fast in VR, so there are per-clock gains there, but that's it as far as I can see?
 
Even so, it had some interesting points. Assuming his figures are correct, the 1080 does just appear to be a mega-overclocked 980 Ti...

And it certainly isn't showing any signs of its die shrink, or the level of increased performance the 980 showed when it launched.

It's almost as if Nvidia are drip-feeding us 20% gains every launch...


Of course, the 1080 is twice as fast clock-for-clock as a 980 Ti in VR, which is where all the R&D seems to have been spent.


No, the 1080 shows big gains.

If you take that idiot's numbers, then TSMC's process facilitates 40% higher clocks at the same power usage. The 1080 has over 65% faster clocks AND 20% lower power. That is roughly a 45% improvement over what TSMC are claiming, which is down to pure architectural changes, in this case critical-path optimization.


Nvidia are claiming around 50% of the increase in clock speed is due to their critical-path optimization, meaning around half the performance gain is down to architectural changes.
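
One way those figures line up, assuming the clock gain and power saving compound multiplicatively (my reading of the numbers, not an official breakdown):

```python
# Rough perf-per-watt arithmetic from the figures above.
process_clock_gain = 1.40    # TSMC claim: +40% clocks at the same power
observed_clock_gain = 1.65   # 1080 vs 980 Ti: ~65% higher clocks...
observed_power_ratio = 0.80  # ...at ~20% lower power

perf_per_watt_gain = observed_clock_gain / observed_power_ratio  # ~2.06x
beyond_process = perf_per_watt_gain / process_clock_gain         # ~1.47x
print(f"~{(beyond_process - 1) * 100:.0f}% beyond the process claim")
```

That works out at roughly 47%, in the same ballpark as the 45% quoted above.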
 

Comparing performance at equal clock speed is absolutely irrelevant and meaningless.

You can design a processor with:
1) high IPC and low clock speed
2) low IPC and high clock speed
3) a mixture of them both.

The only goal is to maximize instructions per second. Whether a processor has high or low IPC is largely irrelevant; the only thing that matters is performance.

Nvidia chose to reduce the time of the most expensive instructions, which increases clock speed, which increases instructions per second, which increases performance.
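
Putting some toy numbers on those three options (all invented for illustration):

```python
# Invented numbers: throughput = IPC * clock, so the ranking depends
# only on the product, not on IPC or clock speed alone.
designs = [
    ("1) high IPC, low clock", 2.0, 1.0),   # (label, IPC, clock in GHz)
    ("2) low IPC, high clock", 1.0, 2.2),
    ("3) a mixture of both",   1.5, 1.5),
]
for label, ipc, ghz in designs:
    print(f"{label}: {ipc * ghz:.2f} billion instructions/s")
```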


If Pascal were just a die shrink, it would show less than half the performance gains it does. A quick look at the performance increase and the power consumption reduction shows it would be impossible for Maxwell to hit such figures.
 

Ah, but that is where we differ. A Maxwell that could run at Pascal speeds would give exactly the same performance. Obviously it can't, on both the power and thermal fronts.

So a die-shrunk Maxwell would give exactly the same performance as Pascal does anyway.

So apart from VR, what does Pascal bring to the party? Nothing that wouldn't have been gained from a die shrink anyway.
 

The HDMI thing was purely because there is nearly no official information about HDMI b, so sue me for using that.

Second, I linked to the same damn website you are using with non-canned benchmarks, pcgameshardware or whatever it is. The review of what was effectively episode 1 of Hitman showed a Fury (non-X) beating a 20%-overclocked 980 Ti in DX12... Take off the 20% overclock and how much slower will the 980 Ti be? Considering the Titan X is at stock and absolutely miles behind the 980 Ti, it's a relatively good barometer of where stock 980 Ti performance would be.

The very website you are using, and praising for not using canned benchmarks, shows a stock Fury (non-X) beating a significantly overclocked 980 Ti in the very game whose canned benchmark you are complaining about. The website you are using as proof shows AMD with a fairly significant performance lead in Hitman. The benchmarks you linked to were from episode 2; the 980 Ti has, for all intents and purposes, identical performance to the launch review, but moving from the Fury (non-X) to a Fury X dropped performance by a bit over 10%, though in that review they specifically state they used a higher-quality, lower-performance setting in the AMD drivers, and they don't state the settings as clearly as in the earlier review. Maybe AMD put out a set of drivers that didn't do well in Hitman; maybe the earlier drivers, and maybe the next drivers, would put performance back where it was.

But the very website you are using to claim Nvidia are way ahead in the actual game in fact showed a stock Fury (non-X) beating the overclocked 980 Ti.


http://www.pcgameshardware.de/Hitman-Spiel-6333/Specials/DirectX-12-Benchmark-Test-1188758/


[Image: PCGH Hitman DX12 benchmark chart]


That shows a stock Titan X getting 30.8fps at 4K, 49.9 at 1440p and 67.1 at 1080p; a 390X beats it by between 5 and 12% or so depending on resolution.

The 980 Ti overclocked to 1,380MHz: 36.7 at 4K, 59.2 at 1440p, 76.2 at 1080p.

The Fury (non-X) at stock: 38.1 at 4K (faster), 58.6 at 1440p (just behind), 75.4 at 1080p (just behind).

So once again, the very website you are using says a Fury (non-X) is beating the overclocked 980 Ti at 4K, and the Titan X without a 20% overclock is, oh, would you look at that, about 20% behind the 980 Ti at 4K/1440p. The gap from stock Fury to stock Titan X is the same gap every single other review shows, canned benchmark or not.
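
Running the fps figures quoted above through quick arithmetic gives the percentages being argued about:

```python
# Deltas computed from the fps figures quoted above.
titan_x = {"4K": 30.8, "1440p": 49.9, "1080p": 67.1}  # stock Titan X
ti_oc   = {"4K": 36.7, "1440p": 59.2, "1080p": 76.2}  # 980 Ti @ 1,380MHz
fury    = {"4K": 38.1, "1440p": 58.6, "1080p": 75.4}  # stock Fury (non-X)

for res in titan_x:
    oc_lead   = (ti_oc[res] / titan_x[res] - 1) * 100  # OC 980 Ti vs Titan X
    fury_lead = (fury[res] / ti_oc[res] - 1) * 100     # Fury vs OC 980 Ti
    print(f"{res}: OC 980 Ti +{oc_lead:.0f}% vs Titan X, "
          f"Fury {fury_lead:+.1f}% vs OC 980 Ti")
# 4K: +19% / +3.8%; 1440p: +19% / -1.0%; 1080p: +14% / -1.0%
```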

EDIT: Furthermore, you bring this up and hint at conspiracy theories and canned benchmarks doing AMD a favour, but you outright refuse to even acknowledge that a big overclock versus stock cards would give different results. Look at the stock Titan X versus an overclocked 980 Ti: the gap is huge for cards that often perform very similarly. If AMD have a 15-20% lead on one site with all-reference cards, and then you look up another site that has Nvidia cards overclocked by 20% and the gap is much closer, you attribute this solely to it not being a canned benchmark and absolutely refuse to acknowledge that overclocked cards make a huge difference.

It's entirely valid to say the Fury X doesn't overclock as far and this might be what you get, though if you overclocked a Fury X it would close the gap, and the 390X, which overclocks better, would still do well in its price bracket; but the fundamental point stands. AMD cards are performing excellently according to the very website you are insisting says Nvidia are blowing AMD away in this game, with a 390X beating a Titan X comfortably in a non-canned benchmark.

Oh, I also forgot to say that the plain 390 beat the Titan X as well at 4K/1440p, and the 980 overclocked to 1,316MHz is being beaten by a lowly 290X.
 

That is where you are wrong.

A die-shrunk Maxwell would have slower clocks and higher power draw.

You are saying "IF" Maxwell could run at Pascal speeds, but by the very design of the architecture it can't, so anything drawn from that caveat is entirely meaningless. What if Maxwell could run at 160GHz? Then it would be 100x faster than Pascal. But guess what, it can't.

A Maxwell die shrink to TSMC 16nm would have roughly 25% lower clocks and use roughly 20% more power than Pascal.
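
Taking those rough estimates at face value, and assuming equal IPC so that performance scales with clock:

```python
# Back-of-envelope using the estimates above, normalised to Pascal = 1.0.
pascal_clock, pascal_power = 1.00, 1.00
shrink_clock, shrink_power = 0.75, 1.20  # ~25% lower clocks, ~20% more power

perf_gap = pascal_clock / shrink_clock                            # ~1.33x
ppw_gap = (pascal_clock / pascal_power) / (shrink_clock / shrink_power)
print(f"Pascal ~{perf_gap:.2f}x the performance, ~{ppw_gap:.2f}x the perf/W")
# Pascal ~1.33x the performance, ~1.60x the perf/W
```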
 