
** The Official Nvidia GeForce 'Pascal' Thread - for general gossip and discussions **

That is where you are wrong.

A die-shrunk Maxwell would have slower clocks and higher power draw.

You are saying "IF" Maxwell could run at Pascal speeds, but by the very design of the architecture it can't, so anything drawn from that caveat is entirely meaningless. What if Maxwell could run at 160GHz? Then it would be 100x faster than Pascal. But guess what: it can't.

A Maxwell die shrink to TSMC 16nm would have roughly 25% lower clocks and use roughly 20% more power than Pascal.

:confused: What are you smoking? :confused:
 

Literally the first link in Google when you type HDMI 2.0b shows you up... Yet you went digging for something for your FUD campaign. The best you could come up with was a years-old article, written in French, about a completely different piece of hardware. I've no wish to sue you, or cause you any harm, but for once it would behove you to admit your mistakes and apologise to those whom you have aggressively belittled for having the gall to disagree with your opinion.

Anyway, let's post their far more up-to-date numbers, based on a test that does not use the canned test and/or scenes from the developer, who was sponsored and assisted by AMD.
[Image: NMihEA3.png]

So on one hand we have a set of results taken from when the game first launched, using a test run and scenes specifically chosen by the developer, who has had sponsorship and technical assistance from AMD. On the other, a set of results taken far more recently, with drivers that have had time to be optimised, and from a scene not chosen by the developer for specific reasons.

I'll let others make of those as they will, rather than aggressively trying to lead them to a specific conclusion.
 
How would a die shrink result in slower clocks and higher power?????

If a chip has been designed around the current and leakage properties of a specific process, moving to a smaller one, where leakage is a bigger problem, could potentially cause problems.
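To make that concrete, here is a toy model of the argument (a minimal Python sketch with entirely made-up numbers, not real process data) showing how a naive shrink can lose more on leakage than it gains on switching power:

```python
# Toy model: total chip power = dynamic (switching) power + static (leakage) power.
# Dynamic power scales roughly with C_eff * V^2 * f; leakage is roughly V * I_leak.
# All figures below are invented purely for illustration.

def chip_power(c_eff, v, f_hz, i_leak):
    """Return (dynamic_w, leakage_w, total_w) for a given operating point."""
    dynamic = c_eff * v ** 2 * f_hz   # activity factor folded into c_eff
    leakage = v * i_leak
    return dynamic, leakage, dynamic + leakage

# Design tuned for an older, lower-leakage process:
dyn, leak, total = chip_power(c_eff=120e-9, v=1.1, f_hz=1.1e9, i_leak=20)
print(f"old node:     dynamic {dyn:.0f}W, leakage {leak:.0f}W, total {total:.0f}W")

# Naive shrink: switched capacitance drops ~30%, but leakage current
# rises sharply on a smaller node the design was never tuned for:
dyn, leak, total = chip_power(c_eff=84e-9, v=1.1, f_hz=1.1e9, i_leak=70)
print(f"naive shrink: dynamic {dyn:.0f}W, leakage {leak:.0f}W, total {total:.0f}W")
```

With these invented figures the dynamic power drops, but the leakage rises by more, so total power goes up; that is the shape of the problem being described.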
 

The very website you're talking about used in-game numbers, and not the benchmark, to show the Fury pro beat the overclocked 980ti... [H] also, using in-game numbers and not the benchmark, shows a large win for AMD. The non-overclocked Titan X in pcgh's numbers is 20% down on a Fury pro. All these numbers agree with every site I've seen, whether using the canned benchmark or in-game numbers.

What you're linking to is a review of episode 2, and they benchmarked, I presume, one of the new levels. The numbers aren't directly comparable.

However, your point is that AMD only led because of canned benchmarks, yet the very site you're using to 'prove' this proved exactly the opposite. In the new performance review the overclocked 980ti beat a stock Fury X; if the 980ti hadn't been overclocked it wouldn't have won anyway.

As for aggressive: you're ignoring logic. You're now screaming about canned benchmarks in two reviews that state they didn't use a canned benchmark, and the site you keep linking to showed what they benchmarked in the game: a single solitary scene with no big explosions or anything particularly demanding going on, just a crowd of people.

You're running around attacking everyone who points out that the numbers aren't showing what you want them to, and that other non-canned numbers agree with the actual benchmark. You're also point-blank refusing to accept that overclocking could remotely have anything to do with the change in numbers, which is simply bizarre.

EDIT: the benchmarks you show also say, very specifically:

Results not necessarily comparable with earlier reviews

Because it's a later, updated version of the game. Non-canned and canned benchmark numbers showed the same thing at launch: a large AMD lead, with the Fury pro beating an overclocked 980ti. In a later version of the game, benchmarking a different level, they got different results... shock horror. Yet in that same non-canned benchmark the plain old 390 destroys the 970 in DX11 at 4k and beats the 980 as well, both of the Nvidia cards again being significantly overclocked. If anything it might point to the latest update not having a great driver for Fury, but the 390 is still doing very strongly compared to the 970/980.
 
Well, yes, if the 1070 is a bad clocker. The overclocked 970 was > a stock 980, right? Quite a bit of stuff is disabled on the 1070, so we'll see.

I am guessing that a 1070, a second-hand 980ti, or indeed a 1080 will be an improvement on my 760.

A bit more than a stock 980

The 980 Graphics Score in that chart is 13148.

My score here is 13906. That's +6% vs the stock 980: http://www.3dmark.com/fs/8573432 (my 970, BTW, is on a 24/7 overclock in that run).

The stock 1070 @ 17557 would be +26% vs my overclocked 970.
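For anyone checking the maths, those percentages follow directly from the Graphics Scores quoted above (a quick Python sanity check):

```python
# Sanity check of the percentage deltas between the quoted Graphics Scores.
stock_980 = 13148
oc_970 = 13906      # the 24/7-overclocked 970 run linked above
stock_1070 = 17557

print(f"OC 970 vs stock 980: +{(oc_970 / stock_980 - 1) * 100:.1f}%")    # ~ +5.8%
print(f"1070 vs OC 970:      +{(stock_1070 / oc_970 - 1) * 100:.1f}%")   # ~ +26.3%
```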
 
more babble and avoiding direct numbers

I'm linking to the ep2 review as it does not use the test based on scenes hand-chosen by the developer, who has worked closely with AMD on the game. In addition, drivers from both IHVs have had a month or so to be tweaked and optimised. It's funny, as usually it is the AMD peeps stating that games should be retested once drivers have had time to be worked on; in this case, not a word. :)

So what we have is the same game, showing different results depending on what is tested. I'll leave it up to individuals to decide how they interpret it all.
 
HDMI was purely because there is nearly no official information about HDMI 2.0b, so sue me for using that.

Yeah, so difficult to check whether HDMI 2.0b is 10Gbps or 18Gbps.

http://www.hdmi.org/manufacturer/hdmi_2_0/

What are the key advanced features enabled by HDMI 2.0b?

Enables transmission of High Dynamic Range (HDR) video
Bandwidth up to 18Gbps
4K@50/60 (2160p), which is 4 times the clarity of 1080p/60 video resolution

First link on Google when searching for "hdmi 2.0b".

For someone as intelligent as yourself, how you managed to think that an article from 2013, about a Sony projector getting a firmware update to support HDMI 2.0 (2.0a wasn't even announced until 2015), had any bearing on the 2.0b specs is mind-boggling.
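For the curious, the 18Gbps figure is simply three TMDS channels at 6Gbps each, and a back-of-envelope calculation (assuming the standard CTA-861 timing for 2160p60) shows why 4K@60 8-bit RGB fits:

```python
# Back-of-envelope: why 4K@60 8-bit RGB fits in HDMI 2.0/2.0b's 18 Gbps.
# The pixel clock includes blanking intervals (CTA-861 timing for 2160p60).
PIXEL_CLOCK = 594e6   # Hz
CHANNELS = 3          # TMDS data channels
WIRE_BITS = 10        # 8b/10b encoding: 10 bits on the wire per 8 data bits

on_wire = PIXEL_CLOCK * CHANNELS * WIRE_BITS    # total signalling rate
payload = PIXEL_CLOCK * CHANNELS * 8            # usable video data

print(f"on-wire rate: {on_wire / 1e9:.2f} Gbps (link maximum is 18 Gbps)")
print(f"payload rate: {payload / 1e9:.2f} Gbps for 24-bit RGB 4K@60")
```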
 

Lower clocks and higher power draw compared to Pascal.

A GM200 (980ti) with a straight die shrink might run at about 1350-1400MHz and draw the same 225W.

Nvidia engineered Pascal to increase clock speeds to 1660MHz AND simultaneously reduce power by 50W.



If you are going to continue to argue then please learn about clock frequency and critical path optimization so we can have a debate based on knowledge and facts:
http://www.ece.ncsu.edu/asic/2012/docs/timing
http://www-inst.eecs.berkeley.edu/~cs150/sp13/agenda/lec/lec17-timing2.pdf
https://embedded.eecs.berkeley.edu/eecsx44/lectures/Spring2013/timingAnalysis.pdf
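The short version of those links: the maximum clock is bounded by the slowest (critical) path through the logic, f_max ≈ 1/t_critical. A small illustration, using hypothetical path delays chosen to land near the clocks mentioned above:

```python
# f_max is limited by the critical (slowest) path: f_max ~= 1 / t_critical.
# The delays below are hypothetical, picked to match the clocks discussed above.

def f_max_mhz(t_critical_ns):
    """Maximum clock in MHz for a given critical path delay in nanoseconds."""
    return 1000.0 / t_critical_ns

print(f"0.74 ns critical path -> {f_max_mhz(0.74):.0f} MHz")  # ~1350 MHz ('shrunk Maxwell')
print(f"0.60 ns critical path -> {f_max_mhz(0.60):.0f} MHz")  # ~1670 MHz (Pascal-like)
```

This is why critical-path optimisation, not just the process node, sets the achievable clock.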
 

Sorry, I am not arguing, I am just asking a simple question. Clearly I have mistakenly believed that, because every die shrink has resulted in faster clocks and lower power, it was down to the die shrink. Clearly you are telling me otherwise, so I was just asking why.

I don't have time to read detailed white papers, so a quick explanation as to why Wikipedia is wrong will suffice, please:

Die shrinks are beneficial to end-users as shrinking a die reduces the current used by each transistor switching on or off in semiconductor devices while maintaining the same clock frequency of a chip, making a product with less power consumption (and thus less heat production), increased clock rate headroom, and lower prices.
 
If you run the 980 and 1070 at the same clock, I bet the performance is the same.

So what? It is meaningless.

If you drive a Ferrari Enzo at 60MPH and a Nissan Micra at 60MPH, does that mean the Nissan is the same car as the Ferrari and could keep up with it in a race?

It is an absolutely pointless thing to say.

Plus, I would say the opposite could be true: Pascal probably has a lower IPC than Maxwell. When the longest instruction takes X time units, you can either try to optimise those instruction paths, or you simply divide that instruction up into multiple clock steps, reducing IPC. If instructions per second increase despite requiring more clock ticks, then there is an overall improvement.
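To illustrate that last point (with purely made-up figures, not measured Maxwell/Pascal numbers): throughput is IPC × clock, so a design can lose IPC and still come out ahead if the clock gain is bigger:

```python
# Throughput = IPC * clock. A lower-IPC design still wins if the clock rises more.
# IPC and clock figures are illustrative only, not real Maxwell/Pascal data.

def mips(ipc, clock_mhz):
    """Millions of instructions per second."""
    return ipc * clock_mhz

higher_ipc = mips(ipc=1.00, clock_mhz=1400)  # long instruction done in one slow cycle
lower_ipc = mips(ipc=0.90, clock_mhz=1700)   # same work split across more, faster cycles

print(f"higher-IPC design: {higher_ipc:.0f} MIPS")
print(f"lower-IPC design:  {lower_ipc:.0f} MIPS (faster overall)")
```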
 

I never said a die shrink doesn't increase clock speed or reduce power usage.

The increased clock speed and reduced power consumption in Pascal relative to Maxwell come from two factors:
1) A die shrink from 28nm down to 16nm FF+.
2) Architectural changes and optimisations to reduce critical path lengths and improve thermal properties.


The sum total of the increased clock speed and reduced power of Pascal far exceeds what is possible purely through the die shrink; the performance delta is due to architectural improvements.


Improvements don't have to increase IPC; they only have to increase instructions per second. One way to do that is to reduce the critical path length of the longest instructions, allowing faster clock speeds and better performance.
 

Second, I linked to the same damn website you are using with non-canned benchmarks, pcgameshardware or whatever it is. The review of effectively episode 1 of Hitman showed a Fury non-Pro beating a 20% overclocked 980ti in DX12... etc etc etc.

What relevance does this have to Pascal? I do not have a problem with it, but why in this thread, as opposed to a Fury X vs 980ti thread or similar?
 
Save your money on the FE edition, guys! I just got this message from KitGuru about the custom cards for the 1080; surprised they replied to my personal message. Here you have it (y). With that said, I'd expect custom cards within the first week of June :D
[Image: Cug0Or6.png]
 
The make or break for the 1080, in my opinion, is 1) how good the AIB cards are, and 2) how much they cost.

1080 a "midrange" or "high end" card?

Mid : 314nm, 256bit bus, 64 rops

Mid / high : 2560 cores is more than normal for midrange, 8Gb memory

High : GDDR5x (although 256bit), MSRP

So overall it is a mix of "midrange" and "high end", the 1070 is "midrange" the 1080 is "mid/high end" and the 1080ti and titan are "high end".
 
Get the 1070 if needs be. That way you will keep resale value for when the 1080ti hits.

The 980ti is dead tech now and it's long gone. Move forward, mate.

Probably, yes, given the driver "issues" with old cards, plus the improvement with DX12, the better memory compression, etc. Hopefully an OC 1070 will be as good as an OC 980ti at 3440x1440, but I am not sure about that. I want a 1080 though; that is the whole reason my 980ti is now gone, but the 1070 is a backup plan, as I am not happy with £630.
 