
****Official OcUK Fury X Review Thread****

Hmmmmmm, I'm thinking Titan in Jan/Feb then the rest to follow tbh. They'll push HBM2.0 out on something asap to not look to be left behind :)

Considering AMD is a partner with Hynix on HBM, they will probably be out of the gate first with HBM2 anyway IMHO, and it's still more likely Nvidia will go midrange first for retail. They did that with the GTX 680 against the Titan and the GTX 980 against the Titan X.

Any initial large-die Pascal cards will be going to HPC - the GK110-based cards were made available to commercial customers nearly six months before retail customers.

If 28nm costs are anything to go by, it's likely to be more of the same with 16nm and 14nm, so the higher-margin customers will be served first, especially as yields will be another concern and companies like Apple, for example, will get priority over smaller players like AMD and Nvidia.

Plus, since the GM200 has only been out since March, I don't think Nvidia will be replacing it at retail within a year.
 
People are shouting about Pascal and that it will be 10 times faster than Maxwell because it says so on a slide. I highly doubt that the real-world performance of Pascal will be 10 times faster... it's just unthinkable. If we get twice the performance we should be happy, as that is still 60% more than what we are used to getting on a good release.
 
Hmmmmmm, I'm thinking Titan in Jan/Feb then the rest to follow tbh. They'll push HBM2.0 out on something asap to not look to be left behind :)

There will be a pipecleaner Pascal (like 750Ti was for Maxwell) in Q2 '16 (if not delayed). This will be months ahead of enthusiast, and there's unlikely to be a new Titan SKU in '16 at all. The Tesla cards will hit many months before big Pascal units are rebranded as Titan or 1080Ti.

NVIDIA is squarely aiming Pascal at people who still use CUDA and at the supercomputing markets. They've been losing design wins and market share to AMD there and in the pro-graphics workstation space (it doesn't help that they're permanently out of Apple) for a while now. They have to stop the rot before it's too late, as that's where they've traditionally made all their money. It'll be like Fermi... you'll barely even hear about consumer versions of the big chips at first.

People are shouting about Pascal and that it will be 10 times faster than Maxwell because it says so on a slide. I highly doubt that the real-world performance of Pascal will be 10 times faster... it's just unthinkable. If we get twice the performance we should be happy, as that is still 60% more than what we are used to getting on a good release.

All the Pascal slides have been re: the supercomputer / workstation market. Not a single thing has been said about consumer versions (mainly as they're an afterthought). GCN beats Maxwell by more than 10x in various compute tasks... hence why NVIDIA have embargoed comparative compute testing of their cards. Maxwell's compute performance is so bad (Kepler's also not great) that in some tasks 10x for Pascal is easily possible. The new architecture will be drastically better for Vulkan / DX12 than existing NVIDIA cards, but will probably offer minimal improvement on legacy APIs. Until Pascal hits, though, games built from the ground up to take advantage of Mantle / DX12 / Vulkan will see NVIDIA cards getting absolutely annihilated by anything with GCN.
 
I'm not saying which is better or more suitable. All I'm doing is trying to find some sort of explanation for the different results in reviews.

Some sites show what they did for testing and others don't. HardwareCanucks' review lists their testing methods and they have videos of it. It seems to be pretty short playthroughs of level segments, roughly 2 minutes' worth.

http://www.hardwarecanucks.com/foru...682-amd-r9-fury-x-review-fiji-arrives-13.html

This Digital Storm video compares single and SLI/CrossFire Titan X and Fury X; they seem about on par with each other. There do seem to be some canned benchmarks in the video, though.

 
People are shouting about Pascal and that it will be 10 times faster than Maxwell because it says so on a slide. I highly doubt that the real-world performance of Pascal will be 10 times faster... it's just unthinkable. If we get twice the performance we should be happy, as that is still 60% more than what we are used to getting on a good release.

Who thinks it will be 10 times faster?!
 
Considering AMD is a partner with Hynix on HBM, they will probably be out of the gate first with HBM2 anyway IMHO, and it's still more likely Nvidia will go midrange first for retail. They did that with the GTX 680 against the Titan and the GTX 980 against the Titan X.

Any initial large-die Pascal cards will be going to HPC - the GK110-based cards were made available to commercial customers nearly six months before retail customers.

If 28nm costs are anything to go by, it's likely to be more of the same with 16nm and 14nm, so the higher-margin customers will be served first, especially as yields will be another concern and companies like Apple, for example, will get priority over smaller players like AMD and Nvidia.

Plus, since the GM200 has only been out since March, I don't think Nvidia will be replacing it at retail within a year.

This is assuming AMD have the funds/budget to create another GPU with HBM2 in the reasonable future.

Remember how long AMD's top card was the 290X? Remember how many new GPUs NVIDIA managed to launch in that timeframe?

I wouldn't be surprised if FuryX/Fury/Nano/Fury 2X are all we have for another 2 years.
 
Who thinks it will be 10 times faster?!

http://www.theinquirer.net/inquirer...u-pascal-will-be-10-times-faster-than-maxwell

For starters... every time someone talks about 4K gaming, Pascal is mentioned as the holy grail due to sites like the one above.

Or how about Nvidia's own press conference:
[Image: Pascal-10x-Maxwell.png]
 
I'm not saying which is better or more suitable. All I'm doing is trying to find some sort of explanation for the different results in reviews.

Different test methodology is the most likely explanation. Tom's Hardware, for example, appear to run either built-in benchmarks or short (60-90 second) "playthroughs". HardOCP, on the other hand, run much longer sessions representing a more real-world approach.

Another reason will be the area of the game tested. Crysis 3, for example, has performance that varies drastically from level to level. Depending on the resource load, card A may perform better than card B in one section but fall behind in another.
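As a toy illustration of that point (the FPS numbers below are invented, not real benchmark data), a few lines of Python show how the "winner" can flip depending on which section of a game a reviewer happens to test:

```python
# Hypothetical FPS figures -- made up purely to illustrate why two
# reviews can crown different cards depending on the level tested.
fps = {
    "Card A": {"jungle": 62, "city": 48},
    "Card B": {"jungle": 55, "city": 53},
}

for section in ("jungle", "city"):
    # Pick the card with the higher average FPS in this section.
    winner = max(fps, key=lambda card: fps[card][section])
    print(f"{section}: {winner} is faster")
# Card A wins the jungle section, Card B wins the city section --
# same two cards, opposite conclusions.
```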
 
Who thinks it will be 10 times faster?!

Small Pascal barely faster than big Maxwell.

Big Pascal 1.7x faster than big Maxwell.

As a crude guide, if you want 10x faster you need 10x the transistors, and that is not going to happen on a single node shrink.
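A rough sanity check of that transistor argument (idealised scaling only; real density gains from 28nm to 16nm/14nm were well below this best case):

```python
def density_scaling(old_nm: float, new_nm: float) -> float:
    """Idealised transistor-density gain from a node shrink:
    density scales with the inverse square of the feature size."""
    return (old_nm / new_nm) ** 2

# Best-case gain from a single 28nm -> 16nm shrink:
print(round(density_scaling(28, 16), 2))  # prints 3.06 -- nowhere near 10x
```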
 
This is assuming AMD have the funds/budget to create another GPU with HBM2 in the reasonable future.

Remember how long AMD's top card was the 290X? Remember how many new GPUs NVIDIA managed to launch in that timeframe?

I wouldn't be surprised if FuryX/Fury/Nano/Fury 2X are all we have for another 2 years.

Does it really matter? The 290X was nipping at the 980 in some 4K scenarios even though it was brought to market to fight the 780 and the Titan. They didn't need to release a new card, as the price-to-performance ratio was still much better, with the exception of the 970 when it launched.
 
http://www.theinquirer.net/inquirer...u-pascal-will-be-10-times-faster-than-maxwell

For starters... every time someone talks about 4K gaming, Pascal is mentioned as the holy grail due to sites like the one above.

Or how about Nvidia's own press conference:
[Image: Pascal-10x-Maxwell.png]

Plenty of NVIDIA-trolls keep turning somersaults about how NVLink is going to be the next big thing and AMD have no answer for it. The FPS in games will be insane. Well that's great, I'm sure we'll see lots of games and video editing software ported to their IBM Power8 workstations. It ain't coming to PC.
 
Funny thing is, if you look back at the launch-day reviews for the 980 Ti and compare the benchmarks to launch-day Fury X:

The Fury X beats it in every single game, and that's on identical test hardware as well.

Source: princeoftrees on Reddit

PC Gamer: http://imgur.com/2JUTqE6
Tom's Hardware: http://imgur.com/jglgyVq

Based on that, and given the same timespan of ~1 month, the Fury X should pull ahead with each driver release.
 
Different test methodology is the most likely explanation. Tom's Hardware, for example, appear to run either built-in benchmarks or short (60-90 second) "playthroughs". HardOCP, on the other hand, run much longer sessions representing a more real-world approach.

Another reason will be the area of the game tested. Crysis 3, for example, has performance that varies drastically from level to level. Depending on the resource load, card A may perform better than card B in one section but fall behind in another.

Thanks. :)
 
This is assuming AMD have the funds/budget to create another GPU with HBM2 in the reasonable future.

Remember how long AMD's top card was the 290X? Remember how many new GPUs NVIDIA managed to launch in that timeframe?

I wouldn't be surprised if FuryX/Fury/Nano/Fury 2X are all we have for another 2 years.

Or maybe they are pushing all their R&D into getting onto the next process node as early as possible?

There is a lot of noise that they will be using GF instead of TSMC, meaning they might not be as supply-constrained, since TSMC has a lot of big players like Apple and Qualcomm using their fabs.

We need to be very careful about too much doom and gloom.

AMD had the 2900XT and HD3870, which were followed by the HD4870 and then the HD5870.

In many ways they were far less competitive with the 2900XT and HD3870 than they are now.
 
NVIDIA is squarely aiming Pascal at people who still use CUDA and supercomputing markets. They've been losing design wins and marketshare to AMD in this and the pro-graphics workstation space (doesn't help they're permanently out of Apple) for a while now. They have to stop the rot before it's too late. It's where they've traditionally made all their money. It'll be like Fermi ... you'll barely even hear about consumer version of the big chips at first.

Got anything to back that up?

Nvidia have made huge wins in recent times. They are the sole accelerator supplier for IBM's supercomputer designs, and the US government has recently announced that the three fastest supercomputers yet announced will be built around NV hardware. That's some big money and mindshare right there.

So AMD got Apple; that's peanuts, as we know Apple screws down their suppliers' margins, plus Mac Pro sales are hardly setting the world alight. It has not helped AMD's bottom line much, has it?

Edit: You just need to look at the Top500 to see what the people whose opinion counts really think about AMD's compute designs.
A total of 75 systems on the list are using accelerator/co-processor technology, up from 62 from November 2013. Fifty of these use NVIDIA chips, three use ATI Radeon, and there are now 25 systems with Intel MIC technology (Xeon Phi). Intel continues to provide the processors for the largest share (85.8 percent) of TOP500 systems.
http://www.top500.org/lists/2014/11/
 
This is assuming AMD have the funds/budget to create another GPU with HBM2 in the reasonable future.

Remember how long AMD's top card was the 290X? Remember how many new GPUs NVIDIA managed to launch in that timeframe?

I wouldn't be surprised if FuryX/Fury/Nano/Fury 2X are all we have for another 2 years.

Arctic Islands is aimed at the same launch window next year. Computex / E3. Whole range. Enthusiast / High End AI will make it to market before equivalent consumer versions of Pascal. This is certain. Only way it doesn't happen is if GF/Samsung's 14nmFF LP+ production lines burn down.

Does it really matter? The 290X was nipping at the 980 in some 4K scenarios even though it was brought to market to fight the 780 and the Titan. They didn't need to release a new card, as the price-to-performance ratio was still much better, with the exception of the 970 when it launched.

... and the slightly respun 290X (the 390X) with new drivers beats the 980 at 2560x1440 and annihilates it at 4K, as will the 290X with the new drivers... the 780 and Titan are nowhere close (though I suspect much of this is due to driver gimping by NVIDIA / over-tessellation in GameWorks).
 