AMD R9 Fury X2 - Paper Launch This Month - Available Q1 2016 - But Could Face Potential Delays

Should a game developer put tessellation at the highest level that the 980 Ti can run, because tessellation is a DX standard and, according to your logic, all cards should support it just as well? If AMD cards aren't as fast at tessellation as Nvidia cards, does that mean AMD cards aren't DX compliant, or does it mean DX specifications are irrelevant to GPU optimization?

Every card today supports tessellation, unless you're talking about HairWorks, which was proven to have the tessellation whacked up WAY higher than needed.

Rubbish. Supporting a feature doesn't mean it is supported at high speed.

Either it's supported or it's not, none of this "high speed" BS. Only thing that changes is how optimised the card is for that specific feature. For instance, AMD's GCN architecture is far better at Async Shaders than Nvidia's Maxwell architecture and Nvidia's Maxwell architecture is better at tessellation. Does that mean AMD doesn't support tessellation? No. Does that mean Nvidia doesn't support Async Shaders? No.
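For what it's worth, this is roughly what "supported or not" looks like at the API level. A minimal D3D12 sketch, assuming you already have a created ID3D12Device and nothing from any particular game: the cap query gives a plain yes/no answer, while how fast the feature actually runs is something only profiling can tell you.

```cpp
// Minimal sketch, assuming a valid ID3D12Device* has already been created.
// A capability query like this is strictly yes/no; it says nothing about how
// fast the hardware runs the feature, which only profiling (e.g. GPU
// timestamp queries around the workload) can tell you.
#include <d3d12.h>

bool ReportFeatureCaps(ID3D12Device* device)
{
    D3D12_FEATURE_DATA_D3D12_OPTIONS opts = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS,
                                           &opts, sizeof(opts))))
        return false;

    // Binary "supported or not" answers, exactly as described above:
    bool rovs         = opts.ROVsSupported != FALSE;   // rasterizer-ordered views
    bool conservative = opts.ConservativeRasterizationTier
                        != D3D12_CONSERVATIVE_RASTERIZATION_TIER_NOT_SUPPORTED;

    // Tessellation has no cap bit at all: it is mandatory at feature level
    // 11_0 and above, so every D3D12-capable card "supports" it. How quickly
    // it tessellates is a benchmarking question, not an API one.
    return rovs || conservative;
}
```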
 
What CAT said: it's aimed at addressing the parallelisation deficit to some extent (Volta is the real deal for parity with GCN in this regard, or superior, NV hopes), async compute etc., low latency, general purpose.

The reason for slow growth is that there'll be limited volumes/yields for both Arctic Islands and Pascal next year ... new node, HBM2, and NVIDIA will be moving to a new architecture, interposers and stacked memory for the first time. Plus obviously they have no console presence, so any growth they have towards the future in terms of hardware features/APIs is purely on the strength of PC Pascal sales (a tiny fraction of GCN).
 
Anyway, I do wonder whether we will see any consumer 14nm/16nm cards by summer at this rate though!!

ARK needs it!! :p

Don't think it will make any difference to ARK. It's horribly optimised, and from what the devs have said they sound extremely unenthused by DX12 and far more enthusiastic about UE4 Vulkan support ... which won't happen soon, and I doubt NV (who sponsor the game) will like it, as they won't have any Vulkan GW libraries for even longer.
 
Precisely and completely incorrect... as usual. DX12 in no way causes devs to become more dependent on third-party libraries. Was that just an excuse to mention GameWorks and "dependent" in the same sentence? One of the fundamental driving features of DX12 and low-level APIs generally is precisely that it takes LESS assistance from hardware vendors to achieve what devs want to do: a slimmer driver and no black-box DX causing issues they can't identify themselves. Much of the difficulty in finding issues with games is that the dev has to talk to the hardware vendors and the MS/DX people, and they all have to work together to find some way to get it to work. With a slim driver, way more access and a low-level API, devs can directly find and solve problems themselves, with no waiting around for answers to e-mails.
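To make the "find and solve problems themselves" point concrete, here's a minimal sketch of turning on the D3D12 debug layer, the kind of self-service diagnostics being described. It assumes a Windows 10 SDK environment and is purely illustrative, not anyone's shipping code:

```cpp
// Minimal sketch, assuming the Windows 10 SDK: switching on the D3D12 debug
// layer locally, so validation messages land in the developer's own debug
// output instead of waiting on vendor / Microsoft e-mail round trips.
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

void EnableD3D12DebugLayer()
{
    ComPtr<ID3D12Debug> debug;
    // Must be called before the D3D12 device is created.
    if (SUCCEEDED(D3D12GetDebugInterface(IID_PPV_ARGS(&debug))))
        debug->EnableDebugLayer();
}
```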

This is actually where the smaller devs gain the most. Where, let's say, Rockstar is a huge, important dev and will get priority access from AMD/Nvidia and help from MS with DX problems, small indie Company B will send a request for help that MS, AMD and Nvidia don't have time for, yet can't find a solution on its own. With DX12 a smaller dev has far more ability to fix its own problems, with far better transparency and better tools.
Exactly right.
 
Waiting to see how this card fares and also if it's close to release of the next shrink how they compare.

I really want to upgrade from my 290 but I've just not seen anything I want to spend money on just yet, especially with a rumoured shrink coming.

Interested though to see this card, but also very wary of AMD's inability to support CrossFire.
 
Yup, someone kindly posted an interview with Oxide saying how dramatically easier multi-GPU is under DX12. So it will be pretty hard to debunk, because what I said was accurate and what he said was complete rubbish.

Literally every dev talking about DX12 has said multi-GPU will be dramatically easier. They have all said that drivers will be far LESS important under DX12 than DX11 for all usage, single or multi GPU. It's one of the fundamental reasons DX12 is what it is: a slim driver that actively makes things much easier for devs, big and small. It's not difficult to design a game to use two GPUs; they are effectively the same, send work here, send work there. The problem is DX11 doesn't make it transparent; it's a massive black box of gibberish devs have to work through. The coding itself isn't hard, but getting code to work through DX11 is incredibly difficult, and getting help from MS is a nightmare because there are so many problems and bugs to work around.
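As a rough illustration of what "explicit" multi-GPU means in practice, here's a minimal DX12 sketch that enumerates every adapter and creates a device on each, so the application itself decides where work goes rather than a driver CrossFire/SLI profile. It assumes the Windows 10 SDK, trims error handling, and will also pick up software adapters like WARP:

```cpp
// Minimal sketch, assuming the Windows 10 SDK, with error handling trimmed.
// The app enumerates every GPU itself and creates a device per adapter, then
// decides what work goes where ("send work here, send work there") instead of
// relying on a driver profile. Note it will also pick up the software (WARP)
// adapter.
#include <d3d12.h>
#include <dxgi1_4.h>
#include <wrl/client.h>
#include <vector>

using Microsoft::WRL::ComPtr;

std::vector<ComPtr<ID3D12Device>> CreateDeviceForEachGpu()
{
    std::vector<ComPtr<ID3D12Device>> devices;

    ComPtr<IDXGIFactory4> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return devices;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0; factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(), D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device);   // each device gets its own queues and command lists
    }
    return devices;
}
```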
 

Yep, Mantle rocks, with DX12 and Vulkan as a platform, all thanks to AMD :D
Merry Christmas
 
What's the point??

Unless Nvidia and AMD only have retail 14nm/16nm-based cards at the end of next year (which could happen), it seems a bit of a late launch.

The Arctic Islands family of GPUs has been subject to numerous leaks in the past, unfortunately none of which had any information about when they were actually going to be released. There hasn’t been any reliable information about Arctic Islands’ release timeframe, that is until today. We have confirmed that the company is planning to introduce its next generation 14nm / 16nm family of graphics cards throughout the summer of 2016 and into the back to school season.

http://wccftech.com/amd-1416nm-arctic-islands-launching-summer-2016/
 
Well, it's Winter, and without the GPU fans spinning, it'll warm your house twice as fast. :D

Without the fans spinning it'll be worse at dissipating heat into your house, so a bit slower to begin with. Then it'll hit its thermal ceiling and throttle, so overall it heats your house much more slowly. If you're going to make jokes about thermodynamics, at least get it the right way round! :D
 
Which is set to be AMD’s most powerful and most advanced graphics chip to date. Greenland is rumored to feature up to 18 billion transistors and 32GB of second generation HBM with 1TB/s of memory bandwidth. Making it the largest ever graphics engine conceived by the company, at approximately twice the transistor count of AMD’s current flagship code named “Fiji” powering the Fury series of Radeon graphics cards.
Will it still have 64 ROPs though?
The transistor count increase going from Hawaii to Fiji was substantial, yet the real-world performance gain was not, and everyone blamed it on the ROP count not changing. So their throwing big transistor-count numbers about means nothing if there's a weakest-link issue again.

Read more: http://wccftech.com/amd-greenland-gpus-feature-hbm2-14nm-coming-2016/#ixzz3vAZmAwyu
 

I still don't think it's just the ROP count that held Fury back; certainly at high resolution Fury performs close to where it should be. The weak point of Fury is still at 1080p and 1440p. Essentially Fury is just two Tongas on one die.
As I predicted before Fury was released, the transistor density would reduce clocking headroom, so I hoped AMD would have good front-end efficiency to counteract the current loading and power consumption of the high CU count.

I've always thought that Fiji has a scheduling/queuing problem: it's inefficient at providing the engines with throughput from the global data share, and that shows at lower resolutions. Everyone says it's AMD's DirectX 11 drivers, but if you look at the architecture, the front end is wide and built for parallel compute, which is where DirectX 12 comes in. Also, Fiji only has 4 rasterisers (1 per shader engine), while GM200 has 6 (1 per GPC). The engines are maybe too big, both for power gating and for latency and throughput. Maybe for better yields and power consumption it would be better to have 6-8 smaller engines with a better-balanced scheduler feeding them through the 8 ACE units.
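On the ACE/async-compute angle mentioned above, this is the D3D12-side mechanism those units service: the application creates a separate compute queue next to the graphics queue. A minimal sketch, assuming an already-created ID3D12Device and nothing vendor-specific:

```cpp
// Minimal sketch, assuming a valid ID3D12Device* has already been created.
// A dedicated compute queue is what GCN's ACE units are built to service,
// letting compute work overlap the graphics (direct) queue.
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

ComPtr<ID3D12CommandQueue> CreateAsyncComputeQueue(ID3D12Device* device)
{
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type  = D3D12_COMMAND_LIST_TYPE_COMPUTE;   // compute-only queue
    desc.Flags = D3D12_COMMAND_QUEUE_FLAG_NONE;

    ComPtr<ID3D12CommandQueue> queue;
    device->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
    return queue;   // submit compute work here while graphics runs on the direct queue
}
```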
 