
AMD Radeon R9 290X with Hawaii GPU pictured, has 512-bit bus and 4GB memory

4K won't be a viable option even when we are on 20nm; it will only become viable on the next shrink after 20nm.

From a display and/or technology maturity point of view, fairly likely - performance-wise it will depend a bit on how 20nm GPUs turn out.

If you think 20nm will produce a GPU more than twice as fast as an R9 290X - three, four times as fast - then I think you're crazy :p

Crysis 3 is 62 FPS.

It is viable now, a 20nm R9 390X or GTX Titan 2 will just be more viable.

Full-fat Maxwell is probably ~80% faster than the Titan (on 20nm) depending on final clock speeds, but nVidia seem to be trying to push that back in the hope of getting it onto 16nm (TSMC is already ramping up 16nm hybrids) or even 14nm. We'll probably see a mid-range part masquerading as high-end first, 40-50% slower than the full-fat part on 20nm, similar to what they did with the GTX 680.

EDIT: Talking traditional game rendering - games that are heavy on next generation compute/complex shader effects will be a different story.
 
If you think 20nm will produce a GPU more than twice as fast as an R9 290X - three, four times as fast - then I think you're crazy :p

Crysis 3 is 62 FPS.

It is viable now, a 20nm R9 390X or GTX Titan 2 will just be more viable.

You will be lucky if you see a GPU twice as fast as an R9 290X or Titan, and that will only happen in the last days of 20nm.

As for Crysis 3, to get over 60fps @1600p maxed you need 3 Titans, so forget 4K.

No, it is not viable unless you want to turn the settings down, which defeats the point of 4K; you may as well get a 1600p monitor if you are going to do that.
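For rough context on why the jump to 4K is so demanding, here is a back-of-envelope sketch. It assumes frame rate scales inversely with pixel count, which real games only approximate; the 62 FPS figure is the Crysis 3 number quoted above, everything else is just arithmetic.

```cpp
#include <cstdio>

int main() {
    // Back-of-envelope only: assumes frame rate scales inversely with pixel
    // count, which real games only roughly follow.
    const double px_1600p  = 2560.0 * 1600.0;  // ~4.1 million pixels
    const double px_4k     = 3840.0 * 2160.0;  // ~8.3 million pixels
    const double fps_1600p = 62.0;             // the Crysis 3 figure quoted above

    const double pixel_ratio = px_4k / px_1600p;        // ~2.0x the pixels
    const double est_fps_4k  = fps_1600p / pixel_ratio; // ~30 FPS at the same settings

    std::printf("4K has %.2fx the pixels of 2560x1600\n", pixel_ratio);
    std::printf("Estimated 4K frame rate from %.0f FPS at 1600p: ~%.0f FPS\n",
                fps_1600p, est_fps_4k);
    return 0;
}
```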
 
This was the point/question I was trying to raise last night, rather unsuccessfully.

From my understanding, the only way they can claim it reduces lag is for those who currently game with V-sync ON.

If you game with V-sync OFF, G-sync will increase lag (but by less than V-sync does), whilst eliminating screen tear.
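A toy latency model makes that distinction clearer. The numbers below (a 60 Hz panel, a 9 ms render time, 3 ms of the previous scan-out left to finish) are assumptions for illustration, not measurements of real G-sync hardware.

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // Toy model, not a measurement: 60 Hz panel, an assumed 9 ms render time,
    // and an assumed 3 ms of the previous scan-out still in flight.
    const double refresh_ms      = 1000.0 / 60.0;  // ~16.7 ms between fixed refreshes
    const double render_ms       = 9.0;
    const double scanout_left_ms = 3.0;

    // V-sync OFF: the frame is shown as soon as it is ready (with tearing).
    const double vsync_off_ms = render_ms;

    // V-sync ON: the finished frame waits for the next fixed refresh boundary.
    const double vsync_on_ms = std::ceil(render_ms / refresh_ms) * refresh_ms;

    // G-sync-style variable refresh: no tearing, so the panel finishes the
    // scan-out already in progress, but it never holds a ready frame for a
    // whole fixed refresh interval.
    const double gsync_ms = render_ms + scanout_left_ms;

    std::printf("V-sync off: %.1f ms\n", vsync_off_ms);
    std::printf("V-sync on : %.1f ms\n", vsync_on_ms);
    std::printf("G-sync    : %.1f ms\n", gsync_ms);
    return 0;
}
```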
 
I'm sure my 780 can get 60 FPS on the lowest setting.

It's contextual - it's no good saying that card X gets Y FPS without providing the context, unless you've got some kind of agenda.

Ah now it makes sense.

I didn't, it's not, you're trying to provoke me again... it's not working.

 
That scaling at 4K is impressive but, much like Kaap, I want to see the detail settings stretched. There is no way a pair of 290Xs is capable of anywhere near a solid 60 frames in Crysis 3 with Ultra settings at 4K.
 
I'm just going off what the industry experts have said. I'd rather listen to them for the moment.

What, like Tim Sweeney telling us for the last 10-15 years how hardware rendering is going to die out and everything is going to be done on the CPU in software? Or Carmack telling us how megatexture is the future, whilst making great engines and poor games? There's no doubt that these guys are great and have made significant impacts on the field, but that doesn't stop them being wrong about stuff, and their opinions do conflict with other experts in the field.

Obviously, if you're Nvidia staging an Nvidia cheerleading event, you're only going to invite the high-profile devs who are going to support your products, not the devs who won't.

So while the likes of Sweeney may not like various trends in the industry, if Mantle gains traction, they will support it (or be paid not to) because that gives them the most monetary benefit.
 
Nvidia has been using proprietary APIs to gain an advantage over AMD for many years, and this will only ramp up now. This is how it's going to be; it's not going to be long before developers will have to develop a game twice, once for each brand of GPU.

Developers have been writing multiple paths for performance reasons for years, even with DX. Mantle will be no different in this respect, but in return for the extra work of building the subsystem to circumvent DX, you get low-level access and the potential for much higher performance.

Of course no one is obliged to use Mantle, and any developer can continue to use DX instead.
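As a minimal sketch of what "multiple paths behind one engine" can look like, the stub below keeps a DX path and a Mantle path behind a single interface. None of the calls are real Direct3D or Mantle API calls - it only shows the structure a developer would have to build and maintain.

```cpp
#include <cstdio>
#include <memory>
#include <string>

// Illustrative sketch only: both backends are stubs, not real API calls.
class RenderBackend {
public:
    virtual ~RenderBackend() = default;
    virtual void drawFrame() = 0;
};

class D3D11Backend : public RenderBackend {
public:
    void drawFrame() override {
        // Real code would issue Direct3D calls here; the driver validates and
        // translates them, which is where the API overhead lives.
        std::printf("drawing via the Direct3D path\n");
    }
};

class MantleBackend : public RenderBackend {
public:
    void drawFrame() override {
        // A lower-level path: the engine builds command buffers itself and
        // submits them with less driver work in between (hypothetical stub).
        std::printf("drawing via the Mantle path\n");
    }
};

std::unique_ptr<RenderBackend> createBackend(const std::string& name) {
    if (name == "mantle")
        return std::make_unique<MantleBackend>();
    return std::make_unique<D3D11Backend>();  // default / fallback path
}

int main() {
    auto renderer = createBackend("mantle");  // chosen from config or hardware detection
    renderer->drawFrame();
    return 0;
}
```

The extra cost the post describes is everything behind the MantleBackend stub: the engine has to duplicate resource management and submission logic that DX would otherwise do for it.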
 
Developers have been writing multiple paths for performance reasons for years, even with DX. Mantle will be no different in this respect, but in return for the extra work of building the subsystem to circumvent DX, you get low-level access and the potential for much higher performance.

Of course no one is obliged to use Mantle, and any developer can continue to use DX instead.

Fair point :)
 
Developers have been writing multiple paths for performance reasons for years, even with DX. Mantle will be no different in this respect, but in return for the extra work of building the subsystem to circumvent DX, you get low-level access and the potential for much higher performance.

Of course no one is obliged to use Mantle, and any developer can continue to use DX instead.

But writing an engine to use Mantle as well as DirectX is a little different from writing different paths in the same API, or using NVAPI-type APIs to add additional features.
Don't forget that developers currently have an alternative API that can run on multiple platforms and, I believe, offers performance boosts over DirectX. I don't believe many developers chose to write an alternative OpenGL path though (or we'd likely have a much larger number of games ported to Linux).

Not sure why Mantle would be much different, unless AMD start paying companies to support it. Of course, when Nvidia pay developers to make games run better on their hardware than on their rivals', it seems to annoy people. I wonder if it will be the same if AMD pay companies to use Mantle?
 
Developers have been writing multiple paths for performance reasons for years, even with DX. Mantle will be no different in this respect, but in return for the extra work of building the subsystem to circumvent DX, you get low-level access and the potential for slightly higher performance.

Of course no one is obliged to use Mantle, and any developer can continue to use DX instead.

Fixed

There won't be a massive boost.
DX overhead is not hard for any modern CPU to push through with pure grunt. Any gains from Mantle will be small. AMD are making it a big thing so people buy their cards.
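Whether the overhead matters really comes down to simple arithmetic on per-draw-call CPU cost. The figures in this sketch are assumptions chosen for illustration, not measured DX or Mantle numbers.

```cpp
#include <cstdio>

int main() {
    // Illustrative arithmetic only - the per-call costs below are assumptions,
    // not measured figures for DX or Mantle.
    const int    draw_calls         = 5000;  // a fairly draw-call-heavy frame
    const double dx_us_per_call     = 10.0;  // assumed CPU cost per call through the DX driver
    const double mantle_us_per_call = 2.0;   // assumed cost through a thinner API

    const double dx_ms     = draw_calls * dx_us_per_call / 1000.0;      // 50 ms of CPU work
    const double mantle_ms = draw_calls * mantle_us_per_call / 1000.0;  // 10 ms of CPU work

    std::printf("CPU submission time, DX-style API:     %.1f ms/frame\n", dx_ms);
    std::printf("CPU submission time, Mantle-style API: %.1f ms/frame\n", mantle_ms);
    // Whether this shows up in the final frame rate depends on whether the CPU
    // or the GPU is the bottleneck - which is exactly the disagreement above.
    return 0;
}
```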
 