Possible Radeon 390X / 390 and 380X Spec / Benchmark (do not hotlink images!!!!!!)

DX12 has as many facts known to the public as Fury X does :D

Nice link back to the furry one.

So taking the Titan X as 100% and the 980 Ti as 96%, where do you reckon the furry one X will be?

I reckon 91% and the pro version at 87%.

Get your bets in now... :)
 
Out of the box I reckon it'll give the Titan X a good run, but then Nvidia will unleash the "performance" driver that the Kepler "fixes" have been pulled from and edge ahead again.
 
With the size of that thing, it's what, 550-600 mm²? And with a CLC cooler. I expect nothing less than reference TX spankage. The only thing that might be worse than the TX is power consumption.
 
humbug, do you think that HBM is the sort of thing that developers will be able to code to better utilise, or do you reckon it will be down to the driver development teams?

Could this lead to a situation where the older tech is no longer optimised for, just like the move from VLIW4 to GCN and Nvidia's pre-Fermi architectures? I suppose it's similar to the way tessellation is being used a lot more and the older tech that doesn't tessellate quite so well is lagging behind.

I think buffer resourcing is something that can be improved by both developers and drivers.

I think we have got to the point now where better-looking games are no longer being pursued by adding more and making it bigger. I think everyone understands that mindlessly chewing up resources is no longer an option; instead, efficiency is the goal now.

Lossless colour compression and texture compression techniques are already being done at the engine level. Anti-aliasing by massively oversampling textures and polys is a ham-fisted and primitive way to reduce jaggies, and it has enormous resource costs.
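For anyone curious what lossless colour compression means in practice, here is a minimal toy sketch of the delta-encoding idea. It's purely illustrative: the 8-pixel block size, flag byte and 4-bit packing are made up for the example, not any engine's or GPU vendor's real format.

```cpp
// Toy sketch of lossless delta colour compression on an 8-pixel run of one
// channel. Illustrative only -- not a real engine or hardware format.
#include <cstdint>
#include <vector>

std::vector<uint8_t> compressBlock(const uint8_t px[8])
{
    // Check whether every pixel-to-pixel delta fits in a signed 4-bit value.
    bool small = true;
    for (int i = 1; i < 8; ++i) {
        int d = int(px[i]) - int(px[i - 1]);
        if (d < -8 || d > 7) { small = false; break; }
    }

    std::vector<uint8_t> out;
    if (!small) {                        // incompressible: flag byte + raw pixels
        out.push_back(0);
        out.insert(out.end(), px, px + 8);
        return out;                      // 9 bytes -- slight inflation on noisy blocks
    }

    out.push_back(1);                    // flag: delta-encoded block
    out.push_back(px[0]);                // anchor pixel stored verbatim
    for (int i = 1; i < 8; i += 2) {     // pack two 4-bit deltas per byte
        int d0 = (int(px[i]) - int(px[i - 1])) & 0xF;
        int d1 = (i + 1 < 8) ? ((int(px[i + 1]) - int(px[i])) & 0xF) : 0;
        out.push_back(uint8_t((d0 << 4) | d1));
    }
    return out;                          // 6 bytes instead of 8 on smooth gradients
}
```

The point is simply that smooth regions (which dominate most frames) compress for free, so less bandwidth is spent shuffling pixels around.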

Take the grass on my test map: with 8x MSAA it would still look like something hand-drawn badly with a freshly sharpened pencil, a sea of jaggies; using FXAA to get rid of it would look like a Vaseline-smeared lens; and with SSAA it would look OK but be too heavy on resources.

As it is I'm using Temporal AA with projection matrix jittering (brilliant, CryTek, thank you): not a jaggie in sight, and it uses fewer resources than MSAA or even FXAA. I might make a screenshot of how it looks without that ^^^^ horrible.
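For the projection matrix jittering itself, here is a rough sketch of the usual approach, assuming a Halton(2,3) sample pattern, an 8-frame cycle, and a row-major projection matrix whose m[2][0]/m[2][1] entries translate clip-space x/y. This is only the general idea, not CryTek's actual code; the exact matrix indices and the sign of the y offset depend on your maths library and NDC convention.

```cpp
// Rough sketch of per-frame projection matrix jitter for temporal AA.
#include <cmath>

// Low-discrepancy Halton sequence: evenly spread sample points in [0, 1).
static float halton(int index, int base)
{
    float result = 0.0f, f = 1.0f;
    while (index > 0) {
        f /= float(base);
        result += f * float(index % base);
        index /= base;
    }
    return result;
}

// Sub-pixel offset (in clip-space units) to add to the projection matrix this
// frame, so each frame samples a slightly different point inside each pixel.
void taaJitter(int frameIndex, int width, int height, float& jitterX, float& jitterY)
{
    const int sample = (frameIndex % 8) + 1;       // skip index 0, which is (0, 0)
    const float hx = halton(sample, 2) - 0.5f;     // [-0.5, 0.5) of a pixel
    const float hy = halton(sample, 3) - 0.5f;
    jitterX = 2.0f * hx / float(width);            // one pixel == 2 / width in NDC
    jitterY = 2.0f * hy / float(height);           // flip sign if your NDC y points down
}

// Usage (convention-dependent): proj.m[2][0] += jitterX; proj.m[2][1] += jitterY;
// then blend the jittered frame with the reprojected history buffer to resolve.
```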

My point is that with a different, more intelligent approach, getting more from less is possible, and it's more possible than ever with DX12, Vulkan and Mantle. Getting away from 15-year-old development principles has unlocked the future of software and hardware.

In a long, rambling way to answer your question, I think both developers and vendors can and will make better use of the hardware.

I'm really excited about it; this is a great time to be an enthusiast here.
 
If we don't get Titan X-beating performance I think everyone will be disappointed, given the stick Nvidia got for the improvement the 980 offered over the previous gen. It doesn't seem like AMD went the efficiency route like Nvidia did, so I'm expecting a pretty huge increase over the 290X, or we may have a riot!

Presumably this is all already happening on the PS4 with its low-level API? So we can expect to see PS4 levels of quality in future PC games?
But I do wonder if, in this console-led world, we'll see much extra effort to add more to the PC versions, or if we'll get a lot of the same but with lower requirements?
 
I don't know, but I don't see why not.
 
Just popped back in here to quickly add this,

PowerColor Clarifies Details on AMD 390X Photos at Computex

The pictured card is NOT the new R9 390X, nor is it an official cooling design.

this is neither a finished new card nor the highly expected Radeon R9 390X card.

http://www.eteknix.com/powercolor-clarifies-details-on-amd-390x-photos-at-computex/

Too much moaning-minnie-like behaviour going on in this thread, so I'm swiftly exiting again until we get some solid info :D
 
X at 105%, Pro at 92%
 
I think most people are hoping it beats the Titan X. I would personally be content* if it hits 980 Ti performance at 500 quid, as I'll buy one and a FreeSync monitor, saving money in two places over the Nvidia equivalent. Depending on the wife, the difference might let me go 6-core on the CPU instead of 4-core.

* Obviously I'd be happier if the 4 GB model is 10% up on the 980 Ti and the 8 GB model destroys the Titan X, though I'll likely have bought a card before we see the 8 GB Fury.
 
I wouldn't argue with those predictions at all. From everything we know and don't know, and what AMD themselves have said, I don't think we are looking at a 980 Ti beater, let alone a TX beater. I do hope I'm wrong but I just can't see it.

So yeah, I will go 92% just to be kind :D
 
Everything that you see passes through the buffer.

The buffer holds texture preloads: as you turn a corner the objects are already there and visible, because the GPU has already rendered them and stored them in the buffer.

The bigger the buffer, the more of this information it can preload and store. If the buffer isn't big enough to store all the information for every possible way you could turn, it needs to flush the buffer to make room for the image coming into view. That causes latency, which manifests itself as a slowdown and lower FPS or, in some cases, a dramatic slowdown which you would see as a stutter.

The faster the memory architecture, the faster it can flush and preload, which results in higher FPS and less stuttering.
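To picture what that flushing looks like, here's a toy LRU-style texture buffer. It's purely illustrative, not how any real driver manages VRAM: the class, names and byte accounting are invented for the example, but the hit/miss/evict behaviour is the point.

```cpp
// Toy illustration of the flush-to-make-room behaviour described above.
#include <cstddef>
#include <list>
#include <string>
#include <unordered_map>
#include <utility>

class TextureBuffer {
public:
    explicit TextureBuffer(std::size_t capacityBytes) : capacity_(capacityBytes) {}

    // Request a texture for rendering. Returns true on a hit (already resident,
    // no stall). On a miss, least-recently-used textures are flushed until the
    // new one fits -- that flush-and-reupload is where the latency/stutter lives.
    bool request(const std::string& name, std::size_t sizeBytes)
    {
        auto it = lookup_.find(name);
        if (it != lookup_.end()) {
            lru_.splice(lru_.begin(), lru_, it->second);  // mark most recently used
            return true;
        }
        while (used_ + sizeBytes > capacity_ && !lru_.empty()) {
            used_ -= lru_.back().second;                  // flush the coldest texture
            lookup_.erase(lru_.back().first);
            lru_.pop_back();
        }
        lru_.emplace_front(name, sizeBytes);              // "upload" the new texture
        lookup_[name] = lru_.begin();
        used_ += sizeBytes;
        return false;
    }

private:
    using Entry = std::pair<std::string, std::size_t>;    // texture name, size in bytes
    std::size_t capacity_;
    std::size_t used_ = 0;
    std::list<Entry> lru_;                                 // most recently used at front
    std::unordered_map<std::string, std::list<Entry>::iterator> lookup_;
};

// A bigger buffer (capacityBytes) means request() hits more often and flushes
// less; faster memory just makes the misses cheaper.
```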

You forget that there are other things involved in all this (CPU, RAM, storage, PCI-E bus speed, etc.).

We can't know until the HBM cards are released, but I'm relatively certain 4 GB of HBM won't be a noticeable improvement over GDDR5 unless the R9 390X core is also a powerhouse.

The engine being greedy (as you have mentioned) is actually a good thing, as the buffer is readily available with zero stutter or latency when caching maps, textures, etc. It's probably a better bet to wait for 8 GB HBM, if it's not available on the first release, for a much, much smoother experience.
 
IMO Fury X needs to decisively beat the Titan X if AMD wants to become relevant again. 110% against the Titan X's 100%.

If the rumoured specs are true it should be; you're talking about a 250-watt card going up against a 300-watt card, after all, so why shouldn't it be faster? Just look at how much extra performance Nvidia got from reworking an already good architecture; after two years AMD must have had a similar breakthrough with GCN.
 