
Possible Radeon 390X / 390 and 380X Spec / Benchmark (do not hotlink images!!!!!!)

Status
Not open for further replies.
Not going to happen lol

Going on the number of cores, the 390X is likely to be about 7% slower than a Titan X. I also don't think the 390X will be anywhere near as good at overclocking as a Titan X. The Titan X comes with a 1000MHz base clock and it is dead easy to get them to 1400MHz. :D

Another question that needs an answer is whether the 390X GPU core will get a bit toasty with the HBM packed around it. Yes, I know HBM uses less power, but GDDR5 sits a good distance from the core when it comes to cooling.

Wouldn't be so rash with the assumptions Kaap, the card is an unknown quantity. No one knows what HBM will or won't be like, it's never been used before; it could be terribad but it could also be insanely good.

I know you own AMD but love Nvidia, that's obvious to anyone who reads your posts, but even you must agree the card is an unknown entity at this point and could beat the Titan X if engineered correctly. Simply saying "Lol no" before seeing any official stats from AMD is a bit premature mate.
 

If I just shelled out the money he did on 4 x Titan X I would probably be doing the same :D
 

I am equally fond of AMD and am looking to get some more of their GPUs. ;)
 
The Titan X is not that great at overclocking if you look at the numbers. From what I have seen it boosts at stock to around 1150MHz, which it should sit at if cooling is good. An overclock to 1400MHz is only around 22%. 22% is decent, but compare that to the original 7950, which came in with an 800MHz core and could reach 1200MHz+ (a 50% overclock), and it's just an average overclocker. You can't really use the base clock on an Nvidia card to judge how far they overclock, as we all know they never really sit anywhere close to it unless they are thermally limited.

Say the 390X has a 1000MHz base core; to equal the Titan X overclock you would only need 1220MHz on the core, which is not out of this world looking at past and current cards.

This is not exact obviously, but it's much closer than comparing with an Nvidia base core clock.
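Just to show the working (a quick sketch; the clock figures are the ones quoted above, and real boost behaviour varies card to card):

```python
def oc_headroom_pct(stock_mhz, overclock_mhz):
    """Percentage gain of an overclock over the clock the card actually runs at."""
    return (overclock_mhz / stock_mhz - 1) * 100

# Titan X judged against its real ~1150MHz stock boost, not the base clock:
print(round(oc_headroom_pct(1150, 1400)))  # 22
# Original HD 7950, 800MHz stock to 1200MHz:
print(round(oc_headroom_pct(800, 1200)))   # 50
# A 1000MHz 390X would only need ~1220MHz to match a 22% overclock:
print(round(1000 * 1.22))                  # 1220
```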

I will be surprised if the 390X is faster than the Titan X, but I would not be shocked, because I think it's possible. The HBM memory worries me more, because if it comes with only 4GB then Nvidia, even if slower, still hold the cards with a possible 6GB 980 Ti.

Well that is an apples-to-elephants comparison if ever I heard one.

You pick the boost clock of the Nvidia card and then the base clock of an AMD card that was introduced before they started using a boost clock, and one that everyone knows was rather underclocked when it was launched at that.

I'm sure someone will correct me, but I don't think AMD have yet had a card with a base clock above 1GHz (since they introduced PowerTune [AMD's version of the boost system]).

Disclaimer: I am not saying that AMD's way of doing things is better or worse than Nvidia's; it is just different. We have known for a long time now that you cannot judge a card on its clock speed, but rather on its performance.

As for overclocking, as humbug just pointed out, it seems to range from around the 15% to 20% mark; I wouldn't say that one card is that much better than the next at all.
Maybe it just seems that Nvidia cards clock higher because the numbers are physically bigger, but when you break the numbers down there isn't much in it.
 
Kaap, do you just keep all your old GPUs or do you get rid of them when done with them? I'm genuinely interested to know what you do with so many GPUs.

I'm too lazy to run any more than 2 in a PC, 4 would give me sleepless nights hahaha
 

All things are possible and it is important to keep an open mind.

Having said that, I don't think it is in AMD's interests to make a card faster than a Titan X (even if it is quite easy), as it would add to the cost, heat output and cooler noise. They need to aim to beat the 980 and 290X by a good margin, and if it happens to be faster than a Titan X that is a bonus.

The Titan X in isolation is a pointless card for Nvidia, as it is way too expensive for very little performance gain. What gives the Titan X meaning and makes it worthwhile for Nvidia is that there will be cut-down version(s) sharing a similar design that can absorb all the GM200 chips that don't make the grade, which can be sold for a lot less.
 

It's nothing more than the only way you can compare overclocks between the two companies at the moment. With proper cooling, all 290Xs will hold 1000MHz every day of the week, 24/7. Heck, mine holds 1100MHz while staying under 70°C load 24/7. Nvidia cards, when cooled properly, are never anywhere near base clocks, hence why you can't use them. I only used the 7950 to show what a really good clocking card looks like in comparison. I could easily have gone with a GTX 470, which would clock over 40%, if that changes things.
 
Back to the original statement

Our estimation is based on the average FPS figures across 19 different games tested by TPU in their review of the Titan X. At 4K the performance of AMD's GCN-based GPUs scales in perfect linearity. For example, the 2560 GCN unit R9 290 was precisely 2.0x faster than the 1280 GCN unit HD 7870, even though the 290 is clocked slightly below the 7870, so this would account for the architectural improvements that AMD introduced with the Volcanic Islands architecture versus the Southern Islands architecture.
The performance estimate below is based on scaling with the number of stream processors / GCN units of Fiji XT vs Hawaii XT, but again this does not account for any potential architectural improvements or potential HBM memory bandwidth benefits.
GCN 1.0: 2560 shaders @ 1000MHz - 2x 32 ROP - 2x 256-bit @ 1500MHz (100%)
GCN 1.1: 2560 shaders @ 947MHz - 64 ROP - 512-bit @ 1250MHz (100%)

GCN 1.1: 2816 shaders @ 1000MHz - 64 ROP - 512-bit @ 1250MHz

The 380X is not going to be a 290X rehash.
The 290X architecture is already obsolete to AMD; if they were just going to rename the 290X to 380X, don't you think they would have done that already?

GCN evolution:

*snip*


In practice:

GCN 1.2 - R9 285: 1792 shaders @ 918MHz - 256-bit @ 1250MHz - 172GB/s
GCN 1.0 - R9 280: 1792 shaders @ 933MHz (+2%) - 384-bit @ 1250MHz - 240GB/s (+40%)
GCN 1.0 - R9 280X: 2048 shaders (+14%) @ 1000MHz (+9%) - 384-bit @ 1500MHz - 288GB/s (+65%)

As you can see there, the 285 is slightly lower clocked than the 280 and has 40% less memory bandwidth.

It has 23% less GPU than the 280X and 65% less memory bandwidth.

The actual gaming results of the 285: at its best it's 10% faster than the 280X, despite the 280X being a lot more GPU.

At its worst it's 5% slower than the 280.

On average it's somewhere in-between the 280 and 280X.
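For reference, the bandwidth figures in lists like the one above fall straight out of bus width times effective data rate. A quick sketch, assuming GDDR5 (which transfers 4 bits per pin per clock, so the effective rate is 4x the memory clock):

```python
def gddr5_bandwidth_gbs(bus_width_bits, memory_clock_mhz):
    """Bandwidth in GB/s: (bus width in bytes) x (effective transfer rate in MT/s).
    GDDR5's effective rate is 4x the memory clock."""
    effective_mts = 4 * memory_clock_mhz
    return (bus_width_bits / 8) * effective_mts / 1000

# 280X: 384-bit @ 1500MHz, matching the 288GB/s figure above:
print(gddr5_bandwidth_gbs(384, 1500))  # 288.0
# The rumoured 512-bit @ 1250MHz card:
print(gddr5_bandwidth_gbs(512, 1250))  # 320.0
```

The exact GB/s a review quotes depends on which memory clock the card actually ships with, so the formula is the reliable part here, not any one rumoured spec.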

A GCN 1.2 GPU with 2816 shaders @ 1000MHz and a 512-bit bus with 320GB/s of memory bandwidth should be 10% to 30% faster than a 290X.

GCN 1.2: 2816 shaders @ 1000MHz - 64 ROP - 512-bit @ 1250MHz (120%) 380X?

Fiji: 4096 shaders @ 1000MHz - 128 ROP - 4096-bit @ 1000MHz (165%+) 390X?
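The same linearity assumption as the TPU-based estimate above can be sketched out. Note that shader count alone only gets the rumoured Fiji to ~145% of a 290X, so the 165%+ figure is leaning on GCN 1.2 architectural gains and HBM bandwidth on top (all specs here are this thread's rumours, nothing confirmed):

```python
def scaled_estimate(base_score, base_shaders, base_mhz, new_shaders, new_mhz):
    """Naive linear estimate: performance scales with shaders x clock.
    Ignores architecture, ROP count and memory bandwidth differences."""
    return base_score * (new_shaders * new_mhz) / (base_shaders * base_mhz)

# 290X (Hawaii XT, 2816 shaders @ 1000MHz) as the 100% baseline,
# rumoured Fiji: 4096 shaders @ 1000MHz:
print(round(scaled_estimate(100, 2816, 1000, 4096, 1000)))  # 145
```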
 

I keep most of them

I still have my HD 5970s and GTX 590s ready for action.

My GTX 980s are on top of the wardrobe waiting to go into a PC.

My GTX 960s are on the spare bed.

My 290Xs and TitanXs are in PCs to my right and are used as my 24/7 machines.

My original Titans are to my left in another PC and will probably be swapped out for my 980s.

My GTX 690s I use to play videos and browse the net on the telly in the living room (this PC used to regularly get into the Futuremark HOF top 10 lol).

My HD 7970s (and the PCs they were in) I gave away to some people at work so their kids could do some gaming.
 
You sound a bit like me dude.

Piles of tech, thinking you will use it, when in fact you don't, so you should sell them before their value plummets.

I used to have piles of stuff; now I'm down to what I actually use and the odd backup item.
 

The first AMD cards to use a 'Boost' were the 7950 Boost Edition and the 7970 GHz Edition.

7950 BE = 850MHz base, 925MHz boost
7970 GHz = 1000MHz base, 1050MHz boost

The 290X has a clock rate of "Up to 1000MHz"
The 290 "Up to 947MHz"

Both those Hawaii cards with reference coolers (ironically the ones TPU use, and the XFX DD HARDCorp use) jump around between 850MHz and the maximum stated clock speed.

Hawaii has thermal and power auto-adjust; the reference cards especially often reach the thermal ceiling, causing them to run lower than the maximum boost clock, as I said usually between that and 850MHz.

Ones with half-decent AIB coolers stick solid to the maximum stated clocks of whatever they are, usually factory overclocked, like the Tri-X, Vapour-X, Gaming, Lightning, DCU-II and my PCS+.

HARDCorp usually use an XFX DD that IMO is particularly bad for an AIB cooler, confirmed to me when they tested a PCS+ which utterly destroyed all their XFX DD performance figures.

IMO the biggest mistake AMD made with Hawaii was that reference cooler; reviews with one, and most are, make the cards look so much slower than they actually are with a proper cooler on them.
 

Yup completely agree.

Both sides have had these sorts of issues in the past, I'm sure AMD will get it right this time.

 
I would like to see AMD put more effort into a reference blower cooler; they are necessary for CrossFire.

Nvidia's is beautiful and a lot more effective; by comparison AMD's is not only loud and ineffective but also unattractive. I think it commits the worst crime of all: it looks cheap. If you're paying £300 - £400 - £500 (the price of a half-decent 42" telly) it needs to look like hundreds of £ worth of gadget.

I think they can do better, I think they must do better.
 
Def. drop the ghettoplastic look and make it look expensive even if it isn't :D

How much could it really cost to find a half-decent designer? They must get submissions too.
 