Calculating theoretical 4K performance of a GPU vs an R9 390

Basically, to work out whether a GPU would have double the 4K performance of an R9 390 (pixel rate: 64.0 GPixel/s), should I use the pixel rate to estimate the performance?

If so, would a card with twice the pixel rate offer twice the 4K performance? Or is memory bandwidth just as important?

I notice that the GTX 1080 Ti and RTX 2080 Ti both have pixel rates of more than double the R9 390's, at 136.0-139.2 GPixel/s. It looks like AMD still has some catching up to do on that basis.
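Here's a rough Python sketch of the naive scaling I have in mind, using the TechPowerUp-style reference figures (pixel rate is just ROPs x boost clock, so treat the numbers as approximate):

    # Naive pixel-rate scaling vs the R9 390 (64 ROPs x ~1.0 GHz = 64 GPixel/s).
    # Reference-spec figures from TechPowerUp; real boost clocks vary per card.
    R9_390_GPIXEL = 64.0

    cards = {
        "GTX 1080 Ti": 88 * 1.582,   # 88 ROPs x 1.582 GHz boost = ~139.2 GPixel/s
        "RTX 2080 Ti": 88 * 1.545,   # 88 ROPs x 1.545 GHz boost = ~136.0 GPixel/s
    }

    for name, gpixel in cards.items():
        print(f"{name}: {gpixel / R9_390_GPIXEL:.2f}x the R9 390's pixel rate")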
 
In short, you can't go by anything like that. There are too many factors, some unknown to the public, that determine performance at the end of the day. The best you can do is go by available benchmarks. It might be an idea to collate benchmarks for all the cards you're interested in and look for a correlation with the raw specs.
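If you do go down that route, something like this is all it takes to check how well a single spec tracks measured performance (the fps numbers below are placeholders, not real benchmark results):

    # Correlate one raw spec (pixel rate) against measured 4K average fps.
    # The fps values are placeholders; swap in real benchmark numbers.
    from statistics import correlation  # Python 3.10+

    cards = {
        #               (pixel rate GPixel/s, 4K avg fps - placeholder)
        "R9 390":       (64.0,  30.0),
        "GTX 1080 Ti":  (139.2, 60.0),
        "RTX 2080 Ti":  (136.0, 72.0),
    }

    pixel_rate = [spec for spec, _ in cards.values()]
    fps        = [f for _, f in cards.values()]

    print(f"Pearson r = {correlation(pixel_rate, fps):.2f}")
    # With so few cards the number means little; the point is the method.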
 
IDK what you have in mind, but a card twice as fast as an R9 390X is in the Radeon VII / GTX 1080 Ti range.
 
I'd never buy the Radeon VII; it's a premium product and, afaik, they aren't even producing them any more. I think it was an experiment to test out GPUs on 7nm and to prepare for Navi.

I also think it will be superseded soon by overclocked 5700 XTs and, later, the 5800 series.

The 1080 Ti is still holding up pretty well specification-wise, though.
 
Also, couldn't AMD create a dual-GPU card that would beat the RTX 2080 Ti? And if so, why not do it this year?

And wouldn't such a card most likely show some gains from PCIe 4.0?

EDIT - We already know that TDPs of 500 watts are possible, as seen with the R9 295X2 here:
https://www.techpowerup.com/gpu-specs/radeon-r9-295x2.c2523

It wouldn't work with my 650 W Seasonic Prime though, lol. Maybe that's why AMD is shying away from dual-GPU cards.
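Quick back-of-the-envelope on the PSU point (the CPU and rest-of-system wattages are just my assumptions):

    # Rough PSU headroom check - all figures are loose assumptions.
    gpu_tdp     = 500   # W, dual-GPU board like the R9 295X2
    cpu         = 150   # W, assumed high-end CPU under gaming load
    rest_of_rig = 50    # W, assumed motherboard, drives, fans
    psu         = 650   # W, the Seasonic Prime in question

    draw = gpu_tdp + cpu + rest_of_rig
    print(f"Estimated draw {draw} W vs {psu} W PSU -> headroom {psu - draw} W")
    # ~700 W estimated vs 650 W available, so the PSU really would be undersized.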
 
I didn't say to buy the Radeon VII; I said "range", meaning the cards around that level of performance, and the GTX 1080 Ti's 1% low fps isn't that good these days compared to other cards.

Also, the reason not to go dual GPU is that CrossFire and SLI are dead.
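(For reference, the "1% low" figure is usually derived from a frame-time capture; one common definition averages the slowest 1% of frames, roughly like this, with made-up frame times:)

    # Average fps and "1% low" from a frame-time log in milliseconds.
    # frame_times is made-up data; in practice it comes from a capture tool.
    frame_times = [16.7] * 990 + [40.0] * 10   # mostly ~60 fps, a few stutters

    fps_sorted = sorted(1000.0 / t for t in frame_times)
    avg_fps = len(frame_times) / (sum(frame_times) / 1000.0)

    n = max(1, len(fps_sorted) // 100)          # slowest 1% of frames
    one_pct_low = sum(fps_sorted[:n]) / n

    print(f"Average {avg_fps:.1f} fps, 1% low {one_pct_low:.1f} fps")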
 
What do you think of the RTX 2080? Some of its stats (like texture/pixel rate) and specs are lower than the GTX 1080 Ti's, but it often seems to perform similarly. Link:

https://www.techpowerup.com/gpu-specs/geforce-rtx-2080.c3224

The main difference I see in the specs is the memory clock, so perhaps the core and memory clocks are just as important as stats like the texture and pixel rates?

On the other hand, perhaps an overclocked 1080 Ti would just be better overall.

EDIT - Just noticed the 2080 has GDDR6 memory, so matching its memory clocks on the 1080 Ti may not be an option.
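The bandwidth side is easy to sanity-check, since bandwidth is just the effective data rate times the bus width (using the commonly quoted reference figures):

    # Memory bandwidth = effective data rate (Gbps per pin) x bus width (bits) / 8.
    # Reference-spec figures; partner cards may differ slightly.
    def bandwidth_gb_s(gbps, bus_bits):
        return gbps * bus_bits / 8

    print(f"GTX 1080 Ti (11 Gbps GDDR5X, 352-bit): {bandwidth_gb_s(11, 352):.0f} GB/s")
    print(f"RTX 2080    (14 Gbps GDDR6,  256-bit): {bandwidth_gb_s(14, 256):.0f} GB/s")
    # ~484 GB/s vs ~448 GB/s: despite GDDR6, the 2080's narrower bus leaves the
    # 1080 Ti with a small bandwidth advantage.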
 
If you have the money, buy the 2080 Super, as the 2080 is EOL. Personally, though, I would wait until the AIB 5700 XT cards come out next month, if for no other reason than that they should have better drivers and better cooling, and it might be worth going cheap and running 3200x1800 + RIS upscaled to 4K. But all of that is a month away.

So just wait a month; otherwise get the 2080 Super, although it isn't that much faster than the 2080 - 5-8% at most.
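The pixel maths behind the 3200x1800 + RIS suggestion, for what it's worth:

    # Pixel-count comparison: 3200x1800 upscaled with RIS vs native 3840x2160.
    render = 3200 * 1800    # 5,760,000 pixels
    native = 3840 * 2160    # 8,294,400 pixels
    print(f"3200x1800 is {render / native:.0%} of native 4K's pixel count")
    # ~69%, i.e. roughly a third less shading work per frame before other factors.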
 
So the 2080 Super will replace the 2080 and cost the same amount?

The 2080 Super still has a lower pixel/texture rate than the 1080 Ti, according to this:

https://www.techpowerup.com/gpu-specs/geforce-rtx-2080-super.c3439

But it makes up for it with significantly higher core and memory clocks. It also has higher memory bandwidth.

I notice that single-precision floating point performance for both cards is over 11 TFLOPS.

If the 2080 Super could be pushed up to 12 TFLOPS, I'm pretty sure it would overtake the 1080 Ti.
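Those teraflop figures fall straight out of shader count x 2 (FMA) x boost clock; a rough check with the reference boost clocks (real cards boost higher):

    # FP32 throughput = shaders x 2 ops per clock (FMA) x boost clock (GHz) / 1000.
    # Reference boost clocks; actual boost behaviour is higher and varies per card.
    def tflops(shaders, boost_ghz):
        return shaders * 2 * boost_ghz / 1000

    print(f"GTX 1080 Ti:    {tflops(3584, 1.582):.2f} TFLOPS")   # ~11.3
    print(f"RTX 2080 Super: {tflops(3072, 1.815):.2f} TFLOPS")   # ~11.2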
 
If only Nvidia would sell the RTX 2080 or RTX 2080 Super for ~£400. I'd buy one today if they did.

Did availability and prices never recover from the Bitcoin mining craze, or can't Nvidia produce enough GPUs?
 