
Poll: Will you be buying a Radeon VII?

  • Total voters
    352


It is a nice looking card.


It is, but one thing that sticks out to me is the size of the fan hub versus the blades. Had they made that central hub smaller, they could have given the blades a bit more length for better airflow. It might not need it, but the centre hub does look oddly large for an axial fan, and the blades somewhat short.
 
Too expensive for me. The card itself looks promising, but it looks like 1080 Ti performance at 1080 Ti prices two years later... If I had wanted to spend that kind of money I would have bought a 1080 Ti.
 
No

HBM2 adds to the cost and throttles 1080p/1440p performance. Yes, HBM2 would be good at 2160p, but the card does not have the raw performance to compete against a 2080 Ti.

8GB or 12GB of GDDR6 would have been better for a gaming card.

How does HBM2 throttle 1080p performance? I've seen you post something similar a few times, but can't see the logic.

HBM2 bandwidth is exceptional, and the latency can be either lower or higher than GDDR5x/6 depending on the workload.

This Quora post is quite articulate on HBM2 latency:
https://www.quora.com/In-how-many-c...er-than-current-GDDR5-on-current-GPU-chipsets

Here's a paper on HBM1 performance; note that latency changes with block size:
https://arxiv.org/pdf/1704.08273.pdf

Here is a presentation written by NVidia singing HBM/HBM2's praises for GPU workloads:
https://www.archive.ece.cmu.edu/~ece740/f15/lib/exe/fetch.php?media=dram_for_gpus_talk.pptx

Anyway, cutting to the chase: HBM2 is in theory superior in every way to GDDR5X/6; a wide parallel bus will always outperform a narrower, faster-clocked one on raw bandwidth. Its currently low clock speeds mean that the latency can be higher under certain workloads. However, for GPUs, memory latency is not as relevant as for CPUs, due to the predictable operations being performed.
HBM2 currently costs more, but that will not always be the case once volumes rise. Further, GDDR6 is a nightmare for board layout and tolerances - it costs non-trivial money to implement.
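The bandwidth half of that claim is easy to sanity-check with arithmetic. A minimal sketch, assuming the Radeon VII's published 4096-bit HBM2 interface at 2.0 Gbps per pin and a representative 384-bit GDDR6 card at 14 Gbps per pin (both pin rates are my assumptions from public spec sheets, not figures from this thread):

```python
# Peak bandwidth (GB/s) = bus width in bits * per-pin data rate (Gbps) / 8.
def peak_bandwidth_gbs(bus_width_bits: int, gbps_per_pin: float) -> float:
    return bus_width_bits * gbps_per_pin / 8

hbm2 = peak_bandwidth_gbs(4096, 2.0)   # wide bus, low per-pin clock
gddr6 = peak_bandwidth_gbs(384, 14.0)  # narrow bus, high per-pin clock

print(f"HBM2 : {hbm2:.0f} GB/s")   # 1024 GB/s
print(f"GDDR6: {gddr6:.0f} GB/s")  # 672 GB/s
```

The wide-and-slow design wins on raw bandwidth even at a fraction of the per-pin clock, which is the point being made here; latency is the separate question.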
 

The proof is in the performance.

If you take two cards with similar GPU grunt, one with HBM and one with GDDR, the latter tends to perform better at 1080p.

This has been going on since the Fury X launch.

This even happens when you compare a Titan V (HBM2) to a 2080 Ti (GDDR6), so it is not just an AMD thing.

You can quote all the technical papers you want, but they cannot beat actual benchmarks.

HBM is bad at 1080p because the clock speed is low and huge bandwidth counts for nothing at low resolutions; at 2160p the argument is the other way round.
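One way to see why bandwidth pressure grows with resolution is to work out raw pixel traffic per frame. A rough sketch; the function and the 4-bytes-per-pixel figure are my own illustration, and real GPUs move far more data per frame for textures, geometry and intermediate buffers:

```python
def framebuffer_gbs(width: int, height: int, fps: int, bytes_per_pixel: int = 4) -> float:
    """GB/s needed just to write one 32-bit colour target per frame."""
    return width * height * bytes_per_pixel * fps / 1e9

print(framebuffer_gbs(1920, 1080, 144))  # 1080p at high refresh: ~1.19 GB/s
print(framebuffer_gbs(3840, 2160, 60))   # 2160p at 60 Hz:        ~1.99 GB/s
```

2160p has four times the pixels per frame, so the bandwidth-bound share of the work grows with resolution, while the per-frame latency-sensitive work does not.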
 
Yeah, me too, which is a shame, as I wanted to treat myself to a brand new build with the latest GPU+CPU, but that won't happen unless AMD releases the Radeon VII at a lower price.
I paid £350 for this 980 Ti two and a half years ago, and at the time I was worried I'd paid too much. Turns out it was a bit of a bargain, as now you need double that to get an equivalent-bracket modern card.

I honestly can't see myself buying a new card until they stop moving the tier pricing up and up. I'll just let others pay the shiny tax.
 

It's a hugely parallel interface; who cares about clock speed?

What I'm getting at is that there's a big leap from the memory architecture in play to overall graphics performance. There are dozens of other factors and emergent behaviours of an architecture that could be causing worse performance at certain resolutions.

I definitely see your point re. the Titan V not performing as well as the 2080 Ti at 1080p. I would guess this is an artifact of Volta being NVidia's first HBM architecture; it takes a lot to optimise the whole rendering pipeline and drivers to work efficiently with a drastically different memory architecture. GDDR6/GDDR5X/GDDR5 are similar enough not to require such large optimisations.

Using the Fury X as an argument is a bit meh. It was just a ***** architecture from top to bottom, but I owned one for Freesync...

Anyway, I may be wrong, but it does seem a big leap to say with 100% certainty that HBM memory causes poor performance at low resolutions when the system is so complex.
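One concrete way to frame the wide-bus-versus-clock-speed point is Little's law: to sustain bandwidth B at memory latency L, the GPU must keep roughly B x L bytes of requests in flight. A sketch with illustrative numbers; the 300 ns latency is an assumption, not a measured figure:

```python
def bytes_in_flight(bandwidth_gbs: float, latency_ns: float) -> float:
    # GB/s * ns cancels to plain bytes: 1e9 B/s * 1e-9 s = 1 B.
    return bandwidth_gbs * latency_ns

print(bytes_in_flight(1024, 300))  # ~307200 bytes must be outstanding
```

Whether a given game at 1080p actually generates that much outstanding memory traffic is exactly the kind of workload-dependent factor being argued about here.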
 
I won't be buying the Radeon VII. I recently went for a 1070 Ti, and none of the new releases of the last few months, including the upcoming Radeon VII and RTX 2060, have given me cause for regret. In the ~£330-£400 bracket, the Vega 56/64/1070 Ti seem to have been wise enough purchases for 1440p.
 
No, probably not, unless I feel like using one in a second machine.
The 2080 is perfect for my uses. I want RT and I want DLSS.

It has decent specs but seems more of a dual-purpose card, and is therefore not optimal for gaming alone. But if you do some of the stuff mentioned that will put the card to good use beyond simply gaming, it'll be more worth it.
 
Personal opinion, based on guesswork alone rather than reading articles: they've reworked their top-end Vega 64 onto 7nm, increased performance and whacked in double the memory. As their top-end product, it's the one they release first, for top whack money.

Mid-range Navi in the months to come is where my interest lies, and I hope the AdoredTV leaks are true, albeit the prices seem too low. Companies tend to release top-end products first and mid-range later, so there's no chance I'm interested in this behemoth of a card.
 

Except it doesn't, and you keep playing this tune. People have shown you multiple times that it was the API causing the performance difference, but you chose to ignore it.
 

The reason Volta doesn't perform as well is that it is a GPU built around task-level parallelism rather than instruction-level parallelism. They removed the instruction-level-parallelism hardware from Volta, which means it needs to be fed instructions faster to make use of its hardware at lower resolutions compared to higher ones.
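A toy model of that feeding requirement, with hypothetical round numbers (the 400-cycle memory latency and 4 cycles of independent work per warp are illustrative, not Volta's real figures): with little ILP, the scheduler hides memory latency only by switching between warps, so the amount of ready work needed grows with latency.

```python
import math

def warps_to_hide_latency(mem_latency_cycles: int, issue_cycles_per_warp: int) -> int:
    """Warps needed so other warps keep the pipeline busy while one waits on memory."""
    return math.ceil(mem_latency_cycles / issue_cycles_per_warp)

print(warps_to_hide_latency(400, 4))  # 100 warps of ready work required
```

At low resolutions there are fewer pixels, hence fewer threads in flight to switch between, which is one way a TLP-heavy design could under-perform there.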
 