
Poll: ** The AMD VEGA Thread **

On or off the hype train?

  • (off) Train has derailed

    Votes: 207 39.2%
  • (on) Overcrowding, standing room only

    Votes: 100 18.9%
  • (never ever got on) Chinese escalator

    Votes: 221 41.9%

  • Total voters
    528
Status
Not open for further replies.
Hmm... If Vega has good price-to-performance, I might just have to order one before miners get hold of them and the price goes up!
Mining will have crashed long before Vega is available.

I suspect we won't even be using PCs anymore. We will have evolved telepathic gaming.
 
 
I'm surprised people are still using GFX cards when there are now ASIC miners that do a much better job.

There are more coins than just Bitcoin; they mine coins that don't have ASICs and then convert to Bitcoin. The alt coins rise and fall in value as people cycle through them, and it's too high-risk to make ASICs for them, because by the time they are developed, people have moved on to something else.
 
ASICs don't work for mining Ethereum, which is what everyone is mining.
Ethereum is ASIC-resistant, so it's all GPU and CPU.
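For anyone wondering what "ASIC-resistant" actually means in practice: a memory-hard proof-of-work makes every hash depend on pseudo-random reads across a huge dataset, so throughput is bound by memory bandwidth (where GPUs shine) rather than by raw logic an ASIC could shrink onto a die. Here's a toy Python sketch of that shape; it's nothing like real Ethash internals, just an illustration of the idea:

```python
# Toy "memory-hard" proof-of-work sketch (NOT real Ethash).
# Each round does a data-dependent lookup into a large dataset,
# so the work is bandwidth-bound, not compute-bound.
import hashlib

DATASET_WORDS = 1 << 16          # real Ethash DAGs are gigabytes
dataset = [hashlib.sha256(i.to_bytes(4, "little")).digest()
           for i in range(DATASET_WORDS)]

def toy_pow(header: bytes, nonce: int, rounds: int = 64) -> bytes:
    mix = hashlib.sha256(header + nonce.to_bytes(8, "little")).digest()
    for _ in range(rounds):
        # the next read address depends on the current mix state,
        # forcing unpredictable lookups across the whole dataset
        idx = int.from_bytes(mix[:4], "little") % DATASET_WORDS
        mix = hashlib.sha256(mix + dataset[idx]).digest()
    return mix

print(toy_pow(b"block-header", nonce=42).hex())
```

An ASIC can't shortcut this without also carrying the whole dataset in fast memory, at which point it's basically just a GPU memory bus again.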
There are more coins than just Bitcoin; they mine coins that don't have ASICs and then convert to Bitcoin. The alt coins rise and fall in value as people cycle through them, and it's too high-risk to make ASICs for them, because by the time they are developed, people have moved on to something else.
Ahh right, I knew there was a new coin being mined but didn't realise you couldn't use ASIC miners. Makes sense as to why GFX cards are being used then, I guess. Not in the loop with coin mining, tbh.
 
Ahh right, I knew there was a new coin being mined but didn't realise you couldn't use ASIC miners. Makes sense as to why GFX cards are being used then, I guess. Not in the loop with coin mining, tbh.

Yup, just look at that spike in value. It perfectly coincides with the massive AMD card drought. It's doubled since I last even looked at it.

https://coinmarketcap.com/currencies/ethereum/
 
cryptocurrency mining on Vega.

That won't happen. If memory spikes the price greatly, as it did with Ryzen :p, that makes cheap Vega unlikely any time before year end, when various consumer variants might appear.
If Vega is inefficient price-wise, it's immediately a fail for mining. If it's not 4x the power of a 480, then they have no reason to pay 4x the price. People are arguing over whether it'll be 2x, but personally I think it should be (at least). We then have to see whether performance is uneven across DX10, DX11, DX12, Vulkan etc. Just HBCC raising minimum frame rates would sell it for me; miners don't care about any of that.
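The 4x-price vs 4x-hashrate point is really just hash-per-pound (and per-watt) arithmetic. Quick sketch, with entirely made-up illustrative numbers for both cards:

```python
# Back-of-envelope miner's maths: what matters is hashrate per pound
# and hashrate per watt, not gaming FPS.
cards = {
    # name: (hashrate MH/s, price GBP, power W) -- hypothetical figures
    "RX 480":    (25.0, 200.0, 150.0),
    "Vega (2x)": (50.0, 800.0, 300.0),  # 2x the speed at 4x the price
}

for name, (mh, gbp, watts) in cards.items():
    print(f"{name}: {mh / gbp:.3f} MH/s per £, {mh / watts:.3f} MH/s per W")
```

On those numbers the 480 wins per pound (0.125 vs 0.0625 MH/s per £) even if Vega doubles the raw hashrate, which is the whole "no reason to pay 4x the price" argument in one line.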

The mining work is 100% divisible, so they use four cards in a machine, whatever is cheapest. ETH may turn PoS before Vega's release; the short story is that this requires no GPU at all to confirm transactions/mine/stake/process.
When that happens, some kids are going to get a nice flood of 480 hardware for like £100. I think Gibbo gave a heads-up on that possibility, and I concur :D It's the most likely outcome, rather than Vega being the best mining hardware ever. That would be good for AMD too, as they want industrial users, and we want a successful AMD vs Nvidia etc.
Pity CrossFire doesn't make that future target feasible for more ambitious users, like 4K etc. I'm most likely only going to be at 1080p, but I possibly want a 200 fps minimum (in some games).

Mining, I'm 90% certain it won't be good enough to steal any sales, but maybe in a year or so, once it's properly mass-produced by partners, and if there's a 4 GB version that's just as good.

If we had had a strong election result in some way, giving confidence to strengthen sterling vs the dollar, that would give the appearance of lower prices in the UK. But nope, that didn't happen; we know that effect was true of sales prior to Brexit.
 
Last edited:
The AMD Vega Memory Architecture Q&A With Jeffrey Cheng
We updated the article with a clarification of the difference between the AMD Vega’s 64-bit flat address space, and 512 TB addressable memory.

http://www.techarp.com/articles/amd-vega-memory-architecture/

  • AMD Vega was specifically architected to handle big datasets, with a heterogeneous memory architecture, a wide and flat address space, and a High Bandwidth Cache Controller (see 1:34).
  • Large amounts of DRAM can be used to handle big datasets, but this is not the best solution because DRAM is costly and consumes lots of power (see 2:54).
  • AMD chose to design a heterogeneous memory architecture to support various memory technologies like HBM2 and even non-volatile memory (e.g. Radeon Solid State Graphics) (see 4:40 and 8:13).
  • At any given moment, the amount of data processed by the GPU is limited, so it doesn’t make sense to store a large dataset in DRAM. It would be better to cache the data required by the GPU in very fast memory (e.g. HBM2), and intelligently move it according to the GPU’s requirements (see 5:40).
  • The AMD Vega’s heterogeneous memory architecture allows for easy integration of future memory technologies like storage-class memory (flash memory that can be accessed in bytes, instead of blocks) (see 8:13).
  • The AMD Vega has a 64-bit flat address space for its shaders (see 12:08, 12:36 and 18:21), but like NVIDIA, AMD is (very likely) limiting the addressable memory to 49 bits, giving it 512 TB of addressable memory.
  • AMD Vega has full access to the CPU’s 48-bit address space, with additional bits beyond that used to handle its own internal memory, storage and registers (see 12:16). This ties back to the High Bandwidth Cache Controller and heterogeneous memory architecture, which allow the use of different memory and storage types.
  • Game developers currently try to manage data and memory usage, often extremely conservatively, to support graphics cards with limited amounts of graphics memory (see 16:29).
  • With the introduction of AMD Vega, AMD wants game developers to leave data and memory management to the GPU. Its High Bandwidth Cache Controller and heterogeneous memory system will automatically handle it for them (see 17:19).
  • The memory architectural advantages of AMD Vega will initially have little impact on gaming performance (due to the current conservative approach of game developers). This will change when developers hand over data and memory management to the GPU (see 24:42).
  • The improved memory architecture in AMD Vega will mainly benefit AI applications (e.g. deep machine learning) with their large datasets (see 24:52).
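The 49-bit and 512 TB figures in those notes are consistent with each other, assuming byte addressing. Quick sanity check:

```python
# 49 address bits -> 2**49 addressable bytes = exactly 512 TiB
addr_bits = 49
addressable = 2 ** addr_bits          # bytes
print(addressable // 2 ** 40, "TiB")  # -> 512 TiB
# The CPU's 48-bit space (2**48 bytes = 256 TiB) fits inside it, leaving
# the extra bit for the GPU's own memory, storage and registers.
```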
 
Last edited:
Need to watch the video, but from your notes it seems that developers don't need to program for HBCC, which should alleviate some concerns. I'm curious as to how much work AMD needs to do to optimise it per game.
 
Need to watch the video, but from your notes it seems that developers don't need to program for HBCC, which should alleviate some concerns. I'm curious as to how much work AMD needs to do to optimise it per game.

The driver can perform some optimisations with memory management; they have seen some unexpectedly large improvements in minimum/average frame rates through drivers alone. Raja mentioned it in the AMA. But to get the best out of anything, you need your software to be directly coded for it.
 
Need to watch the video, but from your notes it seems that developers don't need to program for HBCC, which should alleviate some concerns. I'm curious as to how much work AMD needs to do to optimise it per game.
I said this from what I understood of what AMD said. I got shot down, being told devs will need to develop for it and use it, meaning it won't catch on, bla bla bla. I just left it, because sometimes you can't speak to some people on this forum. Glad the architect has shed some light on this technology. Up to AMD to integrate it well. No doubt the devs will have to do some minor coding for it, but nothing major; mainly just leave it to the AMD driver to handle.
 
I said this from what I understood of what AMD said. I got shot down, being told devs will need to develop for it and use it, meaning it won't catch on, bla bla bla. I just left it, because sometimes you can't speak to some people on this forum. Glad the architect has shed some light on this technology. Up to AMD to integrate it well. No doubt the devs will have to do some minor coding for it, but nothing major; mainly just leave it to the AMD driver to handle.

That's not what it says. The devs will have to move from a model of managing memory themselves to handing it off to AMD. That's not a small change. And if Nvidia aren't using the same approach, then it still means two major branches to allow for both.

It says as much right there in the notes.
 
Yesterday, after years of using two, I installed a third widescreen monitor.

First thought: how is my neck going to cope with that?
Second: *groan* they only just fixed two-monitor power consumption on Fiji (Polaris still says "I need more power" when you plug in two). A third would push it into high-power mode again.

What are the chances Vega can intelligently deal with three monitors?
 