
Poll: ** The AMD VEGA Thread **

On or off the hype train?

  • (off) Train has derailed

    Votes: 207 39.2%
  • (on) Overcrowding, standing room only

    Votes: 100 18.9%
  • (never ever got on) Chinese escalator

    Votes: 221 41.9%

  • Total voters
    528
Status
Not open for further replies.
That's not what it says. The devs will have to move from a model of managing memory themselves to handing it off to AMD. That's not a small change. And if Nvidia aren't using the same approach, then it still means two major branches to allow for both.

It says as much right there in the notes.
How is that not a small change? Let AMD manage memory, or have devs manage it themselves? Sounds like less work for the devs, tbh.
 
Yesterday, after years of using two, I installed a third widescreen monitor.

First thought: how is my neck going to cope with that?
Second: *groan* they only just fixed two-monitor power consumption on Fiji (Polaris still says "I need more power" when you plug in two). A third would push it into high power mode again.

What are the chances Vega can intelligently deal with three monitors?

I've found with my 7970, 285 and 290 setups that the cards drop to low clocks if all three monitors are identical, whereas the memory idled higher if they were mismatched. I was only testing on 24" 1920x1080@60Hz monitors, no idea if higher resolutions/refresh rates affect it. I remember once noticing my cards were idling higher and couldn't figure it out - until I realised one monitor was running at 59Hz! :D

Are your monitors similar, or are you running different resolutions/refresh rates, mate?
 
Because they still have to do both; Vega isn't suddenly going to be the only card being used.
But they already do one. All they have to do for Vega is let the driver manage it. You're making out as if they need to do what they're already doing, plus an equal amount of work to let the Vega drivers manage memory.
Sounds like less work to me. Devs should be welcoming it.
 
That's not what it says. The devs will have to move from a model of managing memory themselves to handing it off to AMD. That's not a small change. And if Nvidia aren't using the same approach, then it still means two major branches to allow for both.

It says as much right there in the notes.
It's a major change, sure, but in theory NOT having to do memory management is easy. It's not so much a different branch as just skipping a branch.

Whether it actually works out like that, though, who knows.
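To put the "skipping a branch" point in code: this is purely illustrative Python (no real engine or driver API looks like this, and the asset names and budget are made up). With explicit management the engine picks a resident set against a VRAM budget; with a driver/HBCC-style path that whole chunk of logic simply never runs.

```python
# Purely illustrative sketch, not a real graphics/driver API: why
# driver-managed memory is "skipping a branch" rather than a second
# branch of engine code.

def upload_resident_set(assets, vram_budget):
    """Explicit path: the engine decides which assets fit in VRAM."""
    resident, used = [], 0
    for asset in sorted(assets, key=lambda a: -a["priority"]):
        if used + asset["size"] <= vram_budget:
            resident.append(asset["name"])
            used += asset["size"]
    return resident

def prepare_frame(assets, vram_budget, driver_managed=False):
    if driver_managed:
        # Driver/HBCC-style path: hand everything over; the engine's
        # residency logic above simply never runs.
        return [a["name"] for a in assets]
    return upload_resident_set(assets, vram_budget)

assets = [
    {"name": "terrain", "size": 6, "priority": 3},
    {"name": "characters", "size": 2, "priority": 2},
    {"name": "skybox", "size": 4, "priority": 1},
]
print(prepare_frame(assets, vram_budget=8))
print(prepare_frame(assets, vram_budget=8, driver_managed=True))
```

The driver-managed path isn't a second copy of the work; it's the absence of it.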
 
The AMD Vega Memory Architecture Q&A With Jeffrey Cheng
We updated the article with a clarification of the difference between the AMD Vega’s 64-bit flat address space, and 512 TB addressable memory.

http://www.techarp.com/articles/amd-vega-memory-architecture/

  • AMD Vega was specifically architected to handle big datasets, with a heterogeneous memory architecture, a wide and flat address space, and a High Bandwidth Cache Controller (see 1:34).
  • Large amounts of DRAM can be used to handle big datasets, but this is not the best solution because DRAM is costly and consumes lots of power (see 2:54).
  • AMD chose to design a heterogeneous memory architecture to support various memory technologies like HBM2 and even non-volatile memory (e.g. Radeon Solid State Graphics) (see 4:40 and 8:13).
  • At any given moment, the amount of data processed by the GPU is limited, so it doesn’t make sense to store a large dataset in DRAM. It would be better to cache the data required by the GPU in very fast memory (e.g. HBM2), and intelligently move it according to the GPU’s requirements (see 5:40).
  • The AMD Vega’s heterogeneous memory architecture allows for easy integration of future memory technologies like storage-class memory (flash memory that can be accessed in bytes, instead of blocks) (see 8:13).
  • The AMD Vega has a 64-bit flat address space for its shaders (see 12:08, 12:36 and 18:21), but like NVIDIA, AMD is (very likely) limiting the addressable memory to 49 bits, giving it 512 TB of addressable memory.
  • AMD Vega has full access to the CPU’s 48-bit address space, with additional bits beyond that used to handle its own internal memory, storage and registers (see 12:16). This ties back to the High Bandwidth Cache Controller and heterogeneous memory architecture, which allow the use of different memory and storage types.
  • Game developers currently try to manage data and memory usage, often extremely conservatively, to support graphics cards with limited amounts of graphics memory (see 16:29).
  • With the introduction of AMD Vega, AMD wants game developers to leave data and memory management to the GPU. Its High Bandwidth Cache Controller and heterogeneous memory system will automatically handle it for them (see 17:19).
  • The memory architectural advantages of AMD Vega will initially have little impact on gaming performance (due to the current conservative approach of game developers). This will change when developers hand over data and memory management to the GPU (see 24:42).
  • The improved memory architecture in AMD Vega will mainly benefit AI applications (e.g. deep machine learning) with their large datasets (see 24:52).
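The 49-bit figure in those bullets is easy to sanity-check: 2^49 bytes is exactly 512 TiB, and the CPU's 48-bit space is 256 TiB.

```python
# Sanity-checking the address-space numbers from the article.
addr_bits_gpu = 49   # Vega's (likely) addressable-memory limit
addr_bits_cpu = 48   # typical x86-64 virtual address space
tib = 2 ** 40        # one tebibyte in bytes

print(2 ** addr_bits_gpu // tib)  # 512 -> the "512 TB" figure
print(2 ** addr_bits_cpu // tib)  # 256 -> the CPU's addressable space
```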



As I said before: "The improved memory architecture in AMD Vega will mainly benefit AI applications (e.g. deep machine learning) with their large datasets (see 24:52)."

AMD will try and market this as an advantage for gaming, but it was never designed with gaming in mind as the foremost application. As is blatantly obvious, the main advantage comes when the total memory usage exceeds the VRAM, but that won't happen unless AMD try to gimp the amount of VRAM available. "The memory architectural advantages of AMD Vega will initially have little impact on gaming performance"



Very wishful thinking that game engines are suddenly going to have huge 50 GB datasets and let AMD GPUs completely handle memory management, while the only cards this can work on are a tiny segment of the market.

Of course, it would be nice if DX13 or whatever comes next required this as a supported feature; then games would take advantage of it.
 
As I said before: "The improved memory architecture in AMD Vega will mainly benefit AI applications (e.g. deep machine learning) with their large datasets (see 24:52)."

AMD will try and market this as an advantage for gaming, but it was never designed with gaming in mind as the foremost application. As is blatantly obvious, the main advantage comes when the total memory usage exceeds the VRAM, but that won't happen unless AMD try to gimp the amount of VRAM available. "The memory architectural advantages of AMD Vega will initially have little impact on gaming performance"
Very wishful thinking that game engines are suddenly going to have huge 50 GB datasets and let AMD GPUs completely handle memory management, while the only cards this can work on are a tiny segment of the market.

With HDR and 4K being pushed, 50 GB datasets could be with us sooner than you think, but I doubt that we will have GPUs that can properly use such large amounts of data in real time. Heck, I doubt the Titan Xp can process 12 GB of textures in real time. Yeah, you can brute-force it: stick more RAM in and call it job done. But that's extra cost and power. It's called being smart, not gimping. Why stuff a card full of VRAM when you can switch out what you need, especially when most of it isn't used?
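The "switch out what you need" idea is basically demand paging. Here's a toy sketch (my own illustration, not AMD's actual HBCC algorithm; page numbers and cache size are made up): a small fast cache serves a much larger dataset, fetching on a miss and evicting the least-recently-used page, and because a frame reuses a small working set, the miss count stays low.

```python
from collections import OrderedDict

# Toy demand-paging sketch: a small fast cache in front of a much
# larger dataset, evicting least-recently-used pages on a miss.

class PageCache:
    def __init__(self, capacity_pages):
        self.capacity = capacity_pages
        self.cache = OrderedDict()   # oldest entry first
        self.misses = 0

    def access(self, page):
        if page in self.cache:
            self.cache.move_to_end(page)        # hit: mark most-recently-used
        else:
            self.misses += 1                    # miss: fetch from slow memory
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)  # evict least-recently-used
            self.cache[page] = True

cache = PageCache(capacity_pages=4)
# A frame tends to hammer a small working set out of a huge dataset:
for page in [1, 2, 3, 1, 2, 3, 4, 1, 2, 99, 1, 2]:
    cache.access(page)
print(cache.misses)  # 5 misses across 12 accesses
```

Only the cold fetches (and the one-off page 99) miss; the hot working set stays resident the whole time.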
 
The driver can perform some optimisations with memory management; they have seen some unexpectedly large improvements in min/avg frame rates through drivers alone. It was mentioned by Raja in the AMA. But to get the best out of anything, you need your software to be directly coded for it.
Aah okay. Hopefully the difference between optimised and not isn't in double figure percentages.
 
With HDR and 4K being pushed, 50 GB datasets could be with us sooner than you think, but I doubt that we will have GPUs that can properly use such large amounts of data in real time. Heck, I doubt the Titan Xp can process 12 GB of textures in real time. Yeah, you can brute-force it: stick more RAM in and call it job done. But that's extra cost and power. It's called being smart, not gimping. Why stuff a card full of VRAM when you can switch out what you need, especially when most of it isn't used?


Games already switch out data that isn't needed; look at Carmack's MegaTexture, for example. And in any open-world game like GTA, there is always data streaming in and out as needed.
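For anyone curious what MegaTexture-style streaming actually does, here's a rough sketch (tile size, texture dimensions and view coordinates are made up, and real virtual texturing also handles mip levels): a huge virtual texture is split into fixed-size tiles, and only the tiles covered by the current view need to be in memory.

```python
# Rough virtual-texturing sketch: which tiles of a huge texture does
# the current view actually touch?

TILE = 256  # tile edge in texels (illustrative)

def visible_tiles(view_x, view_y, view_w, view_h):
    """Tile coordinates covered by the current view rectangle."""
    x0, y0 = view_x // TILE, view_y // TILE
    x1 = (view_x + view_w - 1) // TILE
    y1 = (view_y + view_h - 1) // TILE
    return {(tx, ty) for tx in range(x0, x1 + 1) for ty in range(y0, y1 + 1)}

# A 128k x 128k virtual texture is 512 x 512 = 262,144 tiles, but a
# 1920x1080 view only ever touches a few dozen of them:
needed = visible_tiles(30000, 40000, 1920, 1080)
print(len(needed))  # 40 tiles resident, out of 262,144
```

Which is the same argument as above: keep the handful of tiles in view resident, stream the rest on demand.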
 
Games already switch out data that isn't needed; look at Carmack's MegaTexture, for example. And in any open-world game like GTA, there is always data streaming in and out as needed.
So in one post you criticise AMD for wanting developers to leave data management and dataset switching to GPU drivers, saying it's useless for games; then in your next post you say it is useful and that games already do it.
Why are you upset by AMD wanting to do it in drivers rather than having developers do it?

From the quick search I've done, Carmack's MegaTexture appears to contradict what you're trying to say.
 