
Poll: ** The AMD VEGA Thread **

On or off the hype train?

  • (off) Train has derailed: 207 votes (39.2%)
  • (on) Overcrowding, standing room only: 100 votes (18.9%)
  • (never ever got on) Chinese escalator: 221 votes (41.9%)

  • Total voters: 528
Status
Not open for further replies.
According to AMD, Vega uses Infinity Fabric.

In that video Koduri is talking about "the next 12 months, 24 months", and it was posted in March 2016. He then goes on to say that AMD are already giving multi-GPU to universities for VR (that is the dual Furys they sent out). That isn't Navi he is talking about.
When he talks about extending multi-GPU up and down the market stack due to better yields of smaller chips, he isn't talking about modular GPUs with interconnected dies. He is talking about simply selling multiple graphics cards to consumers and having CrossFire actually work. So instead of a 1080 Ti type card, he wants you to buy 4x RX 480s. This is made clear in his follow-up statements about getting developers working with CrossFire right now, at 4:54 (i.e. back in 2016). This is also why AMD's earlier marketing compared 2x RX 480 as cheaper and faster than a 1080.
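The better-yields argument can be sketched numerically: under a simple Poisson defect model, the fraction of defect-free dies falls off exponentially with die area, so small dies waste far less silicon than one big one. Everything below (the die sizes, the defect density, the `die_yield` helper) is made up for illustration, not an AMD figure:

```python
import math

def die_yield(area_mm2, defects_per_mm2):
    """Zero-defect yield under a simple Poisson defect model."""
    return math.exp(-area_mm2 * defects_per_mm2)

D0 = 0.002  # assumed defect density per mm^2 (illustrative, not a real fab number)

big_die = die_yield(480, D0)    # one hypothetical ~480 mm^2 GPU
small_die = die_yield(120, D0)  # one of four hypothetical ~120 mm^2 GPUs

print(f"480 mm^2 die yield: {big_die:.1%}")    # ~38%
print(f"120 mm^2 die yield: {small_die:.1%}")  # ~79%
```

Roughly twice as much of the wafer ends up usable with the small dies in this toy example, which is the economic case for selling several small chips instead of one monolith.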
 
Sounds like a revolution. Taking Vega or whatever develops by then, shrinking it and having four of them able to work together is massive.

Yes, indeed, but the article is speculation, and going just on the click-bait title, the implication is that AMD are giving up on making anything big. I guess they sort of are, but only in order to move to a new architectural paradigm and to continue to make more powerful graphics cards.
 
Navi sounds interesting if it is what people think it is (a load of smaller GPUs connected together in one package using Infinity Fabric, working as one). If it removes the need for devs to make profiles for it to work then I'm all for it.
 
Sorry if this sounds like a daft question but I'm a daft lad with this sort of stuff...

Is my 2500K likely to bottleneck Vega? I'll only be looking at the Vega 56 model and I play at 2560x1080 @ 75Hz.
 
When Raja is talking about going beyond CrossFire, he is talking about trying to push explicit multi-adapter in DX12 and future APIs. Unfortunately it's an uphill battle to get developers to even give it a passing thought, let alone implement it :s

They've intimated a few times that once they are up to speed on 7nm the approach will turn towards modular multi-package GPUs.
 
According to AMD, Vega uses Infinity Fabric.

In that video Koduri is talking about "the next 12 months, 24 months", and it was posted in March 2016. He then goes on to say that AMD are already giving multi-GPU to universities for VR (that is the dual Furys they sent out). That isn't Navi he is talking about.
When he talks about extending multi-GPU up and down the market stack due to better yields of smaller chips, he isn't talking about modular GPUs with interconnected dies. He is talking about simply selling multiple graphics cards to consumers and having CrossFire actually work. So instead of a 1080 Ti type card, he wants you to buy 4x RX 480s. This is made clear in his follow-up statements about getting developers working with CrossFire right now, at 4:54 (i.e. back in 2016). This is also why AMD's earlier marketing compared 2x RX 480 as cheaper and faster than a 1080.
Yes, he talked about regular multi-GPU, but when he talks about node difficulty/cost/size/yield etc., it's more or less clear what he means. Later interviews touched slightly on the subject, and all of it seems to converge on the same principle: a multi-die GPU. Even the white paper AMD released about their next-gen HBM, interposed directly on the GPU chips, shows a multi-die GPU.
And Nvidia didn't just feel like spending time researching the subject unless AMD was already working on it.
Ryzen/TR/Epyc and Infinity Fabric are another hint at where AMD is headed.
So no, you cannot see or touch it, you just have to believe... but it seems quite obvious to me, and to be honest this seems to be the only way for AMD to be competitive again on the GPU side.
 
Sorry if this sounds like a daft question but I'm a daft lad with this sort of stuff...

Is my 2500K likely to bottleneck Vega? I'll only be looking at the Vega 56 model and I play at 2560x1080 @ 75Hz.

Providing you've got a decent enough overclock on the CPU, I can't see it being a major problem.
 
Providing you've got a decent enough overclock on the CPU, I can't see it being a major problem.


My mobo is absolute pants and the voltage doesn't hold steady at all, which worries me. The auto overclock only takes it to around 3.7GHz as well :/
 
Suppose it depends on what you consider "a lot"; in 11 years that's not really all that many.

If you include CPU PhysX, quite a lot of titles use it - e.g. Deus Ex: Mankind Divided has both PhysX 3 and APEX components as well as some other GameWorks stuff. I think people would be surprised how many games actually use software PhysX, but hardware titles are pretty few and far between sadly.

Game developers just don't like implementing this kind of thing even with nVidia's weight behind it.
 
My mobo is absolute pants and the voltage doesn't hold steady at all, which worries me. The auto overclock only takes it to around 3.7GHz as well :/

I think you'll see bottlenecking in some games, but it'll probably vary quite a bit. Some will suffer just a little, others by quite a lot. You're running a reasonably high resolution which is your friend in a situation like this, as it will help to reduce the load on the CPU. Crank up the graphics settings to help ease things further still.
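That reasoning can be put into a toy model: treat the CPU's cost per frame as roughly fixed and the GPU's cost as scaling with pixel count, so the delivered frame rate is whichever limit is lower. The `fps` function and every number here are invented for illustration, not benchmarks of any real 2500K or Vega card:

```python
def fps(cpu_fps_cap, gpu_fps_at_1080p, width, height):
    """Toy model: frame rate is capped by the slower of CPU and GPU.

    CPU work per frame is assumed constant; GPU work is assumed to
    scale linearly with pixel count relative to 1920x1080.
    """
    pixel_ratio = (width * height) / (1920 * 1080)
    gpu_fps = gpu_fps_at_1080p / pixel_ratio
    return min(cpu_fps_cap, gpu_fps)

# Hypothetical figures: a CPU good for ~90 fps, a GPU good for ~110 fps at 1080p.
print(fps(90, 110, 1920, 1080))  # CPU-bound: 90
print(fps(90, 110, 2560, 1080))  # wider resolution pushes the limit onto the GPU (~82.5)
```

The same mechanism explains the "crank up the graphics settings" advice: anything that raises GPU cost per frame pulls the cap further away from the CPU side.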
 
Navi sounds interesting if it is what people think it is (a load of smaller GPUs connected together in one package using Infinity Fabric, working as one). If it removes the need for devs to make profiles for it to work then I'm all for it.

The trouble is there is absolutely zero credible evidence that that is what Navi is. As I said above, even AMD is just talking about making sure developers get a better understanding of CrossFire.
 
Not the point though; it was stated that it was not used in a lot of games, and it was. Grrr.

Grr all you want; the number of games that hardware PhysX is in is often overstated, and most of the games that do use it only use it as a token gesture, or the basic physics are stripped back to create an artificially large gap between PhysX on and off.

Proprietary stuff doesn't get adopted, especially this sort of thing.

If you include CPU PhysX, quite a lot of titles use it - e.g. Deus Ex: Mankind Divided has both PhysX 3 and APEX components as well as some other GameWorks stuff. I think people would be surprised how many games actually use software PhysX, but hardware titles are pretty few and far between sadly.

Game developers just don't like implementing this kind of thing even with nVidia's weight behind it.

Loads of games use PhysX; I think most people don't realise that it's mostly CPU PhysX though.
 
The trouble is there is absolutely zero credible evidence that that is what Navi is. As I said above, even AMD is just talking about making sure developers get a better understanding of CrossFire.

I think people hoping for it to emerge in Navi might be waiting a while - all the technical documentation on that kind of approach published so far suggests you need DRAM shrunk at least a node smaller than anything available now, and modules manufactured on some kind of plus-variant 7nm process to reduce issues due to trace length, etc.
 