AMD VEGA confirmed for 2017 H1

That's the problem it's going to have: Nvidia won't have any interest in helping push its uptake, maybe even the exact opposite.

Yeah, if it's the case that it needs the devs, then it won't get used, whether it's easy to do or not. Take Mantle for example: AOTS, one guy, about an hour or two to do, and they said it was **** easy, yet it got no interest at all. No one wanted to know about it, not even the owners of the PC gaming market, Nvidia, so AMD had to get rid of it. But look at it now: now that it's not under AMD, Nvidia have jumped on board, the games are coming, and the devs are all over it now too :p

If it needs the devs, then forget about it, as it'll just go the way of Mantle (when it was just under AMD), TressFX, that audio thing, and every other tech that AMD have done: no Nvidia, no use.
 
As far as I am aware they haven't confirmed how it works. I can understand why you make that assumption, but for all you know AMD may use the drivers to intercept game engine memory requests, so that there is no need for developers (similar to what Nvidia does with DX11 drivers and thread scheduling).

Well, Computex will be here soon, so we'll get our answer then.

This article is what gives me doubts.
as AMD told it to me, “with the right knowledge” you can discard game based primitives at an incredible rate. This right knowledge though is the crucial component – it is something that has to be coded for directly and isn’t something that AMD or Vega will be able to do behind the scenes.

https://www.pcper.com/reviews/Graph...w-Redesigned-Memory-Architecture/Primitive-Sh

I've been talking out of my butt, so I need to read up a bit more.
 
Yeah, if it's the case that it needs the devs, then it won't get used, whether it's easy to do or not. Take Mantle for example: AOTS, one guy, about an hour or two to do, and they said it was **** easy, yet it got no interest at all. No one wanted to know about it, not even the owners of the PC gaming market, Nvidia, so AMD had to get rid of it. But look at it now: now that it's not under AMD, Nvidia have jumped on board, the games are coming, and the devs are all over it now :p

Actually, for PC, Vulkan has around the same uptake as Mantle, or maybe even less, so Nvidia isn't having much effect, if any. Some "owners of the PC gaming market" effect that is.
 
So I just had an interesting thought, now that we know Vega uses 2 stacks of HBM2 instead of 4.

I looked up how much power HBM2 uses compared to HBM1 - apparently 8% less per chip, if you scroll down to the bottom table and graphs here: https://www.skhynix.com/eng/product/dramHBM.jsp

Also checked what the power draw of the Fury X memory system is - 14.6W according to Anandtech (table about halfway down): http://www.anandtech.com/show/9969/jedec-publishes-hbm2-specification


And the 8GB Vega card will only use 2 4-Hi stacks (instead of 4 4-Hi stacks), because HBM2 has 4x the density as well.

So doesn't this mean the 8GB card's HBM2 system will only consume (14.6 / 2) * 0.92 = 6.7W?

Under 7W is pretty mad for 480 GB/s bandwidth (and lower latency than HBM1 too according to hynix). More power for the GPU cores themselves in the same power envelope.

This also means you could provide a laptop card (or APU graphics cores on die?) 4GB of VRAM, at 512 GB/s once HBM2 can do 2000 MHz, at approx 3.5W power draw :eek:
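
For anyone who wants to sanity-check the arithmetic, here's a quick back-of-the-envelope sketch. The constants are just the rough figures from the linked articles (14.6W for the Fury X's four HBM1 stacks, ~8% per-chip saving for HBM2), so treat the output as ballpark numbers only:

```python
# Back-of-the-envelope HBM power estimate using the rough figures above.
# These constants come from the linked articles, not official specs.

FURYX_HBM1_POWER_W = 14.6   # Fury X memory subsystem, 4 stacks of HBM1
HBM2_POWER_SCALE = 0.92     # ~8% less power per chip for HBM2 vs HBM1

def hbm2_power_estimate(stacks: int, hbm1_stacks: int = 4) -> float:
    """Scale the Fury X figure by stack count and the HBM2 saving."""
    per_stack_hbm1_w = FURYX_HBM1_POWER_W / hbm1_stacks
    return stacks * per_stack_hbm1_w * HBM2_POWER_SCALE

print(f"8GB Vega, 2 stacks:   ~{hbm2_power_estimate(2):.1f} W")  # ~6.7 W
print(f"Laptop part, 1 stack: ~{hbm2_power_estimate(1):.1f} W")  # ~3.4 W
```

(The single-stack number comes out a touch under the 3.5W figure above, but it's the same ballpark once you allow for rounding.)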
 
And I'm sure some will take advantage of it, but it's all on a hope and a prayer. Yet again AMD are taking a chance on getting something adopted, and it doesn't work out of the box as many hoped and said it would.

If it's good and fairly easy to implement, it'll get used. AMD did all right replacing the next gen DX and OpenGL with Mantle, even if politics dictated name changes to DX12 and Vulkan.

This article is what gives me doubts.

https://www.pcper.com/reviews/Graph...w-Redesigned-Memory-Architecture/Primitive-Sh

Exactly, Nvidia do not have the hardware in place to support this, and that could be enough to kill it in its tracks.

There's a reason that AMD have announced big relationships with the likes of Bethesda. If it's a trivial driver check and a bit of code in front of existing shaders, it will be a no-brainer to implement.

You're making an awful lot of gloomy assumptions from one five-month-old tech preview.
 
If it's good and fairly easy to implement, it'll get used. AMD did all right replacing the next gen DX and OpenGL with Mantle, even if politics dictated name changes to DX12 and Vulkan.



There's a reason that AMD have announced big relationships with the likes of Bethesda. If it's a trivial driver check and a bit of code in front of existing shaders, it will be a no-brainer to implement.

You're making an awful lot of gloomy assumptions from one five-month-old tech preview.

Agreed :)
 
If it's a trivial driver check and a bit of code in front of existing shaders, it will be a no-brainer to implement.

I think it's likely you can intercept your current vertex pipeline without additional changes to the game engine - but it will probably require some work once you've done that to sanity check/filter, not just fire and forget.

I'll be surprised if it doesn't end up like first-gen tessellation hardware: sitting there unused while nVidia doesn't support it, with no real pressure on nVidia to do something similar as their architecture can brute force past the inefficiency. Developers aren't big fans of vendor-specific "extensions" even on the nVidia side - I think only around 3 games support nVidia's own non-standard optimised hardware shadow features, for instance, despite some potentially significant speed increases.
 
For me the difference between medium and high or ultra is day and night. There is no question about it. I prefer both smooth play AND quality visuals. Can't stand consoles, PCs are so much better ...
If you think the difference between high and ultra is day and night then you, sir, are seeing something I am not. To me it's not apparent straight away. I even have to take screenshots stood still looking at the same scene at high settings and then at ultra settings to see the difference. If it was night and day I would not need to do that. I would be like: turn that setting up... apply... OHHH look at that, looks much better. But nope, not a single game today. Maybe back in the day with the likes of Far Cry or Crysis when they came out you could tell the difference, but games today, not a chance. But you can tell the difference it makes to your GPU when it taxes it and you get less FPS.

It's only when I go from, say, 1440p to 4K that I notice an improvement in visual quality. So I tend to go for 4K and high settings for the best visual experience and smooth gameplay. 1440p and ultra settings does not look as good to me. Sometimes there are gimmicky visuals that ruin the experience, like bloom, DOF and motion blur.

edit -

Also, I said consoles are a baseline, meaning PCs get the better visuals but consoles are the baseline. That's why medium with some settings on low tends to be where consoles are at. But you only notice a nice visual improvement from those settings up to high. After that it's just minuscule and tanks performance in most cases.
 
I think it's likely you can intercept your current vertex pipeline without additional changes to the game engine - but it will probably require some work once you've done that to sanity check/filter, not just fire and forget.

I'll be surprised if it doesn't end up like first-gen tessellation hardware: sitting there unused while nVidia doesn't support it, with no real pressure on nVidia to do something similar as their architecture can brute force past the inefficiency. Developers aren't big fans of vendor-specific "extensions" even on the nVidia side - I think only around 3 games support nVidia's own non-standard optimised hardware shadow features, for instance, despite some potentially significant speed increases.

I'm just trying to put things into perspective and manage expectations. People have been hyping up the tiled rasterisation as much as the HBCC thinking it will cause performance to jump by a fair margin. In reality based on what has been said so far:

1) Tiled rasterisation will require developers to code for it and so will take time to adopt.

2) The HBCC can work transparently and provide some improvement on day 1, but that's with regard to improving minimums and also power efficiency.

It seems to me that the most important thing is that Vega is able to clock higher relative to previous GCN iterations. This is FPS in the bank. The HBCC can also make the card score points in terms of keeping a tight FPS range. We should have a solid performer in Vega but I would be extremely pleasantly surprised if it ended up beating the 1080ti by a margin.

On the plus side, AMD will have a solid base to move on from. The big test will be to see if they can finally work with developers close enough to get them to use these features.
 
I'm just trying to put things into perspective and manage expectations. People have been hyping up the tiled rasterisation as much as the HBCC thinking it will cause performance to jump by a fair margin. In reality based on what has been said so far:

1) Tiled rasterisation will require developers to code for it and so will take time to adopt.

2) The HBCC can work transparently and provide some improvement on day 1, but that's with regard to improving minimums and also power efficiency.

It seems to me that the most important thing is that Vega is able to clock higher relative to previous GCN iterations. This is FPS in the bank. The HBCC can also make the card score points in terms of keeping a tight FPS range. We should have a solid performer in Vega but I would be extremely pleasantly surprised if it ended up beating the 1080ti by a margin.

On the plus side, AMD will have a solid base to move on from. The big test will be to see if they can finally work with developers close enough to get them to use these features.

Tile-based rendering is already used by NV, so no extra work is needed from the devs, really.
 
Tile-based rendering is already used by NV, so no extra work is needed from the devs, really.

Read my previous post:

So, I was digging up some old news and read this January article about Vega on PC Perspective:

On the subject of the new 'geometry primitive shader':


The new programmable geometry pipeline on Vega will offer up to 2x the peak throughput per clock compared to previous generations by utilizing a new “primitive shader.” This new shader combines the functions of vertex and geometry shader and, as AMD told it to me, “with the right knowledge” you can discard game based primitives at an incredible rate. This right knowledge though is the crucial component – it is something that has to be coded for directly and isn’t something that AMD or Vega will be able to do behind the scenes.

This primitive shader type could be implemented by developers by simply wrapping current vertex shader code that would speed up throughput (to that 2x rate) through recognition of the Vega 10 driver packages. Another way this could be utilized is with extensions to current APIs (Vulkan seems like an obvious choice) and the hope is that this kind of shader will be adopted and implemented officially by upcoming API revisions including the next DirectX. AMD views the primitive shader as the natural progression of the geometry engine and the end of standard vertex and geometry shaders. In the end, that will be the complication with this new feature (as well as others) – its benefit to consumers and game developers will be dependent on the integration and adoption rates from developers themselves. We have seen in the past that AMD can struggle with pushing its own standardized features on the industry (but in some cases has had success ala FreeSync).

It certainly seems like we will need to wait for devs to pick this up.

Raja acknowledged during his talk that they messed up and lost focus. Since RTG was formed they've been consistently delivering launch-day driver updates for games, as well as overall software improvements (speedups, WattMan, ReLive). If they extend that to working with development teams they'll be in a good place. Like Roff/Steampunk said, the changes required seem manageable, so there's no reason AMD can't push to get them in:

If it's good and fairly easy to implement, it'll get used.
...
If it's a trivial driver check and a bit of code in front of existing shaders, it will be a no-brainer to implement.

I think it's likely you can intercept your current vertex pipeline without additional changes to the game engine - but it will probably require some work once you've done that to sanity check/filter, not just fire and forget.

I'll be surprised if it doesn't end up like first-gen tessellation hardware: sitting there unused while nVidia doesn't support it, with no real pressure on nVidia to do something similar as their architecture can brute force past the inefficiency. Developers aren't big fans of vendor-specific "extensions" even on the nVidia side - I think only around 3 games support nVidia's own non-standard optimised hardware shadow features, for instance, despite some potentially significant speed increases.

...but it's equally possible it gets ignored.
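
For anyone wondering what "discard game based primitives" actually means in that quote, here's a rough, purely illustrative sketch of the kind of early triangle culling a combined vertex/primitive shader could do (back-face and degenerate-triangle tests). This is not AMD's actual shader interface, which hasn't been published; it's just the general idea, written as plain Python for clarity:

```python
# Illustrative only: the kind of early primitive discard a combined
# vertex/primitive shader could perform before rasterisation. This is
# NOT AMD's real primitive shader API; it just demonstrates back-face
# and degenerate-triangle culling on already-projected triangles.

from typing import List, Tuple

Vec3 = Tuple[float, float, float]
Triangle = Tuple[Vec3, Vec3, Vec3]

def signed_area_2d(a: Vec3, b: Vec3, c: Vec3) -> float:
    """Signed area of the (x, y) projection; its sign gives the winding."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])

def cull_primitives(triangles: List[Triangle]) -> List[Triangle]:
    """Drop back-facing and zero-area triangles before the rasteriser sees them."""
    return [tri for tri in triangles if signed_area_2d(*tri) > 0.0]

tris = [
    ((0, 0, 0), (1, 0, 0), (0, 1, 0)),  # front-facing: kept
    ((0, 0, 0), (0, 1, 0), (1, 0, 0)),  # back-facing: discarded
    ((0, 0, 0), (1, 0, 0), (2, 0, 0)),  # degenerate (zero area): discarded
]
print(len(cull_primitives(tris)), "of", len(tris), "triangles survive")
```

Anything discarded at this stage never touches the rest of the pipeline, which is where the claimed throughput win would come from; the open question in this thread is whether devs have to opt in to get it.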
 
By the way, I'm not trying to rain on Vega's parade here. I'm quite hyped up about it, but at the same time trying to be realistic.

If Vega trades blows with the 1080ti in the same way the 480 does with the 1060, BUT does so with more competitive power consumption, that'd mean that AMD is on par with Nvidia. It'll be awesome. And when the new features start getting used it'll get even better.

Volta may be 'coming soon' but if that's not Q1 2018 then it's not soon enough...
 
By the way, I'm not trying to rain on Vega's parade here. I'm quite hyped up about it, but at the same time trying to be realistic.

If Vega trades blows with the 1080ti in the same way the 480 does with the 1060, BUT does so with more competitive power consumption, that'd mean that AMD is on par with Nvidia. It'll be awesome. And when the new features start getting used it'll get even better.

Volta may be 'coming soon' but if that's not Q1 2018 then it's not soon enough...

I'm not forgetting that in AMD's video for Vega a poster said "Poor Volta", so clearly they are targeting Volta and not Pascal. So it really should easily beat the 1080 Ti?
 
I'm just trying to put things into perspective and manage expectations. People have been hyping up the tiled rasterisation as much as the HBCC thinking it will cause performance to jump by a fair margin. In reality based on what has been said so far:

1) Tiled rasterisation will require developers to code for it and so will take time to adopt.

2) The HBCC can work transparently and provide some improvement on day 1, but that's with regard to improving minimums and also power efficiency.

It seems to me that the most important thing is that Vega is able to clock higher relative to previous GCN iterations. This is FPS in the bank. The HBCC can also make the card score points in terms of keeping a tight FPS range. We should have a solid performer in Vega but I would be extremely pleasantly surprised if it ended up beating the 1080ti by a margin.

On the plus side, AMD will have a solid base to move on from. The big test will be to see if they can finally work with developers close enough to get them to use these features.

It looks like you meant to mention primitive shaders rather than tiled rasterisation? Tiled rasterisation relies on using on-die cache (supposedly L2) to store rasteriser data and reduce memory bandwidth use. The performance/power benefits come from keeping the data local on the chip. AFAIK there is no requirement or benefit for the developer to be aware of it.
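
For anyone not familiar with the term, here's a minimal, purely illustrative sketch of what "tile-based" means here: primitives are binned into screen-space tiles first, so each tile's colour/depth traffic can stay in on-die cache rather than going out to VRAM. The tile size and data structures are invented for the example and have nothing to do with how Vega or Nvidia actually implement it in hardware:

```python
# Illustrative only: the basic idea behind tile-based rasterisation.
# Triangles are binned into screen-space tiles first, then each tile is
# shaded/blended in its entirety, so a tile's working set can stay in
# on-die cache instead of being written to and re-read from VRAM.

from collections import defaultdict
from typing import Dict, List, Tuple

TILE_SIZE = 32  # pixels per tile edge (made up for the example)

Triangle = Tuple[Tuple[float, float], Tuple[float, float], Tuple[float, float]]

def bin_triangles(tris: List[Triangle]) -> Dict[Tuple[int, int], List[int]]:
    """Map each tile (tx, ty) to the indices of triangles overlapping it."""
    bins: Dict[Tuple[int, int], List[int]] = defaultdict(list)
    for i, tri in enumerate(tris):
        xs = [v[0] for v in tri]
        ys = [v[1] for v in tri]
        # Conservative bounding-box overlap test per tile.
        for tx in range(int(min(xs)) // TILE_SIZE, int(max(xs)) // TILE_SIZE + 1):
            for ty in range(int(min(ys)) // TILE_SIZE, int(max(ys)) // TILE_SIZE + 1):
                bins[(tx, ty)].append(i)
    return bins

tris = [((5, 5), (60, 10), (10, 70)), ((100, 100), (110, 100), (100, 115))]
for tile, indices in sorted(bin_triangles(tris).items()):
    # Each tile's colour/depth buffer would then be processed entirely on-chip.
    print(f"tile {tile}: triangles {indices}")
```

The point is simply that the binning happens in the hardware/driver, which is why no developer involvement should be needed for it.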
 
Realistically, they can't be releasing Vega to compete with Pascal. It just doesn't make sense; it's too late. And seeing as Pascal is basically just Maxwell+, it wouldn't be impressive at all.
I'm thinking purely from a business point of view about how AMD must be trying to position themselves here. That would be how to make the most money.
[attached image: cards.png]
 
As a side note: look how much higher Nvidia put Pascal than Maxwell... yeah right! lol

Well, when Pascal was being launched NVIDIA originally stated it was 10x faster in some tasks, didn't they?

It was supposed to be a massive leap forward, since they skipped 20nm (which flopped) and could go straight to 16nm.

Looking at Titan X vs Titan Xp, they made great improvements, but nothing compared to what they were stating originally in slides.

I expect Vega to be to AMD's Fiji what Pascal was to Maxwell, really.
 
I still expect Vega to be competitive with Pascal. It's unlikely it will be anything more.

Just based on how far behind Pascal Polaris is, efficiency- and performance-wise. Fury X vs Titan Xp, for example.

Pascal was a decent step up even from Maxwell, offering pretty much twice the performance down the line: 1070/1080/1080 Ti vs 970/980/980 Ti.

I do hope AMD have something fast up their sleeve though. Cheaper cards all round :D.
 