
AMD VEGA confirmed for 2017 H1

Status
Not open for further replies.
Well, if you remember, the RX 480 was launched at Computex '16 for $199, so they are aiming Vega at two segments: >$200 Enthusiast and >$500 Ultra Enthusiast. :)

EDIT: Also the giant silver arrow under the yellow shaded bit says Polaris!

Yup, so there should be a cut-down Vega 10, or a Vega 11, to fill in above the 580 range at >$200. Hopefully that's the GTX 1070 competitor.
 
Still flogging that horse ah...

It has compute, but it CANNOT do it asynchronously due to a lack of hardware, which is why it's difficult to use effectively. The driver/software has to do the scheduling.

Oh, I forgot. It's D.P (Damage Patrol).

Take your trolling elsewhere.
 
It's not trolling just because you don't like what's being said. As an adult, you really should know this.
It kind of knocks his argument when he has to resort to name-calling. That's playground stuff rather than adult discussion.
 
Some people think that Vega FE has 8GB of HBM2 and another 8GB of GDDR, and I have to admit this could be the case with that weird 480GB/s bandwidth.
 
Considering how long it took to move on from 28nm, is the next move really going to be so quick?

Yes, by all accounts GloFo is offering 7nm before the end of 2018. And it was designed by IBM originally, so people have high hopes it'll be good. AMD have also mentioned 48-core Zen-2 arch Server chips on 7nm in 2018/2019.


EDIT: This Anandtech article is good, and not really out of date http://www.anandtech.com/show/10704/globalfoundries-updates-roadmap-7-nm-in-2h-2018

Basically we're expecting 7nm from both GloFo and TSMC to start production 1H 2018 ish, and ship before the end of the year (so mid-Navi, and Volta 1080Ti). Then 7nm+ from GloFo in 2019 sometime (Navi Fury?). Then 5nm from TSMC in 2020, with GloFo also offering 5nm around then (Navi and Volta successors from both companies).

So after the 28nm stagnation, we're actually back on track with steroids by the looks of things.

Summary of edit:

Mid-2017 to Early-2018 14nm/12nm: Vega Fury, and mid-Volta (GTX 1080 size)

Mid-2018 to Late-2018 7nm: mid-Navi (RX480 to GTX 1080 size), and MAYBE large Volta (if not then Early-2019)

Early-2019 to Mid-2019 7nm+: Navi Fury, MAYBE later large Volta

Mid-2020 to Mid-2021 5nm: mid-sized successors to Navi and Volta from both companies. And we'll probably be complaining "where are the 8K capable cards" around now :p


I thought Vega 10 was still their consumer card; they don't really have separate consumer and HPC products like Nvidia. Otherwise AMD should have put in proper 1:2 FP64 support; instead they are just using a consumer card for the professional market, as they have done with Fiji and Polaris.

I thought Vega 20 was a refresh of Vega 10, not on 7nm but a tweaked architecture on a tweaked process.

I was going off this: https://videocardz.com/65521/amd-vega-10-and-vega-20-slides-revealed
 
You're condoning name-calling as part of having an "adult" discussion; that's bizarre and says a lot...
Can you show me where I condoned it? I literally said that it doesn't change the truth of what was said.

Just so that you don't confuse it again, calling names changes absolutely nothing about the truth of what he said. It's completely inconsequential to whether it was factual or not.
 
One thing you can be sure of: there will be slippage. And two die shrinks these days are a halfway house compared to what they used to be.
 
Not true. If the HBCC were unaware of VRAM access patterns, it would need to start transferring data only after a 'cache miss', meaning it is already too late: there will be latency and a min-FPS drop.

Of course it's aware of VRAM access; how else would it know how to do memory tiering? But it's independent of the API/software used; this can be used for any application, not only games.
 
Can anyone guess how hard pairing Vega with non-HBM2 memory would be? A complete chip redesign or a minor tweak?
If an HBM2 shortage/poor clocks is a fundamental problem, how long would it take to jump to GDDR6 or GDDR5X?
 

Huge job if Vega was not planned to use GDDR5X from the start.

It would just not be worth doing for AMD.
 

I didn't say it isn't aware of vram access. I was talking about patterns: that is, knowing in ADVANCE which memory will be used.

A hardware-only generic solution acts after-the-fact: the game would try to render using texture X that is not in the HBM memory and the HBCC would start rushing to fetch the data from outside the card and be like: "Ok wait, I'm fetching those textures, don't go away, I'm aware the FPS is dropping while you wait but I'm going as fast as I can". This cannot provide the sort of improvement that AMD is claiming. It is good for compute but not gaming where latency is important (minimum FPS).
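The after-the-fact behaviour described above can be sketched with a toy model. To be clear, this is purely illustrative: the function names, miss penalty, and frame budget below are all invented, not real Vega or HBCC numbers.

```python
# Toy model of a reactive (demand-fetch) cache: the GPU only discovers a
# texture is missing when a frame actually needs it, so the fetch cost
# lands inside that frame and shows up as a min-FPS spike.
# All numbers are invented for illustration.

FRAME_BUDGET_MS = 16.7   # target ~60 FPS
MISS_PENALTY_MS = 25.0   # pretend cost of pulling data in over PCIe

def simulate(frames, resident):
    """frames: texture id needed per frame; resident: set of cached ids."""
    times = []
    for tex in frames:
        t = FRAME_BUDGET_MS
        if tex not in resident:
            t += MISS_PENALTY_MS   # stall: the fetch happens after the miss
            resident.add(tex)      # texture is resident for later frames
        times.append(t)
    return times

frame_times = simulate(["a", "a", "b", "b", "a"], resident={"a"})
print(frame_times)  # the first use of "b" is the slow frame
```

Only the frame that first touches "b" pays the penalty, which is exactly the minimum-FPS drop being argued about.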

It is also why I was skeptical about whether it is tomorrow-tech (i.e. when games do X a few years from now, then Vega's HBCC will...).

But yesterday I noticed for the first time that there are already new API features that allow games to do exactly that. I referenced and quoted what it is:

Actually there is a feature added in DX12 as MANDATORY but also optionally available in DX11 called Tiled Resources. I quote from Microsoft:

Tiled Resources decouple a D3D Resource object from its backing memory (resources in the past had a 1:1 relationship with their backing memory). This allows for a variety of interesting scenarios such as streaming in texture data and reusing or reducing memory usage.

I have said several times that I would be surprised if the HBCC worked without support from software, and it seems to me that AMD are probably exploiting something like this. If the game supports this feature (whether DX11 or DX12), the AMD driver should be able to exploit the HBCC.

Using this feature the game tells the DX stack what parts of memory it WILL be using. The DX11/DX12 software implementation (i.e. the AMD driver) is then informed in advance and knows what data it is safe to move outside the HBM and what data should move into the HBM.
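Contrasting with the reactive case, a hint-driven scheme could look like the toy model below. Again, this is a speculative sketch of the idea, not the actual driver mechanism; all names and numbers are made up.

```python
# Toy model of hint-driven residency: the game declares, a frame ahead,
# which texture it will need, so the driver can overlap the transfer with
# the previous frame instead of stalling the frame that uses the data.
# All names and numbers are invented for illustration.

FRAME_BUDGET_MS = 16.7
MISS_PENALTY_MS = 25.0

def simulate_with_hints(frames, resident):
    """frames: list of (needed_tex, hint_for_next_frame_or_None)."""
    times = []
    for needed, hint in frames:
        # Data hinted on an earlier frame has already been streamed in.
        t = FRAME_BUDGET_MS if needed in resident else FRAME_BUDGET_MS + MISS_PENALTY_MS
        times.append(t)
        if hint is not None:
            resident.add(hint)  # transfer completes before the next frame
    return times

# Frame 2 hints that "b" is coming, so frame 3 never misses.
print(simulate_with_hints([("a", None), ("a", "b"), ("b", None)], {"a"}))
# → [16.7, 16.7, 16.7]
```

The difference from the reactive sketch is just ordering: the transfer starts before the frame that needs the data, so no frame pays the miss penalty.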

But this means that games that don't utilise this feature are unlikely to benefit from the HBCC's presence.
 

Not according to Raja (https://youtu.be/FwcUMZLvjYw?t=8m42s). The HBCC is a hardware solution; devs won't see any difference. It will work with any application, not just games using DX-whatever. They demoed this with rendering applications besides the Deus Ex and RotTR demos.

There is no 'safe' data to move out of the cache, or data not to be moved; it's all determined by how much it is accessed. This is a personal guess, as AMD did not actually specify how the fine-grained management works. Can't say if they ever will...
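That access-frequency guess could work something like the toy LFU-style tiering below. This is entirely speculative, sketching the poster's guess, not AMD's actual implementation; the tier names and capacities are made up.

```python
from collections import Counter

# Toy access-frequency tiering: the most-touched pages stay in the fast
# tier ("HBM"); the rest are demoted to the slow tier ("system memory").
# A sketch of the guessed mechanism, not how Vega's HBCC actually works.

def tier_pages(accesses, fast_capacity):
    """accesses: sequence of page ids; returns (fast_tier, slow_tier) sets."""
    counts = Counter(accesses)
    # Rank by access count, ties broken alphabetically for determinism.
    ranked = sorted(counts, key=lambda p: (-counts[p], p))
    return set(ranked[:fast_capacity]), set(ranked[fast_capacity:])

fast, slow = tier_pages(["p1", "p1", "p2", "p3", "p1", "p2"], fast_capacity=2)
print(fast, slow)  # p1 and p2 are hot and stay fast; p3 is demoted
```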
 