
AMD VEGA confirmed for 2017 H1

I was actually replying to that; the pictures I saw in that post from TPU were of the FE. I am not interested in the FE, and I am confident RX Vega for gamers won't be blue.

Look at the side of the picture - it's part of one of the RX Vega cards. I looked at the pictures of the Fury Nano, Fury X and the Fury Pro Duo, and the rear section and backplate are different on those cards. So I suspect RX Vega might actually look like the Fury series cards when it comes to the colour scheme.
 
I'm struggling to get my head around this part, since GPUs already have thousands of small cores on the die. I don't get what advantage having multiple smaller dies in a single package would bring (although one possible disadvantage could be increased delays/timings).

He means multiple GPU dies on an interposer, just like we have with Ryzen and Epyc right now, having them all work as a single GPU sharing their HBC (high-bandwidth cache).

In Rise of the Tomb Raider in DX12, the old 295X2 can match the Titan XP, so multi-adapter, when used right, offers great scaling.

AMD is banking on all of that for the future.
Going with the small-die strategy again, like in the 4870 and 5870 days.
 
I'm struggling to get my head around this part, since GPUs already have thousands of small cores on the die. I don't get what advantage having multiple smaller dies in a single package would bring (although one possible disadvantage could be increased delays/timings).
More functional chips from the same silicon wafer, for one.
 
Looks like we might have a partial picture of one of the RX Vega cards:
https://www.techpowerup.com/233478/amd-radeon-vega-frontier-edition-spotted-in-amds-labs
I audibly groaned. I forgot that the early adopters of RX Vega will have to deal with a blower-style cooler. There are no good blower coolers, ever.
And it will take a good while longer for custom-cooled models from other manufacturers to arrive.

Guess I'm getting a Fury X-style watercooled model then. Hope there is one.
 
gfxchiptweeter:

To realize the full potential of HBCC, yes, we will need to see content from game developers use larger datasets. But we have seen some interesting gains even on current software, particularly in minimum frame rates. Part of the goal of launching Radeon Vega Frontier Edition is to help speed up that process.

Oh dear, it'll never get used then, just as I suspected, as it needs the developers on board, but they won't use it, seeing as Nvidia own the PC gaming market and developers won't bother doing something just for a handful of people. We've seen it with Mantle (yes, I know it's getting used now, but back when it was under AMD no one wanted to know about it, so they had to get rid of it), that audio thing (I've only heard of it being used in Thief, for background sound or something), TressFX, which I only know of being in about two Tomb Raider games and the odd other game or two, DX12, which has only been patched into a couple of DX11 titles later on, and the many other things AMD have done in the past that have just never gotten used because of the big green machine.

What a shame. That's HBCC added to that long list of things, then, that we're going to have to wait for Nvidia to add/do before we get it. :(
 
I'm struggling to get my head around this part, since GPUs already have thousands of small cores on the die. I don't get what advantage having multiple smaller dies in a single package would bring (although one possible disadvantage could be increased delays/timings).

When making bigger chips you tend to have more loss per wafer than you do with smaller chips, because say you can only make 50 chips per wafer for a big die, whereas for a smaller die you could make significantly more per wafer. Lose 10% of the big dies and you have lost 5 out of 50 usable dies, but lose 5% of the smaller dies and you have lost a lot less. Not to mention it tends to be cheaper to produce smaller dies. Navi is supposed to be 7nm, and as the manufacturing process gets smaller and smaller, it gets significantly harder to produce chips. That's how I understand the manufacturing side of things anyway. But Navi, in layman's terms, is going to merge multiple smaller dies so that the package is roughly the same size as one single big die, or potentially bigger.

It's like Minesweeper: say you only have 7x7 blocks but 10 mines. That's going to be 39 usable big dies, unless they try to salvage the bad chips by disabling the broken parts of the die and selling them cut down, which is roughly how they did the Fury X -> Fury, for example. Just a cut-down chip.

With Navi they can use much smaller dies, so wafers can have, for example, 15x15 blocks/dies. Losing roughly the same area on the wafer costs a similar number of dies, potentially slightly more - e.g. 25 dies are bad. That still leaves 200 usable dies, and if they use only 2 dies per GPU then they have 100 GPUs vs 39.

The above is all made-up numbers, but the theory is how I understand Navi in production terms.

I'm sure someone will come along and correct me or put this better, but from what I read before on Navi it's what I understood, lol.
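
To put the made-up numbers above into something runnable, here's a rough Python sketch of the same arithmetic (the 7x7 / 15x15 grids, defect counts and 2-dies-per-GPU figure are just the illustrative values from this post, not real yield data):

```python
# Toy yield comparison echoing the Minesweeper analogy above.
# All figures are illustrative, not real manufacturing data.

def usable_dies(dies_per_wafer: int, defective: int) -> int:
    """Dies left on a wafer after losing `defective` of them to flaws."""
    return dies_per_wafer - defective

# Big-die approach: 7x7 = 49 candidate dies, 10 hit by defects.
big_gpus = usable_dies(7 * 7, 10)          # 39 usable dies -> 39 GPUs

# Small-die approach: 15x15 = 225 candidate dies, 25 hit by defects,
# but each GPU is assembled from 2 dies on an interposer.
small_good = usable_dies(15 * 15, 25)      # 200 usable dies
small_gpus = small_good // 2               # 100 GPUs

print(f"Big-die GPUs per wafer:   {big_gpus}")    # 39
print(f"Small-die GPUs per wafer: {small_gpus}")  # 100
```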
 
Oh dear, it'll never get used then, just as I suspected, as it needs the developers on board, but they won't use it, seeing as Nvidia own the PC gaming market and developers won't bother doing something just for a handful of people. We've seen it with Mantle (yes, I know it's getting used now, but back when it was under AMD no one wanted to know about it, so they had to get rid of it), that audio thing (I've only heard of it being used in Thief, for background sound or something), TressFX, which I only know of being in about two Tomb Raider games and the odd other game or two, DX12, which has only been patched into a couple of DX11 titles later on, and the many other things AMD have done in the past that have just never gotten used because of the big green machine.

What a shame. That's HBCC added to that long list of things, then, that we're going to have to wait for Nvidia to add/do before we get it. :(

That's not how I understood that post.
 
Oh dear, it'll never get used then, just as I suspected, as it needs the developers on board, but they won't use it, seeing as Nvidia own the PC gaming market and developers won't bother doing something just for a handful of people. We've seen it with Mantle (yes, I know it's getting used now, but back when it was under AMD no one wanted to know about it, so they had to get rid of it), that audio thing (I've only heard of it being used in Thief, for background sound or something), TressFX, which I only know of being in about two Tomb Raider games and the odd other game or two, DX12, which has only been patched into a couple of DX11 titles later on, and the many other things AMD have done in the past that have just never gotten used because of the big green machine.

What a shame. That's HBCC added to that long list of things, then, that we're going to have to wait for Nvidia to add/do before we get it. :(

From the way I understood it, it seems to me like he is saying that current games don't produce enough data to fully utilise HBCC, which would also explain his comment about older games. Potentially, the more data that is needed, the more gains can be seen from using Vega.
 
I'm struggling to get my head around this part, since GPUs already have thousands of small cores on the die. I don't get what advantage having multiple smaller dies in a single package would bring (although one possible disadvantage could be increased delays/timings).
Well, that would allow AMD to bring much higher performance at a much lower price; yields for a 200mm² chip are much better than for a 600mm² one.
Let's say a wafer costs $5k and yields 50x 600mm² dies at 30%, so you get 15 good chips; the cost works out to about $330 each, and each delivers 100% performance.
Then take the same $5k wafer and get 150x 200mm² dies at 80% yield; that's 120 good chips at about $41 each, each delivering 33% performance. Now put 2 on an interposer and you get 66% performance for $82, and if you put 4 on an interposer you get 133% performance for $164, plus roughly $25 for the interposer.
The other thing is that it's getting harder to move to smaller nodes, so this would let AMD stick with a node longer even if the foundries don't deliver; instead of going to 800mm² or so, AMD can add more units, with 6 or 8 dies.
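
Purely to illustrate the arithmetic in the post above (the $5k wafer, die counts, yields and the ~$25 interposer figure are the post's made-up numbers, not real costs), a quick sketch:

```python
# Toy cost-per-performance model using the made-up figures from the post above.
WAFER_COST = 5000  # dollars, illustrative

def cost_per_good_chip(dies_per_wafer: int, yield_rate: float) -> float:
    """Wafer cost spread over the chips that actually work."""
    return WAFER_COST / (dies_per_wafer * yield_rate)

big_chip = cost_per_good_chip(50, 0.30)     # ~$333, "100%" performance
small_chip = cost_per_good_chip(150, 0.80)  # ~$42,  "33%" performance each
INTERPOSER = 25                             # assumed extra packaging cost

print(f"1 big die:     100% performance for ~${big_chip:.0f}")
for n in (2, 4):
    perf = n * 33
    cost = n * small_chip + INTERPOSER
    print(f"{n} small dies: ~{perf}% performance for ~${cost:.0f}")
```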
 
Oh dear, it'll never get used then, just as I suspected, as it needs the developers on board, but they won't use it, seeing as Nvidia own the PC gaming market and developers won't bother doing something just for a handful of people. We've seen it with Mantle (yes, I know it's getting used now, but back when it was under AMD no one wanted to know about it, so they had to get rid of it), that audio thing (I've only heard of it being used in Thief, for background sound or something), TressFX, which I only know of being in about two Tomb Raider games and the odd other game or two, DX12, which has only been patched into a couple of DX11 titles later on, and the many other things AMD have done in the past that have just never gotten used because of the big green machine.

What a shame. That's HBCC added to that long list of things, then, that we're going to have to wait for Nvidia to add/do before we get it. :(

You should probably re-read the statement, because it's not how you've interpreted it.

Take an 8GB card now: with current games, most of them will not take advantage of HBCC, since there is enough VRAM to store the data needed, except for the odd title. However, as newer games come out with larger textures and more data to transfer, the 8GB you have will still be sufficient as the HBCC comes into effect, meaning that while newer cards using non-HBCC technology will need 12-16GB of VRAM, you'll be cruising on 8GB and wondering what the deal is with all the unnecessary VRAM on these new cards.

It does not mean that a developer HAS TO program for HBCC, just that if they choose to use huge datasets, then you'll reap the benefit of it, while losers with 11GB cards will run out of VRAM ;)
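
For what it's worth, the general idea can be pictured as ordinary demand paging, with VRAM acting as a cache over a bigger pool of data. The sketch below is a deliberately crude toy model of that caching behaviour (an LRU cache in Python), not a description of how HBCC is actually implemented in hardware:

```python
from collections import OrderedDict

# Toy model only: treat "VRAM" as an LRU cache over a larger dataset,
# loosely analogous to the cache idea behind HBCC (not the real mechanism).
VRAM_PAGES = 8            # pretend the card can hold 8 pages locally
resident = OrderedDict()  # page_id -> data, ordered from coldest to hottest

def touch_page(page_id):
    """Access a page, evicting the least recently used one if 'VRAM' is full."""
    if page_id in resident:
        resident.move_to_end(page_id)      # already local: just mark it hot
    else:
        if len(resident) >= VRAM_PAGES:
            resident.popitem(last=False)   # evict the coldest page
        resident[page_id] = f"data-{page_id}"  # "page in" from system memory

# A frame can touch a 16-page dataset even though only 8 pages fit locally;
# only the pages actually used recently stay resident.
for page in [0, 1, 2, 3, 0, 1, 9, 10, 11, 12, 13, 14, 15, 0]:
    touch_page(page)

print("Pages resident in 'VRAM':", list(resident))
```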
 
You should probably re-read the statement, because it's not how you've interpreted it.

Take an 8GB card now: with current games, most of them will not take advantage of HBCC, since there is enough VRAM to store the data needed, except for the odd title. However, as newer games come out with larger textures and more data to transfer, the 8GB you have will still be sufficient as the HBCC comes into effect, meaning that while newer cards using non-HBCC technology will need 12-16GB of VRAM, you'll be cruising on 8GB and wondering what the deal is with all the unnecessary VRAM on these new cards.

It does not mean that a developer HAS TO program for HBCC, just that if they choose to use huge datasets, then you'll reap the benefit of it, while losers with 11GB cards will run out of VRAM ;)

Thanks for the explanation :)
 
AMD will announce Vega at its 2017 Financial Analyst Day, and of course none other than Raja Koduri will share a few more details with us.

The call and the event take place later today, May 16, 2017, at 1:00 PM Pacific Time, 4:00 PM Eastern Time and 11 PM Central European Time, and will be webcast.

It is not all roses though, as Fudzilla has learned that Vega has experienced some delays and issues, and we are working to get a few more details. AMD has committed to shipping Vega, its first HBM2 GPU, before the end of the second quarter, but even if this happens, the numbers will be limited and its performance might not be as great as the fanboys expected. From what we know, Vega won't be able to beat the GeForce GTX 1080 Ti or Titan.

The original plan was to launch Vega earlier, but since this is highly complicated silicon, things tend to go wrong. Let's not underestimate the cost of HBM2 memory, as well as the shortages.

AMD will have the following people at the call, so expect to see a full company overview and a bit more about the desktop, notebook and server strategy too.

http://fudzilla.com/news/graphics/43666-amd-to-announce-vega-tonight
 
But still, developers prefer GameWorks. Every year there are 7-8 AAA GameWorks titles.

GameWorks is an ad hoc solution for developers to differentiate the PC version from their console versions. Otherwise, in the majority of cases, the only differences would be higher resolution and texture support with uncapped frame rates.

At worst it's added to try and hide that the PC version is visually inferior to the console one, as Batman: Arkham Knight was, often causing severe performance penalties for extremely minor visual improvements.

With GPUOpen there isn't any marketing associated with it, since it's all open-source libraries; developers can modify them and create their own solutions specifically for their titles, as happened with Rise of the Tomb Raider, where TressFX was used to create PureHair.
 
Coming to Vega: I seriously hope, for the future of AMD's GPUs, that they do not mislead people like they did with Fury X. Keep it simple - that is the only thing that can save RTG.
 
The same can be said about Nvidia and what they did with the 970 and how they have been milking us so hard.

Tell me, what should I care more about, getting milked hard, or about Roy?

You should care about the total lack of competition allowing Nvidia to charge what they like. AMD need to get competing cards out, be it on value or performance.
 