• Competitor rules

    Please remember that any mention of competitors, hinting at competitors or offering to provide details of competitors will result in an account suspension. The full rules can be found under the 'Terms and Rules' link in the bottom right corner of your screen. Just don't mention competitors in any way, shape or form and you'll be OK.

Poll: ** The AMD VEGA Thread **

On or off the hype train?

  • (off) Train has derailed

    Votes: 207 39.2%
  • (on) Overcrowding, standing room only

    Votes: 100 18.9%
  • (never ever got on) Chinese escalator

    Votes: 221 41.9%

  • Total voters
    528
Status
Not open for further replies.
I've had some really weird results with Doom and my cards. My 780 had all kinds of issues with it in OpenGL, never mind Vulkan, and my 1070 gets pretty much identical performance IIRC between OpenGL and Vulkan, though one will do slightly better than the other in some scenes and vice versa.

My 780 was all over the place: 1080p performance was great and it handled everything from low to ultra settings fine, but turn it up to 1440p and performance took an utter dump. Low or ultra, it was exactly the same poor framerate, with no signs of being fillrate or VRAM limited, and in Vulkan it would adhere to strange multipliers of the refresh rate like some kind of crazy V-Sync effect, only jumping between 30, 45, 60 and 120 fps :s (I think that was solved in later drivers, but I was on the 1070 by then and haven't revisited it on the 780).

EDIT: Oh, and some people saw a ~30% framerate increase on Kepler cards going to Vulkan, but most didn't, and nobody could pin down a reproducible reason either way.

Yeah, I'm pretty sure the Nvidia drivers are meant to be slightly odd with the new APIs. Not saying they're 'bad', but it seems fair to say they don't perform as expected or as consistently. The DX11 driver is clearly far more meticulously made.

Seems like only the GTX 1080/Ti/Titan gain consistently in Vulkan, while the weaker cards stay still or lose very slightly. In DX12, Nvidia cards mostly lose mildly or stay still.

Quite likely this won't be a problem with Volta, and when Vulkan/DX12 becomes more mainstream.

And Vega is supposedly fantastic for both too. Hopefully we'll see a lot of DX12/Vulkan games reviewed when RX Vega comes out.
 
Just occurred to me. Everyone has been saying that RX Vega can't be sold for cheap because HBM is expensive, but I think the most expensive part isn't the HBM but the GPU die itself. Raja said that Vega uses Infinity Fabric, and with IF the rumours are saying that AMD has 80% yield of full-fat Ryzen 7s and 99% utilisation of Ryzen dies. I think AMD could sell Vega for cheaper than we think. If they decide to.

Ryzen has good yields because it's highly modular and the die is smaller because of it. Vega isn't like this, maybe Navi.
 
You've completely misunderstood why the cpu yields are good and wrongly extrapolated it to a completely unrelated GPU product

So you're just going to say "you're wrong" with no explanation? Or is Sirroman your alternative account?

Ryzen has good yields because it's highly modular and the die is smaller because of it. Vega isn't like this, maybe Navi.
From my understanding the small die size gives good yields, but the modularity of the design gives good die utilisation per wafer.
Apart from the die shot and die size we don't know much about the die itself, or what it comprises. If Vega is not modular, then it means that IF is designed for communication with the CPU, which would mean there is an advantage to using Vega with Ryzen CPUs.
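To make the yield-vs-die-size point concrete, here's a minimal sketch using the standard Poisson defect-density yield model. The die areas and the defect density are illustrative assumptions, not AMD figures:

```python
import math

def poisson_yield(die_area_mm2, defects_per_cm2):
    """Fraction of dies with zero defects: Y = exp(-D * A)."""
    area_cm2 = die_area_mm2 / 100.0
    return math.exp(-defects_per_cm2 * area_cm2)

# Assumed values for illustration only: ~192 mm^2 for a Ryzen die,
# ~484 mm^2 for Vega 10, 0.2 defects/cm^2 for the process.
ryzen_yield = poisson_yield(192, 0.2)  # small die -> high yield
vega_yield = poisson_yield(484, 0.2)   # large die -> much lower yield

print(f"Ryzen-sized die: {ryzen_yield:.1%}")  # ~68%
print(f"Vega-sized die:  {vega_yield:.1%}")   # ~38%
```

The point is just that yield falls off exponentially with die area, which is why a small modular CPU die and a large monolithic GPU die aren't comparable.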
 
Just occurred to me. Everyone has been saying that RX Vega can't be sold for cheap because HBM is expensive, but I think the most expensive part isn't the HBM but the GPU die itself. Raja said that Vega uses Infinity Fabric, and with IF the rumours are saying that AMD has 80% yield of full-fat Ryzen 7s and 99% utilisation of Ryzen dies. I think AMD could sell Vega for cheaper than we think. If they decide to.

Good reason to believe that Vega will be priced more competitively than we think. I'm taking the FE price as the base here: they manufactured and assembled all FE parts in the US, and it must've been executed as a less-than-EOQ contract, so that alone should shave off $200 from the RX cost. Further, the 8GB of excess HBM is maybe another $200, so the total impact on cost is $350-400. If the FE is making 30% GM we are looking at a per-piece cost of $700; for the RX that means anything between $300-350. Again extrapolating the 30% GM (actually, given the larger scale, AMD should be okay with lower margins), the RRP is approx. $428-500.
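As a sanity check on the arithmetic, here's a sketch of that back-of-envelope gross-margin math. The ~$1000 FE price, the 30% GM, and the two $200 deductions are all the post's own speculation, not confirmed figures:

```python
def cost_from_price(price, gross_margin):
    """Per-unit cost implied by a selling price and a gross margin."""
    return price * (1.0 - gross_margin)

def price_from_cost(cost, gross_margin):
    """Selling price needed to hit a target gross margin."""
    return cost / (1.0 - gross_margin)

GM = 0.30                                # assumed 30% gross margin
fe_cost = cost_from_price(1000, GM)      # FE at ~$1000 -> ~$700 implied cost
rx_cost = fe_cost - 200 - 200            # minus US assembly and extra 8GB HBM (guesses)
rx_rrp = price_from_cost(rx_cost, GM)    # RRP if the RX keeps the same margin

print(f"Implied FE cost: ${fe_cost:.0f}")   # $700
print(f"Implied RX cost: ${rx_cost:.0f}")   # $300
print(f"RX RRP at 30% GM: ${rx_rrp:.0f}")   # ~$429
```

Running the same formula with a $350 cost gives $500, which is where the quoted $428-500 range comes from.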

Vega is not currently good enough for a price that would allow AMD to recoup their R&D; they can only hope to make good GMs. R&D is a sunk cost, so AMD can now only plan for the future.
 
Been away doing a boring ITIL course so I haven't been keeping up with the Vega news. Can someone give me a quick update please, we still on track for end of July release?
 
Yeah but Nvidia are a bit weird with the new APIs and Pascal.

The RX 460 jumps from 47 to 62 fps meanwhile.
You did give the 1050 as an example of Vulkan improvements in your earlier post though ;)

Either way, Vulkan does appear to be relevant to only one game, and even then it can be played in a different API that offers slightly better or worse fps depending on where you look and which card vendor you use.

I think this is why people like me don't get Vulkan: it really only offers an improvement for one card vendor with a tiny market share, and in only one game, which already runs well without Vulkan.
 
I explained enough for you to be able to fact-check your own assumptions. Posting the same wrong info over and over doesn't make it correct, it just makes the poster look silly.
Stop being so arrogant. If your time is too important to explain yourself (because your current explanation is ambiguous) then you shouldn't be on these forums.

Once again what do you disagree with and why?
 
Yes, but my example shows that IF has allowed Ryzen to be very cheap to manufacture. If that advantage transfers over to the GPU market, we could potentially see cheap GPUs.

If they can sell it cheaper, the lower manufacturing costs will IMO be a result of that wafer supply agreement AMD has with GF. Let's face it: if AMD is contracted to buy X amount of wafers from GF, whose process isn't as good as TSMC's, then the upside for AMD should be cheaper wafers.
 
Stop being so arrogant. If your time is too important to explain yourself (because your current explanation is ambiguous) then you shouldn't be on these forums.

Once again what do you disagree with and why?

Plenty of people have already given you a more detailed explanation, certainly enough for you to be able to google die sizes and yields and to work out why what applies to a modular CPU doesn't apply to a much larger GPU.

If you think my post was against forum rules you are free to report me, but last time I checked it was a pretty open set of rules, with nothing that requires me to do what you're saying I need to.

My time is mine to do what I want with; expecting me to hand-feed you basic information on a subject you seem to want to know about isn't within my remit. That's what Google is for.
 
I was lazy and wrote one-liners, so it didn't make any sense.
What I was trying to say is that in his video he points to the videocardz article, where they mention that the benchmark doesn't detect overclocked unreleased cards properly (hence some of them have a "+" sign). That to me means it would detect the base clock of the card, not the maximum boost clock (since that is an overclock). Coupled with the graph Adored showed, where he put the runs in chronological order (and it making no sense to overclock then downclock), I'm speculating that it has a base clock of 1630 MHz, and it seems AMD are trying to get power delivery working well so that it can boost higher.
If the benchmark can properly detect the boost clock of an unreleased card, then it seems AMD is able to get the card to consistently boost to its max speed, at least on an open-air test bench (it could be in a case for all we know), which is better than what we saw the FE do at PCPer. But then it leaves a big question as to what the overclocked scores are.

Sorry about the delayed reply :)

In the video Adored talks about the RX having a 30MHz increase compared to the FE's 1600MHz, so he was clearly not talking about base clocks.

Videocardz does talk about 3DMark11 not detecting overclocked unreleased cards properly, but I don't think he's saying the base clock is 1630MHz, as the performance would be higher -

The highest score the 687F:C1 has achieved is an overclocked chip. 3DMark11 does not recognize unreleased overclocked graphics cards very well. The good news is that this puts RX Vega above overclocked GTX 1070, bad news, it might still be slower than overclocked GTX 1080. I guess time will verify those results.
 
How many times.... just forget about nvidia for low level APIs until volta :p

Not only does Nvidia have performance issues under Vulkan in Doom, but they also have issues with the rendering of textures, i.e. they're slow to render.


You tell me which one you would choose out of these results :p Them frame times though!!! Smooth
Vulkan
Vulkan.jpg

OpenGL
20170709223725_1.jpg

Vulkan 2
Vulkan2.jpg

OpenGL2
20170709223744_1.jpg

Preach brother! Enjoy that silky smooth gameplay :cool:
 