THE REAL REASON VEGA FAILED & RAJA QUIT AMD!

I wouldn't say it was "below standard" but arguably RTG set the standard quite low. Objectively, Vega did exactly what it was intended to do: Vega 56 beat the GTX 1070, Vega 64 beat the 1080, and Nvidia responded; would we have seen the 1080 Ti if they hadn't caught wind that Vega 64 could take the performance crown? Was the 1070 Ti released to beat Vega 56, or just to use up the 1080 dies lying around because GDDR5 was scarce?

The problem though comes from the overall package and what was ultimately released:
  • Vega was a year late so after all the hype it was disappointing (and Nvidia stole more thunder by releasing the 1080 Ti, so automatically Vega was no longer competing at the top end)
  • RTG weren't discerning enough with their yields, so set a stupidly high reference power requirement just to get every functioning die working
  • MSRP was totally wrong, essentially charging the next performance bracket up for each card (1080 money for the 1070 competitor, 1080 Ti money for the 1080 competitor). HBM's cost couldn't have helped in that regard either.
  • Launch prices were a total lie.
Bundle everything together and that's what makes it below standard; the cards themselves are really good once tuned to where they should have been in the first place.

Vega also relied too heavily on features that realistically were never going to be leveraged, and I can't believe that yet again no one at AMD could see that :( Quite a lot of things to do with draw streams, culling, etc. simply aren't enabled or fully working at driver level; on paper they would give Vega a significant advantage, but realistically they will never be utilised, or not for another few generations yet.

It is kind of like tessellation all over again: they've tried to force those features through, but the reality is no one will actually put them into use until everything is good and ready, and by then Vega will be a distant, largely forgotten memory.
 
They scrapped their manual Primitive Shader API that they were working on.

The issue sparking the most controversy today is the status of the Next-Generation Geometry Engine, better known as "primitive shaders" in enthusiast shorthand. AMD emphasized that the Next-Generation Geometry path has several components, not just the more flexible programming model exposed through primitive shaders. One of those is the Intelligent Workgroup Distributor, a feature that ensures the GPU avoids performance-harming operations like context rolls (or context switches) and enjoys maximum occupancy of its execution units. AMD confirmed that the IWD is enabled, and that any performance benefits from that feature are already being demonstrated by Vega cards today.

Primitive shaders have remained a point of interest for Vega GPUs because of the potential performance increases that AMD's own materials have promised. The Vega architectural whitepaper notes that Vega's next-generation geometry path can process more than 17 primitives per clock, compared to the four primitives per clock that the Vega 10 GPU can process using the traditional Direct3D rendering path. That whitepaper figure came from internal AMD tests with a pre-release driver, and justifiably or not, the common expectation among enthusiasts has been that primitive shader support is a driver feature waiting to be enabled.

In the "justifiably" column, other GPU reviewers have confirmed in conversations with AMD that a manual primitive shader API was in the works, and that the graphics driver could have had a method to invoke the next-generation geometry path automatically. AMD employees have also confirmed that the graphics driver would handle invocation of the Next-Generation Geometry path automatically on Twitter.

At its pre-CES tech day, AMD turned this expectation on its ear a bit. The company explained that instead of being an "it just works" affair, full support for primitive shaders will require explicit inclusion in future versions of APIs like Direct3D 12 and Vulkan. Unlike some Vega features, AMD says that "primitive shader support isn't something we can just turn on overnight and it happens," noting instead that it'll need to work with the development community to bring this feature to future API versions. Game developers would then presumably need to take advantage of the feature when programming new games in order to enjoy the full performance benefits of the Next-Generation Geometry path.

https://techreport.com/news/33153/radeon-rx-vega-primitive-shaders-will-need-api-support
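For a rough sense of scale on those whitepaper figures, here's a back-of-envelope sketch of peak primitive throughput. The 4 and "more than 17" primitives-per-clock numbers are the ones quoted above; the ~1.5 GHz engine clock is just an assumed round number for illustration, not an official spec.

```python
# Back-of-envelope peak primitive throughput for Vega 10, using the
# whitepaper figures quoted above: 4 primitives/clock on the traditional
# Direct3D path vs. "more than 17" via the next-gen geometry path.
# The engine clock below is an assumed round number, not an AMD spec.

ENGINE_CLOCK_GHZ = 1.5  # assumed clock, roughly Vega 64 territory

def peak_primitive_rate(prims_per_clock, clock_ghz=ENGINE_CLOCK_GHZ):
    """Theoretical peak primitive rate in billions of primitives per second."""
    return prims_per_clock * clock_ghz

traditional = peak_primitive_rate(4)   # ~6 Gprims/s
ngg_path = peak_primitive_rate(17)     # ~25.5 Gprims/s

print(f"Traditional path:   ~{traditional:.1f} Gprims/s")
print(f"NGG path (claimed): ~{ngg_path:.1f} Gprims/s, "
      f"about {ngg_path / traditional:.2f}x on paper")
```

Whether real games would ever see anything close to that on-paper gap is, of course, exactly what the rest of this argument is about.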

Also, anyone wonder why the HBCC isn't anything like it was in those demos they were showing off?

The High Bandwidth Cache Controller, or HBCC, has also been available for users to toy with from (or near) the start of RX Vega's tenure on the desktop. AMD notes that most applications simply don't need more than the 8 GB of VRAM on board RX Vega cards, so this feature has a limited impact on performance in today's titles.

And those demos were run on a 4GB Vega card that they knocked up.

 
I wouldn't say it was "below standard" but arguably RTG set the standard quite low. Objectively, Vega did exactly what it was intended to do: Vega 56 beat the GTX 1070, Vega 64 beat the 1080,

This isn't really true at all. AMD intended Vega to be much more powerful than the 1070/1080; it was aimed squarely at the 1080 Ti/Titan level. Just look at the giant die size and expensive HBM2 memory. And that is why the prices of Vega don't make any sense: the BoM puts it above 1080 Ti costs. If AMD only aimed at a 1080 competitor and Vega is what that required, then they still failed.

and Nvidia responded; would we have seen the 1080 Ti if they hadn't caught wind that Vega 64 could take the performance crown? Was the 1070 Ti released to beat Vega 56, or just to use up the 1080 dies lying around because GDDR5 was scarce?
Nvidia didn't respond at all; there was nothing to respond to. Of course there would be a 1080 Ti; there have always been higher-end parts released later. Nvidia couldn't just magic up a much faster and bigger chip out of thin air just because Vega was released. The GP102 chip used in the 1080 Ti would have been in development for years and years, before Kepler was even released.
 
They scrapped their manual Primitive Shader API that they were working on.



https://techreport.com/news/33153/radeon-rx-vega-primitive-shaders-will-need-api-support

Also, anyone wonder why the HBCC isn't anything like it was in those demos they were showing off?



And those demos were run on a 4GB Vega card that they knocked up.



This is exactly what I said in the thread about AMD's comeback.
They design architectures in the ways they think are best, regardless of what game developers actually want, care about, or have resources for.
Nvidia designs their GPUs around what developers are currently doing, will do in the short term, and what is possible with current APIs.
 
Why? Vega plays all my games at high fps; I often have to limit fps to 144. All that extra money in my pocket from not paying the G-Sync tax is not painful at all.

I personally don't see this Gsync tax.

When I bought my monitor I had a £500 budget (which I loosely set) and that was for a FreeSync or G-Sync monitor.

There was no extra for the Gsync display.

---

What I meant was that whilst the Nvidia cards set the bar with each release, the AMD cards are not pushing the boundaries, and another mid-range card after Vega would be disappointing.

But I get the point that it matters nought if at your resolution a 'mid range' card does the job perfectly.

I mean more from an enthusiast perspective.
 
People keep banging on about the G-Sync tax. There is no G-Sync tax.

When I bought my monitor I had a £500 budget (which I loosely set) and that was for a FreeSync or G-Sync monitor.

There was no extra for the Gsync display.

Isn't there? Nvidia charges someone for it.

Your anecdote is your example; we weren't there, and we didn't see all the details involved. But we've got OCUK's monitor selection here: https://www.overclockers.co.uk/monitors with a filter for FreeSync and G-Sync... does anything there support your anecdote?
 
I personally don't see this Gsync tax.

When I bought my monitor I had a £500 budget (which I loosely set) and that was for a FreeSync or G-Sync monitor.

There was no extra for the Gsync display.

---

What I meant was whilst the Nvidia cards set the bar with each release, the AMD cards are not pushing the boundaries and another mid range after Vega would be disappointing.

But I get the point that it matters nought if at your resolution a 'mid range' card does the job perfectly.

I mean more from an enthusiast perspective.

How is your mid-range card working out?
 
But it's also beside the point.

When the next Nvidia series launches you will have cards all the way up to the ultra high end.

When Navi launches it will be disappointing for it just to be mid range.
 
I love ❤❤❤ my VEGA failure xD



───▄▄▄▄▄▄─────▄▄▄▄▄▄
─▄█▓▓▓▓▓▓█▄─▄█▓▓▓▓▓▓█▄
▐█▓▓▒▒▒▒▒▓▓█▓▓▒▒▒▒▒▓▓█▌
█▓▓▒▒░╔╗╔═╦═╦═╦═╗░▒▒▓▓█
█▓▓▒▒░║╠╣╬╠╗║╔╣╩╣░▒▒▓▓█
▐█▓▓▒▒╚═╩═╝╚═╝╚═╝▒▒▓▓█▌
─▀█▓▓▒▒░░░░░░░░░▒▒▓▓█▀
───▀█▓▓▒▒░░░░░▒▒▓▓█▀
─────▀█▓▓▒▒░▒▒▓▓█▀
──────▀█▓▓▒▓▓█▀
────────▀█▓█▀
──────────▀




╔══╗
╚╗╔╝
╔╝(¯`v´¯)
╚══`.¸.[VEGA]



▀██▀─▄███▄─▀██─██▀██▀▀█
─██─███─███─██─██─██▄█
─██─▀██▄██▀─▀█▄█▀─██▀█
▄██▄▄█▀▀▀─────▀──▄██▄▄█



(•_•) ( •_•)>⌐■-■ (⌐■_■)
 
I love how they do a big song and dance about a new mega performance driver they are going to release: zomg! 200+ more frames with this driver, incredible increase in performance!!! And how it gets reported across the board: it's that time of the year again, and AMD are going to release yet another massive performance increase driver, blah blah blah...

It's compared to an ancient driver they released over a year ago, so there's bound to be a good increase in performance since then. Trouble is, no one's running that ancient driver; everyone's on the one that got released a few weeks before this zomg! mega performance increase one, so they hardly see a difference at all, if any!
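To put some numbers on that, here's a quick illustration of the baseline trick; the fps values are made up purely for the example, not taken from any actual AMD release notes.

```python
# Hypothetical fps figures to illustrate the point above -- made-up values,
# not from any real driver release notes.
fps_year_old_driver = 100.0  # the "ancient" baseline the marketing compares against
fps_previous_driver = 118.0  # the driver most people are actually running
fps_new_driver = 121.0       # the shiny "mega performance" release

headline_gain = (fps_new_driver / fps_year_old_driver - 1) * 100
real_world_gain = (fps_new_driver / fps_previous_driver - 1) * 100

print(f"Marketing slide: +{headline_gain:.0f}% vs. the year-old driver")
print(f"What you notice: +{real_world_gain:.1f}% vs. last month's driver")
```

Same driver, same games, but one comparison makes a great headline and the other barely registers.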

Always makes me laugh that :D

You do see a decent increase sometimes. I remember getting a 4 or 5 fps increase in a few titles (Woop Woop) when we got the Orange? Satsuma? Tango? (I've forgotten the name) Super Driver a few years back... :D

EDIT: I just had a look and it was called the Omega driver, not Orange, back in 2014. I thought it was related to a colour, like today we have Crimson.
 
OK, are you cherry-picking parts of my post, or did you not read everything as a singular piece?

...AMD intended Vega to be much more powerful than the 1070/1080; it was aimed squarely at the 1080 Ti/Titan level. Just look at the giant die size and expensive HBM2 memory.

Cite your source? Who says Vega was intended to be a Ti killer? A massive die as your single reference is not sufficient evidence.

And that is why the prices of Vega don't make any sense: the BoM puts it above 1080 Ti costs.

Well yeah, I already said that. It's a major factor in the overall Vega product being a bit of a failure in gaming circles.

Nvidia didn't respond at all; there was nothing to respond to.

Rubbish. Vega 56 can best the 1070, Vega 64 can best the 1080. Plus, was the 1070 Ti released purely for shts and giggles then? There wasn't a big enough price/performance gap betwixt the 1070 and 1080 to warrant a new product, but then Vega 56, for all its flaws and woes, pips the 1070. Then a 1070 Ti shows up...and reviewers get an itchy chin moment saying "hmm, I wonder if this is a response to Vega 56 beating the 1070".

Of course there would be a 1080 Ti; there have always been higher-end parts released later. Nvidia couldn't just magic up a much faster and bigger chip out of thin air just because Vega was released. The GP102 chip used in the 1080 Ti would have been in development for years and years, before Kepler was even released.

That's not even remotely what I said or suggested. All I said was that the release of the 1080 Ti before Vega even came out kicked it in the teeth; Vega wasn't even going to be a top-end part when it was finally released because Nvidia preemptively moved the goalposts. Not that that's a bad thing of course, but it's another negative mark against Vega.
 
I love ❤❤❤ my VEGA failure xD



I love mine too! :D
 
This is exactly what I said in the thread about AMD's comeback.
They design architectures in the ways they think are best, regardless of what game developers actually want, care about, or have resources for.
Nvidia designs their GPUs around what developers are currently doing, will do in the short term, and what is possible with current APIs.

There should be a balance between what developers want, what is possible for AMD to deliver, and what AMD delivers that developers need time to put to use.
We are definitely missing more breakthrough games like Crysis.

Regardless, exactly because of the aforementioned, the Radeons end up more future-proof, and AMD leads the way for industry progress, while Nvidia just plays the catch-up game.

Ultimately, it comes down to who gives more cash to the developers.
 
I think Vega has been a success for AMD, as they seem to be targeting data centers and machine learning more than games; it's also good for miners. The 7nm version should give a nice boost in performance, reduce power consumption, and get it more in line with the competition, but games are not the primary target. Also, when games start using FP16 more, Vega should do well; I don't know what Nvidia cards are like with FP16.
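For a rough idea of why FP16 matters here: Vega's Rapid Packed Math can issue two FP16 operations in place of one FP32 operation, so peak FP16 throughput doubles on paper. A back-of-envelope sketch, assuming approximate Vega 64 figures (4096 stream processors, ~1.55 GHz boost):

```python
# Rough peak-throughput sketch for Vega 64 with Rapid Packed Math.
# Shader count and boost clock are approximate public figures; treat the
# output as back-of-envelope numbers, not benchmark results.
SHADERS = 4096     # Vega 64 stream processors
BOOST_GHZ = 1.55   # approximate boost clock

fp32_tflops = SHADERS * 2 * BOOST_GHZ / 1000  # one FMA = 2 FLOPs per clock
fp16_tflops = fp32_tflops * 2                 # packed math: two FP16 ops per FP32 slot

print(f"FP32 peak:          ~{fp32_tflops:.1f} TFLOPS")
print(f"FP16 peak (packed): ~{fp16_tflops:.1f} TFLOPS")
```

Of course, games only benefit if developers actually write FP16 shader paths, which loops back to the earlier point about features needing developer uptake.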
 