
Poll: ** The AMD VEGA Thread **

On or off the hype train?

  • (off) Train has derailed - Votes: 207 (39.2%)
  • (on) Overcrowding, standing room only - Votes: 100 (18.9%)
  • (never ever got on) Chinese escalator - Votes: 221 (41.9%)
  • Total voters: 528
AMD ******* up a GPU release? Surely not!

 

Raja was brought back in September 2015 to lead Radeon Technologies.

Before that he was in Visual Computing at AMD from 2013 to 2015, and before that he was Apple's Director of Graphics Architecture from 2009.

Fiji was entirely done, and Polaris was essentially done, before he even took the job to lead AMD GPU design.
Considering it takes well over 2 years to develop a GPU architecture, he'd probably been working his arse off just trying to salvage Vega from previous designs based on Fiji.

He clearly stated at several conventions and presentations that by 2015 AMD's higher-ups considered discrete GPUs to be dead and that they should focus on APUs and console SoCs.

We'll see how Navi turns out under him, as that should be entirely under his watch from start to finish.

Raja was also CTO for Graphics Product Design & Director for Advanced Technology Development for AMD & ATI from 2001 to 2009, and we know they only had one screw-up back then, the 2900 XT; and his small-die strategy with TeraScale saved AMD with the 4870 through 6970.
 
As I understand it, FastSync eliminates tearing, but it doesn't give you the smoothness and responsiveness of having each frame appear on screen as it's drawn, because output is still locked to the display's fixed refresh rate. A very nice compromise for monitors without FreeSync or G-Sync (or for FreeSync monitors paired with nVidia GPUs), but not a full solution.
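To make that concrete, here's a purely illustrative sketch (plain Python, not any real driver API; the 75 Hz refresh and 200 fps render rate are just assumed numbers) of the FastSync idea: the GPU keeps rendering flat out into spare buffers, and at each fixed refresh the display scans out whichever frame finished most recently, dropping the rest. Tearing goes away because scanout never touches a half-drawn buffer, but new frames still only appear on refresh boundaries.

```python
# Purely conceptual sketch of FastSync-style frame delivery (not a real driver API).
# The renderer finishes frames as fast as it can; at every fixed refresh tick the
# display shows the newest *completed* frame and silently drops the rest.

REFRESH_HZ = 75      # assumed fixed-refresh monitor
RENDER_FPS = 200     # assumed render rate well above the refresh rate
SIM_SECONDS = 0.1

def simulate_fastsync():
    refresh_period = 1.0 / REFRESH_HZ
    render_period = 1.0 / RENDER_FPS

    newest = None                 # (frame id, completion time) of last finished frame
    next_done = render_period     # when the next rendered frame completes
    frame_id = 0
    scanouts = []

    for tick in range(1, int(SIM_SECONDS * REFRESH_HZ) + 1):
        t = tick * refresh_period
        # account for every frame the GPU finished before this refresh tick
        while next_done <= t:
            frame_id += 1
            newest = (frame_id, next_done)
            next_done += render_period
        # scanout always reads a fully completed buffer -> no tearing,
        # but a new frame can only appear here, on a refresh boundary
        if newest is not None:
            fid, done_at = newest
            scanouts.append((t, fid, t - done_at))
    return scanouts

if __name__ == "__main__":
    for t, fid, age in simulate_fastsync():
        print(f"refresh at {t * 1000:6.2f} ms -> frame {fid:3d} (finished {age * 1000:5.2f} ms earlier)")
```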

Yeah, I agree with that.

So how do you set up FastSync? Is it on by default?

You can say that again!

None of it adds up right.
 
Considering it takes well over 2 years to develop a GPU architecture, he'd probably been working his arse off just trying to salvage Vega from previous designs based on Fiji.

Thing is though, in terms of this current era Fiji isn't obsolete - move a few things from discrete hardware blocks like triangle/primitive setup so they properly utilise the wider compute capabilities, narrow down some of the pipelines that realistically aren't going to be fully utilised for at least another generation yet, forget about HBM(2) on consumer cards, stick it on 14nm with the clock speed advantage, and there is no reason why you shouldn't have a competitive GPU that can take the fight to anything nVidia have with Pascal.

Yet again AMD have over-reached themselves, trying to push a new software and hardware paradigm way before there is any sane reason to do it, and Vega isn't looking any different if it requires a completely different approach from game developers to maximise its potential, plus some potentially significant changes to fully support things like primitive discard.

So how do you set up FastSync? Is it on by default?

Largely you can just set Vertical Sync to Fast in the nVidia control panel. There are some things you can do to make it work a bit better though, e.g. some games will run better with different pre-rendered frame limits, and you might find that enabling or disabling the in-game V-Sync setting has some effect as well - ideally you'd want to turn that off to remove any complications from how the game works (in theory the control panel setting should override it, but that doesn't always work 100%). You also need to be a bit careful with framerate caps due to the way FastSync works.
 
Yet again AMD have over-reached themselves, trying to push a new software and hardware paradigm way before there is any sane reason to do it, and Vega isn't looking any different if it requires a completely different approach from game developers to maximise its potential, plus some potentially significant changes to fully support things like primitive discard.

It's like they love inventing fads, which wouldn't be a bad thing if they were in a better position, but they're not.

Largely you can just set Vertical Sync to Fast in the nVidia control panel. There are some things you can do to make it work a bit better though, e.g. some games will run better with different pre-rendered frame limits, and you might find that enabling or disabling the in-game V-Sync setting has some effect as well - ideally you'd want to turn that off to remove any complications from how the game works (in theory the control panel setting should override it, but that doesn't always work 100%). You also need to be a bit careful with framerate caps due to the way FastSync works.

My monitor's a 75Hz FreeSync one, but I'm using a 1060 at the moment, so I've just turned it to Fast in the panel and I'll see if it makes any difference before playing around more. Thanks for the info.

No it isn't, like this....
[screenshot: fast.png]
Cheers.
 
Thing is though, in terms of this current era Fiji isn't obsolete - move a few things from discrete hardware blocks like triangle/primitive setup so they properly utilise the wider compute capabilities, narrow down some of the pipelines that realistically aren't going to be fully utilised for at least another generation yet, forget about HBM(2) on consumer cards, stick it on 14nm with the clock speed advantage, and there is no reason why you shouldn't have a competitive GPU that can take the fight to anything nVidia have with Pascal.

Yet again AMD have over-reached themselves, trying to push a new software and hardware paradigm way before there is any sane reason to do it, and Vega isn't looking any different if it requires a completely different approach from game developers to maximise its potential, plus some potentially significant changes to fully support things like primitive discard.

Exactly, Raja wasn't even there when GCN was designed as a base. His babies were R200 through TeraScale, and those gave us some of the most successful GPUs from AMD.
He was also at ATI as a designer prior to that, working on major successful designs such as R200-R400 (9700 Pro - X850).
The designs back then were focused and effective, with the only major issue being TeraScale 1, aka the 2900 XT series.

The move to GCN, while great at the time, is now very problematic. It doesn't need more revision; it needs to be replaced.

Add in funds being stripped from the GPU division because "discrete graphics was dead", and you end up with the messes we have now. Raja was extremely successful at ATI and AMD, and they made a wise choice bringing him back. Now he needs the funds and resources to actually get things done.

Honestly, I hope Navi is the last of GCN we see (it being Vega based), and that they finally get a new architecture built from the ground up like they managed with Zen.
 
Yet again AMD have over-reached themselves, trying to push a new software and hardware paradigm way before there is any sane reason to do it, and Vega isn't looking any different if it requires a completely different approach from game developers to maximise its potential, plus some potentially significant changes to fully support things like primitive discard.

What's your problem with it?

I tend to see these things as an extra: where they're not used, nothing gained, nothing lost; where they are used, AMD have provided something we otherwise would not have had. It takes some doing to twist that into a bad thing.
 
Raja was brought back in September 2015 to lead Radeon Technologies.

Before that he was in Visual Computing at AMD from 2013 to 2015, and before that he was Apple's Director of Graphics Architecture from 2009.

Fiji was entirely done, and Polaris was essentially done, before he even took the job to lead AMD GPU design.
Considering it takes well over 2 years to develop a GPU architecture, he'd probably been working his arse off just trying to salvage Vega from previous designs based on Fiji.

He clearly stated at several conventions and presentations that by 2015 AMD's higher-ups considered discrete GPUs to be dead and that they should focus on APUs and console SoCs.

We'll see how Navi turns out under him, as that should be entirely under his watch from start to finish.

Raja was also CTO for Graphics Product Design & Director for Advanced Technology Development for AMD & ATI from 2001 to 2009, and we know they only had one screw-up back then, the 2900 XT; and his small-die strategy with TeraScale saved AMD with the 4870 through 6970.

I personally have faith in the guy. That said, I also think Polaris is a very good chip and I don't really understand why people would bash it with the current drivers (not talking about launch :P). My RX 480 was extremely good.
 
My monitor's a 75Hz FreeSync one, but I'm using a 1060 at the moment, so I've just turned it to Fast in the panel and I'll see if it makes any difference before playing around more. Thanks for the info.

I found 75Hz is the sweet spot for FastSync to work if you are set up right - I use it with an overclocked Dell U2913WM @ 75Hz as a secondary monitor and it works an absolute treat as long as I'm rendering decently above the refresh rate. No more dealing with very noticeable input latency or tearing as a compromise; there's occasionally some stutter, but that's a small price for the overall better experience.

What's your problem with it?

I tend to see these things as an extra: where they're not used, nothing gained, nothing lost; where they are used, AMD have provided something we otherwise would not have had. It takes some doing to twist that into a bad thing.

Usually these things end up being a fad, as mentioned above, while resulting in some compromise or attention being pulled away from other stuff in the process.
 
Urgh this all looks pretty shocking... let's hope it's not quite as bad as it looks. Personally not going to be buying one but I hope there is some competition at the 1080Ti level with a good price.

Can't help but think team green knew this is where it would sit and that's why the 1080Ti price was actually quite reasonable (in terms of the current high prices that is ;) )
 
Usually these things end up being a fad, as mentioned above, while resulting in some compromise or attention being pulled away from other stuff in the process.

What, like Mantle? Which is now Vulkan, and some would argue forced, if not gave birth to, DX12. Like FreeSync? Like tessellation? Even TressFX has its place in history, x86-64... and so on.

Some AMD things were a fad, yes, like TrueAudio - and not just AMD things, some nVidia and Intel things were fads too...

Not everything was a fad. I rather like the fact that AMD are not as defeatist as you would like them to be; if they were, there wouldn't have been any innovation, and we'd never see any.
 
Why are people here going all woopy over some cards that are not even the gaming cards?

Because there is nothing better to do?

But I know what you mean: it's not a gaming card, and yet when you read in here you'd think it was; even reviewers are reviewing it as a gaming card first and a workstation card second, if at all.

The Internet has gone bonkers over this card. Keeps AMD in the news though...
 
What, like Mantle? Which is now Vulkan, and some would argue forced, if not gave birth to, DX12. Like FreeSync? Like tessellation? Even TressFX has its place in history, x86-64... and so on.

Some AMD things were a fad, yes, like TrueAudio - and not just AMD things, some nVidia and Intel things were fads too...

Not everything was a fad. I rather like the fact that AMD are not as defeatist as you would like them to be; if they were, there wouldn't have been any innovation, and we'd never see any.

My post was about hardware direction and implementation - even with tessellation they added a block of hardware, and shifted some software development focus to it, years before it became a realistic technology to implement; that could have been better utilised to either add in more of the more relevant hardware features like ROPs and SPs, or to make a slightly smaller core to increase performance and power efficiency.

Many companies do things that are fads, sure, but not many chase them blindly like AMD seem to do. I'm not asking AMD to be defeatist, just to find a better balance in how they pursue new paradigms and not bet so heavily on them when it's foreseeably long before their time, as if they can force the ecosystem just by pushing it out there - it just doesn't work like that, and that has been proved so many times. Plenty of companies manage to innovate by exploring technology in side projects, encouraging skunkworks stuff, etc. until the right time comes to push them into the mainstream.
 
My post was about hardware direction and implementation - even with tessellation they added a block of hardware, and shifted some software development focus to it, years before it became a realistic technology to implement; that could have been better utilised to either add in more of the more relevant hardware features like ROPs and SPs, or to make a slightly smaller core to increase performance and power efficiency.

I thought it took so long because nVidia hardware couldn't support it for so long.

That's rhetorical.
 
I still can't see how it's going to be no better than a Fury X; surely at some point during development someone on the team would have said, hang on, this GPU is a bit rubbish, how the hell are we going to sell this?

But then what do I know, I only buy the stuff.
 
I thought it took so long because nVidia hardware couldn't support it for so long.

That's rhetorical.

nVidia didn't have any trouble developing support for it - they had a working software implementation (for feature testing) many years back, and the theory was all there in their whitepapers and development library. Notice how they surged ahead when the time became right, once APIs and performance had developed to the point where developers could spare the time and cycles for it - they didn't hold back their GPUs by compromising to support it before the time was right to introduce it.

nVidia held off introducing it until GPUs had the geometry shader implementation and performance to support it, because it makes sense to do it that way rather than implement it in a fixed-function block, which is a backward-looking approach.
 
I still can't see how it's going to be no better than a Fury X; surely at some point during development someone on the team would have said, hang on, this GPU is a bit rubbish, how the hell are we going to sell this?

But then what do I know, I only buy the stuff.

Even if it was just a die-shrunk Fury X, it's clocked 60% higher on the core and 95% higher on the memory, and yet if you compare its performance to the Fury X there is little between them.

As has been said before, it doesn't add up.
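
For a rough sense of scale, here's a quick back-of-the-envelope check (the reference clocks below - roughly 1050 MHz core / 500 MHz HBM for Fury X and 1600 MHz core / 945 MHz HBM2 for the Vega Frontier Edition - are assumptions, and the exact percentages depend on which base/boost states you compare):

```python
# Back-of-the-envelope clock comparison; the figures are assumed reference clocks,
# not official numbers, so treat the output as a rough sanity check only.
fury_x  = {"core_mhz": 1050, "mem_mhz": 500}   # Fury X (HBM1), assumed
vega_fe = {"core_mhz": 1600, "mem_mhz": 945}   # Vega Frontier Edition (HBM2), assumed

core_gain = vega_fe["core_mhz"] / fury_x["core_mhz"] - 1   # ~0.52 -> roughly +50-60%
mem_gain  = vega_fe["mem_mhz"]  / fury_x["mem_mhz"]  - 1   # ~0.89 -> roughly +90%
print(f"core clock: +{core_gain:.0%}, memory clock: +{mem_gain:.0%}")
```

Either way, a clock advantage that large landing at roughly Fury X performance is what makes the results look so strange.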

Maybe because this particular one is not set up as a gamer's card, and not just in its drivers.

The one thing we do know for sure is that it is not the gaming variant; to insist "oh, they are all the same" is just an assertion.
 